title of the panel, you have it there, which is Computational Engineering and Science: What's Next. Right, and first I'm going to introduce the panelists. You know Karen Willcox, she already talked, so I'll do a short introduction in case you just got here. She's a professor at the University of Texas at Austin, where she is also Associate Vice President for Research. She spent some time at MIT before joining UT Austin, and she's also an external professor at the Santa Fe Institute. Before joining MIT, she worked at Boeing Phantom Works, and she's a fellow of the Society for Industrial and Applied Mathematics and of the American Institute of Aeronautics and Astronautics, and recently became a member of the National Academy of Engineering. Then I'm going to introduce Brett Savoie. He's the Charles and Nancy Davis Associate Professor in Chemical Engineering at Purdue, and he graduated from Texas A&M, not Austin, sorry; he studied there before obtaining his PhD at Northwestern University. He works on developing physics-based models and machine learning methods to characterize and discover organic materials. He's the recipient of ACS PRF and NSF CAREER awards, and also the ONR Young Investigator Award. Then we have Eugenio Culurciello. He received his PhD from Johns Hopkins University, he's a professor in the School of Biomedical Engineering here at Purdue, and he directs the e-Lab laboratory. His research focuses on artificial intelligence, deep learning, robotics, and 3D understanding, with healthcare and science applications. He received the Presidential Early Career Award, and the thing I found really interesting and want to mention is that he founded FWDNXT to deliver next-generation synthetic brains for artificial intelligence.
And last, Ale Strachan. He's the Reilly Professor of Materials Engineering at Purdue University, and he's also the co-director of the NSF-funded nanoHUB. Before joining Purdue, he was at Los Alamos National Laboratory, and his area of expertise is developing predictive atomistic and multiscale models, also using artificial intelligence. He has been recognized by several awards; for example, he received an R&D 100 award for nanoHUB's software and services. Before anything else, I'll start asking questions to all of you. You talked about digital twins at the beginning, right, and the title of our panel is What's Next, and I was wondering how you think we can train, how students need to be trained, to work, for example, in the area of digital twins. What things are needed, if you want to elaborate on that? Yeah, so first I'll say, I think it's a very exciting but also challenging time to be a student, because the reality is that the things you need to train for in the world that's coming are so much more than you can possibly hope to pack into four years of undergrad, which is why you've then got to keep packing it in in graduate school. But for sure, and you already heard some of my biases during the talk, linear algebra is foundational to so many things, and really having a very strong foundation in linear algebra helps you in so many directions relevant to digital twins, whether it's optimization, machine learning, or the solvers that go into physics-based models. Linear algebra is absolutely foundational, so I think that's a really big one. Computing skills: one of the big changes for me in moving from MIT to UT Austin is the ecosystem and the classes and just the culture around computing. My graduate students at MIT really struggled to get enough exposure to high-performance computing and scalable algorithms in their coursework.
At UT Austin, we have absolutely fantastic classes, we have the supercomputer center that I mentioned, TACC, and I think that kind of exposure to computing at scale, no matter what your field is, is also something that's incredibly important. And then of course there are hundreds and hundreds of other topics, but if I had to pick two just to make sure you don't neglect them, it would probably be linear algebra and high-performance computing. But that's an unfair question, Marisol. Yeah, I can ask the same question to all of you, but maybe not focused on digital twins; you can focus on your own area. For example, what gaps are there in your area in applying machine learning or AI to your research, and how do you need to be trained in order to train the students in what they need? I'm going to ask this to the other three. Yeah, so it's definitely not a fair question, but to piggyback a little bit on what we heard: I'm really a big booster for AI by and for engineers, so I don't think anything that we saw today would have been possible from someone with a purely CS background. The integration of domain expertise into the development of these systems is, I think, so central. So how do you train up on all of this stuff without neglecting the domain expertise that actually makes these breakthroughs possible? On the one hand I would just caution against dispensing with too much of what we would traditionally call domain expertise. We still have to really cherish that, but I think we can accelerate the mode of doing engineering and science. I mean, a lot of us here probably use large language models as part of our daily activities now, right? They're accelerating a lot of the things I used to have manual processes for.
I'm probably using them in unorthodox ways compared to the ways people trained them, but just like we used to push programming down to lower and lower levels, I think we are going to be pushing artificial intelligence training down to lower and lower levels in our curriculum. I think being able to use these tools effectively is going to be just as important as traditional programming. Do you want to give your take on this? Yes, absolutely. Well, first of all, I wanted to thank you for the very interesting talk and tell you that I actually agree with you, even though I'm a machine learning person. And I think if you have an equation, that's really the way to go. If you have an equation? If you have an equation, yes. And I also second that linear algebra and computing are very important, because even in machine learning, those are the main ingredients. Yeah, I think I'll leave it at that. I can chime in a little bit. I think this is going to be a very boring panel, because we all agree, and that's the worst possible panel you can have. So I'll be a little provocative. If we think about teaching: first of all, the fundamentals, whether it's linear algebra or domain expertise, the fundamentals need to be there. We're engineers, scientists. That's the first thing. The other thing is, and I think digital twins bring this to the forefront, you really need experimental data and data science and models all together. And often in education we're very compartmentalized. You have a lab course, and you do the lab, and you treat the data, just put it in a spreadsheet. And then maybe you do a Python course and you don't think about the lab. And it's really one thing. And the last thing I would say, maybe controversially, is that for education this is challenging. These models are complicated. You need infrastructure. I would encourage us to think about digital cousins. They don't have to be twins.
If you're being educated, maybe an OK model is OK for training students in the idea that data and predictive models can be brought together. The models might not be ones you need a supercomputer to run, because you just can't afford it. Maybe a surrogate model or a reduced-order model would be a good example. No model is perfect anyway, so don't let the perfect be the enemy of the good, and digital cousins might be the way to go for education, certainly at the undergraduate level. Okay. I'm going to switch gears a little bit. I have some thoughts about AI research assistants. You can use high-throughput experiments and robots to synthesize materials and see how these materials respond to different environments. You can use simulations, you can come up with high-throughput simulations, and then use those to infer a reduced-order model. So what are your thoughts on, and your experience with, AI research assistants doing all of that? I think I'm going to start with you, because we discussed this before. Yeah, thank you for the question. I think it's an interesting area that has been developing recently, maybe because we're living on the success of these generative models, these large language models, and their capabilities, and maybe we feel a little bit cocky now. We think, okay, maybe we can scale this to understand more of our data, and they could become some kind of science assistant. And the truth is that I think it is going to happen. I think it's the future. But there are still a lot of different hurdles that we have to overcome.
And I think probably the first one is that we still don't have a model that can understand textbooks or diagrams, equations or plots, even the things that you showed in your talk, at the level that we understand them, maybe because large language models, for example, at the moment just rely on text, right? And a lot of this research could be visual, or even about understanding data. Of course, you can say, yeah, I trained a neural network model, but there's always a goal in the training, and it's never general, so we don't expect something new. But an AI science assistant, for example, would have to create something more, something that we haven't thought about, right? So I think that's really a challenge. And since I'm personally bad with equations, maybe that's why I'm in machine learning, I also hope that they'll be able to write them for me. Anyone else with experience using AI research assistants, and your thoughts on that? I can chime in quickly. I think they accelerate our research. And again, you need the fundamentals, you need the expertise. But in my group, we run molecular dynamics, and this is an interesting story. It's a code called LAMMPS, from Sandia National Laboratories, that a lot of people use, but it's molecular dynamics; it's not Python. About a year back, a few of my students decided to ask GPT to write inputs to run the MD simulations, basically taking what you write in the methods section of your paper and asking GPT to write the input scripts to run the simulations. And we ended up writing a paper documenting how well it does. For simple tasks, it gets it right 100% of the time, and as the tasks get more and more complicated, it gets you 90% there. So you still need the expertise. But we spend months training our students on this completely arbitrary input language that the creators of LAMMPS designed. You need to know the commands and the order in which the commands need to be entered.
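To make the anecdote concrete: a LAMMPS input script is an ordered sequence of commands, and part of what the students were offloading to GPT is exactly this ordering. The sketch below is a hypothetical helper, not the paper's actual code; the command names follow the standard Lennard-Jones melt example that ships with LAMMPS, with a few parameters exposed.

```python
def build_lammps_input(temperature=3.0, cutoff=2.5, steps=1000):
    """Assemble a minimal Lennard-Jones melt input script as a string."""
    commands = [
        "units lj",
        "atom_style atomic",
        "lattice fcc 0.8442",
        "region box block 0 10 0 10 0 10",
        "create_box 1 box",            # the box must exist before the atoms
        "create_atoms 1 box",
        "mass 1 1.0",
        f"velocity all create {temperature} 87287",
        f"pair_style lj/cut {cutoff}",
        f"pair_coeff 1 1 1.0 1.0 {cutoff}",
        "fix 1 all nve",
        f"run {steps}",
    ]
    return "\n".join(commands)

script = build_lammps_input()
print(script.splitlines()[0])  # units lj
```

None of this encodes any physics; it encodes the conventions of one simulator's input language, which is precisely the kind of knowledge that can be offloaded.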
And that doesn't mean you know molecular dynamics or the physics; you just need to know how to set up the simulation. So I think in that sense it is very productive. It can save us a lot of time. We even went back to a paper that I wrote as a grad student, and I'm old enough that LAMMPS didn't exist back then; we wrote our own codes. I cut and pasted from my paper, and GPT could actually set up, in LAMMPS, the simulation from one of my papers as a PhD student. And I can tell you it's easy, because I did it: I cut and pasted it into GPT and ran it. So I think there's a lot of potential there. So I guess we switch from... Do you want to say something? Well, I was just going to give a slightly different viewpoint. In answering your question, it's important to ask: are these menial research tasks that just have to be done, where there's something to be said for some kind of automation that accelerates them? Or are these research tasks where, yes, they're tasks on the path to achieving research, but it's the experience of doing them that is the educational value? I often remark: yes, we go around the world, we talk about our research, we get awards for our research, but my output is not research. My output is people. You ask me as an academic, what is my product? My product is people. They're the students I educate. And I think back and realize I'm probably quite old-fashioned, but teaching the controls class at MIT, no calculators in exams. I made the students do things by hand that even then could very easily be done by computers. Why? Because I wanted them to deeply understand what a pole-zero plot means and how to manipulate it, not because they were going to need to be able to do that in their real life.
And so I am a little fearful, and I think it's important to separate out the menial, low-value tasks, where automation will accelerate research and improve the experience in the lab, from the perhaps menial but really essential tasks for building that intuition as part of the educational experience. And if AI starts to replace research assistants in the latter, I really start to get worried. Yeah. So you guys are safe for now; you get to keep your menial tasks. Yeah. So I think, somehow, do you want to... Sure. I couldn't agree more, also when it comes to democratizing the research experience. The things that cause the most friction are oftentimes the ones that are foremost in our mind, but those aren't actually what we consider research. The core question is: what is the human adding to this process? Oftentimes it isn't those laborious things. So this can actually really clarify the role of the scientist and engineer in these tasks. And then when it comes to democratizing, you can see that in involving undergraduates. In the training of a new researcher, there's often a ton of friction before they can ever do a single meaningful thing, and I think you can shorten that time dramatically with some of these tools. That's just another example of getting them closer to research faster. It's not that you've gotten rid of the human being; it's that you've compressed a lot of the friction out of the process using these tools. And that's the challenge in education: what are the skills our students need to focus on, the ones that are uniquely human, or that we think are uniquely human, versus the more mundane tasks that might not be as needed when we have better automation? I learned how to take square roots by hand. I don't use it. It's not a useful skill currently. I'm sure they don't even know what you're talking about, some of the young ones. By hand, they calculate a square root, and not of nine. You mean with a pencil? With a pencil.
So, moving on from AI research assistants: we've also talked about large language models and the models used in speech recognition and image recognition. How do you think these techniques can be used, and exploited, in engineering and in science? And I'm going to start with you. Something that's very interesting about what has happened in the past year is that we've developed these tools and we still are not sure what they're capable of. That's really fascinating; I don't actually know of any analogy here. We've created this object, and the only way to figure out what it can do is by prodding it and experimenting with it. So I don't actually know the answer to that question, because we don't know what the models we already have today are capable of. And then, as we were talking about earlier, as we start to get more information sources into these things, like actual images, not just sort of compressed image-to-text translations of them, or actual tabular engineering data, I don't think we really know what they're going to be capable of. An example we're working on in my group: we've been experimenting with whether or not these things can generate good hypotheses. And it comes down to coaching it: you try to coach it and give it a description of what a good hypothesis looks like, and you have to come up with a pretty formulaic definition of what a hypothesis is in its relationship to a data set. And I've been quite shocked by how good it is at suggesting hypotheses. When I coach it and say this hypothesis has to be testable, and give it instructions like it has to be testable in a relatively small research laboratory, asking what additional data set could be generated to test it, I've been really surprised at its ability to suggest quite practical things that I would expect from a colleague, for instance.
So that's one example that I think just hints at what might be possible with some of these in the context of research. Eugenio, do you want to add? Yeah, I mean, I think it's a very fascinating field. And you're right that we still don't know what all these things can do; we're still trying to identify all the capabilities, and it'll probably take a while. To me, it turns out to be a pretty good model of a human brain in a way, even though it's just trained on text. When you interact with another person, we also have to find out what they know and what they can do, and so we have to somehow use the same kinds of tests, I think. But the real question to me is how we go from this system to something that can really generate, can conceptualize the essence of a problem, the way that we do it, or engineering students do it. Recently I've seen video from Sora, this new generative AI that generates video, and it was, for example, simulating water in the video, and it looks really realistic. And the question is, okay, what does it actually know about the physics of water? In theory, it's only seen videos, examples of this, right? But in a way, that's similar to how maybe our brain works. If you take a non-engineering person, we can visualize water in our mind and how it moves. We maybe cannot replicate what every single particle is doing, but at least we have a physical perception of it. I think it's just interesting to find out how much these models really know about physics. I don't have much to add, just quickly, to the students: you know, AlphaZero and these types of programs can beat the grandmasters at chess. But a combination of a computer program and a human can beat the best computer programs. And that, to me, is what's exciting about our area.
So what type of combination of human intelligence and these tools, which are tools, would lead us to push our fields forward fastest? Okay. So, in each of your areas, my question now is: what is the most exciting application of machine learning that you foresee, or maybe are working on right now? And I'm going to start with you this time. Talking about machine learning, I think one of the most exciting things is the ability to do field-to-field mapping. We run these very large-scale molecular dynamics simulations, highly nonlinear, and the simulations take the amounts of time Karen was talking about, so we can do very few. We're in materials science, and if you know anything about materials science, we care about microstructure and defects, so things that localize strain, and you get a very nonlinear response when you apply an insult to a material that has microstructure and defects. So we spend this enormous amount of time generating idealized defects, and we can understand, okay, if I have this boundary, I can understand the stress around it or the temperature around it, or if I have a spherical void; I'm a recovering physicist, so I like spherical voids and circles and things like that. But if I generate a microstructure that has very complex interacting defects, our brains are not very good at mapping that to the stress field or the temperature field. Some of the authors of this work are actually here, but we're able to map the initial microstructure to the final fields that we care about with a type of neural network. And we tested it, and it seems to be learning some of the mechanisms that lead to the localization of stress or temperature or energy, in ways that our brains are just not very good at, looking at things in three dimensions and complex structures.
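A toy version of this field-to-field mapping, shrunk to a single hand-written convolution: a real model would be a trained deep network on 3-D microstructures, and the binary "microstructure" and averaging kernel below are invented purely for illustration.

```python
import numpy as np

def conv2d(field, kernel):
    """Valid 2-D convolution: slide the kernel over the input field."""
    kh, kw = kernel.shape
    h, w = field.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "microstructure": a 16x16 binary image with one square defect.
micro = np.zeros((16, 16))
micro[6:10, 6:10] = 1.0

# In a trained network this filter would be learned from simulation data;
# here it is just a hypothetical 3x3 smoothing kernel.
kernel = np.ones((3, 3)) / 9.0
stress_like = conv2d(micro, kernel)  # one input field mapped to one output field
print(stress_like.shape)  # (14, 14)
```

On a laptop this surrogate evaluation is effectively instant, which is the point: screening many microstructures cheaply before spending high-fidelity simulation time on the few that matter.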
So that allows us to now think about, okay, can I optimize microstructures in ways that were not possible before, because I've reduced the time it takes to generate one of these answers by many, many orders of magnitude. And even if the answer is not perfect, I can go back and reduce the number of cases on which I need to run a high-fidelity simulation; doing it brute force with high-fidelity simulations would be impossible. Do you want to go next? The most exciting thing? Yeah, the most exciting thing. So, you know, I sort of feel obliged as an aerospace engineer to give an aerospace answer, but actually, I think the most exciting thing I see going on is the revolution in medicine, recognizing that when it comes to biological systems, the maturity of understanding of the phenomena and of the mathematical models, like the ones I talked about today in engineering, is just not there in medicine. But at the same time, there's such a revolution going on in sensing and data, even though the data that can be collected is indirect, imperfect, sparse, noisy, all these things. Putting it all together, including what machine learning can bring, and recognizing that it's unlike engineering, where you're creating something from nothing, and if that something is something people are going to fly on or drive on, it has to be essentially perfect, with failure probabilities of ten to the minus nine. In medicine, people are getting sick every day, and thinking about how computing can lead to better outcomes and replace basic trial and error and the general population experience base, to me this is just so exciting, and I think that revolution is just beginning today. Eugenio? Yes, I honestly find very exciting the ability to create some kind of artificial intelligence that is fairly general.
For example, all the things that you showed in your talk: it takes an entire career for one of us, and not all of us can do it, to really understand everything you showed, right? For me the interesting part is how we create some algorithm that can get closer and closer to that level of understanding materials, or creating new instruments, even in the medical field that you mentioned. Sometimes, like you said, we still lack instruments to measure the things we want, even in medicine, because there are so many scales, like in materials. And I feel like maybe our brain is too small to comprehend this enormous amount of data, right? If we could get an assistant or something that can expand it, or just help us and say, well, you know, look at all this, there's a trend, that to me is exciting. There are a lot of problems, honestly; I agree with Karen at the high level. I think there's a whole set of problems, speaking as a recovering physicist as well, that have an irreducible complexity to them, where we don't have any good approximations, and we know there are so many layers of complexity that we're not going to have good approximations, and finally, with empirical models, we can get some purchase on them. And you can do a lot of good with those models out in the world. So in medical applications especially, where at the end of the day you care about whether this thing is reducing error rates on diagnoses, right? I mean, there are a lot of domains where you can have massive impact by embracing the complexity. I would also say that in my domain, there are a lot of problems where there's almost a phase change with size: problems where we sort of know that we have enough information going into them, but we can't hold all that information in a human brain.
So a lot of design problems are like this, where you say, I need to talk to this expert medicinal chemist, because they've got all of this lore in their heads: they sort of know what's going to be soluble, they kind of know, oh no, that's going to cause problems during scale-up. They've got all of this stuff baked into their experience over years and years, and we never had a way of getting that experience out of their brains. But I'm actually optimistic that a lot of what we have historically called lore and heuristics and experience, we're going to get better and better at putting into these big systems. I think it's going to democratize and maximize the utilization of that knowledge, which currently is all bundled up in individual expertise. Thank you. You've all shared what you are excited about, so now I want to see if you can also share what you're worried about with all these new technologies. I can start. I think the main thing is that we, meaning the public academic institutions, are not controlling this technology and are not driving this technology. These tools that we're using are being driven by corporations, and they're the ones that have the facilities, the data, and the personnel to push the field forward. We are using the tools that they're driving, and of course a lot of the folks here are doing excellent work and contributing in our own domains, but we're not in the driver's seat, and so we are dependent on corporations and where they decide to go. These LLMs basically came out of nowhere. I'm really worried that the race to embrace machine learning and AI, to go faster and show great results for problems we could never solve before, is going to come at the expense of investing in a portfolio of approaches, and also at the expense of the collapse of the academic enterprise. I mean, you heard some criticism in my talk.
It is not okay to have models that have many more hyperparameters than you have data points, and to tune those hyperparameters to get an answer that looks good and then publish it. It is not okay, and that is what is happening today. It is not okay for so many reasons, and at the very fundamental level, it's not okay mathematically. If you have an underdetermined system, you can get whatever answer you want. You tell me what answer you want, and I'll give it to you, and it will satisfy the equations. It is not okay, and somehow, you know, you can't blame the researchers who are doing this, because the academic system is rewarding that behavior through publications, funding, and recognition. This is not the fault of AI and machine learning, but I think it's tied to a system that is almost out of control, and I don't know what the answer is, but I feel like we as academics all have to take a step back and say this is not okay, and put a stop to it, because if not, in ten years I really worry about where we're going to be. We are building a building on a foundation of sand. Yes, one of the worries that I have, honestly, is that what we're producing here, more and more technology and tools, is slowly eroding the amount of work that humans can do.
I feel that, in a way, and we've seen it over the last century or so. But even so, now we have tools that can write for you, that can write code for you, so slowly we'll have less and less work to do, maybe. And okay, part of it, I feel, is what we want, because maybe most of us are lazy, and I, for example, would like to write the last program that I'll ever write and then not have to deal with it. But then I wonder, okay, what am I going to do with all that time? How will I feel fulfilled? And by the way, we already see this now, I feel, over the last hundred years: because we're not fighting every day for food, we don't have to use all our time to get food, most of us can sit at home, watch TV, use computers, and do academics. But sometimes I feel like, because we have all these tools, we also became less social. So I feel like we're already going away from our natural course, and if we also take away our jobs, I don't know what's going to happen with us. So we'll see. Don't tell the students that they can work less. Yeah, I think all of us are concerned more about these sorts of cultural problems than some kind of doomsday scenario; I think those are overblown, but the cultural problems are probably underappreciated. So there's this asleep-at-the-wheel phenomenon, right? People who are using these things actually find that on problems where experts would ordinarily be successful, when they're using an LLM as an assistant, they'll actually do worse on those problems, because they're sort of asleep at the wheel; they're trusting it in a context where they would have ordinarily been using their brains. And you can extrapolate that out into a whole host of scenarios. So another image that's maybe useful: through all of our hard work and education, we've built up the infrastructure that
made these systems possible, and then we might dismantle all of the scaffolding and everything that made that possible; we might actually dismantle it by becoming too reliant on them. So I don't know how we grapple with that. It's very early days still, but I think these are very serious problems, especially for those of us in academia who are concerned about producing people, the next generation of humans. So I think we have ten minutes until we finish, so I'm going to ask the last question, and you have a couple of minutes each. The first question I asked was how we need to train the students who are going to do the next thing. The question now is: what do we need to learn to be able to train the students on that? Because most of us are not prepared to be teaching everything that we mentioned, right? Well, you mentioned; I didn't mention anything. So how do we train ourselves to do that? I think we're working it out as we go, so I'll just jump in here. I will admit my ignorance. In our group, for instance, we have tutorials every week, and basically my current plan is to keep these going year-round; several of my students are out here. It's basically because I feel like the landscape is changing so quickly, and I'm constantly learning things that I think they need to know, and I am having them teach me things that I think we all need to know. So we have these going year-round, and you know, the idea that you're done being educated once you're done with your coursework, I think we all know that's a little too simplistic. So we're working it out as we go. Yeah, I definitely second that. We have to keep training and learning from the other side, the other people that we work with, quite a bit. The landscape is changing fast. I guess I would
emphasize the importance of education across boundaries and of bringing interdisciplinary thinking into education, because the reality is that one collection of faculty is not, and never will be, equipped to provide the students with everything they need to know, especially in a changing landscape. If you think about an engineering undergraduate degree, there's some math content in there; some universities have the math department teaching the math classes, other universities have the engineers teaching the math classes, and the same could be said for computing classes and all the others. I've come to appreciate just how important it is for students to be exposed to faculty across a range of different departments and cultures, because if I as an engineer were to teach math, I would teach it very differently from a mathematician. And while that can be seen as difficult or negative for the students, it's a huge benefit when we start to grapple with these issues, because seeing the way other fields think, the language they use, the way they approach problems, the culture they have towards things, is so incredibly important. So your question was how we prepare ourselves; I think the answer is that we have to be better team members with our colleagues all across campus. And by the way, not just on campus: the experiential learning that takes place through internships, we as faculty have to find a way to promote that even more than we have in the past, so that students are going out into the real world and getting exposed to some of the things that we are just not equipped to teach. So I think we need more of that; we've always needed it, but we need more of it than we ever have before. Yeah, other than retiring to a beach somewhere, which is an option, it would be, I think, striking the right balance. Again, I think we need to continue to teach the fundamentals; that's
going to make the difference, and the other things are tools. Energy conservation, momentum conservation, as engineers we need to understand those things. At the same time, in our research we all experience this, in part because we have talented grad students who are curious and explore things and have ideas, and that's how we educate ourselves to a very large degree. Translating that into the classroom is a challenge; it requires thought. When we teach a degree, it's not just a random collection of knowledge, it's a body of work that makes sense together, so it's not like you can introduce machine learning or LLMs willy-nilly. At the same time, our curriculum needs to evolve to make sure that our students have these tools and also know how to use them appropriately, and when not to use them: what are the right problems for a neural network, and what are the wrong problems for a neural network. So it's a balancing act, and it's challenging to try to cram more and more into four years, because we can't let the fundamentals go. So it's not easy; it's about learning, but it's also about how we change what we teach and how we teach. I think there's an opportunity in labs, where a lot of the time we use very antiquated ways of handling data, and maybe that's a place to incorporate more modern tools.

Okay, so I think we have just two minutes, so if you would like to add anything, feel free, or maybe we can take one question from the audience. Yes?

So, you know, throughout history, whenever we've developed tools, every single tool was developed with a specific purpose in mind, with a specific problem it was meant to solve: a hammer was meant to hit things, screwdrivers to drive screws, whatever. With artificial intelligence right now, I'm not really sure what problem these large general models are attempting to solve, and it's funny to think
about it, because when we think about artificial intelligence, we use human intelligence as a benchmark, and yet we don't expect humans to be good across the board in the way we expect general artificial intelligence to be. We're testing it not only on how much legal terminology it knows; we're also testing it on its mathematical knowledge, its coding abilities, everything, whereas humans specialize in certain fields, and we each have our own problem that we're trying to become experts at solving. So my question basically is: is there actually a specific problem that these large general models are trying to solve? And if not, should we be trying to focus them? Instead of trying to make this one-size-fits-all model that can do absolutely everything and become the ultimate co-pilot, should every single person have individual models designed to assist just them, in just one field, to fit just one problem?

I'll just share one quick observation here. Historically there was the idea that you had to specialize, and that a specialized model was always going to outcompete a broad model, but on these benchmarks it seems that general knowledge tends to help you on particular tasks. That's one of the really surprising things about these benchmarks: these really broad models that weren't trained on specific tasks tend to do very well at specific tasks, in a zero-shot manner or after being given a small number of examples. And there is an analogy with humans. I would actually push back on what you said about not expecting one human to do a lot, because what do we have? We have a very general curriculum for human beings up until they specialize, and it appears that there is a huge benefit from general training for these models before you then go and fine-tune them on specific tasks. So I think we're actually, for the first time, training models analogous to how
we actually do train humans, which is with a very broad knowledge base before specialization. I think that's what we're seeing right now.

Do you want to chime in? Or another question from the audience? Yes, go ahead.

Thank you so much for all your insights. I had one question about incorporating fundamental knowledge into artificial intelligence, because one of the main goals is to train neural networks that not only predict, but are also explainable and can maybe help us advance science and fundamental knowledge. So I was wondering how we could use machine learning to advance fundamental science and research, and how it could be a synergistic tool for incorporating ideas that are abstract but could perhaps be quantified for these tools.

I think we saw a great example today of interpretable machine learning: you don't always have to use a neural network. On the question you're asking, there's a huge amount of research going into this, and there are many parts. One part is how you embed the physics. Maybe the simplest of the approaches we've seen is to include it weakly in the loss function, but there's a lot of work looking at how you impose the physics in a more fundamental way inside the models, for example through symmetries or invariances. I showed the physics through the lens of something I can write down as a partial differential equation, but we know there are other ways in which physical principles manifest, for example as symmetries and invariances, and I know there's a lot of research into how you could embed that, whether in a neural network representation or something else. So I think that's part of it. Interpretability is another part of it, and I do think this question of what's the purpose is very important, because what does interpretability mean? Well, it depends:
is there a human decision maker? Is the human decision maker making an A/B decision, or something more complex? I don't think it will be one-size-fits-all. So it's a really important set of questions, and I think it really represents the frontier of where, at least, the scientific community is engaging with AI right now.

Given the questions that have been asked, I think we are over time, but I will take one more question if someone really wants to ask one. Yeah, one more.

One thing that I think is interesting is that usually technologies come out, and then there's a delay before we adopt legislation around them; for example, cars existed long before we had laws requiring seatbelts. Maybe we can put it in the specific context of science: when it comes to publications and validating your models, what are the rules that don't exist yet, in terms of, you know, policing people who are publishing papers using machine learning in a very arbitrary way, where it might lead us to trouble down the line?

Yeah, maybe I can take that. I think this is an important question to ask with regard to publications in particular. I think it's a little unfortunate that the timing of this wave of machine learning and AI came at the same time that the publishing model for academics was being completely overturned, with the rise of the for-profit publishers and pay-to-publish, so it was sort of the perfect storm. For publications specifically, there are other issues, but you could ask that question by looking at other forms of computational modeling and simulation that have been used and have had legislation and certification around them and their use. There are a lot of examples of that in engineering, in the nuclear engineering world and in aerospace, so I think there are
examples that we could, and should, look to.

Just quickly: I think we need to rethink publication. The outcome of this type of research is the model and the data; the paper should be secondary. But we've put it upside down, because the incentives are upside down: academics are after citations and h-index, and there's no h-index for sharing a model that people actually use, or a model that people can actually train. Reviewing a paper is extremely hard, beyond the basic types of checks you can do by reading it, if you don't have easy access to the model and to the data. So to me, the publication has to be the model: it has to be open, the data needs to be there so anyone can try it, and it needs to be reusable, so people can try it and see whether, if you use it for anything outside of what it was trained on, it's a complete disaster. Publication hasn't changed. I was talking to my group about this: the first paper considered a scientific paper was published in England in 1666, and you can go and look at it, and it looks exactly like the papers we write today, except now we have color titles; that's the only difference. That seems silly to me.

Okay, so as I said, we are over time, so let me thank all the panelists here. Thank you very much.
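The "include the physics weakly in the loss function" idea raised during the Q&A can be made concrete with a small sketch: a data-misfit term plus a penalty on the residual of a governing equation at collocation points. Everything below is an illustrative assumption of my own, not anything from the panel: the toy ODE u'' + u = 0, the two-parameter model u(x) = a*sin(x) + b*x (whose b*x term violates the ODE), and the penalty weight `lam`.

```python
import numpy as np

def physics_residual(a, b, x):
    # For u = a*sin(x) + b*x we have u'' = -a*sin(x), so the residual
    # u'' + u reduces to b*x: only the b*x term violates the ODE.
    return b * x

def composite_loss(a, b, x_data, u_data, x_coll, lam=1.0):
    u_model = a * np.sin(x_data) + b * x_data
    data_loss = np.mean((u_model - u_data) ** 2)              # fit the samples
    phys_loss = np.mean(physics_residual(a, b, x_coll) ** 2)  # soft constraint
    return data_loss + lam * phys_loss

rng = np.random.default_rng(0)
x_data = rng.uniform(0.0, np.pi, 5)                 # sparse, noisy observations
u_data = np.sin(x_data) + 0.05 * rng.normal(size=5)
x_coll = np.linspace(0.0, np.pi, 50)                # dense collocation grid

# Grid search over b with a = 1 fixed: the physics penalty drives b toward 0
# even when the noisy data alone would prefer a nonzero slope.
candidates = np.linspace(-1.0, 1.0, 201)
best_b = min(candidates, key=lambda b: composite_loss(1.0, b, x_data, u_data, x_coll))
print(f"b selected with weak physics penalty: {best_b:+.3f}")
```

With a larger `lam` the fit is pushed harder toward solutions of the ODE; with `lam = 0` it reduces to ordinary least squares on the noisy samples, which is exactly why this style of constraint is called "weak": the physics enters only as a penalty, not as a hard restriction on the model class.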