So welcome, everybody, to the panel. Thank you so much for the very inspiring talk, and I think it will start a lot of conversations, not just in this hour, but ones that will continue at Purdue. So for those who are just joining, a very brief introduction of our Purdue Engineering distinguished lecture speaker, Dean Yannis Yortsos. He is the dean of the Viterbi School of Engineering and Zohrab Kaprielian Chair in Engineering at the University of Southern California. Dr. Yortsos holds a PhD in chemical engineering from Caltech and also a degree in chemical engineering from the National Technical University of Athens. Among his recognitions very recently, at the same time as becoming a recipient of the Gordon Prize, I understand that you also received a prize in filmmaking: you were recognized for a documentary that you executive produced, I think. Not filmmaking, I just gave the money for the thing to happen. That actually gives a really good example of how we can bridge the societal forces in engineering: the documentary, called Lives, Not Grades, received an LA Area Emmy in 2022. And then I'd like to introduce our colleagues, the panelists. Siva Raman is an assistant professor in industrial engineering. Siva joined Purdue in 2022 from Texas A&M, where she was at the Research Institute for Foundations of Interdisciplinary Data Science. She has a PhD in electrical engineering from the University of Notre Dame, and master's and undergraduate degrees, also in electrical engineering, from the Indian Institute of Science and the PES Institute of Technology. So we're bringing different fields of engineering to address those very big questions that you have highlighted.
Siva's research interests are at the intersection of control theory and machine learning for distributed decision making in large-scale cyber-physical human systems, very relevantly, with applications specifically to transportation networks, power grids, and interdependent infrastructures. So thank you, Siva, for participating in the panel. Next, David Bernal is an assistant professor of chemical engineering who also joined Purdue recently, in 2023, coming from the Research Institute for Advanced Computer Science at USRA, where he was jointly appointed at the Quantum Artificial Intelligence Laboratory at NASA. David received his PhD in chemical engineering at Carnegie Mellon and baccalaureate degrees in physics and chemical engineering from the Universidad de los Andes in Colombia. David's research interests are in optimization software and theory, quantum computing, and solution methods for problems in combinatorial optimization, chemistry, and chemical and process systems engineering. Thank you, David, for joining the panel. And Can Li is an assistant professor of chemical engineering who joined Purdue in 2022 from Polytechnique Montréal, after getting a PhD in chemical engineering, also from Carnegie Mellon, and a baccalaureate degree in chemical engineering from Tsinghua University. Can's research interests are in algorithms and software for optimization under uncertainty, machine learning for discrete global optimization, and applications to sustainable energy system design and operation. So thank you all for joining this panel discussion. Maybe we'll start with a couple of opening questions, building on what you have described of trust and purpose as really a framing for future engineers.
Maybe you can start, and then the panel can share how we see ourselves contributing to that engineering plus: what does this concept mean to each panelist, and how can it be implemented in engineering education and practice? So maybe, how do your research and your perspectives contribute to that engineering with purpose? OK, I'll repeat. I think that we are facing, as a global society, some significant problems. Part of them has come from the fact that technology has moved so fast. And as a result, clearly there are issues; I mean, we talk about climate, that's an important part of it, but there are other things that have come out of inequality, in terms of the way people live in different parts of the country or the world. At the same time, we have to agree that many good things have happened, and I'm not a pessimist about that. The standard of living has improved across the world. Extreme poverty has been reduced dramatically. Life expectancy has increased. Human suffering has decreased in many different ways. But at the same time, there is an undercurrent of new issues that have come up, and I think that the world is also going through a deficit of trust. I don't know if this has come because technology has allowed people to express polarized opinions, or maybe because disinformation and misinformation contribute to it. Nowadays, and maybe in the foreseeable future, one has to ask the question: is this really real, or is it not real? So this trustworthiness has to be built up and understood. There have to be some arbiters of this truth, and I don't think we can trust today's politicians to tell us that truth, because they are driven by other things. Maybe there will be some person that comes up whom people will agree is very trustworthy.
So professions like engineering: the conversation about engineering has not changed dramatically. Many efforts have been made to change the conversation, and they have stumbled a little bit. One of the reasons, I think, is that we're not asserting ourselves. Because if you think about the technical competence that exists, it's unparalleled right now. And when you talk about, let's say, AI, we leave it up to others to explain it. We are the people who can explain what AI is. We know how things work. We know what a neural network is. We know how the computers play with billions of parameters to fit something: there's an optimization process there, there's curve fitting in some way. And then the concepts of artificial intelligence having a kind of consciousness and everything, these are important things. We have to step up and advocate on those. At the same time, people need to trust that what we're saying is right. So how do you develop this trust? I think it's a matter of, first of all, getting our students to understand that this is part of their responsibility. How do you develop character? Character is taught in medical schools, and it is taught, let's say, in religion. Well, maybe we can borrow from what's going on in medical schools and ask, how do they develop character in medical schools? You have to make sure you don't kill people; there's the Hippocratic oath. Maybe there's an oath that is needed for us to be able to get there, and I think that may be something to consider. For me, coming from a background in systems and control theory, the systems viewpoint is something that underlies that whole concept, and so does studying systems that are inherently complex and nonlinear.
And if you start thinking about the implications of all that, whether it's a system like the grid or transportation or society as a whole, it's a very humbling thing to study systems. Because you start realizing that what you're doing is one small piece of something that has secondary effects, tertiary effects, so many things that are interlinked. One of the things that I try to put into my research, as well as convey to students, is the scope and magnitude of the interactions in the systems that we are addressing. And I believe that should make a person humble and a bit more centered when they start thinking about their role in this technological society and transition. Coming to the issue of trust in particular, I feel that it is our duty, more so today than ever, to educate the public in an aggressive and relentless manner. Because there is a lot of mysticism, for example, around AI. If we look at it today, people say AI is going to become conscious, AI is going to make humanity extinct. We, as engineers, see this as training a neural network: this is an optimization problem, it is doing a certain thing. So putting out the right version, the more nuanced and correct version of the truth, whatever effort it takes, we have to keep doing that repeatedly, to remove some of this misinformation and mysticism that's coming about. And I feel that would reduce the polarization that Dr. Yortsos talked about. Because once you understand the complexity of the system and the nuances involved, there can be no scope for binaries in this. These are not simple yes-or-no, for-or-against concepts; they're a lot more complicated. And it is our duty to educate the public in this context. Okay, there you go. Well, I'll go ahead.
Many of the things that Dean Yortsos mentioned, I fully agree with. I would say that many of those exponential advances in society that we have seen have been driven by engineers, and certainly this is a result of specialization in our career. We have become experts in a very particular, and some might call it tiny, corner of knowledge, and some of the challenges that we are addressing, or that we need to address rather quickly, require more of a systemic approach. I think that a university is the perfect place to take that step back and think a little bit outside of the box. You're surrounded by a lot of talented people who are thinking about other parts of these massive problems. So we keep talking about multidisciplinary efforts, this engineering plus X; I'm glad that you use X, because you can put anything into X, right? Including other engineers. So a clear message to engineers, including my students who are in the audience, is that we need to address some of these challenges by using some of the knowledge that our colleagues in other buildings, in other labs, in our buildings, in our departments, in other branches of knowledge are developing. Now addressing the second part of the question, which is how do we have society trust us more? I think that part of this mistrust comes from a feeling that we're detached from reality. There is the ivory tower concept that is all over the place: that we are so engulfed in our corner of the world that we don't care about what happens to the rest. Well, it's on us to tackle these things. To tackle these challenges, we need to go out there, as you mentioned. We need to work on things that matter, work on things that might have an effect on others' lives as soon as possible. So the challenge is out there; I think we know what to do.
It will take some effort, but I guess that's why we are here. So, yeah. I just want to echo what the other panelists and Dr. Yortsos have already mentioned. Maybe I will talk about two major aspects: one is generative AI's impact, the other is sustainability. These happen to be the two areas I have been focusing on since I started my career at Purdue. On the first aspect, as Dr. Yortsos already mentioned, generative AI and so forth are accelerating at exponential speed, and it starts making us question what technical competence students will need to master these days. I was very shocked, for example, last year: when I made the final exam for my undergrad class, I was surprised that ChatGPT scored essentially 100% on it. So I started questioning: besides technical competence, what does an engineer have to master? I would argue that engineering plus is a very important concept here: besides technical competence, engineers have to have intellectual curiosity and also a sense of societal responsibility. The other thing, particularly important for the chemical engineering discipline, is sustainability. I guess one of the misconceptions the public has about chemical engineering as an occupation is that we make a lot of emissions, right? So it is also our job to communicate to the public that we do not only create emissions; we are going to solve the problem, with techniques like reaction engineering and mass and energy balances and so forth. So in terms of education, we need to train our students such that they are aware of not only the technical stuff but also the societal responsibility, so they can communicate to the public and really make an influence in the real world. Yeah, thank you so much. So it looks like there is an emerging theme of trust and communication, right?
There is a communication piece, and it's actually very timely: we just launched a first class in engineering for public service, and I was looking up a definition of what public service is. Going back to Thomas Jefferson, he defined it as a public trust. And there are fourteen general principles of public service that the executive branch of the US government actually follows, going back to an executive order from, I think, President Bush senior, and they are enshrined in law. This is basically how you build public trust, including never dealing with issues in which you have a personal financial interest, and so on. So maybe one of those communication vehicles would be bringing more tech talent, engineering students, into public service, into government: local, state, federal. As far as I understand, out of the 535 members of the 118th Congress, only a single-digit number are engineers. And it's probably on us, on engineers, that we're not out there communicating with the public and building trust. From what I understand, there are more talk show hosts in Congress than there are engineers. So maybe engineers need to be a little bit more like the talk show hosts or the movie makers. Exactly. Or, well, there's probably a role for everyone in there. So, the second question, speaking of the movie: maybe you can tell us a little bit more about that documentary. And I want to make it a question to all of us as well. The title of the documentary is Lives, Not Grades. In that regard, how do we teach with an emphasis on affecting human lives rather than worrying about the grades? I think there is a quote in the movie by one of the instructors in the class it's based on; he says that this is about impacting people's lives, for real, not writing papers. So in education, both in the undergraduate sense as well as in the engineering PhD, how do we make an emphasis on lives, not the grades or papers?
Sure, I can tell you a little bit about how this came about. We wanted to teach a class that is human-centric and looks at how you solve human-related problems of people in crisis. That was the idea. The class was engineering-oriented, but we also had students from outside engineering who came in. The people in crisis, in this case, were refugees: on the Greek island of Lesvos, which is very close to Turkey, there were a lot of refugees in refugee camps. Because the policy of the European Union with respect to refugees is that they are to be processed in the country where they first arrive, and Greece was the first place of arrival for many of them. Now, the selection of this island had nothing to do with me; I was not involved in choosing Greece, they didn't ask my opinion. However, they came to me and said, would you be willing to support the trip of some of the students there? I asked how much it would cost, they gave me all the numbers, and I said, yes, we're going to do that. So the idea was customer discovery, almost like a startup. How do you create a startup? One of the things you do is customer discovery, to figure out what the needs of these folks are before you create something. The original idea of the students was, let's try to develop some real technological solutions to solve these problems, right? They were gung-ho. A number of the students actually had to get passports to go; it was the first time they would travel outside the country. So they went, and when they visited this refugee camp, they realized that many of the ideas they had were completely off, because the real problems of the people on the ground were very different. So they did two trips. They went there for the customer discovery, then came back and developed technology solutions based on what they saw on the ground.
And then they went back again to implement them. Some of them were very successful. One of the ideas actually became a real startup company called Frontida, which is the Greek word for care; it was actually founded by non-Greek students, but anyway, that's what it's called, and it's still active. The experience was transformative for the students, because none of them had dealt with anything like this in the past. This class, by the way, has been repeated in other situations; the next time we ran it, we tried to address the homelessness problem, which in Los Angeles is very acute. So they came back, the whole thing was filmed, and it became a short documentary that PBS was interested in. They published it; you can actually look it up on the web. And then it was submitted as an independent project to the Los Angeles Area Emmy Awards. And lo and behold, it got an Emmy, and a number of us got Emmys. I got an Emmy; I could say to the president, hey. So that was the whole concept behind it. It showed two things. First, it's easy to go and come up with solutions without understanding what the problem is. The purpose the students had was, how do we help these people? And that's what they went to do. Second, it was very useful for them to understand how they can tailor their engineering competence to solve specific problems. So the purpose was, let's help these folks; and then, how do you get there? I think that helped them develop empathy. They also developed some humility, because they went there with grandiose ideas and all of a sudden realized that the situation was very different from what they had thought. Anyway, that was the whole concept of the class. They did it last year by going to Ukraine, to understand the plight of refugees from Ukraine as well.
This was equally impactful for the students, although it did not result in any cinematographic kudos, because they didn't go forward with that. But this was the concept they followed. It also changes the conversation about what engineers can do and what they're capable of, and I think PBS, seeing it, realized that as well. So that was a good, positive move in that direction. The program was also supported by the Engineering Foundation, so they gave us some money for it, and the rest came from my office. So yeah, that's how it worked. I think the question asks about lives, not grades or papers: approaches to infuse that purpose and trust in engineering education. One of the things is, if you have to convince an 18- or 19-year-old person, and we've all been that person, to look beyond the grades, then the question is to go back and see how the system is set up and how the incentives in the system are set up, right? If I'm hiring a PhD student, or I'm hiring a postdoc, or I'm hiring a faculty member here, am I looking beyond the grades? We have to ask ourselves this critically, right? Am I really looking at what impact they've had? Am I willing to look at their contributions in aspects that go beyond the paper? And I think the honest answer, unfortunately, in the academy today is no, largely. It's changing, but still the answer is largely no. I teach a subject called reinforcement learning, where you train AI models by designing rewards that train them to behave in a certain manner, right? And our rewards today are basically collecting lines on a resume, at whatever stage we look at it. Unless that fundamentally changes, I don't know how far we can influence students to think beyond the paper, think beyond grades, and so on.
So in that context, initiatives like the one that you just spoke about, where you go and talk to real people and see that there are larger issues in society beyond an A on your transcript, really humble a person. And I hope that when some of these people come to positions of leadership, they will take that knowledge with them when they are redesigning this system for the future. Maybe we didn't do it in the past, but maybe that awareness will help our engineers become better future leaders and reform some of these incentive structures that we have put in place right now. Thank you. Well, I am certainly adding something to my watch list, so thank you. This whole discussion makes me reflect, since this is so inspiring, on another part of the pipeline, which is the recruiting level. If this is so inspiring, and we feel so connected with these kinds of ideas, maybe this is a marketing issue when we want to recruit people into engineering. As Can was mentioning before, people might not be aware that instead of us being part of the problem, or even being the problem makers, we can be the problem solvers. How can we communicate all this to our prospective students in order to recruit them into these challenges? If anything, these challenges require hands; we need more people working on these grand challenges. So that is a quick reflection that I had, inspired by this. As for the title, I think life is pretty good, but it could be better, so I sort of disagree with it; but let me watch it first and then I'll build a better opinion. Sure, I will just add to what the panelists have already said. Regarding education, at either the graduate or the undergrad level, I think beyond the papers and the grades, one of the things students benefit a lot from is collaborating with industrial partners. We have some very successful programs at Purdue.
For example, at the undergrad level, we have the so-called VIP program. I had a project there where undergrad students worked together with an oil and gas company, using AI to solve a very challenging problem the company is facing. So the students get a different perspective on how technology works, already at the undergrad level. Similarly, at the graduate level, besides writing papers that are publishable in high-profile journals, I think doing an internship in industry also helps students grow a lot. I personally benefited a lot when I worked in industry for one summer. It's not just seeing how the technology works in practice; even experiencing the bureaucracy and the inertia of industry gives you a perspective on how to make a real impact in a real business. That is something we never learn within the university, so it's something I would encourage students to do, and I guess we have very good programs at Purdue for students to grow in that respect. Thank you. Well, especially since we have experts on optimization under uncertainty, I want to throw out this question. It's often very difficult, if not impossible, to predict the effects of a technology, even when the invention becomes very widespread and the immediate impact is positive. For example, take refrigeration. In the 1920s, the search for safe refrigerants produced CFCs, chlorofluorocarbons; there's the chemical engineering connection. They were apparently safe, non-toxic refrigerants, and refrigeration became very common. As a result of refrigeration, the incidence of some cancers went down significantly; stomach cancers, for example, had been very high. Life expectancy increased. And then, in the 1980s, we realized that CFCs had created the ozone hole, right? So some of these effects are very nonlinear and very difficult to predict, and they actually reach a geological, planetary scale.
On the other hand, sometimes we have an aversion to technology; maybe AI is at that stage. Some of the worries, like AI is going to replace so many jobs, are actually based on the view that there is a constant demand, whereas we may well create larger new value. So how do we deal with this? Especially since we have a quantum technology connection here, and generative AI, and more compute, and other critical emerging technologies, including space: how do we approach this very large uncertainty in the eventual effects of a technology? Well, with respect to the fluorocarbon situation, I think society acted relatively decisively on that. Maybe there was some leadership out there to do this; maybe there was not a lot of vested interest in keeping them; maybe people realized that you can do refrigeration without them. I don't think you can predict in advance the deleterious effects that will happen. But I think the honest, transparent thing is, the moment you realize them, to try to address them and get them handled. So transparency in communication, and an acceptance of this, would be very important. And this is where scientific publication, and its integrity, is very, very important. Nowadays, I happen to be the editor-in-chief of a new journal of the National Academies called PNAS Nexus, which is a little bit of propaganda here: the first new journal of the Academies in 114 years. It's multidisciplinary, so I see all kinds of abstracts, fascinating in many different ways. But at the same time, you see an explosion of publications across many journals, and you ask yourself how rigorous some of these are, whether you can actually rely on them. So there's a little bit of a question mark there.
But this particular case, I think, is actually a good example where people reacted relatively fast. Acid rain is another example; I don't believe we have that as a problem anymore, so we acted on that. Climate is a much less compelling story in terms of how we are acting on it, and the hope is that maybe we'll wake up and do something about it, as a society altogether. With respect to AI, because change is so fast and you can potentially have unintended consequences that are not predictable at this point, I think it's important to have some sort of, not a watchdog, but some informed folks who follow this and can come up with recommendations that can be applied. And to me, AI has another important element to it, which is that the ability to use AI is concentrated in a small number of hands, because it requires tremendous resources, including energy for that matter. The question is, how do we make sure that this is socialized across the world, so that the typical person can also use it and benefit from it in some way? I'm not sure that this is democratizable as easily as other things, but maybe I'll be wrong. I think it helps to have an informed and serious group, of engineers for the most part, who can understand the implications of this and the ethics associated with it; maybe some ethicists should be there too. That would be my recommendation. But at the same time, we have to think about the fact that different societies have different values, and how do we make sure that this is uniformly applied? There has to be some sort of, I don't know, United Nations or equivalent approach to figure this out. It's something to think about, just like what happened with atomic weapons, which in some sense had similar possibilities for unintended consequences, or at least from that perspective.
It is fascinating. I have been on record saying that generative AI has been a triumph of human ingenuity, and I believe it. The fact that we're able to capture this incredible amount of knowledge and have something that brings it into our hands in such a feasible way is incredible. However, by the same principle, we have to think about what other consequences can come up as a result of that. So we do need some serious thinking about all these other things as well, even though hallucinations do happen. It's a new world. Our challenge will be to make sure we don't fall victim to some of the false narratives that are getting a lot of eyeballs, about things that can become weird, and make sure that people do not go off the rails in some way. So, some sanity. AI and quantum, or any other emerging technologies? Well, you said quantum, so I had to speak. But I think in general, my opinion is that fear should not stand in the way of progress. We need to be cautious, but that does not mean that we should stop pursuing these challenges. Although we have this deal that I don't talk about stochastic optimization and he doesn't talk about quantum, I will get into Can's area of expertise. Some of the slides that you showed, with the progression over time, illustrate how unintended consequences can arise from ideas: even if you plan, even if you try to foresee every kind of scenario, something might get out of control. So keeping tight control on this and pruning those unintended consequences, that is a very structured and, I don't know, mathematical way of approaching it, right? Multistage stochastic optimization, we would call it. And this applies, among many technologies, to quantum, right?
So just because some people might be cautious about this new kind of technology, we should pay attention, we should make sure that it doesn't get out of control, but that should not stop us from trying to make advances in those directions. And constantly, you said reinforcement learning, right? We need to constantly be measuring our system and making sure that it stays on track, that we implement those guardrails you were mentioning earlier. That's my take on this. That's an excellent quote: fear should not stand in the way of purpose. And linking back to the virtues, one of the Aristotelian cardinal virtues is courage; so we should continue, with courage, pursuing purpose. So maybe questions from the audience? I wanted to pull on this thread of using feedback to continually monitor a technology, the pruning analogy, which I really liked: regulation becomes the tool that you use to protect against, or correct against, these things. As an engineer, completely logical, makes perfect sense. The problem, in my understanding, seems to be, number one, that the people who are doing the pruning are often not engineers, and there's a huge disconnect there. And now there's a temporal disconnect as well, because the technology is moving so quickly. In fact, I think you get one of two situations. I'll use autonomous vehicles as an example: I would say policymakers have not met their responsibility to make appropriate laws, or uniform laws, across states. And so I think car companies and other companies sort of say, well, I'm just going to keep going then; I'm not going to allow that to stop me from making progress. Which I think you can debate, but is maybe justifiable.
Then the alternative is that you have engineers or scientists or technology makers who don't care and just want to do what they want to do, and you don't have policymakers moving fast enough to keep up. And so that mismatch, to me, is completely crippling our ability to deal with any unintended consequences, regardless of whether people care about them or not. So there's a character and purpose issue: I don't think we're teaching engineers to think about the consequences as much as they could. And then even if we care about it, we're crippled in terms of handling it. I feel like I keep hitting against this wall, and I would love to hear practical thoughts on how to address that.

That's a big question. Maybe the way to look at this is that I'm not sure we do the same thing in other cases. Although I've been changing my mind a little about that with respect to the way current society works, which is a bit dysfunctional if you think about it, it would seem to me that if it were a matter of life and death, at least in the past, let's go back twenty years maybe, you would get the expert to come and make that decision. In other words, the politicians would stay aside and say, here's the expert to do this. Today even that may be a little bit under assault. Some people feel that politicians make decisions about life and death rather than leaving those decisions to the experts. So I think there is something that needs to be done across the board; it's a bigger problem. I agree with you: how do we come up with specific practical steps to address that? It looks to me that this is a bigger problem than simply convincing people. I think there is an undercurrent of political acrimony, perhaps, that doesn't allow this to function normally.
And we need to go back to that, because if you think about it, there is nothing real about it. Yuval Harari, I don't know if you know him, an Israeli historian, very interesting, he wrote Sapiens and other books, has mentioned that science, because it is global and universally true, is not suited for people who look for a tribal story. A lot of people, maybe much of society, is organized in tribes, and they love to hear a tribal story that pertains only to them. Science does not pertain only to a tribe; it pertains to everything. And I think some of this type of disconnect has to be attributed to what we see today in the world. So the question is, how do we get around that, and how do we make sure that people can have their tribal story but put science above it to drive things? Because I think we all believe that science is the truth, well documented. Tribal stories are more attractive, because it's the story a mother tells her child, so there is always that element to it; it has to do with human anthropology to some extent. It's an interesting question, and I don't know how one would address it, but to me that is actually the fundamental thing going on right now. We have created different countries, we have different tribes in some sense, and everybody believes in their own mythology, which sometimes eliminates science from the conversation. Whereas we all believe that science applies globally: there is no difference between how science applies here at Purdue, in West Lafayette, versus in Johannesburg, South Africa. It's the same. So I think there's a bit of that.
So unless we tackle that, maybe we're going to have difficulty getting through this; it's a matter of changing that. I don't know what Harari is saying about this now, but he definitely articulated it a couple of years ago, and when I read it I said, yeah, he's right. Thank you.

So I was wondering if you could speak, since we're at a university and we're talking about public universities, private universities, and research universities, whether there is any advice you might be willing to give to leaders of universities...

To what? Leaders, to the leadership, to help... Presidents, deans?

Presidents, deans. I know we have one over there, but I'm also interested from the... because we also have assistant professors, right, who are going through that. How should university leadership change the way they run universities and the way they address these types of big questions? I'm not sure it is on everybody's mind; I think it depends on the state you're in and all of these things. But how would you advise them? Thank you.

My position is that transparency and accountability have to be practiced pretty much every day. As you know, and we saw it very clearly in the spectacle of the three presidents who appeared there, a lot of university leadership tries to follow a legalistic answer to pretty much everything. I think what you saw there was an attempt to appear as if you were defending yourself in a court of law, where everybody knew this was a court of public opinion and the legal arguments were completely irrelevant, but you could tell about the people behind it. And I've been there myself, because sometimes when you are in a sensitive situation, the lawyers will tell you, no, no, don't do this, don't do this. So I think the legalistic society we live in sometimes forces people to avoid taking a position of moral stature and making the case for it.
And ultimately, I think that's probably the solution, because universities are supposed to lead younger people. If the message we send to younger people is, hey, sometimes don't talk, keep it legalistic, I think that's not positive reinforcement. Higher education is under assault right now, and so we need to make sure we give the right answer by emphasizing the true values that exist, and should exist, in a university system that has always been open to debate, open to opinion, and committed to the pursuit of truth. Universities are very special places for that, and right now I'm a little worried when I see these attacks trying to move them into something that did not exist before. So this requires some courage as well. For the most part, good leaders will show that, and hopefully that's something people have learned to apply across the board. And I think this competence and character thing is important: you need to be competent at that level, and you also have to have the character to defend it. Sometimes it's not easy, because whatever you say is going to be heavily scrutinized and perhaps dissected, with this part taken and put there and that part put over there. But if you're genuine, I think you should be able to withstand that kind of criticism. That's my position. What about you guys? What do you think?

I think a particular virtue is not necessary in itself, but, and you've mentioned this, there needs to be some reflection. We are all engineering PhDs, and yet there is a philosophy part to this: philosophy is based on some system of virtues. Whether we agree with them or not, and whether they have to be universal or not, there needs to be at least some reflection on and understanding of what those systems of virtues are.
I was listening to Jane Goodall the other day, talking about what living with chimpanzees has taught her and about the ability to preserve the natural world as it is. That's actually a very important message coming from someone who devoted her life entirely to this and has no particular bone to pick here. The other thing I wanted to mention is that we live in a world where, and I mentioned this double helix, I think that is going to be the case. The question is how we do this in a way that is productive and also helps us promote a future where humanity and the natural world flourish in many ways. I think that's actually our challenge right now. Some people say that AI, for instance, is actually part of our planet; well, AI is about instructions given to computers, which are made out of silicon or whatever, and these are part of life. So it's not some exotic element that has appeared from outside. Some people will say, philosophically, that it's part of the planet, and I suppose there is something right about that. How do we move to a new state that makes this a harmonious way to live? I think it's an interesting challenge. And I don't think we should let philosophers or whoever else dictate that. We should take their advice and help, but I think we are capable enough to figure out how to do this right. So that's my take.

I think we are out of time. Thank you for making us a little bit more capable of addressing those issues. I'm very optimistic about humanity. These conversations will continue. It was very fruitful. Thank you so much.