Thank you all for coming to today's session on French cooking. No. I don't need to tell any of you about the concept of AI. While it's something we've been talking about for decades, in the last couple of months it has just captured the imagination. I think especially with things like ChatGPT and DALL-E, everyone's head is spinning. They're thinking, oh, wow, I could do this, or, this is what it would mean for my business, or, I'm kind of worried about that. And there are so many different angles: what's coming, what do we need to do as a society, how does this change jobs, how does this change education, what is going to happen, what should regulation be. We're going to talk about some of these, but one of the specific angles is what it means for investment. We are going to talk about what kinds of companies we should be investing in, what the time horizon is, what's coming sooner, what's going to take longer. But also: what role do investors play? There's been a lot of talk about where the guardrails are and how we make sure the technology serves society. Some of that, obviously, certainly here in Europe, means regulation, but investors can play a really big role in saying, this is the kind of thing I would put my money in, and this is the kind of thing I won't. I think there's a ton to talk about. Fortunately, I have an incredible panel of folks who are going to do most of the rest of that talking, and I'm going to try to talk as little as possible. Having just had a brief few moments to catch up with the panel, I think that will be no problem at all. So I'm joined by Jim Breyer, the founder and CEO of Breyer Capital; Hanzade Doğan, the chairwoman of Hepsiburada, which in Turkish means, remind me, "everything is here," which seems very apt for the conversation we're having;
Tom Siebel, who started a little company a while back and has another one these days that I'm sure he'll tell us a little bit about; and Lauren Woodman, the CEO of DataKind. Maybe to start things off: everyone comes at this discussion, and the topics I mentioned, with a slightly different perspective. Briefly talk about how you're coming at this moment. I think we're all in the same moment, but we're all coming to it from a slightly different place. So maybe, if you're comfortable, Lauren, we'll start with you and move down this way. Sure, and thank you. My colleagues, I think you're right, we're going to have a lively discussion up here. And I think it is a moment in time when we need to have a lively discussion. DataKind is a nonprofit organization, and we're really committed to using data science and AI in the service of good: how do we drive social impact from the insights we can derive from data, data science, AI, machine learning, and all of the tools that are becoming available? The challenge is: how do you make sure we don't leave behind organizations that have huge amounts of data that could actually advance and address some of the societal challenges we're facing? How do we make sure that as AI gets developed, we don't lose sight of the fact that there are very good commercial applications, but there are also very good applications in the world of trying to achieve the Sustainable Development Goals, or trying to address poverty, or any of these gaps? And it is a complex situation. Nonprofits, by and large, don't create software tools; it's really not our strength. So we have to rely on investors, we have to rely on the tech sector, and we have to rely on regulators to think through what the implications of that are.
So I'm glad there is this moment where we're having this conversation, because there are lots and lots of issues to resolve, and we need to make sure we do not lose sight of the good we can do. Tom, you're obviously not, I mean, I'm sure you have a few dollars left over to invest, but you're primarily running an AI business these days. Talk about the role that the people who fund companies like yours play in shaping that, and, since you've been early in this space, why you decided to plunk down the biggest investment you have, which is your time, into what you're doing at C3. As a little context, this is my fourth decade in the information technology industry. I'm a computer scientist from the University of Illinois. I did my graduate work in relational database theory and got recruited by a small startup of about 20 people called Oracle Corporation. That turned out to be a pretty good idea. I was ultimately one of the guys who ran that business. I'm frequently asked by the media what my biggest professional failures were, and it just occurs to me what it was. My biggest failure professionally is that we tried to recruit Jim Breyer into Oracle in 1986, when he had just graduated from Harvard Business School, and we failed to do so. Okay, that being said. You're being very kind. That is my biggest professional failure. So I was ultimately one of the guys who ran that business. In 1993, we spun out; we thought about the application of information technology and communication technology to the business processes of sales, marketing, and customer service. This is when I met Don. So we started a company called Siebel Systems, and we invented the CRM market as you know it today. That's about a $120 billion business this year. CRM turned out to be a reasonably good idea. That company was purchased by my friend Larry Ellison in 2006.
Now, as we looked at the world in 2006, we saw a big step function of technology coming online in the form of elastic cloud computing. At that time, AWS might have been doing half a million a year in cloud computing. Add big data and the internet of things, and we thought these collectively would enable this new field called predictive analytics. So we started this company called C3.ai, where we're in the business of building enterprise AI applications for oil and gas, defense and intelligence, utilities, healthcare, telecommunications, manufacturing, what have you. Rough numbers: when I went to work in the information technology business, globally it was about a $200 billion business; today, I think it's around $7 trillion. When we started building enterprise application software in the 80s at places like Oracle and SAP, that turned out to be a pretty good idea; enterprise software is about a $600 billion market today. And it is predicted that the enterprise segment of the AI market will also be a $600 billion market in a few years. So this is the fastest-growing market opportunity I will have seen in my professional career. And Hanzade, in addition to an amazing pink outfit, you also wear multiple hats, both running a company in Turkey and now, thanks to the success of that, investing. Where does AI fit in both of those hats? Sure, thank you. I come from a very different background: first, from a very different part of the world, Turkey. Also, I started my life as an entrepreneur, and I still define myself as an entrepreneur. I want to tell you a little bit about Hepsiburada, the group, to give you context for what I will say about AI. It's a leading e-commerce platform. We are the first and only NASDAQ-listed company out of Turkey.
And from day one, when we started the company, despite the ethos of technology disruption, we said we will use our technology power not to destroy and disrupt and get rid of industries, but instead to be a catalyst, an enabler, for those industries, including retail and banking, to lead their digital transformation. So that is the culture, the value. With that culture and value, I am mostly concerned about, of course, we'll talk about reliability, power concentration, wealth concentration in AI, but let me give you one solid example of what we do in our company, because it comes from our culture. Five years ago, women merchants' share of our GMV was less than 1%. And I said, this has to change. So what did we do? We started a big program called Take Power to Women: we gave incentives, we trained. But it wasn't enough. So we said, okay, we're going to hack our algorithms, and we will positively discriminate in favor of women merchants, so that our buy-box algorithm doesn't stay biased just because the data set is biased. It was always men merchants getting the buy box; if we don't interfere, AI amplifies that. Today, it's almost 7% or 8% of our GMV. It's a big, big jump. So I think companies like ours, who are not big tech, but who have data and use technologies like machine learning and AI, can make a difference, as long as they make sure the model stays true to their values. This is what I wanted to share as a start, so we can continue the conversation. Excellent. Jim, one of the things I wanted to make sure of is that we had a successful investor. And I don't know if you know this, but between the two of us, we have a net worth of about $2.9 billion. You've been investing for a while. How are you thinking about AI? Well, thanks for having me. Thanks for hogging the $2.9 billion. Touché. There was a time, in 2015, when I had been immersed for a couple of decades in information technology and social networking-oriented investing.
And I was on the board of the Harvard Corporation at that point in time, an 11-person board, and we were about to launch a new medical school dean search. Our president, Drew Faust, asked me to be the point person from the corporation on that search. I interviewed about 30 phenomenal potential deans from Hopkins, Stanford, UCLA, MD Anderson, you name it. And I typically asked them: what role will artificial intelligence and computation play in the future of medicine and the future of medical schools? George Daley, the current dean of Harvard Medical School, was the only one who could eloquently say that without AI and computation, there is absolutely no chance that 10 years from now the best doctors, the not-as-good doctors, nurse practitioners, you name it, will be able to do the job that they want to do and that we as patients want them to do. Then I received a call from a friend, Stan Druckenmiller, who had been at Memorial Sloan Kettering for prostate cancer. He said it was a terrible experience. He's safe; it worked. But he said it was just a terrible experience in terms of wait times and data going back and forth. And so I made my first major AI investment in the world of AI meets medicine, spending nine months negotiating a license deal, a royalty deal, intellectual property rights from MSK into a new company. The medical personnel all came from MSK; a couple came from Yale. And my job was then to go out and recruit the very best ex-Alphabet, Microsoft, Apple, Meta 30-year-olds who don't want to just optimize a search engine. Since that time, I've made 12 investments with similar models, and we'll hear more about it today: it starts with exceptional data that's very unique. You can't run good algorithms against data that's not unique. And it just so happens most of the unique healthcare data is not in the insurance companies; it's in our best hospitals, mostly research hospitals, and in the medical schools.
And so I've been on a mission, for certain forms of cancer such as prostate and breast, to do whatever I can with these tools and these companies to eradicate those types of cancers in the next decade. And we have some promising results. That's one of the reasons I love what I do. So I'm going to start with you on this question, Jim, but then I want the rest of you to argue with him, so I don't have to. What is the role of the investor in this? As I framed initially, individuals are going to have a role to play, human rights groups, nonprofits, governments, and regulators. What is the role of investors? Should we be counting on investors to say, this is the kind of company we want? What role do you think you can play, investing in a world that has these AI systems? Obviously, we all want to solve the health challenges and the sustainability challenges. But it also seems a little odd to rely on investors like yourself to make sure that it's fair, to make sure that it's equitable, that there's not bias. That doesn't mean there's not a role to play. Where do you see that role? Well, at the end of the day, the job of a venture capitalist is pretty simple. We try to have the ability to look around corners over the next many years, try to identify extraordinarily large markets, and constantly be meeting many of the best individuals and co-founders in the world. We have one in the second row: Jack Hidary, the founder and CEO of SandboxAQ. Jack and I worked for well over a year spinning out SandboxAQ from Google. In other cases where I've invested, Tom Siebel was the CEO; I shook his hand and said, I'm in. And then there are cases like the spinouts I mentioned, where never before has interdisciplinary communication and understanding been more important, because if you're a great chemist, you don't necessarily know the first thing about machine learning.
And if you're that great machine learning person out of Google, you'd better appreciate the great chemist and biologist. So really the biggest challenges in these startups in and around AI are, one, build the ethical framework from the ground up, don't try to tack it on four or five years later, and, two, have an interdisciplinary group of people who get energized every day about working together. Lauren, in your world, one thing you want from these companies is the software that you need, and not just your organization, all organizations need these companies to deliver it. But on this broader question of what we should be relying on investors for, help us think through what we should realistically expect, and where we have to say, that's not really the role of the investor, and we need some other institution, whether it's regulators or someone else, to step in. I don't think we get to wash our hands just because we aren't primarily responsible. And I want to give Jim an enormous amount of credit, right? As you were talking about this, one of the things I was thinking about is that nonprofits can't necessarily see around that corner; that's not the role we play. We see around different corners, perhaps. So we rely on investors to see around that corner and to think about that. At some level, they are the first line of defense in making good decisions about how these companies get constructed from the very beginning. It is not okay to come back later and say, we built this, it's 75% right, so let's tweak around the margins. We just can't do that. And frankly, things move so quickly these days, and there's already such a gap between the sectors, that the social sector will never be able to catch up if responsible investors aren't an active part of that conversation. Do I think it all sits on their shoulders? Absolutely not. We all have a role to play in this. It's like any other ecosystem.
I can't absolve people who do bad things and just leave it to the police to solve. No, we all have to be good actors in society, and investors are that first line of defense in asking: how do we construct companies, construct technologies, so that as they continue to evolve, they at least start off in the right direction? Hanzade, what do you see your responsibility as? I mean, I am a tech enthusiast and an optimist, and I do believe AI can decrease the marginal cost of services to zero and make the quality better. So we can have more of every service: everyone gets access to legal advice, everyone gets access to diagnostic health, and more, at a lower cost. Or, if we get it wrong, it can be the dystopia of our world. It's that serious, what we are facing. With this context, we have to expect more from investors, more from corporate leaders, more from civil society, and more from consumers. Investors simply shouldn't invest in companies that don't have the right internal control mechanisms to serve their values. Consumers should be more informed, more alert, more demanding about what's happening with their data. Corporate leaders have to be more socially aware; this era of obsession with profit maximization, with shareholder value maximization as the only raison d'être of a company, is gone. And regulators should be more involved, more up to date. The measures will be a combination of many, many things. It can be publicly owned algorithms monitoring private algorithms. It can be social equity ownership of big tech companies. It can be a change in our social contract: the status we attach to employment can change, because many jobs will go away. So it will be a combination of different things, and we should expect something from each part of society.
Tom, a lot of the big tech companies have set up these independent advisory groups for ethical AI; they've created frameworks for responsible AI. So it seems like things are going pretty well, right? I think this topic of ethical AI is very troubling and very important, and it doesn't get nearly enough bandwidth. Let's go to Jim's example. I believe the largest commercial application of AI will be precision medicine, hard stop. We have the capability today to aggregate, say, the genome sequences and the medical care records of the population of, say, the United States into a unified, federated image: hematology, radiology, pharmacology, health history, genome sequences, the works. And then we can build machine learning models that are enormously efficacious, where we combine these systems with devices; many in these populations in the future will be wearing devices, or have embedded devices, that report on pulse, blood chemistry, gut chemistry, brainwaves, what have you. So we'll be able to serve historically underserved constituencies. It is within our grasp today, for a population the size of France, the UK, or the United States, to engage not in early detection but in disease prediction: we can predict with very high levels of precision who is going to be diagnosed with what disease in the next five years. AI-driven, genome-specific medical protocols, AI-assisted medicine, this is huge, right? This is all motherhood and apple pie. And so we will deliver lower-cost, more efficacious healthcare to a healthier community. Now, what could possibly go wrong? Well, let's think about this.
Now, whether we have a single-payer system, and this is a religious issue, I get it, or whether we have the kind of quasi-free-market system we have in the United States, with private enterprise, the idea that the people who control these data are going to act beneficially? I mean, get over that; see Facebook for details. So we will know which of us is going to be diagnosed with a terminal illness in the next three years. Do you want to know that? I'm not sure I do. And how are these people going to use these data? If you think for a minute that they're not going to use these data to ration healthcare, get over it, because they are. They will in the UK, they will in China, and they will in the United States. They're going to decide that Tom's too old for this procedure, or Don's too old for this procedure; it's not in the best interest of the country, so get to the back of the line. So these issues are very, very troubling as they relate to what will be the largest application of AI in healthcare. Right now there's a roughly half-a-billion-pound procurement going out in the UK to revolutionize the NHS, where we have queues of seven million people waiting for elective surgery. So these issues related to priority are very troubling. Anytime we have the intersection of AI and sociology, it goes real bad real fast. Say we want to talk about AI and criminal justice: we have this problem of cultural biases in these data, and they are in these data, okay. Or, we do a lot of work for the Department of Defense, and we do a lot of work for the Secretary of the Army, and the Secretary of the Army was in my office, and he says, Tom, we want to build an HR system for the Department of the Army.
Well, the US Department of the Army is roughly a million and a half people writ large, by the time you get into the reserves and whatnot. And this was a system where we were going to use AI to decide who to promote and who not to. And I said, you know, Mr. Secretary, we can solve this problem. We're going to bust our backs for about six months and bring this application into production. But we're not going to touch it, and my recommendation is you don't touch it either, because due to the bias in the data, no matter what the question is, the answer is going to be: white, male, went to West Point. And, you know, in 2023, this is not going to fly. Then you're going to read about yourselves on the front page of the New York Times, then you're going to get dragged before Congress to testify, and I'm not going with you. So these ethical issues are very, very troubling. And now we're looking to regulators to bail us out, and when we get to regulators, the only thing worse than the United States might be the EU, where the solution is worse than the problem. So this is really problematic, and there needs to be a lot more talk about it. I don't know how I got off on this, Jack, but I talk about it a lot. So: be afraid. When we're dealing with physics and we're dealing with machines, nobody cares; there is no bias in those data. It's just temperature, rotational velocity, torque, what have you. But anytime we have the intersection with sociology, it goes to a bad place real fast. Be afraid. So "be afraid" is a great place to get to, and I think it is really important. I want to hold both pieces of the AI opportunity, because it is going to help cure diseases, and I'm hoping it will help with some of the problems where we don't have enough human time to solve them, sustainability being the big one.
And I want to hold that, if we don't do this right, we're going to make an even more unequal society. I do want to spend a moment or two on some of those opportunities. One of the ways this intersects with investors is that investors, when they do their jobs well, like Jim, help us with the question: what is the right time horizon? So I'm curious what is in that time horizon. It seems like generative AI is clearly having a moment, and my inbox is already filled with pitches from companies that are not really doing generative AI, telling me they're teaching computers emotions and all these other good things. My sense is artificial general intelligence is probably not where a lot of VC investors are saying, yep, in seven years I expect to get a big exit because Big Brother's going to run everything. So what is in the investor time horizon right now? Where are the opportunities for anyone who came to Davos to make a little money? Well, I'll jump in quickly, but I'll start with Tom's profound points around bias. One of my favorite investments of all time was in 2008, in a company called Etsy. Etsy was a buyers' and sellers' marketplace; 90% of our employees were female, and higher than 90% of the buyers and the sellers of the different goods. A year in, it was going well, but several of the other board members said, we're never going to be able to get big enough if we're just focused on women. And I said, I tend to disagree; we're going to figure that out. It turned out that a month later I was up at a Microsoft event, seated next to Jeff Bezos, who had tried to buy Etsy, and I told him about this conversation: it's 90% women, the sellers are women, the buyers are women. And he said, we as Amazon cannot replicate that. That's a jewel; don't ever stray from that model for Etsy. Which brings me forward to today.
Silicon Valley has done a very, very poor job of promoting, helping, and leading very talented women and underrepresented minorities. I feel very proud that I recruited Sheryl Sandberg to Facebook in 2008. My mother, who is no longer with us, was a genius mathematician. And one thing I'm trying to get right this time, with whatever power I might have, is that when we're starting companies, there has to be that balanced set of viewpoints on day one or day two; otherwise, I won't write the check. Super important. Wow. I'm sorry, where are you seeing some of the investment opportunities around AI? There's a ton of talk, but you're writing checks. Where are they going? First, thank you. If we have more investors like you, I'm sure we will make a difference. And my friend Philip at KKR has also done a lot promoting women in private equity, so thank you. And investment opportunities. With what you said, that we should be afraid, that we need to talk more, I now have to just focus on investment opportunities. This is what I want to tell you. The current hype around generative AI, I am slightly worried that it might distract us from talking about, researching, and investing in context-driven AI, in causal inference, in common sense in AI. So I am slightly worried about that. Where I see huge opportunities is in business models that can efficiently scale human judgment with AI inside. That's where I see big opportunities. Not easy, but I think any business model that can solve that problem will create significant value. This one is actually for anyone, and I'm curious what you think. My gut, when I see all this really amazing technology, take ChatGPT for instance, is that it's indicative of a larger trend: these AI systems are incredibly powerful, but we don't know how they work. We don't know how they think.
Is it reasonable to think we should be investing in AI that is transparent and explainable, or, as an investor, is that just more expensive while ultimately someone is going to do the same thing anyway? We do have this confidently wrong problem right now: if I put something into ChatGPT, the answer is going to be incredibly impressive. It's not going to say, I don't know. I defy you, I mean, there are a few examples where it's been told to answer "I don't know," but in general, it's going to give you an answer. It might be right, it might be wrong, but it's going to be a great answer. Is there an investment opportunity in AI that does explain itself, does cite sources, does show its work, as I was taught to do at school? Absolutely. I would say most applications of AI to fundamental business processes, stochastic optimization of supply chains, supply network risk, predictive maintenance, smart grid analytics, integration of renewables, every one of these, where we're using supervised and unsupervised learning models, comes with an entire evidence package that explains exactly what the factors were that led to the decision: the one that suggests you should replace this engine on this jet in the next 100 hours, or replace this transformer in Northern California before it explodes, burns 2 million acres, and kills 300 people. And believe me, this happens in Northern California like every two days; I mean, it's unbelievable. So when we're using supervised and unsupervised learning models, absolutely we have an explainable evidence package for every one. It's when we get off into deep learning and AGI that they become unexplainable. Now, are there ethical applications for unexplainable AI? Yes. Let's think about target identification. Is it a 737 or is it a MiG? Is it a melanoma or is it not?
Now, if we can prove mathematically, to four nines, that this deep learning model is correct, I mean, hard stop, it's ethical. So this idea that all AI needs to be explainable, which sits inside the front page of every technology company's annual report, and none of them really believes it: there are applications for deep learning and unexplainable AI that are entirely efficacious and ethical, and then there are a lot of others that are kind of problematic. I would offer, there's one company specifically I backed three years ago, called Elemental Cognition, and the founding team was the founding six members of the IBM Watson team, if we all remember IBM Watson. They had become frustrated at IBM, frustrated with exactly this issue: we can get answers, but we have no idea how the black box comes up with the answers. So what they've been doing the last couple of years is continuously building technology where, when a patient receives a prescription for three drugs from the doctor, the AI can go in and say, here's why we chose these three drugs and not X. The good news: it's improving and tracking wonderfully. The bad news: the computer, in terms of the kinds of reasoning and answers it gives, is currently operating at a second-grade level. We think we're going to get to third or fourth grade in the next couple of years, but for many projects, particularly around medicine, and these are being tested in three or four of the best New York hospitals, it's making a profound difference for the doctors, the nurses, and the patients. But here's where I want to challenge the group, because, to Tom's point: is it better to have another two nines of accuracy, or is it better to be able to say, at a second-grade level, here's why I made this pretty good decision? Is this really the debate, or is that a false dichotomy? Do we have to choose?
It sounds like in many cases we do have to choose. I'm not sure. We can't choose, I'm sorry; we need it to be both. If you can prove with mathematical certainty that it's right to four nines, I don't think there's an ethical issue. That's better than a human being is going to do. Is it a MiG, is it a 737? Is it metastatic or not? If you can do it to four nines, I don't think we're anywhere near crossing any ethical lines. I will comment on this idea that these companies feel like they need boards of ethicists. Guys, I think that is a cop-out. I mean, that's what your mother was for, okay? If you don't know the difference between right and wrong, there's something wrong. And if you need a board of ethicists to tell you the difference between right and wrong, I mean, that's a little creepy, guys. Just to comment on that and on the earlier point: there may be scenarios where it is mathematically correct, and I'm okay with that. I would certainly like four nines on whether it's metastatic or not. Just better than your doctor. It's better than my doctor can do. But there also needs to be some transparency around that, so that I have the opportunity to know how those decisions are made. I am okay with the system saying: not really sure how we got to this decision, but mathematically we feel very strongly about it; we're at four nines, and just so you know, that's where we are, and frankly, as your doctor, it's better than what I'm going to be able to tell you. And I want to add, oh, well, I don't want to cut you off. No, no, I was going to make a comment on your next point, and then I totally forgot it. If it comes back to you, you're right, I will jump in, yes. But are we okay if it's four nines for you and three nines for Lauren, and we don't know why? It's a great question. Four nines is better than a human being can do. Okay, hard stop.
But does it matter? If it's not explainable, we're not gonna know. We might know it's not working as well for women, you know. Melanoma is a perfect example. Do you think this doctor can explain, okay, why he or she believes this cancer is metastatic? Let me tell you, that doctor can't explain it. No, but I am concerned they're gonna be able to tell it better on white skin than dark skin, and yes, if they can, you know, that can be a problem, explainable or not. But if it's not explainable, we're not gonna know why, and we're never gonna improve it. And certainly what we've seen with facial recognition and other things is, because that's where the data is, they're gonna be better with certain groups. Facial recognition is not something where we're anywhere near four nines, okay? In other words, we're not anywhere near that, and it's something that everybody knows how to game. You know, like MITRE Corporation, when they ran that little experiment, they were able to game every facial recognition algorithm that's out there, and again, I know this from MITRE. Okay, so that's easily gamed. And well documented. Yep. I would come at it this way: we need both the very deep, excellent analytics and the explanation. For me, as one person, it's not acceptable that, whether it's Google or Bing or whatever we wanna use, the results get spit out and there's no explaining the reasoning behind them. When you're doing searches for where to pick up DoorDash, that's one thing. When you're talking about a cancer patient, where the doctor already has limited time to care for that patient, and the nurses are overstrained, and they're heroic, of course, having the ability for the technology to outline here is why this prescription is being recommended, or here is why chemotherapy at this point is, in our view, the right way to go, that matters. And we are starting to see the emergence of some absolutely blow-away technologies which will get right at the heart of reasoning. And then the final point.
And I think that's really where they're gonna play, Jim, and I know you're an expert at this: they're gonna assist that doctor in making a decision. So this will be AI that's informing; we have a better-informed visit. So are we all in agreement? And then Lauren gets the last word, because I've now cut her off twice. I have to add just one word on the biases. I mentioned how important it is to me to have female leaders and underrepresented minorities in each one of these startups. For the data that I'm licensing from Memorial Sloan Kettering, M.D. Anderson, UCSF, I as the investor insist we want data covering different ages, male, female, underrepresented minorities, so that we're starting with data sets that are heterogeneous. Lauren, you get the last word. Well, that's a big burden. So, no. One, again, thank you for insisting on that data, because my level of trust in a system that I don't fully understand increases dramatically with four nines, and it increases more dramatically if I know that the data that trained that system is representative and diverse. My comment that I had forgotten earlier was about your question, your comment about external ethics boards or supplementary ethics boards. And it gets back to the first question that we all sort of tackled. It is not enough for companies to leave the right and wrong decisions to a mother or to an ethics board. We all have to be accountable in this system. And again, Jim, I'll give you an enormous amount of credit for being accountable in those investment decisions you're making, and Tom, in the decisions that, well, you had a conversation with the Secretary of the Army. It's a lot less expensive to not do it than to go sit in front of Congress later, and that's a smart decision to make, because it wasn't the right thing to do. I don't know where all of this is going. I don't know what the answers will be.
What I do know is, the responsibility for making responsible decisions about the tools that we are developing, and the applications of those tools, is something that all of us carry, from the very first investor to the end user. And as this conversation continues, I hope we continue to have these debates with lots of different perspectives, because, just like the data, if we're only hearing from one or two groups, we're gonna end up making the wrong decisions. Well, I can't sum it up any better than Lauren just did. Thank you all. Before we go, Kay Firth-Butterfield from the Forum is gonna give us all, I'm sure, some homework on how we keep this conversation going. And as we always like to do here, hopefully we take what was a nice and important conversation, and I hope you agree this was a great one, and turn it into some action. So thank you all for attending, but before we go, Kay, join us. Thank you so much. And thank you also to the panelists for a fascinating hour. So before you do go: Responsible AI has been on the Forum's agenda since 2017, and this panel reflects one of the pieces of work that we're doing with VCs and investors to think about where responsibility lies in that continuum. So we've kicked off really well today with all the comments. If you're interested in our work, and interested in joining this particular work, if you're an investor or VC, please join us. You can find me, Kay Firth-Butterfield, on LinkedIn. There are lots of tools that we've already developed and lots of other work that we're doing if you're interested in it. Please join us. Thank you. Thank you. Thank you.