So listen, this is an absolutely tremendous honor for me to host this panel with these esteemed colleagues. I think about Davos as being, again, the greatest collection of thinkers and doers. And frankly, the group this week, I think, is just that; the talent is absolutely staggering in capabilities. I have the opportunity to spend time with these panelists to talk about this subject of artificial intelligence, or AI. And I think we have decided that we are going to make this a rousing discussion, one where we're really going to talk about the issues that we face as technologists and the issues that we face as a society and as humans, and how AI is going to affect the next 20, 30, and 50 years of our lifetimes and generations beyond that. The way I like to characterize this: really from the time of the invention of the first computer, men and women have envisioned computing systems that, in essence, will take the best of what we think, the best of who we are, and deliver real-time solutions that are more efficient, more effective, and, frankly, can save lives and change the human condition. And the desire to leverage this full capacity really hasn't changed since we were first sparked by that vision. Often it was sparked through television shows and film. But what has changed is that we have scientists and technologists and leaders who have actually now developed a number of new computing platforms, like artificial intelligence, that make this type of computing a reality. Platforms which now have led us from the unimaginable to the imaginable. It's an exciting time. And these new computing platforms will continue to drive productivity, as they have over the last 50 years through enterprise software and computing platforms, into the ecosystems and economies where they are embraced. And by their very nature, they will cause disruption in the existing systems.
And our challenge as participants of the World Economic Forum in 2017 and beyond is to let our mission of assembling these global leaders guide and help us develop and implement elegant solutions to the highly complex problems that are posed by implementing these newly productive tools. We must bring a comprehensive and well-thought-out approach to managing the creative destruction that is inherent in embracing these ever more powerful tools. And frankly, we must use these tools to repair the deficiencies in this capitalistic system, restore the social contract with the people on this planet, and deliberately march toward a greater harmony with our planet's natural systems. It's a great task. It's a noble goal. But we're all part of this greater narrative of globalization that began some 1,500 years ago and that has, in the aggregate, improved the condition of mankind. And the narrative is likely to continue, and it's incumbent upon us and the leaders assembled here and across the planet to strengthen and reimagine global cooperation to the benefit of all people and of our planet itself. And frankly, I don't think there's any greater tool than AI when coupled with great minds, great leaders, and humanistic thinkers who can solve these challenges. I personally have the challenge and the opportunity to lead an organization called Vista Equity Partners. We manage capital for pension funds, for doctors, for teachers, for civil servants. And frankly, they charge me and my team with ensuring that they have a financial stake in the fourth industrial revolution. And so what we have the opportunity to do on this panel is to talk about how this fourth industrial revolution is being manifested through AI. So with that, I'd like to now introduce my panelists. They are very esteemed. They are wonderfully thoughtful people, and they are humanists, which is one of the reasons I'm proud to be a part of this panel. I'm going to start with Ron Gutman.
And Ron is sitting two, three people to my left. His focus on health care technology dates back to his graduate school days at Stanford. He started a company called Wellsphere, which ultimately reached over 100 million people, and he sold that business in 2009. His recent company, HealthTap, is one of the most innovative companies in the deployment of AI in the health care sector. He believes we've reached a point where AI is capable of extending the reach of human health care providers to offer health solutions to people who would otherwise not access them. You've probably seen his TED Talks, which have over four million views and counting and have been translated into 46 languages. And Satya, I don't know if he's been using your translator to do it, but everyone seems to be thrilled with some of those TED Talks. Next we have Joi Ito. Joi is the director of the MIT Media Lab, the world's leading academic institution for AI research and development. He's been on the front lines of technology as an entrepreneur, an investor, and an academic. His seminal book, Whiplash, discusses how new technologies change the way our economies, governments, and societies interact with each other and with technology. And because he brings such a robust perspective to the conversation, he's uniquely qualified to paint the whole picture by telling us: what is AI? Who are the people that are actually developing it? What are the deficiencies? What are the opportunities? And how he gets to shape those young minds of developers, we're really interested in hearing some of those thoughts. We also, of course, have Satya Nadella, who frankly takes a very holistic approach to AI development. He believes that AI will influence all aspects of our lives: the democratization of artificial intelligence. How do you accomplish that? How do humans interact with computers? How does everyone have access to the applications and the infrastructure that's developed?
And how will these services be provided? He's led Microsoft's major investments in deep learning, with researchers and developers attempting to understand and actually mimic the way the human brain processes and applies data and information. And I've got a question for you at the end of this that I need to hear your answer to, which I think will be quite provocative. And of course, we have Ginni Rometty. She's the chairman and CEO of IBM. And when I first started talking to Ginni, I had to let her know IBM has been one of the most innovative, transformative companies on the planet. And under her leadership, she has now taken on one of the most remarkable corporate transformations that I've ever witnessed, and many of us have ever witnessed. And her leadership is now putting AI at the center of the next chapter that she and her colleagues are writing. And in particular, the focus on cognitive computing, narrow AI, which augments human thought rather than replicating it. And how does that impact the markets that we work in, the jobs that it affects, and the people on this planet? So we're interested to hear a lot of her thoughts on this. Just to give you a sense of size and scale, IBM's Watson will touch approximately one billion people this year through its business solutions and access across various industries. So I'm looking forward to this conversation. So I'm gonna dive right in. And Ginni, we're gonna start with you since you've now caught your breath. Yeah, thank you for that. First, IBM, again, a company long admired, has been successful in reinventing itself for decades across multiple industries. And now that AI is a central focus of the platform going forward, how did you and your team think about the development of AI? What was the right platform? How did you come up with the guidelines that you're gonna now use and push forward with your business customers? What are some of those guiding principles?
Yes, so, and I appreciate that introduction, and at 105 years old, we have been through a number of different transformations. And as you think about each one, there's something that provides the foundation and the basis of it. And this goes back many years. There's actually a reason why we call AI cognitive. And the basis is that we would be so overwhelmed with information that it would be impossible for any of us to actually internalize it and use it to its full value. But if you could, you could solve problems, the ones you've mentioned, that are not yet solvable. That would be cognitive overload, hence the term cognitive computing. And so that became the basis: this idea that for all companies and for the world, data would become the basis of competitive advantage, but you could not make use of that data unless you had technologies like artificial intelligence, what we call cognitive, that you don't program. Instead, they understand, reason, and learn over all kinds of information. So that became what we consider the cognitive era, and what we've placed our big bet on. And you say, so what happens now? In fact, I think I can share some of the lessons we've learned and therefore some of the principles we've come to, with what I now consider us: the AI platform for business. As you mentioned, we will touch a billion people this year with this. We've learned a few things. One is something maybe we'll get a chance to talk about later. With any new technology and new era, I think we all, particularly here in this setting, have to think about how you develop trust for it. And one of the ways to develop trust is around transparency and a set of principles. And this morning I shared with our teams globally something called principles for the cognitive era, or principles for AI.
And again, we'll talk a little bit more about them, but they have to do with understanding the purpose of what you're developing; transparency in when and how it's used, how it's trained, and who owns the insights out of it; and then, as well, your obligation to develop the skills of the world around it. So to me, again, we can come back to that, but for a business itself, I think there are three things we've learned in working with businesses that are important. One is understanding the purpose of when you use these technologies. And to us, the reason it's cognitive and not artificial intelligence: I say it's augmenting intelligence. For most of our businesses and companies, it will not be man or machine. It will be all of us working together, and I've watched this, whether it's with doctors, lawyers, underwriters, call centers; it's a very symbiotic relationship, where you're conversing with this technology as it understands you. So our purpose is to augment and really be in service of what humans do. The second big learning we've had is that in businesses, industry domain really matters. So as an example, we're gonna talk about healthcare, one of the big areas we've also invested in. We've only picked a few areas, because this is an open platform and there are millions of companies developing their own things. But where we've taken on the development ourselves, on healthcare in Watson, which is what we call our artificial intelligence, I should have said that, we have 300 million records; but every institution has its own data to bring to this. And therefore, in this world, the second thing that's important is you wanna have industry data, and really to unlock the value, it's your decades of data as well, combined with this. And so these systems will be most effective when they're trained with domain knowledge and in an industry context. And then the last big thing that we've learned for a business is the business model.
So any of you, whether a new company or an older company, you've accumulated assets. That is your data. Back to data as a competitive advantage. And so we believe strongly that as a business you need to be sure that the insights that you get from your data belong to you. And that applies as well to how these systems are trained. And that's how we built this, with that in mind. So the learnings have been across that. And as a result, I think you see some trust growing in these. So I'll end on healthcare as an example: the work we've done around oncology is now rolling out, whether it's in India, China, Thailand, Finland, the Netherlands, as an oncology advisor. That's been trained by the world's best oncologists, the 20 best centers in the world: Memorial Sloan Kettering, the Cancer Institute; clinical trial matching with the Mayo Clinic; genomic sequencing with Illumina, as well as Quest, available, in the United States, to really 70% of all patients that have cancer. So you get this reach when those principles are followed. And that, to me, I think, is the great promise. On one hand, we'll do everyday things, help teachers teach better. On the other hand, the reason this is worth fighting so strongly to roll out right is that you can really solve problems. I mean, India has one oncologist for 1,600 patients. You'll never get there, as you would say, without this kind of technology. Satya, you have built really a wonderful business model around taking complex cutting-edge technology that frankly was available only to the few and giving many the ability to access it. Talk a little bit about the AI approach, the cognitive approach, and really the democratization that you're leading at Microsoft, and to what extent that democratization creates, again, the opportunity for all to participate in what AI's promise is.
Yeah, I mean, for us, I always think about our sense of purpose. Our sense of identity in some sense starts with the first product that Microsoft created, which was the BASIC interpreter for the Altair. So I always ask myself, what's the moral equivalent of that in this AI world? And that's really what informs, at least, our efforts. We obviously build a bunch of AI services. Take Skype Translator, which you referenced in the beginning, which is pretty magical. In fact, recently, we had 100 people speaking 10 different languages simultaneously being able to communicate with each other. Or the personal digital assistant, Cortana. I had sent a piece of email in December saying I'll follow up with two people in January. And lo and behold, come January, I'm reminded, because of a commitment I made in an email. These are the everyday tools that we want to build. But the key for me, though, is not any one tool we build, but the technology underneath it. How do we make it broadly accessible? Very much like what Ginni was talking about, because that's, I think, really the true benefit of AI. So in our case, one of the things that inspires me is the state I was born in in India and the state I now live in in the United States: both are using statistical machine learning to improve high school outcomes and use scarce state resources smartly. Now that, to me, is democratizing AI, putting it in the hands of school administrators everywhere. It is about putting AI tools in the hands of oncologists and radiologists in Cambridge so that they can use cutting-edge object recognition technology to not only do early detection of tumors, but also predict tumor growth so that the right regimen can be applied. It is about being able to pick examples like that in the nonprofit or public sector, but not just stopping there.
For example, one of the things that has truly been amazing to see is how individuals with the right tools can make a difference. There's this one gentleman out of our Cambridge office who's visually impaired. He decided that, look, we have had some real breakthroughs in the last year, especially around perception and object recognition and computer vision. So he's building glasses for anyone with a visual impairment to be able to recognize not only people, but their emotions, in real time. That, to me, is democratizing AI, but it also extends to large businesses. So it's not just about these examples. It could be a bank that is able to give credit to the unbanked. In fact, there's a startup in Kenya that was able to take, essentially, their solar grid, which is in the cloud, and predict, based on one-dollar-per-day incomes, a credit rating. So essentially, they've created a credit rating where none existed before. Or Rolls-Royce, which is creating an airline efficiency system, or Volvo doing driver safety. In fact, before autonomous driving, the key thing is, let's not have distracted drivers. And so Volvo is using computer vision to make sure that you don't have distracted driving. Or Cortana itself, our agent, finding its way into Nissan cars, or what have you. So there are many, many of these examples. And to me, the key right now in this next phase of AI is how do we put out tools so that others can create intelligence in every walk of life? Yeah, Satya, one of the questions, a little bit of a follow-up before I go to Joi: when you think about these tools, what are the governing principles around them? You can't just give these tools to everyone, or can you? And to what extent can those tools, ultimately built for the greater good, be weaponized for not so good? Tell us a little bit about what some of those principles are, and how you actually guide a business that is a tool provider so that you're actually on the right side of, call it, justice in this context.
Yeah, that's a great sort of question. I mean, Ginni started addressing some of these things, which is, what are the principles? So let's start with even the pragmatics. Joi, I'm sure, will talk more about even the ethics of it, but the way I came at it was to say, let's first start, just like good designers in the past. Take user experience: we've had design guidelines. Before we get into the ethics, the law, let's start even with a set of pragmatic principles that can guide AI creation for us or anyone else using the toolkit. So one of the first things we talked about, I wrote a piece I think six, seven months ago where I said, the first thing is, let's make as Microsoft a decision that we're not trying to create AI which, of course, learns like humans, but whose purpose is to help humans do what they do better. So it's augmentation versus replacement. That's a design choice. I actually think that you can in fact come at it and say, well, replacement is the goal, or you can say augmentation is the goal. In our case, I want us to make the decision that it's augmentation. One of the challenges with that, though, is who is making that design choice. And that's what leads to Joi, right? I look at you as, I'll call it, the wellspring from which most of our AI designers come. In some respects, you get an opportunity to meet these young minds, teach these young minds. And so we need to understand: who are these folks? What's their demographic? Where did they come from? And how do we now help them understand the importance of this beyond its being a technology, the importance to society, and their role in shaping the ethics and the governance of this technology? I will say, I'm at the Media Lab, which is where some of the oddballs who are interested in AI hang out. And we have a computer science and artificial intelligence lab, which is where they do a lot of the core work. And in fact, I think most of the real research now is going on in industry.
It's a historical moment, mostly because industry pays and provides these people the resources. So it's not necessarily being driven by academia, but they usually come through academia. And this will offend some people, and I want to say this as a generalization, but I think that people who are very focused and very good at computer science and AI tend to be the ones that don't like the messiness of the real world. They like the control of the computer. They like to be able to be methodical and think in numbers and complexity in a machine. So sort of by definition, you usually don't get the traditional liberal arts, philosophy types. Now, you do have those; they're there on the edges. And also, because it's social, the way you get into computers is because your friends are into computers, which is generally white men. And so when you look at the demographic across Silicon Valley, you see a lot of white men. So one of our researchers, Joy Buolamwini, she's an African-American woman, and she discovered that in the core libraries for face recognition, dark faces don't show up. And these libraries are used in many of the products that you have, and if you're an African-American person and you get in front of one, it won't recognize your face. And she discovered it because, probably, there was no one who had a dark face in the place where they were building and testing it. And so one of the risks of the lack of diversity of the engineers is that it's not intuitive which questions you should be asking. And even if you have the design guidelines, some of this stuff is kind of a field decision. And I think IBM and Microsoft are both doing a very good job of working with people in the field. But I do think one thing that we need to think about, and this is very much a Media Lab point of view, is that when the people who are actually doing the work actually create the tools, you get much better tools.
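The kind of audit that surfaced this gap can be sketched in a few lines: group a test set by demographic and compare per-group detection rates. A minimal, purely illustrative Python sketch (the detector and the brightness-as-image stand-in are invented for this example, not any real face-recognition library):

```python
# Illustrative bias audit: measure a detector's hit rate per demographic group.
from collections import defaultdict

def detection_rate_by_group(samples, detector):
    """samples: list of (image, group_label); detector: image -> bool."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        if detector(image):
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy stand-in for a model trained mostly on lighter faces: here an "image"
# is just its mean pixel brightness, and the detector keys on brightness.
biased_detector = lambda brightness: brightness > 0.4

samples = [(0.8, "lighter"), (0.7, "lighter"), (0.9, "lighter"),
           (0.3, "darker"), (0.2, "darker"), (0.5, "darker")]
rates = detection_rate_by_group(samples, biased_detector)
# The audit surfaces the gap: rates["lighter"] is 1.0, rates["darker"] is 1/3.
```

The point of the sketch is that the disparity only becomes visible once someone thinks to slice the metric by group, which is exactly the question a homogeneous team may never ask.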
So with television and newspapers, the creators of the tools and the users got separated, so the evolution slowed down. But if you look at video games, where the creators were into the technology, they went from coin-op to CD to online, and you saw a continuing business model and technology evolution. So right now, what I'm looking for from both of you guys, and all of you guys, is this: when VisiCalc came out and introduced the spreadsheet, it was a runaway success not because it solved the accounting problems, but because it gave the accountants a tool to express their creativity on a machine. And we don't have that yet. AI is still somewhat of a bespoke art where you have a super-duper engineer who listens to the customer and tries to understand, but the customer can't imagine the tool yet because it's too difficult. And I think whether we're talking about ethics or whether we're talking about the workforce, one of the key things that I'd love both of you to think about is, instead of delivering a solution, which is the first phase of getting people excited about it, I think you need to integrate the lawyers and the ethicists and the customer into actually getting an intuitive understanding of the tool. I think that's important. Ginni, absolutely interested in hearing your point of view on this. You know, you engage with some of the largest corporations on the planet who are going through this digital transformation, embracing AI. How are they embracing it? Do you provide governing principles for them, or do you basically provide the tool and let them come up with the right principles? Yeah, no, this is a very, very important point in the learning, I think. So for anyone in this field, this is built for the people, by the people, with the people, if you want to say it that way. And my experience on all of these, whether it was doctors, underwriters, or teachers with the best lesson plan, is that that work must be done with them. We've tried it the other way.
It does not work, just as you described it. And I think this is a really critical, critical point. And so, when you mentioned that point about principles, right? Satya started with this first principle, which I believe strongly: you must have the right purpose for these tools as we build them. And purpose means it is to augment and be in service of. And particularly because there's so much fear about jobs, and we'll come back to job replacement and the like, the odds are that there are some jobs that will wholly be replaced by automation, but most of us will be working with these systems; that is what most of this interaction will be. So be clear on your purpose. The second one is transparency. If anyone's to trust these systems, you must be clear, I believe, as you build these: if someone's using a system, tell them that it's artificial intelligence, and when it's being used. The next thing we've been very clear about is: tell them how it got trained. This is probably the most important point. Was it trained by experts? Was it trained in a way that I can trust what the sources are and what data was used to train it? Because we're going to end up in a world where these can be trained very differently. And back to that point about being in the purpose of: the human needs to remain in control of these systems, knowing they can always, if I can put that bluntly, pull the plug. The human in control is the purpose. The second principle is this point of when is it used, where, how was it trained, with what data, and who owns that insight. And the third principle is skills; I want to come back to it, but I also want to let our friends get in.
Let me jump in and ask one question, though, about pulling the plug. Because if, say, we have an autopilot on a plane, you're not allowed to overrule it now; it's kind of hard to pull the plug once it's in a mission-critical role, right? I agree. I mean that in a slightly different way, and I think this is an important point. Because there's a lot of talk of, can they completely replace what we do as humans? And both by our principles and by the state of the science, they are not going to. We have this self-awareness, this consciousness; that isn't the case for them, right? So I mean it in that context, that you're able to. Ron, let's talk. Ron, you're an entrepreneur. You built this wonderful business. As I said, you've bet the farm to some degree on inputs from over 100,000-plus doctors, and there is no more complex, at least today, diagnostic environment than healthcare. Talk a little bit about the vision, how you're going to leverage AI, and talk a little bit about how you think about managing, ultimately, the biases, the treatment, the self-regulation for the doctors who are inputting, for those of us who used to get our medical degrees on Google when we had a sniffle, right? So tell us a little bit about how this works now. Absolutely. So, to Ginni's point, right? Yes, we start with doctors, but we actually are here to serve patients and serve use cases, right? So we need to start with the needs. We're talking a lot about frameworks, we're talking about platforms and all these kinds of things, you know? In the startup world, we care about: what's the need? What's the underlying need that we're trying to solve? And the underlying need that we're working to solve is that people every day have symptoms, right?
To your point, you know, they go to Google more than a billion times a year asking questions about their symptoms, and it's time to go beyond search engines to answer the question of what's going on with you now, when you have a set of symptoms and a certain context, which, you know, search engines are not good at taking into account. So the option today is to go to a doctor directly or to the emergency room, or to go online, search for information, and drown in a lot of it. And we started a journey by actually just solving the problem in the basic way: connecting hundreds of millions of people all over the world with real doctors and helping them ask any question that they have, right? They asked the question, and we created a technology that sent the question to the right doctors. And then we also created peer review in the process, to make sure that doctors know that other doctors will actually review the answers that they're giving to patients, right? So in the background, we've built a very, very large training set, but in the front end, we just help people every day answer questions about their health and well-being. So the ecosystem of the doctors and the peer review is how you, quote unquote, create transparency: a rating system on the one hand, and a learning system on the other that hopefully will tune in to whatever the issue is, in an ethical way. Yeah, and the exact purpose of it is to create a place where people can manage their health from query to cure. And again, back to the use case, it's not just to learn about your health and then go to a doctor. Is there a way we can use digital devices, right, a mobile device, to manage our health from end to end? From the moment we have a question about our health, and we can get insights about our health using AI, to actually connecting you to the right doctor, right?
Because even there, there's a decision, right? Because today we just go to a doctor. But what if we could go to the right doctor, or go to the emergency room when we should? So, you know, we've received more than 24,000 notes on HealthTap thanking us for saving people's lives. I'll give you an example. Someone came to us complaining about tooth pain, right? And they said, I've had this tooth pain for a few hours, and it's projecting to my jaw and then to my neck, and I'm kind of concerned about it. And it went to a bunch of doctors. And again, we're using a likelihood approach to things; we're not diagnosing anybody immediately. But doctors answered the question and said, looking at your symptoms, if this is the kind of pain, there's some likelihood that you may be having a heart attack. Sometimes pain projects from the top down and sometimes from the bottom up. The guy went to the emergency room. He had a heart attack. We saved his life. If he had gone to a search engine and asked the question, he would have gotten dentistry answers, right? So putting things in context is a very, very important thing. And getting the doctors involved in it, getting enough doctors, more than 100,000 doctors, involved in it, serving more than 5.4 billion doctor answers to create a large training set, and then finding the patterns using AI, is the foundation. That's a great example because it worked. But what happens if they had said, no, go get your tooth pulled, right? And now what is your responsibility? This is one of the issues I think about, especially for you, Satya, because you are the tool maker. You deliver these tools, and what if the wrong insights come, or the wrong interpretation? What responsibility do you have? I mean, this is one of the harder challenges. I mean, one of the things is, how do you take accountability for the decisions algorithms are making in a world where the algorithms are not written by you, but are being learned?
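The "likelihood approach" in the tooth-pain story can be sketched as a tiny Bayesian ranking: score each candidate condition by its prior times the likelihood of the reported symptoms, so a rarer cardiac cause can still outrank the obvious dental one when the symptom pattern fits it better. This is purely illustrative (made-up conditions, probabilities, and numbers, not HealthTap's actual model):

```python
# Minimal likelihood-based triage ranking (naive-Bayes style, toy numbers).
import math

PRIORS = {"dental abscess": 0.7, "myocardial ischemia": 0.3}
LIKELIHOODS = {
    "dental abscess":      {"tooth pain": 0.9, "jaw pain": 0.3, "neck pain": 0.05},
    "myocardial ischemia": {"tooth pain": 0.2, "jaw pain": 0.6, "neck pain": 0.6},
}

def rank_conditions(symptoms):
    """Rank conditions by log prior + sum of log symptom likelihoods."""
    scores = {}
    for cond, prior in PRIORS.items():
        log_p = math.log(prior)
        for s in symptoms:
            # unseen symptoms get a small floor probability
            log_p += math.log(LIKELIHOODS[cond].get(s, 1e-3))
        scores[cond] = log_p
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_conditions(["tooth pain", "jaw pain", "neck pain"])
# "myocardial ischemia" ranks first: the full symptom pattern outweighs
# the higher dental prior, which is the behavior the anecdote describes.
```

A keyword search engine, by contrast, would match only "tooth pain" and return dentistry results, which is exactly the context gap the panelist is pointing at.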
And quite frankly, in the supervised world, which is more or less the current state of the art, you have label data, and having the human in the loop with label data, you can have ethics, or even law, govern it. For example, you can easily say, let's make sure that there is no bias in the label data. That actually allows human inspection. So there are ways, at least, to manage that in this supervised world. The issue is, as we make progress, because one of the fundamental breakthroughs that has happened in the last, I would say, couple of years, learning from our previous AI winters, is that we, I think, are on sort of the right ladder this time. Even though we are in the supervised world, we are going to unsupervised. Already the state of the art in deep learning and reinforcement learning is these adversarial networks, where we are literally generating label data not through humans, but through networks. That's when it becomes even more complicated. That is where: whose black box do you trust? What's the framework of law and ethics that is able, even ex post, to govern the black box? Who's in control of that? These, I think, are the real challenges in the next couple of years. How do we address those? What are the institutions we need to form and participate in? Is this an industry partnership with government? Is it a global partnership? Is it a partnership with our customers? Tell us a little bit about, beyond what I call transparency, which is one way to just say, well, it's yours, your insights, your data, how do we now actually bring some semblance of order in a way that protects us as individuals, protects us as humanity, and protects us to some degree from a rampant or errant machine that's learned from another? Well, I think you have to, and it isn't just transparency. A minute ago, when I was speaking about these three areas, these are what we call principles for AI, principles for a cognitive era.
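The adversarial idea Satya describes, training signal produced by another model rather than by human annotators, can be caricatured in a few lines. In this toy sketch, a one-parameter "generator" adjusts itself to minimize how easily a best-case threshold discriminator can separate its samples from real data; the discriminator's edge is summarized as the gap between sample means (a Wasserstein-style simplification; real GANs train both networks with gradient descent, and all numbers here are invented):

```python
# Toy adversarial training: the real/generated distinction is the only
# "label", and it comes from the setup itself, not from human annotators.
import random
from statistics import mean

random.seed(0)
TARGET_MEAN = 3.0
real_data = [random.gauss(TARGET_MEAN, 0.1) for _ in range(200)]

def discriminator_edge(theta):
    """How much an optimal threshold discriminator has to exploit:
    proxied by the gap between the real and generated sample means."""
    fake_data = [random.gauss(theta, 0.1) for _ in range(200)]
    return abs(mean(real_data) - mean(fake_data))

theta = 0.0  # generator parameter: the mean of its output distribution
for _ in range(300):
    candidate = theta + random.uniform(-0.3, 0.3)
    # accept the move if it leaves the discriminator with less to exploit
    if discriminator_edge(candidate) < discriminator_edge(theta):
        theta = candidate
# theta ends up near TARGET_MEAN without a single human-provided label,
# which is exactly why the resulting black box is harder to audit.
```

The governance worry follows directly: once the labels themselves are machine-generated, there is no human-inspected label set left for ethics or law to check against.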
They guide what you do, and I think it's frankly our responsibility as leaders who are putting these technologies out there to guide them in their entry into the world in a safe way. And so you can build differently. You can tell people things, and it isn't just transparency. You can tell them, this should be trained in this kind of way. So one way is that purpose transparency in the skill building you do in the world. The second way, there are partnerships that should form. Satya and I are in one called the Partnership on AI, which includes not just companies but universities and governments, so that you have this discourse, because there will be some regulation and rules. And there is a point that with some of these decisions, people should be involved in them. Even though a machine could make them, maybe it shouldn't. And so that's back to the ethics piece. So I think we are still at the very beginning of that robust dialogue. Some of what we are talking about is very knowable, but there are other things that are really not knowable at this stage of where we're at. And so in addition to principles, which I think are really important and which I'd advocate governments and other companies here really try to adhere to, I think it calls for us all to do these cross-industry and government partnerships to talk about these difficult cases. And do you want to have frameworks, principles, regulations around some of them? We were talking. Go ahead, Ron. Yeah, I wanted to add, in our case, what we were very keen on from the very beginning is to bring the doctors with us, right? All the way from the development process through to QA. We actually are QA-ing what's coming out of this machine with thousands and thousands of doctors at any given moment in time.
Because they feel they have the ability to look at what the machine produces. They created the data, but they're also looking at what the machine created, and they augment it. So it's basically the partnership between man and machine that keeps improving it all the time, to the point that the experts are actually comfortable with what's coming out, not the patient, not the user, but the expert. And they're participating and keep improving it all the time. So that was very important. But that's a self-governing effect by an elite community. Yeah, and we did a study that we can talk about more later, but we asked the general public the question: do you think a self-driving car should sacrifice the passenger in order to save a number of lives? And this is just an interesting base question, because the majority of people said, yes, the car should sacrifice the passenger, but I would not buy that car. So it shows very clearly that the market is not the right way to make certain decisions. And so Lawrence Lessig had this very wonderful diagram where he has law, technical architecture, norms, and market as sort of the four forces that shape what we do and what gets made. And they all affect each other, right? You can make laws about technology; you can make technologies that affect norms. And I just finished a very intensive course that was a collaboration between the Media Lab, Harvard Law School, and the Kennedy School, where we had people from industry and government, and we had law school students and Kennedy School students and engineers teaching each other their art. Because what's important is that the lawmakers understand deeply what choices they have and what's going on. Because there's another thing with regulation: you can regulate the research, and you can regulate the deployment, and those are two very different things. And what you want is thoughtfulness on both, right?
And I think you can't just leave it to the market, because I think you'll end up with solutions like that, and we can talk about cars later. But I think it's that understanding of each other. And so that's, I think, the role of these partnerships on AI and some of the research we're doing. But it's beyond just a panel discussion; you really need to sort of intensively go in, because most engineers don't understand the law. Most engineers don't even know why government exists, right? And so that's a pretty important thing. I mean, you know, picking up on that, one of the things I feel is that, in parallel, even inside of software engineering, if you think about the state of the art of software engineering around AI, it's a lot more nascent compared to, say, the software engineering practice we have even around security. And we've got to invest a lot there. So for example, in cyber-physical systems, autonomous driving or even, you know, being able to give advice to patients or what have you, where you're having real impact in the real world, you can't make mistakes. You can't say, well, I'll learn from that mistake and get better. That means you've got to create simulation that is like the real world, because you're really going beyond perception; when you are making calls based on your predictions of how the constrained world works, you've got to create real simulation environments. And one of the things is, that means the software tools and the tooling around them have to be very sophisticated. And one thing that we recently did, which I was very excited to see, is take something like Minecraft and make it an AI playground. So literally, instead of taking your AI algorithms and putting them into the wild, you put them into the Minecraft world as a closed-loop system to learn. And that's one tool.
So I think the software engineering infrastructure is probably the first place where we will have to make a lot of investments, because today it's a lot more black box than it needs to be. But can I just add a point: I think this idea of being a black box is going to be an issue, and we should really work for it not to be a black box. Because I've witnessed it, whether it was engineers who were using these tools to help them predict how their projects would run. Unless they could understand how that was working, they didn't want to believe the outcomes. I watched this early with the doctors. The first thing the doctor would say is, well, okay, I realize that Watson's read 300 million documents, it's been trained by the best oncologists in the world, 20 of the best cancer centers, but still, show me: what was the degree of confidence on everything? And show me the data that went into every one of those conclusions that these systems drew. And I think you're going to see that behavior from any professional user. The underwriters, I watch it: show me the pieces that went in there. Give me the percent confidence. And to your point, if you're treating a patient, the doctor might say, look, I know that the chemo may be the best thing, but perhaps this is someone who, between the hair loss and other things, doesn't want it. There are other judgments, to make a very simple point, that go in. And so the way these tools are built from the very beginning has to allow us to interact in that way, or it will stop this development from solving some of these biggest problems. No, I agree. And I'll tell you what's interesting to me: in spite of, I'll call it, the state of development, and to some degree the reluctance of acceptance, this is still coming at us like a freight train. And it is coming from all dimensions.
You know, there's a slide that I may put up a bit later that shows, you know, the number of AI platform companies, call them that, now developing, just like Ron's. But what really gets me, the big challenge we have, is the impact on society beyond the technology and the benefits: the people. Yes, we're gonna look to augment their opportunities and make them more efficient at whatever it is they are doing at that time. But the real question is, what about jobs? What about the displacement around them? What do we do about that? Ginni, you've talked a little bit about this new collar concept. Talk a little bit about it in the context of AI. And I wanna spend some time talking about what our responsibility is to the people on this planet as we really start to use AI to enhance educational opportunities and development opportunities, so that we can actually, to some degree, manage and, if we can, reverse this inequality that has come from technology that has created massive amounts of wealth for just a few. So let's talk about that, and let's talk about what we need to do about it as an industry and as a group of participants. This is a topic, and I think there's probably not one more important for all of us, in every country. It's what's led to some of the negative discourse in every country in the world about this inequality, the haves and the have-nots. And so there is a fear about these kinds of technologies and what they will do to jobs. And to what you just said a second ago, on a macro basis, look, there are five million open jobs in America, three million open in Europe, 80% of Japanese companies can't find people. At a macro level, this sounds good. Even when you read reports that say we need all the technology we can to keep up with demographic change, that you actually need it. Put that aside; on a micro level, when you have been displaced, or when you feel your skills have not kept up, this to me is the issue of our time.
Skills is the issue of our time. And I don't believe a government can solve this alone. I don't think companies can. But I really think there are things we can do together in public-private partnership. And some of it is to bust some of the myths out there. For one, there will be new jobs created by all this. That's been proven every time in history when there's been a dislocation from technology, but with the new jobs came new skills. So when we came out of farming, you had to learn to read. In the industrial era, it led to mechanical skills. This is gonna lead to these kinds of skills. So what could you do, in a nutshell? I think there are three things all of us could do. The first one is, as you do training, a recognition that the skills needed to succeed in this world are not all high-degree skills. Meaning that with less than a four-year university degree, you can participate in this AI economy. And that is where the name new collar came from. Not a blue collar job, not necessarily white collar; quit trying to look backwards, go forward. These could be called new collar positions. And in many parts of the world, take a four-year high school, or whatever it's called where you are, and make it maybe six years. And we've now scaled this to 100 schools around the world. They're called Pathways in Technology schools. Give them a curriculum that's relevant. Give them mentorship, and be sure they're teaching what you're hiring for. We have 250 other companies working with us on this, Pathways in Technology. And they are now not only in the United States, not only in South Africa and Australia, but coming quickly behind that are India, the Netherlands, oh boy, where else? Korea, a whole set of other countries. And these kids are coming out, young adults, with the skills that work in this new economy, in this data economy. So I think one way is to take this very seriously and understand it is a once-in-a-generation change in skill type again.
It's not just an incremental point. So therefore fundamentally change education, and it is quite doable; as I say, we've scaled from nothing up to 100 already. That's one. The second real quick point is, I think it's incumbent on all of us to do retraining. Those of us in companies, it is our responsibility and obligation. We do a great deal of that. And the third is, and I've watched this play out now with a number of companies, these technologies can actually help people who don't yet have the skill do their job. So Bradesco, 59 complex products in a call center: they use Watson to help them do their job. They otherwise wouldn't have been able to train for it and qualify to do that job. So I think those ideas of assistance, retraining, and fundamentally changing the skill set to this new collar would go a long way to address what are not just issues of AI; they are frankly skills issues, and jobs are the underlying issue across almost all of the issues discussed here this whole week. Joi, you have a comment. And then Ron, I wanna hear from you, you're on the front line of this, and Satya, I also wanna hear from you, because I've seen some of the augmented reality, I'll say augmented intelligence, that you do with certain fast food restaurants, et cetera. So it's really an interesting dynamic in practice. But Joi, please. So I really agree with what Ginni said, and it's very heartening to hear. I think the biggest impediment for developed countries is the educational system, because this isn't a one-time step; it's a curve. What AI can do next year versus in two years is gonna constantly change for the next while. And our educational system isn't dynamic. They create a course and they teach it for at least a couple of decades. So that's a failure, right? Said by a teacher. Yeah, I mean, that's sort of the definition of a curriculum in many ways.
And then the other problem is that you're teaching skills and knowledge when you probably need to teach learning: how to learn, how to be creative. If you look at any test that your kid is taking, and you can imagine a computer being able to pass that test, why is that kid taking that test? I mean, there are definitely things they need to learn, but what you wanna do is start to figure out what a computer won't be able to do and teach that to the kid. And it's usually project-based, peer-based, creativity and so on. And I totally agree about less skilled people being able to do more skilled jobs, but that will displace people. So you will have accountants now fully capable of doing a lot of the things that we currently ask lawyers to do, or you'll find the pharmacist completely capable of doing what traditionally would be your general practitioner's work. Then you have these incumbents, right? They are licensed, they have spent all their money getting these fancy degrees, right? So for these developed countries, you've got the jobs of licensed practitioners who will be displaced. So in fact, I feel like it's gonna be a lot of the white collar work that's gonna get displaced, at least by the AI part. There are obviously the robots too. And then the newly emerging middle class or lower middle class who are gonna be empowered are gonna hit this glass ceiling that's gonna feel artificial, right? Because they're empowered. And this, I think, is going to be easier to fiddle with in places that are emerging, like India and Africa. In the U.S., I think we'll start looking at them; already, when we do development work, we look at them with envy, because you can just sort of build a new school and say, we're gonna have it all project-based. Because even with class delineation, one of the biggest problems is that every kid is different. We talk about precision medicine; think precision education. Every kid is different.
Some kids learn through projects, some love textbooks, but we're all cookie cutter. We've been developing an educational system that's trying to create factory workers, who are like robots and soldiers, people who are obedient and disciplined. That no longer fits this context. Ron, tell us about your experience in dealing with the doctors' constituencies you were engaging with your platform. Yeah, so I think what I'm most excited about with AI is helping these doctors practice at the top of their trade. So they don't need to deal with the day-to-day annoying things that they just repeat all the time and actually don't like doing, and they can focus on the things that are more complex, while the machine helps them with the things they actually don't want to deal with. So you're taking people who have these fancy degrees and actually putting them to work where they are trained, rather than having them spend 80% of their time doing things the machine can do; you free them up to do things that are a lot more important. And just to your point, allowing other professionals to take some of these easier tasks moves the entire stack up to the top of the trade. So that's one point. The other point that excites me a lot is the whole notion, particularly in my field, of what happens next with AI in creating new jobs that don't exist today. So in our field, just to give a pragmatic example, think about a world in which we have sensors, we have wearables, we have data that comes from the individual from multiple perspectives, comes to one place, and enables us not only to take a few symptoms and put them together in the context of a personal health record, but to add more signals from many other places to understand the individual even better. But here's the interesting part. We're moving from reactive medicine to proactive medicine.
Here's where AI comes to work in a really exciting place: instead of doctors being whom patients go to when they're feeling symptoms, all of a sudden we have this enormous opportunity to start seeing the signals before the symptoms even occur and then send people to the right level of care. It may be a doctor, it may be a nurse practitioner, it may be a population manager who looks at some of these signals and starts helping you before you even have a manifestation of the symptom. This is a new ecosystem of jobs, not only at the doctor level, but even at the very basic level: people who watch the signals that come from the machine, the machine gives them some direction, and they know how to triage people to the right level of care. That's a lot of new jobs. That's really exciting from an outcome perspective, but also from a job creation perspective. So let's think beyond what happens today and where we can take it, and then train people for this particular opportunity, rather than just thinking about what's happening today. Makes sense. Satya and I talked earlier about a dynamic. There was a report that came out yesterday that said there are eight people who have as much wealth as half of the planet. And if you look at it, arguably two thirds to three quarters of them made their wealth in some form of technology or technology platform. And if we think about the arc of the narrative, how long it took for that to occur, AI is gonna accelerate that and can accelerate that income and wealth disparity. Let's talk about how we need to direct AI towards managing that disparity, and frankly, giving the opportunity to those who don't have experience with these productivity tools to actually improve their station in life and become part of this new fourth industrial revolution. Because I do believe, with you, Ginni, that's the topic of our times. Those are the problems we need to solve here. So I'm curious to hear what you have to say about that, Satya.
Yeah, I mean, a lot of important points have been made. Let me just try and frame this as follows. One of the things that I at least am grounded in is that overall world GDP growth is not stellar. It's not like we actually have great economic growth today. So we actually need technological breakthroughs. We need AI. Now you bring up the issue, which is: the surplus that's going to get created because of breakthroughs in AI, breakthroughs that may help us solve the hard problems of drug discovery or climate change or education, is it only going to the few, or is it going to be more inclusive growth? That is a very pressing challenge. And by the way, this challenge has come before; even in the industrial revolution, we had this. You talked about the new social contract. The last time we were faced with this challenge, in fact, some of what we take for granted as the social contract today, the safety net, at least in the developed world, the labor movement, all came about because of the social contract being broken in the industrial revolution. And we had systems, at least, which brought us back to an equilibrium. I would say that's what's needed, and that's probably one of the pressing things. So now, in that context, I think Ginni went to the place where we all need to go: what is our responsibility, especially as industry? The first thing is, I think we should do our very best work in helping train people for the jobs of the future. None of us can sit here and predict exactly all those jobs, but that entire lump-of-labor fallacy, as economists call it, will be disproven: there will be new jobs; it's not a fixed amount of labor that's needed. But the question is, how do we know what the skills are? And this is where I think we need some new breakthroughs. In fact, I'm inspired by what's happened in Switzerland and Germany, where they've had these apprenticeship programs.
Even if you take Germany after reunification, how they've managed to grow, create equity, and retrain the population has been pretty phenomenal. So I think we can all learn from these examples. And in our case, for example, one of the things that excites me about LinkedIn is to create that economic graph: what are the jobs, a real-time feedback loop between skills, jobs, and people, so that the economic opportunity for every individual can be maximized. Right, the emergence of that new ecosystem in that context. Now, I'm gonna take one question. So whoever wants to ask a question, get ready for it. But before we get to that, I've got one question for you folks. We have spent a lot of time talking about how we're gonna get machines to think like man so they can learn from each other. What is the dynamic when we actually understand the genomic sequence to the point where we can change it and have man think more like machines? What's the danger? How do we stop that from occurring? Should we stop it from occurring? Those are the questions that ultimately get posed to us. You all have perspectives on that. So, you wanna start? You can start. Oh yeah, I can start anyway. I think we've already started down that path by treating companies as legal entities. I mean, a company is sort of that already. And I'm very much with you: I think we have a collective intelligence. Companies are a collection of intelligence beyond the CEO. A company has legal representation, it pays tax. We kind of know how to deal with it. Some of them are a bit out of control, and I'm worried that some are more powerful than governments, but we know how to think about it. And I think machines will start to become either parts of corporations or built into the collective intelligence. They may augment humans. They may augment corporations. But I think the corporation is really a kind of AI already.
And so how do we think about the corporation becoming even more non-transparent? Because we see people now calling on AIs to help with decisions in boardrooms, to the point where someday boardrooms may be rubber-stamping decisions by AI. So I think that's really a way to think about it. I think the word you'll always remember is augment, and it is not one-way. So I think that is the key point of this: it is this collection of knowledge that will go back and forth. But I would end on this: when I weigh the pros and the cons, what does have to change versus the benefits, I do believe thoroughly that we will solve more of the real issues in this world, about skills, about hunger, about healthcare, and that the benefit will outweigh the downside. And it is our job to safeguard that that downside does not happen. The one thing I'd say is, I think a lot about what is going to be scarce in a world where, let's say, there's an abundance of AI. I think it's common sense and empathy. And to some degree, one of the things that we should probably emphasize is those very human qualities that we are innately born with: how do we emphasize them much more significantly in everything we do? In fact, it might bring humanity to its very best in a world where there is an abundance of AI. So it's back to smiling, right? We started out talking about smiling. So yeah, I think about it more as a partnership. We think about it as either we have humans making decisions or now we're going to have machines making decisions, and it's either/or. But to Ginni's point, I think there's something about the partnership between humans and machines together, not just in a cerebral, utilitarian way, but in an empathetic way, right? It's not the cold machines just doing the work over there and the human beings over here; it's about how you can make it more harmonious.
We talked about engineers and how they think about the world and everything. I think we need to think about it in a very different way. I think we need to think about it as a partnership, as something that we can do together. How do we train the machines to be empathetic? How do we create more conversations between humans and machines? We're trying to work now on voice-activated interfaces and others that will actually make the machine and the human more connected, because we're not gonna type the information into the machine; we're actually gonna converse with the machine, which I think creates empathy, and then it creates the partnership that creates the value on an ongoing basis. Okay, I'm told I have no time for questions. So with that, what I'm gonna ask of you all: please just give me two or three things you'd want to sum up and take away from this session and from AI. Ginni, let's start with you. This is an era. It will be the competitive advantage for companies, it will solve the unsolvable problems, and it will be a partnership between man and technology. Great. We've talked a lot about what engineers need to do, but just like the internet wasn't something you could leave to your engineering department, all of you need to understand AI beyond just what you heard here. You have to kind of intuitively understand it so that you can help shape it too. Because for it to be a partnership, the other side has to come halfway. And so I think this is the year where AI is no longer just a computer science problem. I think we're in the early innings of these things, right? So we're learning; things will evolve. We're really, really at the very beginning of it. And I think this is an opportunity to come together.
I love this panel because it brings multiple perspectives to what we need to do. We need to understand the technology, the ethics, the security, the privacy, all the very important things that come together, and we need to come together as a community to educate each other and create the infrastructure, because this will change the world in a very significant way. I would say that it's our responsibility to have AI augment human ingenuity and augment human opportunity. I think that's the opportunity in front of us, and that's what we've got to go to work on. Look, I think you all are right, but I have to say one thing. I think we need to direct our best tool, AI, towards education, to manage the disparities across our planet in their different dimensions. So thank you all very much. I really appreciate the time, and hopefully you all have enjoyed it. Thank you.