I'm joined by just an incredible panel. I didn't introduce myself. I'm Ina Fried, Chief Technology Correspondent at Axios. What I thought to do, instead of wasting any of our precious time on introductions, is have everyone talk about: when we talk about AI and jobs, how do you come at it? What are the first few things that come to mind for you? That can serve as both an introduction and a starting point for our discussion. So I'd love to start to my left, Lauren.

Thanks. I'm Lauren Woodman. I'm CEO of DataKind, and DataKind is a nonprofit that develops data science and AI for use in a wide variety of social impact sectors. I look at it from two different angles. One, frankly, having spent a long time working in nonprofits and education and policy and humanitarian response, I worry about the disruption that is potentially coming and whether or not we are training folks to be successful in the next generation. I have no fears about what's going to happen; technology progresses, and technology can be used for good. I'm an optimist when it comes to those types of things. But the transition periods I always worry about. Are we preparing government, society, community for that? And are we prepared to help support people through that transition? The other thing I think about a lot is, in the sector in which I work, are we actually helping the organizations that make communities thrive? Are we helping them think about how their work is going to be disrupted and how we deploy these tools against those problems? Because they could be incredibly powerful and incredibly impactful in helping us make progress.

And Mihir, your company kind of bridges two of these areas. When we first started talking about AI, sorry, automation, we thought it was everyone else's jobs it was coming for. All the folks in this room, we were like, automation's wonderful.
It's going to make my company better. And then all of a sudden we realized, oh, AI is coming for our jobs too, in one sense. You're kind of leading the charge on that front. Talk about what you're doing, what your company does, and some of the things that AI is already doing.

Thank you. Glad to be here. Mihir Shukla, CEO and founder, Automation Anywhere. It's an AI and robotic process automation company. We have 5,000 customers in over 90 countries. We today run about 100 million processes with AI, and that is growing double digits every month. So if you think AI is coming, it is already here.

So what kind of things do we do that weren't possible before? Maybe that would be a good way to level set. If you think about the world, there are about a billion knowledge workers. These are the people who are sitting in front of computers, either processing mortgage applications or healthcare claims or supply chain requests. All of us who sit, consider the amount of time we spend in front of a computer doing our jobs. The way many of us do that job is we get data from emails and everywhere else, and then we have about 18,000 different applications. We input stuff and make some decisions along the way. All of this was not possible to automate, because it's just a huge universe of 18,000 applications and too many variations. What has changed in the last few years is that, with the help of AI and robotic process automation, software bots are now able to operate all of those applications. They look at an application, its forms and fields, and they can execute a process end to end. So a mortgage application that used to take 30 days: now you feed in all the data, the bot takes the data, makes decisions, types it into all the applications, and in three minutes you're done. So if you step back, anywhere from 15 to 70% of all the work that we do in front of a computer could now be automated.
It is truly a watershed moment that's happening. Now, the good part is that I don't know anybody who wakes up and says, my mission is to process invoices. So when I meet the people who are using this, they are delighted. It is a human-bot partnership, where I offload some work to the bot and then I do what I do. So I'm very optimistic about the future. All transitions are challenging, and the key in this transition is reskilling.

One quick follow-up. Are your customers mostly reducing the number of workers they have doing those things? Or are they finding new work for those same workers to do?

That's a great question. What is happening is, when you process a mortgage application in four minutes instead of 30 days, you end up processing a lot more of them and you gain market share. So this is about doing more. The other thing I would point out is that out of the 100 million processes that we run, we estimate about 20% are things that we never did before, because they were either not technologically possible or not economically viable. In another three to four years, I estimate that will be 40%. That means 40% are new things: new products, new services, better quality of service, better quality of life. We forget that technology is not just about doing existing things better, cheaper, faster. It's about doing things that we never did before.

Great. Eric, I know your work at Stanford is all about human-centered AI and what the human role is. You and your colleagues are doing a ton of work. What should we know about this topic?

Sure. Well, first, I agree very much with what Mihir and Lauren were saying, but let me just congratulate all of you.
It's nine o'clock in the morning on a Friday, the last day of Davos, and it's standing room only here, which underscores what you said at the beginning about how important this topic is. It reminds me of a similar session a few years ago when we looked at deep learning. Marc Benioff had one of his amazing parties and I left a little after midnight, I think. He was still there, and then he was on the panel with me, and I turned to him and said, well, so what time did you finally leave? It was an early morning panel, and he said, without missing a beat, oh, I just came right over, I didn't go to bed. So I don't know how many of you did the same thing. I hope you at least changed your shirt.

But this is, as Ina was saying, a similar inflection point in terms of the power. The deep learning revolution really set off all this interest in AI over the past decade or so, and now I believe we have a similarly important set of changes with generative AI and with what we call foundation models at Stanford. Andy McAfee and I have been talking about this exponential improvement in the technology, and how our labor, our institutions, our skills, our organizations aren't keeping up, so there's a growing gap. I'm not sure exponential is the right metaphor anymore. I'm beginning to think it's more like punctuated equilibrium, because this is a burst forward of capabilities, and they build on top of the earlier ones. It's not like the earlier ones have gone away. There's always a wave of concern and fear about job loss and whether or not there's gonna be mass unemployment. In fact, unemployment is at a record low right now, so it's not really eliminating tens of millions of jobs or anything. What it's doing is affecting job quality and changing the way that we do the work.
One of the things that we're looking at at Stanford, as Ina mentioned, is keeping humans in the loop, and how that can be done in a way that makes the work more fulfilling, and maybe gets rid of some of the boring routine work of filling out invoices or whatever, so people can focus on some of the more interesting human-centered parts and connecting. That doesn't always happen, and I think there's an opportunity to use the technology in lots of different ways. One of the things that will be interesting to see over the next decade is to what extent we do keep humans in the loop and work on creating higher job quality, and not simply look at doing more of the same more cheaply or driving down wages. Either path is possible, but I think the former is the path most of us would like to see going forward for the next decade.

So the implication of what you said is it's blue collar, right? Now what about white collar? Because the real issue, I think, is how far is this gonna go? How far up the value chain?

Well, and that's why I'm so glad you're on the panel, Martin, because your industry is actually the one that I think is gonna be changed the most dramatically, or one of the ones that's gonna change first.

I wouldn't single it out. You're gonna spread a lot of fear out there by saying what you just...

Well, let me tell you some of what I know will be possible, and then I wanna hear how you're already using it. So I know today you're gonna be able to use the combination of the image and video generators and the text generators to take a story idea, a commercial idea. Say imagine you're doing a commercial for BMW: a BMW is driving through the snowy streets of Davos when a deer comes out, the car brakes and sends an email to blah, blah, blah. Anyway, not only will it be able to write that, it'll be able to make you a storyboard.

You're a copywriter now. I don't think I'm the best.
No, but I think, just... So we're a digital disruptor, I guess. We started four years ago. We have 9,000 people in 32 countries. Very tech focused; about 50% of our revenues come from tech. We're always looking for disruptive technologies and disruptive ways of doing things. And just to come back to what you said, the conventional wisdom is it will affect copywriting, art, the creative processes. We had an example, I think about two years ago, actually out of Russia, where there was a design style that became very, very popular. People thought it came out of a small design agency, a human. It turned out to be a bot. A famous example.

But I think the most interesting area, to pick up on that question of white collar versus blue collar, is that the biggest impact is not just gonna be on the creative side of the business; it will be in the data and analytics and digital media side of the business. It's an $800 billion media industry, of which digital is currently two thirds, projected to go to three quarters by 2025. So media planning and buying, which is a set of processes that are algorithmically driven, or lend themselves to algorithmic analysis, is an area which is still very human driven. In fact, if you went back a few years, when the platforms like Google and Meta and Amazon developed, the conventional wisdom, when I was running WPP, was that we would be disintermediated by those platforms. It didn't happen, and the reason why was that Google's business was not in employing people, Meta's business was not in employing people, and they didn't want to go into a service business, because they were very labor efficient and capital intensive. That's the big change. Now, you will be able to automate the media planning and buying process in a highly effective way.
It's become more digital, which lends itself to it as well, and there are more permutations. This morning you hear about Netflix, you've got Disney coming in, you've got Apple and Microsoft. It's gonna be a very highly competitive area from the technology side as well. You've already got Microsoft in there. You've got Google, for example, here, talking about how their models are now even more sophisticated than what you've heard in the context of OpenAI. So my prediction would be that the media planning and buying business, which is the guts of the so-called holding companies' profitability base, will be disintermediated very significantly. It may take about five years; the people on the panel will know much better than I do how long this will take. It may take five to seven years, but it's gonna revolutionize things, so you will not be dependent as a client on a 25-year-old media planner or buyer who has limited experience, but you'll be able to pull the data. That's the big change, and that, I think, is the real issue that everybody is focused on and worried about. Because we might be displacing, you know, mortgage applications, or speeding up the process and creating better intellectual capacity for workers as a result, but the real issue is, how far up does it go? Does it affect people at senior levels in agencies? And I think the answer is gonna be: it will.

Yeah, no, I definitely agree. Eric, I'm curious, what are the things we should be watching out for where the AI may be good, but not sufficient? There are two areas that come to mind. One is the canonical example in this, the idea of a radiologist: the computer's really good at spotting certain kinds of things, but a human with experience is really good too, so really the best combination is an experienced radiologist plus AI.
And that sounds really happy, except where are we gonna get the experienced radiologists in a few years, when there are so many fewer people with a career of experience? So one issue I'm particularly concerned about is that AI may be as good as humans at certain tasks at a certain point, but you still need humans with lots of experience. And then the other that I'd throw out... well, I'll throw out that one, and then I have another one.

Sure, well, that's a great example, and I think there's a new division of labor that's emerging. The radiology example is a classic one. Geoff Hinton, about five years ago, said we should stop training radiologists, AI can do that job better. There are more radiologists now than there were then. Geoff Hinton, brilliant guy, invented much of the deep learning technology that we're all using, but he was very wrong about what's happening with the labor market. And a big part of it is exactly what you're saying: there are still important parts of that job that humans can do better. Now, it's not so much reading the images, although there's still some value to be added there. It's that if you look at the actual job of radiologists, there are 26 distinct tasks that a radiologist needs to do. Reading images is one of them, a super important one, but they also consult with patients. I don't think you'd want a robot to tell you whether or not you have cancer at the end of the diagnosis. They coordinate care with other physicians. They sometimes administer conscious sedation. That's not something I'd want a robot doing to me. So those other tasks are things the human needs to stay in the loop for. And we looked at 950 occupations. We did not find a single one where machine learning ran the table and could do all of the tasks. In each case, there were parts of the occupation that humans needed to do. So what does that mean?
It doesn't mean that machines are just gonna mass replace whole occupations at a time. Instead, there's a harder but more interesting task ahead of us, which is restructuring work and redesigning it. This is the great restructuring of work: dividing things up and saying, okay, the machine can now do parts of it, the invoices or reading this part of the image, and the human needs to do other parts, and we need to reconfigure it. That's a job for CEOs. It's a job for HR managers. It's a job for a lot of other people. And that has historically taken years, if not decades, to play through. When you look at electricity being introduced, an amazing technology, it took about 30 years before you saw significant productivity benefits. Not because electricity was a fad or unimportant, but because it required a reinvention of the factory and a reinvention of the office to take full advantage of it. We're in a similar period now. Now, one of the things that's striking and different is that the set of tasks we analyzed with radiologists just in 2017, which wasn't that long ago, is now a whole new set of tasks. Because of what I mentioned earlier, generative AI and foundation models are affecting a lot of creative work that I used to put at the end of the line. I was talking to a CEO recently, and they were using it to help come up with the KPIs for their next board meeting. I was on a panel, I think it was yesterday, it's all kind of blurring together, and the CEO of Vimeo was telling us that she was using it to write a press release when she had a little bit of brain block, and she said it was as good as or better than what her team had been doing. In each case, though, as you're suggesting, I would not advise just blindly turning it on and walking away. These systems have too many flaws. They don't understand truth very well. They can hallucinate facts that aren't really there.

Are they gonna improve over time?
They are going to improve dramatically over time, and I think there's gonna be a constant evolution of them. Certainly right now, it would be just downright dangerous to use them without having a human in the loop. But I think even going forward, we're going to develop a new job, you touched on this: the job of prompt engineering. How many have heard of the term prompt engineering? A few people. You're all gonna be hearing about it soon. Prompt engineering is the idea that when you work with one of these large language models, you can write different kinds of queries, and it turns out that depending on how you write the query, you get dramatically different results. Even the inventors of these technologies are surprised at some of the things you can get them to do if you ask the questions the right way. And that really is the piece of human creativity. It seems small, but it's significant. For some of your creative types, Martin, I do think they're gonna be using these technologies far more broadly on the creative side. First of all, it's great for ideation. I've been talking to AI artists, and what they love about it is they can say, do this, and they get 100 examples. I think your creative types will love that.

But listen to what Eric said. At one stage he said, well, this is for CEOs or heads of HR. So you have this upward drift, and you have this concentration in the top layers, who will be protected or will not be as affected.

Oh, ChatGPT is a great CEO.

Well, this is the really interesting area. How far does it go? As we see artificial general intelligence developing over time, not AI itself but as it becomes even more sophisticated, the real question that we're all nervous about, or all focused on, is how much of a replacement is it? Now, you said we're at full employment now. That may not be the case over the coming years.
And then I know, Lauren... I'll be brief. I'm fascinated by the idea of artificial general intelligence. I also think it's quite far away, even though we can mimic it. LaMDA, for instance: some people thought there was AGI inside the current models. I don't think there is. I think most people don't. It's still a ways off. What we do have is human-level or superhuman-level performance in certain specific categories and tasks. And there's a big agenda for researchers like me at the Stanford Digital Economy Lab, and for CEOs and executives, to understand where the strengths are, where the humans have a comparative advantage, and then to sort out that new division of labor.

And?

And keeping them working together. Now, there's an instinct that I often have, that a lot of people have, that when a machine gets very good at something, it's going to substitute for labor. It can. And sometimes that's valuable, as with the invoices. But more often throughout history, the technology has been a complement and has allowed people to do more and new things. And one way you can tell whether something's a substitute or a complement: if it's a substitute, it tends to drive down wages; it replaces what humans are doing. If it's a complement, it tends to drive up wages. Over the past couple hundred years, wages have risen about fiftyfold. So most of the tools we've invented have, by and large, been complements.

I would just say, though, income inequality has radically grown.

Exactly. Most of the benefits have not gone to the average worker. The past 10 or 20 years have been a little different, and we've seen that there's a whole set of people the technology has been substituting for, as opposed to complementing. I think that's the grand challenge of the next decade: how can we use these tools as complements rather than substitutes?

I want Lauren to weigh in on some of the equity pieces too.
Yeah, so there are a couple of things coming out here where little red flags in my nonprofit brain go off, and I'm gonna try and weave a couple of thoughts together. The process automation piece, right? That is, in many cases, a substitute, and it makes things easier. I worry that there is a commercial pressure to jump to those things very quickly, and, to your point, that exacerbates the inequality that already exists in the systems. You talked about mortgage processing. We did some work around financial inclusion. Only 45% of Americans have the documentation ready to go apply for a mortgage, but 82% are actually creditworthy. So in a place where you're automating those systems without human intervention, and I realize we might not be there yet, but without that human intervention, we don't create those opportunities for people to get in, and the inequality may grow. Eric, you made a point about radiologists; it's a classic example. I don't want a computer telling me that my scan was difficult, but I also don't know that I need a doctor to do it. And we were talking about this beforehand: AI is really bad at telling you you have cancer, but many doctors are pretty bad at it too.

Oh, I had that experience. So we were talking, and I wonder if there isn't a new job title called Empathist, and I'm gonna trademark that. And in fairness, ChatGPT apologized to me the other day.

Already better than the doctor.

Yes, I'm not saying that it's developing empathy, but the coordination of care can be automated. I think there are new kinds of jobs, and emotional intelligence will increasingly come to the forefront, something that humans depend on other humans for.
Lauren, if I take the example of mortgage processing: actually, one of the reasons why we could only offer a loan to a limited set of people is that that was the only segment commercially viable to serve. Now with the help of AI, and one of our customers actually did this, you can expand your market to double the size, because with AI you can process many other data points, and they found out those applicants were creditworthy all along. It just wasn't economically viable before. So this is a perfect example where it can create a more equitable society, if you use the tools right.

And in the best case scenario, the mortgage processors are spending less time on the paperwork and more time working with the people, though I'm not convinced it's gonna be the best case scenario. But they also have another interesting pressure, I imagine, and correct me if I'm wrong here: suddenly, because they are so much more efficient, they need more mortgages to process, so they need a larger market. Maybe you can talk really quickly, and then we are gonna go to the audience for questions. Can you talk about some of the surprises? What's a job where you were surprised you were able to do as much as you were with the bots, and what's one where you thought the bots would be great but they weren't so good?

We saw bots enter into practically every industry, but a couple of areas surprised me. I'll take a during-COVID example. The NHS in the UK called us and said, we are getting killed here, working nonstop. So within 48 hours we created a bot that would monitor oxygen levels on all the instruments, and it saved two hours of the nurses' time every day. At that time, I like to think it made a difference between life and death. When we designed this technology, we never thought we would be monitoring oxygen levels.
The other thing we saw during COVID was that a lot of our customers had supply chain problems. One intern wrote a bot for one of our customers that monitored inventory levels across 32 warehouses and 325,000 items, and moved inventory between warehouses. Before then, it was just easier to order more; now that was not an option. It saved 200 million in inventory cost. These are examples of sheer wastage that we are able to save. This doesn't change anybody's job; it's just more money to go around.

Anything you're surprised that the bots haven't been very good at?

Those are the human jobs, right? Being able to connect the dots, empathy, care, nurture. I don't know of an AI system that knows what the right questions are, and that's a human job, right? What are the relevant questions?

One thing before we go to the audience, just to tease out what Eric was saying. What are the areas that worry you? I mean, you're doing very advanced research in these areas. What do you get worried about?

One of the things I think we've all begun to be worried about is the ability of these tools to generate information and disinformation at scale. As an economist, I can tell you that if you set the price of generating disinformation to zero, the quantity tends to go to a very large number. So we're all going to be flooded shortly with enormous amounts of incoming tweets, posts, text messages, press releases, et cetera, that are bot-generated. It's going to be a flood of sometimes very interesting information, sometimes completely made-up false information, and we have to find a way to navigate that. There are going to be bad actors that do it. There are gonna be people who aren't really bad actors, but just can't resist the temptation. And we're gonna have to come up with some control mechanisms to sort that out going forward very shortly.
And arguably one of the most important things in a society is the flow of information: getting truth to the right people and avoiding people getting polarized. By the way, it's not just disinformation, it's also polarizing information. We're already seeing some of that happening.

And is it fair to say that the Chinese have leadership in this area? In our experience they do. Do you agree with that?

Well, the Chinese are very strong in AI in lots of these things. Many of the fundamental breakthroughs are actually made in the United States and then perfected with very large data sets in China and other countries. And this is gonna get democratized. Right now it's in the hands of a very small number of companies, a few of them in the U.S., some in China, but this technology is getting rolled out.

And one of the things that I worry about, and then I promise we're gonna go to questions, is the idea that one company may do this very responsibly, but somebody else is gonna do it anyway. Stable Diffusion has been put out as an API, open source; anyone can use it. So some of the fairness things...

My take, if I take it to a higher level, is that I think we will have to figure out a way to use this technology, and here is why. The world operates on growth; all institutions operate on growth. In the last 60 years, we got growth from two sources: productivity increases and population growth. Yes? Everybody here at the WEF has been talking about how there is no more population growth, not for the next 20 years. So there is no population growth. That means in the next 20 years, we have to double productivity. Now let me put this as a question. In the last 60 years, we had the internet, computers, and mobile devices to increase productivity growth. Does anybody have an idea how to double productivity again in the next 20 years? If not, there is just not enough to go around in society: our social security, healthcare.
None of that works without growth. And we also have some problems that we can't wait on, like sustainability. There is a direct connection here. Technology is one piece of it; you could have new materials, new energy, but technology is one piece of driving productivity growth, and that growth has to provide enough to society so that there are no riots on the street. For me, there is a clear connection. Otherwise, this doesn't work.

All right. This crowd has been generous enough to get up early on the last day of the conference, so I really do wanna open it up. If for some reason you don't have questions, I'll be happy to ask. Just a reminder: I asked ChatGPT what a question is, and it's a short statement where you're asking something of the panel, followed by them answering it. So, anyone want to be the first? If you just raise your hand, I believe they'll come around with mics. If I'm not mistaken, someone right in front, and then a woman far right up here.

Thank you kindly for your talk. You all talked quite extensively on how AI is able to aid us to effectively maximize our potential. My question is, where can we work together with AI, especially in white collar jobs, to increase that potential further?

I can take that. I think for 95% of all current jobs, the future is human beings and AI-powered bots working side by side, where I'm offloading some work to my digital coworker. Continuing the example: here is my mortgage application, you process it while I do something else. When the answer comes back, here is the next set of work; you do it while I do something else. Just like we all work with a computer, 95% of us will work with our digital coworkers. That's just the future of work.

I think, did you have your hand up, Camilla? Yeah, he's coming around with a microphone. And if you can say who you are and where you're from.

Camilla Cavendish from the Financial Times.
One of my side hustles is advising a med tech company. So I'm aware that with radiography, which you mentioned, there's a stage at the beginning where the radiographer and the surgeon are really interested in the AI, and they keep an eye on it and check it. And then there's a second stage where they just get complacent and assume that it's always right. So I just wanted to come back to Eric's point about keeping humans in the loop. How do we stop people becoming demoralized? Because there's a danger that we lean too heavily on the technology. Even if you're finding clever ways to try and involve people, how do we actually stop people just becoming demoralized? Autonomous driving is a perfect example of this. If the computer can do everything, that's great. But if the computer can only do 95% of things in driving, that's really dangerous. And there are other professions where that's the case.

Eric, any thoughts?

Well, mainly I just agree that this is a pernicious problem. I was actually gonna use the driving example, because of one of the problems Google had: they had a safety driver to watch the system, and because it was right like 99.9% of the time, the driver would start to fall asleep and stop paying attention. So then they added a second safety driver to watch the first safety driver. This is not the path towards driverless cars, when you have people watching each other. And I don't know if we wanna have a second radiologist watch the first radiologist. So that obviously is not scalable. The driverless car took two drivers.

Genevieve?

I was gonna say, I recently went on a self-driving car drive of nine hours. I had a companion with me. It actually worked out fantastic, because the way I saw it, there were two levels of redundancy, so it was safer than me driving alone. And I was able to pay more attention to the conversation, to the music that was playing.
The quality of conversation was better as a result of it. Now if I was alone, maybe some of the concerns are valid. But if you don't have anybody next to you to talk to, that's a different problem. But otherwise the human interactions were better and it was delightful. Well, if I could just add an epilogue to that briefly. There's another approach. I was talking to Gill Pratt yesterday, head of Toyota Research. They're kind of flipping it around. They use the autonomous system as a guardian angel. They keep the human at the front making the decisions, and then it's the job of the system to watch: if the human's about to crash, it intervenes. And that approach can often work better. Similarly, Cresta is a company that does call centers. And a lot of us have been so annoyed when we call a call center and we're interacting with a bot, because they just aren't good enough; there's a long tail of problems they can't do. At Cresta, again, they keep the human at the forefront, but the AI gives some hints and suggestions: hey, don't forget to mention this other product, or you haven't talked about pricing yet. We did a set of research with them, an A/B test where we compared the human working on their own, the bot on its own, and the human and machine together. When the human and machine were together, using the system where the human was in the front, it did dramatically better in terms of productivity and customer satisfaction. And interestingly, it closed a lot of the wage and skill gap as well. The workers who benefited the most were actually the less experienced, less educated workers. They got the biggest boost in their productivity. Genevieve, I know you have a question. So, Genevieve Bell, Australian National University. All of you have talked in some way about a notion of collaboration or relationships between computational objects and humans. We've talked about humans in the loop. We've talked about collaborative things.
Eric, you've got a notion of job tasking and pieces of the task. I guess my question is, if we think of that loop as being a bit like a supply chain at some level, one of the things we have learnt over the last three years is that most loops are actually incredibly fragile. So what are the skills we are going to need to give humans so that they can actually be collaborative in these collaborative relationships? It strikes me that this is a very different skilling conversation than we've had in the past. So what are the pieces that we need to give humans to be able to take advantage of this? It's a great question. Prompt engineering is one, asking the right question, but what other skills do we need? Because the best workers are going to be the ones that are able to harness this technology. Lauren? Yeah, I think prompt engineering has had a lot of conversation here this week. I think some of the things that humans do well, around critical thinking and analysis and synthesis, those are the pieces, I think, where we are going to have to have humans to evaluate what comes out. You know, like everyone else in the world, I went and played with ChatGPT, and I said, make me some suggestions on how to respond, and this is terrible, I need prompt engineering, but you know, tell me how to respond to the next Ebola virus outbreak. And what was really fascinating is, some of the suggestions weren't that bad. They were very consistent with what we did back in 2015, but what they missed was the connection to community resources, how people felt about those types of things. Anyone with some experience working in that kind of environment, and I realize that's an extreme environment, would say, how do we engage local community leaders, religious leaders, all of those types of things.
Those are the things where you have to think about, in this scenario, the realities on the ground plus the tactical actions to take; those are the places, I think, where we're really going to have to think about what the reskilling looks like. In the past, a lot of times our reskilling has been, you know, take people who used to be coal mine workers and teach them how to be X-ray techs, radiography technicians. Those are wholly different things. Now I think it's a question of, and we've talked about lifelong learning for years, but what does that look like? And are we actually doing that as part of what people are doing in their jobs, because the technology is changing so quickly? Are there other skills? Yeah. I think we're underestimating this; I think there's going to be an enormous amount of fear that flows from this. So when you talk about skills within organizations, we're encouraging people, obviously, to be much more agile in a high technology environment, but the human resources area is going to be really taxed by this, because we're already starting to hear the worries that people have, and that's going to be an area of great concern. Talking about fear and reskilling: throughout history there have been many people with a doom-and-gloom view of technology, when computers came and the internet came. I think perhaps the mistake everybody's making is assuming that the only thing human beings are capable of doing is what we do today. It's a huge mistake. They have all been wrong for the last 2,000 years. I think if you're going to short humans and their ability, you're going to lose. That's where my bet is. But we have to tackle it nevertheless; that's where I stand. I think reskilling is the most important part in this, and especially the 3 billion people who are not connected to the internet and don't have access to the digital economy.
I think we can't close the session without talking about that. I'll share our experience. Last year we did 2.5 million training courses, and some of them were with women in Africa, in the Mississippi Delta, some of the poorest parts of the US, and in India and in various parts of the world. And what we saw, it shouldn't surprise us that humans are amazing, but it still, in a good way, surprises us. In three months, about 85% of them went from flipping burgers to $150K jobs in AI and automation. This is what human beings are capable of. So it's about time we... Look, talent is evenly distributed; opportunity is not. The role of every technology is to make that possible. The other thing we haven't talked about, and we sort of got into it a little bit when we talked about China and the US, is the geographical differences, because AI is going to drive them. There are going to be countries where there are surpluses and countries where there are deficits, and you're going to have a huge difference in terms of development between various countries. I mean, the obvious ones are China and the US, Russia, India. We're going to see very significant differences in the skilling and progress of AI, I think, between countries and between regions, and it's going to increase the divide, the digital divide. And I think education here is going to be a huge thing: which countries' public education systems really adapt to this opportunity. You know, we had New York City schools say we're going to ban ChatGPT from their networks, which to me is the most ostrich-like thing one could possibly imagine. It's like, you know, we didn't ban calculators and stop teaching math. We said, okay, we're going to have this tool. Whereas I've heard from a couple of professors how they're harnessing it. Two great examples I heard this week. One professor says, here's the topic.
The first thing I want you to do is run it through ChatGPT, and we'll start from there, and then we're going to improve your essay. Another one said, run this question through ChatGPT, and your assignment is to tell me what ChatGPT got wrong. Talk about education. I had a similar experience the other day. I have an 11-year-old who is a nature lover, and we have our discussions at the dining table. And I told her that you can't tell me anything that I can get out of Alexa, Google, or ChatGPT. And then the conversation becomes very interesting. And she said, Dad, you have to ask me a different question. And I realized I have to step up. And I said, okay, how do you fix the ecological crisis in Patagonia? And the conversation was very interesting. Now we are all learning, but I encourage you, on various occasions, to start having a conversation that you can't get from any of them, and see how it goes. I mean, I think that really is, it will make us all more human. That really is the jobs thing in a nutshell: yes, some jobs go away, but that's what we need to do. We have an ecological crisis that we can't solve with our existing technology. Hopefully we and the bots together can solve it. Eric? Yeah, I couldn't agree more with your approach towards using ChatGPT, not trying to hide from it but embracing it and doing more and better writing and art than we've ever done before. It should be a time of flourishing. Also, I want to underscore what Genevieve brought up earlier about humans. One of our natural strengths is flexibility, and we also talked about emotional intelligence, and machines don't have that breadth of knowledge, and so we're going to start relying on that more and more. But finally, I think that you touched on this just in your last point there, that the humans and machines together can help solve that problem of what sorts of new skills are going to be needed.
One of the things we're doing at Stanford, we have a project called Work2Vec, where we take all the jobs in the economy and we convert them into vectors. And the way we do that is we take hundreds of millions of job postings and resumes. They each have embedded in them some of the activities and skills that people are looking for, and the wages, and you can actually project them into a space where the different jobs are mapped to different points in this imaginary space I have in front of me. And you can see which ones are similar to each other, which ones are far apart from each other, and what skills are needed to get you from this point to that point. Which ones have skill adjacencies? And that becomes a roadmap for companies as they are hiring, as they are reskilling, as they are deciding what new things they want to train their workforce on, and where the gaps are, where the surpluses are. But this is something that used to be done just by gut feel. Human capital is a $220 trillion asset in the United States, and it's bigger than all the other assets put together: gold, oil, buildings, equipment. But it's one that historically we haven't measured very well. With machine learning and big data, we actually can start understanding how all these skills relate to each other. So it's a new frontier that we can use to map our path for taking advantage of what these tools can do. Lauren, you get the last word again. Well, I was just gonna say, that may be the tool that we need to help all these HR professionals think about how we transition people to where we need them. They now have a tool that helps them: instead of going with their gut feel, they can actually look at the skills in their workforce, and the ones that are outside of their company, and understand how they connect to each other. We actually did that at our company. We obviously drink our own champagne. So we... Only in Davos do people drink their own.
Yeah, not in Davos anymore. I like that saying. We used the bots to create an individual career development plan for 2,000 employees. And that was only possible because we automated everything else and said, HR is about people and career development. And with the assistance of a bot and a human being, we could do what we all wish we could do for every single person: what is next? How do they grow? There is just not enough time in business to do that. Well, on that point, this is a huge opportunity for HR managers, for companies to unleash trillions of dollars worth of assets. Imagine just making that $220 trillion even 1% better. But perhaps even more importantly, think of how many people are not in the right job and are living lives of quiet desperation. They probably have some capabilities that could fulfill them much better, but they're not being matched to them, because there's just not the infrastructure to put them in place. And I think that's the real value: getting people to live up to their potential. Well, I think that's an incredibly optimistic place to end our conversation. Obviously, there's so much more that needs to be said. I have a feeling next year we will not just be talking about this on Friday at 9 a.m.; we will be talking about it throughout the forum. I want to, again, thank you all for coming. I know your time is incredibly valuable. Yeah, thanks to the panel. Thank you. Thank you. Real pleasure.