So many people have been speaking about the climate crisis, but the real question is: why is it that we're still not acting at the scale and speed that is necessary? For 150 years, we've built up a world based on the assumption that we can exploit the planet for free, and it translates to very dramatic impacts happening right as we speak. The climate crisis is a threat multiplier, which means it exacerbates existing inequities in our society. We need to remember we're on the same planet, and this is the planet that we need to make sustainable for the whole of humanity. Making much faster progress toward all 17 Sustainable Development Goals is the best pathway to a just future for all, and public-private partnerships will be absolutely crucial to this transition. We know that this transition will require a fast adoption of a lot of new technologies, and the question today is how to find the appropriate way to fund this technology. Younger generations are demanding a sense of purpose. They want to look at companies and say: I am investing with you all for this reason. The solutions are there. What we need is governments to regulate, to invest, and we need business to act with values. History will look at us: people, politicians, corporate leaders. These times require not only solutions but speed. There is nowhere else to look than the mirror. We are the ones that need to do this. And welcome to the World Economic Forum's panel on automation and augmentation, the AI workforce. We've got a fascinating group of people here to talk about a fascinating topic. 45 minutes will definitely not be enough for what we're going to get to today. But let me start out by introducing our panelists, and then we'll go through a brief introduction of some fascinating new research that the World Economic Forum has put out, and then we'll dig into this in some depth. First we've got Bob Moritz, who's the global chairman of PwC.
Next to him is Paula Ingabire, the Minister of Information Communication Technology and Innovation in Rwanda. And then at the end we have Mark Gorenberg, who's the managing director of Zetta Venture Partners. Folks, before we get started, I'm just going to quickly summarize some World Economic Forum research that just came out on this topic. We'll see a graphic on the screen shortly, no doubt, that will summarize some of this. But we'll start out by saying that, according to the World Economic Forum's Future of Jobs report, 25% of all jobs will undergo some measure of change in the next five years because of AI. There will be new job opportunities, but there will be disruptions and losses, and net it looks like there will be a positive impact. LLMs will benefit jobs that prioritize critical thinking and problem solving. And we're going to see on screen now the kinds of jobs that will benefit, that have the highest potential for augmentation, and the highest potential for automation. And we can see how these kinds of things break down, as the WEF report puts it: if it's routine, repetitive, language-based, those tasks will definitely be disrupted. The things that prioritize critical thinking, complex problem solving, those are the things in which AI can really serve an augmentation function. And I think one of the big takeaways from this report was that, you know, governments, companies, everybody needs to be proactive. Being reactive is possibly the most dangerous thing that anybody can do in this kind of transformative moment. So let's dive right in. Mark, why don't we start with you. Your VC focuses on AI; that's the thing that you invest in exclusively, I believe. Set out the opportunity set: what are the opportunities in this moment as you see them? So we started 10 years ago, and what's interesting is that for 10 years the technology has been growing.
We've been investing in machine learning, deep learning, now generative AI, both infrastructure and applications. And what's interesting is that the market just took off after last November. So ChatGPT came out. It was the fastest growing product in history, I think 100 million users in two months. And now it's captured the mindset of people, and that's thrust us to bring this technology in. So one of my favorite stories, by the way, is I was on a panel at MIT with North American Stainless, which is in Kentucky, and they had not brought AI in. And then after ChatGPT came out, their workers came into the office, the machine line workers, and said, hey, where's the AI? We're at home working with our children on their homework using ChatGPT. And so that's, I think, thrust this forward. In terms of the opportunities, I think for investors it's in infrastructure, it's in applications, and in data infrastructure as well. And the applications are radically changing. So for example, we've been investing in AI-native life sciences. So we're now not discovering drugs, we're designing drugs. I mean, I think there are 400,000 known proteins, but the pool of potential proteins is like 10 to the 1,300th. And so we can now create drugs using AI, essentially a protein language model, by the way, that predicts the next protein. We can use AI to design very specific drugs for forms of cancer. So I'm very optimistic about what the future holds in those areas. Paula, before we came on, you also voiced that kind of sense that we should be optimistic, that we've sort of over-indexed on the fears. Can you go into that a little bit further? Yes, thank you. I think a couple of things. I hear all the fears. I think it's also safe to say that we need to be reasonable about what exactly are the pros and cons of even starting with AI. So some of the fears are around job displacement. What are those jobs that are going to be displaced?
And I think you gave us a great layout of which jobs are possibly going to be replaced and which ones are up for more efficiencies. I think the key question is really going back and understanding, if these jobs are going to be displaced, how do we skill the workforce going forward so that they take full advantage of those that are going to be augmented. The reality is that in today's digital world no one has to go back and do a university degree to be able to perform certain tasks. So how are we thinking about reskilling and upskilling our workforce? How are we thinking about how jobs are going to be disrupted, whether it's in industries, whether it's in healthcare, agriculture, to mention but a few? And then just going back and understanding how long it is going to take us, because depending on the trajectory that any country or corporate business may be on, some may be disrupted as soon as in the next two years; for others it may take five years. And so the key question is how do we leverage the time left before full disruption happens to really skill the workforce so that they can respond. One thing that I find, and I'm quite optimistic about it, is the ability to democratize access to opportunities. And so even when you look at the jobs that will seemingly be displaced, the ones where we're worried about the effects of automation, the reality is that we're able to put on a positive lens in how we see more and more people being given that opportunity. Just to give you an example: about three years ago, at the onset of the COVID-19 pandemic, we had, and you were working in the VC world, we had a startup that had to come up with an AI tool that helps radiologists with analysis. And why was it interesting? For one, it was interesting because in a country that has a population of 13 million, we only had 13 radiologists.
So imagine the workload that these 13 have; it's basically one radiologist for one million people. But also, think about their ability to prioritize: if any of them has a workload of, let's say, 100 cases to review, how will they understand which one is the most urgent case to be looking at? It will only depend on the symptoms that are displayed; if someone is in a critical condition, then maybe they're going to be prioritized. However, if you had all these tools at your disposal as a radiologist, then you're able to support someone even before they get into a critical condition. So I believe really that there are definitely fears, but rather than be caught up in fear, I would rather focus on how I skill the workforce to take full advantage of the benefits that come with automation and augmentation through emerging technologies like AI. This leads kind of perfectly into you, Bob; you've obviously worked on this enormously. Can you dive in? Again, before we came on you were talking about how PwC had gone through this incredible transformation; can you just walk us through that? Yeah, really quickly, the reality, as Paula and Mark said, is we have to put this into context. Technology, not just AI, was already moving us in this direction. It's just really accelerated, as Mark said, in the last year or so, with the generative AI component pieces of this. And we've got to be more mindful of the type of AI technologies that we're talking about here. Are there ones that are autonomous in nature that will eliminate the routine? Are there ones that are autonomous that will actually eliminate the creative, and everything in between? And then there are others that are going to augment the work that needs to be done, and these will not necessarily reduce the need for jobs and labor en masse, but they're going to change radically how that labor is skilled and how it can adapt.
So it's not only a replacement issue for the jobs that are at risk; it's actually all jobs that are going to change dramatically. So the 25% statistic that you talked about at the start, I think, is an underestimate when you look at the challenges that are there. PwC has historically been very much a people-led business, and the reality is you could no longer do all that was needed to tap into the IP, the structured and unstructured data, necessary to do the jobs and maintain the quality and the relevance that was needed. So how do we get our people sufficiently proficient in the use of technology generally? And that required us to step back and say: look, for an employee at PwC, let's minimize the risk of you feeling like you're threatened; let's figure out ways to help you upskill; let's give you the opportunity to use those skills in new and different ways, in terms of creating small innovation or large-scale innovation; and let's change the nature of the work you do and make you part of that process, not something led by the corporate office or from the center, but from a bottom-up perspective. And if you give people permission to use the technologies, AI and others like it, and give them the tools to do so and a safe environment to learn and fail fast, that's the recipe for all the upside and the positivity that's actually out there right now, that we're seeing in governments, in communities, and in the corporates as well.
You know, obviously we started out on a very positive note: there are opportunities. But of course, understandably and I think reasonably, several people are fearful. You talk about PwC employees; employees around the world are fearful of being replaced, or being sort of downgraded, any number of things. How do we start a conversation with those people so as to, A, bring them into the fold and, B, actually create a system in which you're working together? Because I think there are understandable reasons to be fearful. And so, Paula, if we start with you: you obviously have a community, in the sense of constituents who have to be addressed, workers. How do you start to have that conversation? How do you even begin to devise that strategy? I think the starting point is, one, what are we trying to address? I think once you are able to garner excitement and support around what the benefits and advantages are, then people will be able to comfortably come out and say: but then what happens if, for example, we're talking about banking, and you have, let's say, 300 tellers, and it's going to result in not even having one? So then the question becomes, and this is where employee-centric or citizen-centric solutions come in very handy, because you're not only, you know, selling to them the promises of technology, but you're also telling them: yes, look, in four years this job will not exist, but here's how I'm going to work with you on a journey that allows you to get the right skills, so that you're able to still remain relevant to the workforce, to still be able to contribute. And once you do that, then people will be much more comfortable, because they know: yes, what I'm doing today may not make sense in the next four years, but I'm getting the skills that I require for the next four years, when these jobs actually come on board. Now, I think at the end of the day what is urgent is collaboration
and really having very honest conversations. I think the tendency, especially with emerging technologies, is selling the benefits but trying to ignore the fears that people have, trying to ignore or minimize the concerns that they have. And once you start minimizing anyone's concern, you've lost them; they're definitely not going to be on board. So even starting with: we know this is the main concern here, let's tackle it head on as we think about how we transition. Because we must transition; it's not like we have a choice of not transitioning, otherwise we get left behind. And they need to look at the bigger picture: how does this support economic development, how does this improve wealth creation at a personal level, at a household level? And so really working that journey, but also ensuring that people understand, not minimizing their concerns and risks but rather curating solutions around how they become part of this futuristic hope that you're selling to them. Can I build on this for a second? This is good change management. First step is you actually have to describe the environment that we're operating in, and that goes to the local citizen that feels threatened. There is an inevitable reality to the trends that are coming; some may see those trends today, some may not, but you've got to help with the broader education for context. Number two, you've got to make sure that, as was rightly said, this is not being done to them but rather with them, and you're co-creating something that is going to be beneficial for them. And within that, you've actually got to get to two very important themes: how do you develop a level of trust with that citizen, that worker, that future student or that future teacher, whatever the case may be; as well as describe the outcome that's going to come from that change. And you've got to have trust in the belief that that outcome is actually going to be beneficial for them personally, as well as for a country, a community, a corporate or otherwise. And those
are the things that are super important as we think about how we bring people along for the journey. And the good news here is the WEF is creating the kind of awareness around these trends. Now the question is how governments, business and communities take some of those trends and ask: what does that mean for us locally, in my community? Because each community around the world is going to have a different set of facts and circumstances about how big that change is: the positives, the negatives, the risks, the opportunities and everything else associated with it. And it's got to be done locally, where you're closest to the people on the ground. Mark, you've obviously invested across several different companies; you've seen several different case studies of this. I mean, are there best practices, things to avoid that you've noticed, that companies do well or poorly as they handle this with their staff and the people they interact with, the companies the startups are selling into? Well, you have to remember that you're not creating a new worker; you're creating a co-pilot. And basically that co-pilot, like in the case of the radiologists: you're creating thousands of radiologists that happen to have a co-pilot now to work with. So you're creating that co-pilot who can get you up to speed much faster. It is the first time in history where we're seeing a change at the decision level as opposed to the worker level. And so we're effectively not really changing the jobs of the people that have minimal education; the ones that have the most education are actually going to change the most. However, what we can do is take new people and bring them up to speed very quickly. So in some sense we focus on the idea that generative AI is going to hurt people, effectively, you know, replace them, but it really is a re-skilling tool. It can take people without experience in a position and get them up to speed very, very quickly and be competitive. So if I'm in a company, I have to
mentally think: my job skill maybe is good for five years at most, so what do I want to do next? And if you allow them to have that co-pilot, they can aspirationally keep moving to the next level. The onus then becomes on the business to make sure they have a culture to accept that. The other thing I would say is that there's the whole idea of workflow design, which I think is going to be imperative in this wave, and I'll give you an example. There's a study at MIT, the Work of the Future initiative on generative AI, led by Julie Shah and Ben Armstrong. They went to a company that had 800 HR experts and said, okay, we want to bring in this co-pilot of bots to do customer service. And they just brought them in, and they found that the community didn't use them. So they had to change the workflow. The workflow change was: you don't have an HR manager now, you can go to this bot, but all the HR managers were turned into specialization experts. So you'd go to the bot first and then you'd go to the HR expert. It actually improved productivity, improved people's NPS of the system, got answers to them much faster. But it was because of this workflow change that they had to make; if they had just brought it in, the technology would not have succeeded. So I think that workflow design, together with the technology, is where we have to go to make this work. And maybe just to take the example of education, I think that's one industry that potentially is going to be disrupted. Like the points we're making here around co-creating with the people, the schools taking advantage of these hybrid solutions are the ones where you see this approach: they've created a fund, a very small fund, that allows teachers or the faculty to be the ones who go out to look for solutions that they think will help them in the classroom to deliver better learning outcomes, unlike schools where the administration comes in and says, here's a tool that everyone has to start using. And so because you've given them the opportunity
to be part of that, then suddenly even the resistance to using it goes away, and you see better uptake and are able to get the advantages that come with it. So I think in all of this it's really just going back and saying: how are we working with everyone that's going to be affected, positively or negatively, and figuring out a solution that will work? And it will vary from one context to another. What we're focusing on here is the combination of not necessarily just what's going to happen to work and labor more broadly, but how labor more broadly gets educated over time in a much different way than ever before. And we have to recognize, and I'll use myself rather than the panel as the example, that the way I learned is going to be much different than the way the next generation learns. And the next generation had the ability, through the internet, to learn from anybody, any place around the world. And now what AI and these technologies are doing is actually pinpointing that, very specifically augmenting that learning just in time, not three years earlier, for whatever is going to be needed for that interview, for that job, for that promotion or otherwise. And so we've got to be looking at both of these things, not necessarily just what's going to happen to the labor force. It sounds like all of you are also describing that leadership will have to just fundamentally change how they lead organizations as well, because there's going to have to be a release of some measure of control. You're going to have to allow people below the top tiers to just make decisions, to empower them with tools, and then have to live with the consequences. Does that sound about right?
Absolutely. I'll take this one just from a PwC perspective. You learn quickly when you go through this exercise and the change management that you have to change the culture of the organization first. You have to get leadership behaviors to be different in a big-time way, and you've got to actually create an environment where it's safe to do these things, so everybody feels like they're part of the teamwork necessary to make this come to life. The two biggest lessons I will share with you that we came out of this with: the first was giving people time to learn. So, a very quick story. When I brought my leadership team together, we were up in Toronto, and I had 250 people in the audience. What they didn't know is I had five people in the audience, among them a first-year, a third-year, a five-year experience person, not leadership team members. And at the end of the two days talking about this stuff, I brought that group up on stage and said, what did you think of the strategy? And their response was: we love it; who's going to do it? Because I've got a day job to do, so when are you going to give me the time and the capacity to learn and to apply and to maybe fail and learn along the way? So point number one is: find that capacity and get them to be part of the process. Number two: middle management and senior management have a tendency to get in the way; they're too vested in what's happened in the past. So you've got to get them out of the way of the process, and if you need to, you either have to train them to be different leaders or you've got to pull them out of leadership roles and replace them. In fact, to what you're saying, Bob, they can't be territorial; they have to think in terms of the company, because what they're really doing is letting another group in the organization take the lead, and it's much better for the company. Absolutely. And I do want to talk about one other piece which we haven't covered yet. We've talked about communities, we've talked about educators, we've talked about the labor force;
let's talk about government. Yeah, governments absolutely have to be learning this as well, in terms of how they operate and interact with the society that they're responsible for governing. Paula, could you pick that up? You're speaking obviously from that point of view; government's challenges are very different from businesses' in so many different ways. I mean, how do you think about that? I actually think the challenges are not any different, because at the end of the day it's the same citizen, whether you have a corporate duty to serve them or, as a government, the duty to serve your citizens. And I do agree. So it starts with, one, how do you build a culture where there's continuous learning, but also where you can balance between a top-down and a bottom-up approach to how ideas come through, and figure out the best of the ideas that need to be driven forward. Now, I think, just to be realistic, what you find in most governments is that sometimes the capabilities that you have in the corporate world differ from what you see within governments. And so then the question becomes: how do we build policy leaders that have the ability to work as if they're a corporate? Because at the end of the day, you need to be able to balance between how we grow and how we don't become very heavy-handed in managing processes. Because, I like to say, I have a portfolio that looks at technology and innovation, but we've also been realistic to the extent that we say, sometimes this technology is ahead of us. You know, there's a very huge debate around regulating AI, and we're not going to regulate what we don't know. So how do we then balance between having a heavy hand that stifles innovation and rather figuring out, okay, how do we allow for some of these things to happen?
One, governments are resource-constrained; we don't have all the money that we need, both in human resources and in financial resources. So how do you create an environment that allows for all these ideas to come through, support the cause, given that you have a mission to serve your citizens, and at the same time understand the risks? So let me give a practical example: the process of building a digital ID for the country. Now, it's very exciting; corporates love it. It's going to be the rail through which they can deliver better and more efficient services to citizens, and we'll definitely have all these AI models built around the digital ID to ensure that we give better services. But then you also have the worries that citizens have; you have questions around data protection, about security and privacy. So that balance is always a tricky one, and I can tell you very candidly that you never find a one-size-fits-all; there are trade-offs to make along the process. But I think what it requires of us as policymakers is: how do we become agile enough, how do we become humble enough to understand that sometimes these things we may not understand? And that's where concepts like sandboxes and test beds come in very handy, because in a very small, controlled environment you're able to test something, you're able to listen to people. Within Rwanda we've created policy labs that allow us to bring together the citizens and the people who are building the technologies and the solutions, and that way we have a wide view of perspectives coming in as we build something. We'll test the proof of concept, but also we're constantly preaching that it's never going to be perfect until we iterate many times and figure out what's the best solution. But it takes time. It takes understanding that changes are constant, especially in a world that is heavily driven by technology, and being comfortable with that change, and knowing that this continuous learning is not only going to
apply in the corporate world; even those of us in the public sector need to be very adaptive to these changes. Whether the example we're using is Rwanda or otherwise, it comes back to two major themes. Number one, the blurring between ministers of education and labor and ministers of technology, communication and data, blurring very much together. And number two, the need for public-private partnerships to actually bring this to life. And part of that becomes really important because, to your point, the capital is going to go to the places where you see the mesh of those two things actually bringing that potential to the table, and therefore you'll get that investor base to bring that capital to life in the places that need it the most. And that's where you'll actually bring, call it, a better opportunity for those that may have historically been left behind not to be left behind today. So the more we can actually get that programmatically into the system, I think the better off we're going to be. So let's take it from there to thinking a little bit more deeply about regulation and how we create these kinds of regulatory frameworks. You know, Paula, you started by saying that we shouldn't regulate things we don't understand, and so maybe, you know, for a while we don't regulate AI. Let's go from there. Mark, I imagine you think about this a lot, because this is a policy risk from an investor point of view. What are the kinds of questions you think about when it comes to regulating AI? Do you think it's too early? Do you think it's necessary? Do you think governments are capable of doing this? Where do you start with this? Well, it's an interesting thing, because people talk about the risk of AI: they talk about business risk, they talk about societal risk, they frankly talk about existential risk. The United States in general has been more laissez-faire on regulation, Europe's been very strong on regulation, China's been relatively strong on regulation, and I think that's all going to
have to be resolved. But in terms of being in the U.S., I think it's a combination of the bully pulpit, being able to say how people should be allowed to move, not getting in the way, so to speak, but also, you know, creating an infrastructure that more people can get access to. So one thing, for example, while they debate this in Washington, and by the way I've seen this on both sides: in the afternoon they'll talk about the dangers of bioterrorism, and in the evenings they'll talk about the wonders of creating brand new drugs. And it sounds schizophrenic, but it is where the world is today in its mentality. But there is one thing that we can do, and I think the biggest risk is that there will be a very few set of companies that can actually innovate, because they have the resources available, they have the compute resources available, and in order to move this forward we're going to need a very wide ecosystem of innovation infrastructure. And so, for example, there's a bill in front of Congress now to create the National AI Research Resource, a 2.6 billion dollar bill, to create a compute resource that's equivalent to what a Google or an OpenAI has, so that universities can use it for research and other companies can use it for research. And the importance of that is that it's what's going to lead to these guardrails, these new concepts of guardrails, trustworthiness, ethical AI, responsibility, so that they can bring that back into the products that we're developing. So we need a very wide set of users that are innovating, particularly in universities, to be able to bring that side of AI in. Because I guess the danger is... sorry, go ahead, please. A question about regulation again: you know, there's always this debate around AI regulation, which is, what are we regulating? Is it the technology, or how the technology is used in different industries? Now, one may argue that the way you regulate how you use AI solutions for
primary healthcare will be different from how you regulate AI solutions that are being deployed, let's say, for disaster management: what's the sensitivity of the data sets that you're using, and all of that. And so, in figuring that out, I think we need to be comfortable that there's probably never going to be a one-size-fits-all model of how you regulate this, but really try to figure out what are the risks that come with it. And maybe a lighter approach, maybe starting with standards: what are we looking for? It's responsible use, it's ethical use. So how do we create those standards that allow for better deployment and adoption of AI solutions, you know, in a scenario where it's difficult to really give confidence that you've been able to comfortably regulate the use of AI in the different solutions? And so for me, that's what I'm looking at. The other thing is also the evolving nature of this. For sure, part of the effects of AI is going to be the complexity of cyber attacks, right? That is going to come with how we deploy. So how do you deal with that? Is it something you want to regulate, or do you want to build capabilities for how to handle and respond to it? So ultimately, even as this regulation debate happens, the key thing is really understanding: what is it that we are regulating for? Is there a better way to address this, not through regulation, and still achieve the intended purpose? And I think those are the conversations, and that's the agility that is required of us as policymakers, to figure out the best way as we move forward with this question of regulation that no one has pretty much figured out. My two panelists have summarized this really well. You have to go back to the concepts of regulation: what's it there for? Mark was talking about having sufficiency of access; you've got to actually think about the antitrust issues that come in, in terms of maybe too much concentration of power, and understanding of those kinds of things. And
then, when you go to the specifics around AI technologies, to your point: how is it being used? Is it for public use, or for an individual or a corporation? Those will require very different things. Then you take another look at it, if you're playing three-dimensional chess. First, who coded it, how was it coded, and how good is the code relative to the responsible standards we put in place? Number two, what data is it actually using? Buyer beware in terms of what data is being used, because that could give you a false positive, or an answer that is not necessarily appropriate for the end use the user ultimately intended for the technology that's been created. So it's going to have to go to standards first, in terms of getting these principles on the table, and the hope, although we'll never fully get there, is that we can get a lot more convergence on those standards on a worldwide basis, because AI is not limited by geographic borders anymore if you're going to open it up to all the information around the world. And then, last but not least, there are the security elements of all of this: the data privacy and security elements we already have today, but which are going to be put on steroids in a big way over the next decade or so as this rolls out, and that's where national security will also come into play.

That's going to be a big challenge from a geopolitical perspective, I guess. There are two underlying questions here, and I'm curious if people have thoughts. One, to your point: yes, the United States may be laissez-faire, but American companies have large investments in the European Union, and the European Union has shown itself, on big tech for example, to be much more activist in its policing of antitrust. So I'm curious how much it really matters what one individual country's regulations are, versus the fact that these will be
transnational AI platforms with transnational impacts. And secondarily, how much do you think our existing legislative base already covers what we need it to? Some of the things you're describing, surely existing antitrust laws already deal with some of those concerns. I imagine people in the business community are fairly averse to additional regulation, and so would be supportive of the notion that existing laws do what will be necessary. I'm curious what the panel thinks of those two points. Do one individual country's regulations largely not matter?

They absolutely matter. Yeah, they absolutely matter. And from a business perspective, we don't even have to talk about AI; let's just talk about climate. We're in Climate Week this week, and the reality is you're seeing different rules and regulations relevant to how we use the planet, leverage the planet, and engage with the planet. It's causing businesses to do three things. One, it's causing them to focus on the individual laws and regulations at the local level, which increases the cost and complexity of what they do, how they manage themselves, how far they are trusted, and how much risk they carry if they're not in compliance with those laws and regulations. Number two, it's causing them to ask strategically: do I want to be in business in those countries, given that cost and complexity? And number three: I want to get out of the middle of the arbitrage and the geopolitics that result from all of this fighting over what my data privacy laws may look like from an EU, a US, a China, or another perspective. The same thing is going to happen from an AI perspective. So again, those are the realities. Even though we can all wish for one size fits all from a regulation perspective, it's never going to happen, and that's where companies have choices to make in terms of what markets they want to go after and how
they apply it relative to the markets they go after.

And I could add on to that, because you can look at it from two perspectives. For governments, everyone wants to be at par with technological development, so if we're not having this conversation about harmonizing regulations, at the end of the day we'll continue to play catch-up, and no one wants to be in that place. But then there's a different perspective for companies that are building solutions: what does scale look like? I like to say, even for the African continent, we have 54 countries, and if each one has its own legislation, and you have a startup that you need to support, that needs to scale and access a market of 1.4 billion people but has to navigate 54 different legislations, it will probably never be able to scale. At the same time, we're talking about a technology with a high rate of change; if a startup must spend a year navigating regulations every time it wants to enter another market, something disruptive has already happened by then. So I really think having harmonized legislation matters a lot, both in how best we can take advantage of the technology and in how we can scale impactful solutions.

I think Paula is right in the sense that this starts with use cases as opposed to the technology, because we've had all these problems forever, but now it's on steroids: it's happening much, much faster and in much greater numbers, and that's really what we're dealing with. So let's figure out what that use case is, and say that's something companies shouldn't be doing, that's dangerous for a company. The other thing is that there are very basic regulatory things we have to have. For example, I should know if I'm talking to a bot or to a person. I should know if something I'm reading has been written by a bot or by a person. I should know if something I'm reading is probably accurate or not. I think the biggest fear I feel is
that we have elections in 2024: a billion people around the world are going to vote, and probably half the material they're going to read is going to be fake. So how do you watermark that? How do you get that information out to people? How do you do the basic blocking and tackling that regulation needs to do? There will be some regulation, for sure.

I think we've been adjacent to this topic for a while, but I would love to talk about inequity, and the fears about the inequity of the fallout from this transformation. The WEF research showed that there will be some categories of jobs that benefit and some that don't, the ones that don't being the more rote, more repetitive tasks, and there will also be different geographic impacts, which you talked about earlier as well. Just to start from a step-back perspective, how do we even begin to think about this inequity? Already we've seen in the past several years that widening inequalities in certain countries have led to political fallout; you could argue that has happened in the United States and the United Kingdom in various forms, and in several European countries. What is your level of concern about this inequity, and what is your level of confidence that the leadership of businesses and governments is prepared for what is about to come? Bob, why don't you start.

Yeah, a couple of things here. First, you've got to look at those inequities in terms of a centralization of power and responsibility versus the citizens of the rest of the world, and here's what I mean by that. Look around the world: you've got a small number of companies that alone have the computing capabilities, the IP, and the capital to actually do this at scale. The question is going to be: are other countries going to create those capabilities and make that investment, or are they going to be disadvantaged because the economy and the benefit coming out of those companies, stock market valuations, asset
prices, or otherwise, accrue elsewhere? That's at a very macro level, and that's going to be important as we think about these inequities. Go to the other side of the equation, which is the individual citizen: do I have sufficient connectivity to the internet to tap into those technologies, from wherever they may come? If I don't, I'm at a competitive disadvantage. I have no chance of prosperity, because I'm cut off from learning, from job opportunities, from my own self-fulfillment, economics, and stability, and from any contributions to my family and the communities I live in. So it comes down to: what do corporates do, what does regulation do, and what does government do? Each of them has a slightly different point of view to take in terms of where we've got to go. For example, take this concept of AI and technology skilling: the Indian government right now is trying to make sure there's connectivity for 98% of the population, no matter how remote. Likewise, many countries in Africa are trying to do the same thing, to make sure that at least as a starting point we've got the right infrastructure to deal with the outcomes that impact citizens. This is going to require the capital we're talking about, and it requires that public-private partnership. That's the only shot I think we have at truly minimizing the risk of those inequalities that are out there.

You couldn't have said it better, Robert. I think there are four things driving this inequity that we're discussing. You did touch on compute capacity; I think the numbers are showing us that 15% of countries globally own 100% of the compute capacity that exists. So what happens to the countries that don't have that kind of capacity, but need to do a lot of heavy lifting to get it? The second one is talent. Just looking at Africa, we have about 700,000 developers, and only 10% of that fraction
are AI developers. So that's 700,000 software developers for a market of 1.4 billion people, and only 10% of them focused on AI. How do we build that talent? Because I feel talent is the biggest thing that can close that gap. The third, which you talked about, is broadband infrastructure: we have 2.6 billion people who still remain unconnected, and over 80% of them are in developing countries. So how do you close that gap of basic access? And the last one, which you also talked about, is capital. Not many countries have the funds to invest in R&D muscle, and if you are not investing in R&D muscle, you are not going to be able to build context-specific, targeted solutions that respond to your problems. If we are not addressing those four things head on, then I think the gap will just continue to widen over time. Now, if you have the money, you can fix the infrastructure, you can invest in R&D, you can focus on compute capacity. Talent, for me, is what is going to take the most time. But again, money follows talent: if you don't have the people, you are most likely not going to attract the venture capital, so it's really a chicken-and-egg problem. If we want to tackle the issue of inequity head on, I think it comes down to addressing those four buckets in parallel.

Mark, do you have any additional thoughts on this?

I think it's imperative on the governments of every country around the world to bring this infrastructure in, to try to close this digital divide, to find a way to use their budgets to do that. The thing I'm optimistic about is that their youth will rally around this technology. It makes them brilliant very quickly, in a way that could never have happened before. So if you put that infrastructure in place for them, they can make your country wealthy beyond its dreams; it's just a matter of them embracing what you put in front of them.

Can I build on one
thing that Mark said here? I want to make sure we are holistic in thinking about infrastructure, because if you think about these large organizations that have the computing power, they need access to energy, and they need access to fresh water to cool down the many servers that are out there. Now let's connect the dots to climate and the energy transition as well. So when you think about infrastructure, it's not just about having the capital to put in the systems for broadband use; it's all of the other things that are needed. That's where we've got to look at the whole system, and that's where it connects the dots between inequity, education, technology, and climate all together as we think about the world we're dealing with right now.

We're running out of time, so let me raise a thought that niggles at me all the time. I have a young daughter, and I'm hoping that she will one day be employed. How do we even begin to think about training and educating five-, six-, seven-year-olds? What are the principles you think about for how we should build an education system that is ready for what is about to come, preparing them for a workforce that will look nothing like the one we currently occupy? Paula, why don't you start; this must be something that occupies you a lot.

I'm trying to see how I frame my answer. One: creativity and problem-solving skills are essential for every young person, so I think we shouldn't lose focus on that; it will be an important soft skill in navigating the very complex but also uncertain future we're all talking about. Now, when it comes to technologies like AI, I think the foundational and basic skills are going to be around math. So how do we teach math? For a six- or seven-year-old daughter, it may be complex to think about AI, but if you start with the math skills, that's really critical to how we then,
over time along the education continuum, translate these skills. I think that's very important. But ultimately, you mentioned hoping your daughter can be employed; I hope my daughter and my son can create employment opportunities, and it's really the problem-solving skills and the creativity that will allow them to navigate this.

Bob, you run a 25,000-person organization; what do you think about that?

Two basic themes. Learn to learn: the world is going to change so radically that you've got to be much more in a mindset of always learning and then applying that learning. So when we talk about these technologies, it's not just about the coding of them. The math is important, the coding capability is important, but the adaptability and the use of them are equally important, and the ability to throw them out tomorrow because new ones are coming the next day is going to be equally important. So the adaptive skills, I think, are so important. And then the basic concepts of a global IQ and agility are going to be really important in the world we're living in, because if we're serious about where this could go, and I do think it's very optimistic and very positive in terms of impact, it actually shrinks the world even further. You've got to make sure we're mindful of the world we're living in, and second, you've got to be able to move with speed and demonstrate you can navigate that world, be successful, and continue on.

I think the other thing is that we talk about this being an AI revolution, but it's really a UI revolution. The computers speak our language now, for the very first time; they speak the language of a seven-year-old. So a seven-year-old can now get in front of a computer, and their curiosity can basically take them anywhere. I think it's the most amazing time in history for children to be able to learn.

That's a fantastic note to end on. Thank you very much, and thank you to the World Economic Forum for
hosting us. This was a great discussion. Thank you very much.

Thank you.