Good morning everyone, welcome to Tech for Good: empowering AI leadership. I'm sure you've heard so much, you know, for the past few years, about how AI is going to transform your business, transform your industry, and change just about every aspect of public life, from schooling to policing to medicine. Indeed, by some estimates, artificial intelligence has the potential to improve the efficiency of businesses by 40% by 2030, unlocking some $14 trillion in untapped potential. But of course any powerful technology can do harm as well as good. And so when AI and machine learning are being used to make more and more decisions, more and more critical decisions, it is vitally important that we consider how those decisions are made and what the impact of those decisions is on real people. And you only have to look at some real examples to see why this is so important. We have algorithms being used to decide whether people receive bail. We have algorithms now being used in hospitals to help doctors diagnose cancer. And we have people experimenting with using AI algorithms to help curate children's education. And this obviously isn't just an academic issue. It's vitally important for business leaders and politicians too. And since AI knows no national borders, really, it's important that we come to some sort of international agreement as to what even constitutes AI for good. And so personally, as someone who covers technology, I think this is a very poignant issue, because I don't think you can ever really consider technology in isolation from people. From the earliest flint tools to the latest self-driving cars, those things don't mean anything, really, without a person involved. And just as any technology can be used to do amazing things, it can have unintended consequences; it can be used for harm as well.
So I think it's incredibly important, and I think as we've in recent years become fascinated by this idea of AI as something other than human, something so powerful, we've lost sight of the fact that it is intimately related to us. At the same time, I worry that some of the reaction to the ethical questions being raised by AI is sometimes resulting in something of a backlash, which may mean that the good side of the technology is sometimes held back. So without further ado, let me introduce our four fantastic panellists. To my left, we have Satsuki Katayama, Minister of State for Regional Revitalisation of Japan. Next is Chen Liming, Chairman of Greater China Group for IBM Corporation. Next is Joanna Bryson, Associate Professor in the Department of Computer Science at Bath University and a well-known expert on AI ethics. And last but by no means least, Anand Rao, the Global Leader for Artificial Intelligence at PricewaterhouseCoopers, or PwC, I should say. OK, so I want to start the discussion, and please be aware we will open the floor to questions from you, so think about what you'd like to ask these esteemed panellists. I want to open by taking the title of the session, Tech for Good, or AI for Good, and asking everybody what that means to them. So, Minister, first please.

Thank you very much. Thank you very much, moderator. My name is Satsuki Katayama. I'm the Minister in charge of Regional Revitalisation and Regulatory Reform. The reason why Japan is trying to make a law called the Super City Law, where Super City means super smart city, is that in Japan the notion of the smart city has been used so widely in the last decades without using any high technology or AI. Just economising energy or saving time has been called simply "smart". So in order to change the notion, we put "super".
And you know, in any democratic country, laws or legislation are planned and presented to the Diet, or parliament, for good reasons, not evil ones; no government or legislator presents a law for evil purposes. So the reason we are doing this is to maintain the quality of life of Japan's ageing society, eternally, taking full advantage of AI and big data. And in addition to that, last weekend the first international forum of its kind, deemed important in the G20 ministers' communiqué, was held in Osaka, Japan, on June 29, concurrently with the G20 Osaka Summit. As you know, the leaders of the 20 countries decided upon some items with regard to AI ethics. And they agreed upon starting "data free flow with trust", but not the contents. That's a very interesting phenomenon. And the forum was attended by experts from countries with a proven record on smart cities, including the European Union, the United States, China, India and so on. And this, the very first such forum in the world, has been supported by the World Economic Forum. And just before coming here, I spoke with Dr Schwab about continuing this conversation at the next Davos, and we still go on. And the basic concept of our Super City Law is as follows. As you know, a range of verification trials have already been conducted in such fields as self-driving cars and automated buses, cashless payments, remote education, remote medical operations and so on. However, measures to link all the initiatives implemented in multiple fields on a data linkage platform have not existed yet, and applications of the results to the daily life of people are still very limited: mainly because of the fear of damage, because we still don't know who is to be responsible, and probably mainly because of regulations; even in the United States, that's what they say. So the law
includes two kinds of provisions. First, provisions on projects to establish and improve a data linkage platform to collect, organise and provide data for multiple advanced services, and provisions giving the entities implementing such projects the right to ask national and local governments to provide them with data owned by those governments. Second, provisions on the special procedures to be taken to foster the integrated and comprehensive implementation of regulatory reform across multiple fields based upon one single plan, whereas regulatory reforms tend to be made individually, so that multiple advanced services will be provided concurrently and in an integrated manner in a so-called super city. And furthermore, it would eventually be important for citizens to reach consensus on the SDGs. So our goal is, from the beginning, linked to the SDGs, which everybody deems as good.

Well, that's fantastic. We'll come back to some of those issues, because that touches on so many things that are going to be important for the future of technology and how it affects people. So, Chen, would you like to give us your take?

For the purpose of diversity, I'll speak in Chinese, if you do not mind. (Continues through an interpreter.) So just now Will, in his opening remarks, asked what the concept of tech for good means for all of us. I think that tech for good represents a beautiful vision for all of us. It is a direction for us to work towards. However, tech for good is far from being a reality. So this is the first thing I want to point out. Yesterday, a friend of mine, out of curiosity, visited a facial recognition exhibit in the exhibition area. And the experience was very sorrowful for him. And I want to share with you the results of his facial recognition experience. This friend is Chinese, but the reading of the machine said he is Caucasian, and that he is not a very nice person.
He has very low happiness and low attractiveness, very low social networking ability, and a very high level of aggressiveness. So this was my friend's experience with facial recognition technology. I have known him for many years, and this is not the person I know, so I don't believe I am wrong. The reason, I believe, lies with the algorithm embedded in this technology. As we can see, this is a clear example of bias from technology. I believe the developers of this technology do not harbor any personal grudge against my friend, but there is no denying that systems and technologies are capable of bias or error. So this is an objective or unintentional bias. But of course, in the process of developing technologies, there are also intentional biases, or biases that the developers were unaware of. Based on your personal data or personal information, they will design different technologies. For example, if they know you are rich, they will give you a version that charges you more. So these are some more examples of algorithmic bias. So tech for good is a vision we want to realize in the future. But to make that happen, we need more government regulation, such as the GDPR in Europe and many other national policies. But still there are many countries that are not making these efforts, so we see great gaps and disparities among nations. The business community also needs to pay attention to social ethics. And individuals need to apply a higher level of self-discipline: do not download whatever apps they push to you, because you may give your consent without proper knowledge of what that means or of the potential consequences for you. There are many cases; in the interest of time, I will not enumerate them, but we can have more discussion on that. A lot of technologies have such inherent defects. Some technologies were designed to cause damage.
Others had very good intentions but caused unintended consequences. So we can discuss more later. Thank you.

Joanna. Hello. The first thing I was intending to say, I guess I will still say first (I can take this off too): when we think of artificial intelligence for good, it isn't entirely sensible, because it isn't AI that does good. Artificial intelligence is an aspect of technology. It's a set of software techniques that we use to build systems. We use it as individuals, or as corporations, or as governments, or as NGOs. We develop these systems for our purposes. The point is, when we talk about AI for good, are we really talking about generating good, or are we displacing the responsibility onto the technology, or onto the engineers? Because we really shouldn't take the fact that we have intelligence that seems human-like in some aspect, like it uses language or something, and mistake that to mean that there's another agent that has responsibility. We cannot hold machines themselves responsible. Justice is a human invention. You can see animals that hold each other in line too, but all those ideas of responsibility, the means by which we punish each other: we will never build a machine that necessarily responds the way humans respond to being isolated, to being removed from their power, from their wealth, from their families. These are things that, for humans and for any social animal, cause huge dysphoria. I also want to say something quickly about this demo. This is a great example of something that in Britain we have been working on. We have had a national-level AI ethics policy since 2011. One of the most important things (there are only five principles, and this is the fourth one) is that the machine nature should always be transparent. You should know how the system works.
Let me tell you how that system works, because I was fortunate: I went to the hub and found out from the maker that it was an artist. It was an artist deliberately deceiving, and I have heard from a lot of colleagues here who are very upset, because they think AI is this wonderful magical thing, and then this thing told them complete lies about themselves. But it was a joke by an artist. That is not the state of the art; that's not the best that can be done. It is something being used to disrupt our understanding of AI. So on the one hand a lot of people are promising too much, but there are some people, even here at the forum, who have been given a platform to reduce understanding as well. So I think it is important: artistically, with proper debriefing, this is a good way to understand that the technology is built for a purpose, and in this case the purpose was to make you not believe technology, right?

I think it's a wonderful, a very interesting installation, and at a time when there is an enormous amount of hype around AI, we want to sort of temper that, but perhaps... But you need to debrief. My first degree was psychology, and in psychology, if you deceive during an experiment, at the end of the experiment you make sure the subjects know; and people are wandering around not knowing what was going on there. You could say that about AI itself: I think people don't know what's going on. Absolutely: transparency, AI for good.

You just mentioned the hype. So the way we come at it is very much to try and demystify the notion of AI, AI for good, AI ethics, responsible AI, all of these terms being thrown around. And what we are focusing on much more is: how do you actually make it practical for businesses? So let me take AI for good. We look at it from two perspectives.
One is, when a business uses an AI algorithm to make certain decisions, what does it mean for that particular bank or healthcare company to be doing the right thing, the good thing? Now, Professor Joanna Bryson mentioned ethics and the number of principles. There are a number of organizations that have come up with ethical principles. Those ethical principles, I think, businesses broadly accept; but how do you translate those ethics into something that the front line can execute? AI will do good and will be beneficial to humanity: I don't think anyone from a financial institution or a healthcare institution would challenge that. Of course we want AI to be good. But how does that translate when there is a machine learning algorithm essentially deciding whether someone should get a mortgage loan or not? What variables should it be using? Can it use the zip code, in the US, as a variable, when it is highly correlated with some other problematic attributes like gender or ethnicity? So that's what we are here to do: translate some of those ethics, we call it contextualization, into something that's very practical for different companies at the front end. So that is one aspect of AI for good. And then, as a couple of the panelists already mentioned, it's also not just looking at AI for profit in businesses; it can be used to address broader societal problems in the world: problems related to the planet, problems related to equality and humanity as such, and problems related to other species. So that is the other notion of AI for good. And there we essentially adopt the UN's 17 Sustainable Development Goals. So whether you are working for profit or not for profit, if you are working towards those goals, then that is AI for good. Excellent. Let's dive more into how companies can responsibly use AI, use technology, because I think that's a very pertinent topic right now.
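Rao's zip code example, a variable that is legal on its face but acts as a proxy for a protected attribute, can be made concrete. Below is a minimal sketch of a proxy check, using an entirely hypothetical toy dataset and a crude score of my own devising (the fraction of applicants whose majority protected-attribute value is pinned down by the candidate feature); real fairness toolkits use more principled association measures.

```python
from collections import Counter

# Hypothetical loan applications: (zip_code, protected_group, approved).
# Purely illustrative data, not from any real lender.
applications = [
    ("10001", "group_a", 1), ("10001", "group_a", 1), ("10001", "group_a", 0),
    ("20002", "group_b", 0), ("20002", "group_b", 0), ("20002", "group_b", 1),
    ("10001", "group_b", 1), ("20002", "group_a", 0),
]

def proxy_strength(rows, feature_idx, protected_idx):
    """Fraction of rows whose protected-attribute value matches the
    majority value for their feature value -- a crude proxy score in [0, 1].
    1.0 means the feature perfectly predicts the protected attribute."""
    by_feature = {}
    for row in rows:
        by_feature.setdefault(row[feature_idx], []).append(row[protected_idx])
    matched = 0
    for values in by_feature.values():
        matched += Counter(values).most_common(1)[0][1]
    return matched / len(rows)

score = proxy_strength(applications, feature_idx=0, protected_idx=1)
print(f"zip code predicts the protected attribute for {score:.0%} of applicants")
if score > 0.8:
    print("WARNING: strong proxy -- review before using this variable")
```

The point of the sketch is the governance step, not the arithmetic: a feature can be dropped or flagged before training, which is exactly the "contextualization" of an ethical principle into a frontline check that Rao describes.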
And I want to turn to Joanna because, I don't know if people here know, but she was part of that ill-fated panel set up by Google; I think it lasted all of 48 hours. Well, I think I still have a contract actually, but I signed the contract in November. So this was: Google wanted to set up an AI ethics panel, an independent group, to help guide its own activities in AI, but it met with a huge amount of resistance from employees. Well, I shouldn't say huge; it met with resistance that blew up into a big story. So tell us about that. Oh, well, okay. In a way, I'm not entirely sure what happened either, and I think this comes back to this conversation about how AI is everywhere right now. In fact, I don't even like talking about AI; I think in 10 years we'll be talking about the digital transformation. So there's a feeling that you shouldn't trust any corporation whatsoever, and some people have that feeling, and some people particularly don't trust the big tech giants that know so much and are making so much money. And so you would think, in that case, they would want something that might help make the system better. And a lot of people did; a lot of people were excited. I got lots of, you know, individual emails, individual tweets, LinkedIn messages, whatever those are called, where people said: wow, this is great; you're definitely going to tell us what's going on there; we look forward to you participating; I hope you can make a difference. Those kinds of messages, lots of positive messages. On the other hand, there was a small number of people, most of whom I had no idea who they were, who were very persistent in trying to find a way to tear it down. And after a little while, like, you know, 30 hours, they really focused on one thing: there was one particular member who was out of line with most of the tech giants' political perspectives.
Now, I wouldn't say that that member is very well aligned with my political perspectives either. But on the other hand, Google is a transnational company, certainly at least a national company. And two of the members of the board were politicians, or political operatives at least, from the right and the left. And the one on the right was completely excluded on the basis of a couple of the positions that she and her institution held and that she had expressed. So it was interesting who they mostly seemed to go after. There was one other woman who was conservative that they went after too; they started with her, and then they kind of stopped. So I don't know if that's actually that interesting in a way, but what happened in the end was that Google itself withdrew support after one or two of the other board members left. And they had carefully designed this board to be a group of people who were strong-willed, loud, but also had some chance of listening, who they hoped would update their opinions and come to consensus on things, and who they hoped would, you know, stress-test their policies before they released them to the public. So that's what it was supposed to be for: it was a stress test for policies. Seems like a very sensible idea. Yes. So I don't understand why they couldn't themselves, they're a communication company, communicate to their own people why they needed to have that kind of balance, or why the company couldn't convince them. My opinion is that they were probably right to have this broad balance, but one way or the other, they should have been able to come to a consensus. Instead, it was just cancel culture: they threw the whole thing out, and that seems like a very strange move.

Well, to me, science and technology historically is a force for good. Or, you know, overall, it can be a great force for good if you handle it with care.
It is sometimes potentially a little bit alarming how much of a backlash there seems to be against the idea of technology being a force for good. It's almost treated as an inherent force for bad sometimes at the moment. I think, again, I don't want to divide things up. The more skilled we become at using AI, the less it makes sense to differentiate between AI and just normal human behavior. And you can see that all through history. Yes, you're right, in some sense it's been a force for good in that there are more and more people. But now we're dealing with sustainability, the very well brought up sustainability goals: maybe we don't want more and more people dominating the ecosystem to the exclusion of other forms of life. And so that could very quickly turn into a global bad, even though it looked like a good for a while, if all of a sudden there's a nuclear war or something. Right, right. Yes, yeah.

Okay, Chen, let me bring you in. You're, you know, very prominent at a large company that's been working on AI for a very long time. How does that inform your perspective on this idea of technology, or AI, being either good or bad? I must declare English is not my mother tongue, not even my father's, and therefore I'm not so sure if I heard Joanna clearly. Did you say just now that you do not trust the big corporations? You did, I guess. I said that I doubted one of the decisions one of them took. Well, working with a big corporation, I feel disappointed to hear this. It was just meant to be a little bit controversial. No, no, no. I didn't say that I don't trust big corporations in general. I wouldn't have been working with them if I didn't think that big corporations are absolutely a part of our life, right? Right. Let me just switch back to Chinese.
Well, to a large extent, I think business has driven the progress of society, the improvement of efficiency and the improvement of well-being. And business is also the major body of innovation. Universities have done a lot of research; however, to some extent, universities are not the major body of innovation. Therefore, I believe that businesses and enterprises have played a vital role in moving our society forward. We cannot undervalue the contribution enterprises have made to our society, and this is my first point. For my second point, on tech for good or tech for bad: as I mentioned before, historically speaking, there are many technologies that were invented for bad, for example, weapons of mass destruction. Nobody will claim that the invention of such weapons is for good, and as for the weapons themselves and their improvement, you can think about what we use those weapons for. But we also have technologies that were invented for good, for a good future. We know a chemical called DDT, a pesticide. There was also another pesticide with a very long chemical name, which we call "666". When those pesticides were first invented, we thought they were chemical miracles. But we all know that both of the chemicals I've just mentioned have brought catastrophic disasters to human beings, and even nowadays we can detect their residues in 99% of our entire population. And of course, their use has been stopped. More recently there is glyphosate, a weed killer, which was designed to improve agriculture through weed control. So technologies and new things invented for good can also create a bad result.
And therefore, in my opinion, first of all, enterprises, when you are making an innovation, creating a new technology, should have the purpose of designing it for good. For example, some companies, when they are designing a new algorithm, are not designing it for good; perhaps they are trying to use that algorithm to get at your privacy, to get your personal data, so that they can make more profit out of it. And I think this is tech for bad: you are purposefully doing it for bad. This is not something we should do, and not something we are going to do. And secondly, enterprises should take up their due responsibilities: for example, the transparency and the trust that you establish with your algorithm. An explainable algorithm, not a black box where nobody knows what happens inside it. Then we will know the algorithm will contribute to human society, to our progress, to improving our well-being, and it can improve the efficiency of business operations. And at IBM, we have a lot of cases: for example, AI's application in healthcare and in industrial manufacturing, where we can use visual inspection for production line quality control. It is also useful for human resources management. We have a lot of cases, and if time allows, I can share more with you.

Let me push you and say: how do you ensure that companies, and the people working in those companies, make sure their technology is used for good? Lots of companies have come up with codes of ethics. People have talked about a Hippocratic Oath for technologists or AI researchers. Should there be more regulation? What would you say? I'm itching to jump in on this. First of all, I never criticized business in general, or even one business specifically, only one decision by one business. But secondly, I totally agree that there's something fundamental here.
I don't think there's anything we can do when we build technology to determine how it will be used. I agree with all your examples of technology that was built for good; in most cases, it couldn't have been known at the time how badly it might be used. Specifically, I think the AI revolution is one thing, but the main revolution is actually digital. Once you have information out there, people can use it. Good people can use it in a good way, bad people in a bad way, and we can work hard on cyber security, and we can try to ensure that our data cannot be repurposed. For example, we have laws now, in America at least, that some of the data assets of a company cannot be sold on, even if it goes bankrupt. At least certain data assets are protected, and that's a really important law, because otherwise things that were given for one reason can fall to someone else; but cyber security faces that same threat. The example I often use, and this has nothing to do even with digital: when the Nazis invaded the Netherlands and Belgium, the Netherlands was very organized and had a lot of data about its citizens. It never did anything bad with that data, but when the Nazis got there, they did something very bad with it. The Belgians were very disorganized; they didn't know much about their citizens, and so far more of their Jewish population survived, because their country was less organized. That was even pre-digital. So we have to realize (and I want to say that there aren't only problems, there are solutions) that cyber security is one solution. Another solution is communicating about accountability: the people who build the system are obliged to show that they are using best practice and due diligence, just as they would be if they were manufacturing.
So then they keep good records, and I'm sure IBM does this: good records about who changed the code base, who used the machine learning, what data they trained it on. You keep these records, and then, when we come to regulation, what governments have to be able to do is audit those accounts and be able to attribute blame if the stuff is being used the wrong way. So we just need to raise the standard.

I think, the way we are talking, it feels to me at least that we are saying there is a whole lot of new things everyone needs to do. But if you actually go into businesses, there are existing practices. Risk is something almost every organization of a decent size manages; it has a chief risk officer who looks at all of the risks we just talked about. So there are data-associated risks; personally identifiable information has been a concern for a long time, and whether our AI is using that information is something we need to govern. So what I would suggest as a more fruitful discussion is not whether we need some of these things, but essentially getting down to a much more practical approach. How do we translate some of the very specific AI concerns? So, where does the data come from? It is of a particular lineage, of a particular segment, and therefore it is biased; so you must be very cognizant, when you are building a model, of why you are building it and what data you are using, so that you can be very conscious in saying: this model cannot be used for anything other than the region and purpose for which it was built. Building those practices into the organization is what we are saying needs to happen, and that is responsible AI. Sure.
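The record-keeping both panelists describe (who changed the code, what data a model was trained on, what the model may be used for) can be sketched as a simple audit record attached to each training run. This is a minimal illustration with hypothetical field names and values, not any company's actual schema; the data files are fingerprinted so an auditor can later verify exactly what the model was trained on.

```python
import hashlib
import json
from datetime import datetime, timezone

def training_record(model_name, code_version, data_files, author, intended_use):
    """Assemble an audit record for one training run: who trained what,
    on which data (fingerprinted by SHA-256), and for what intended use."""
    data_fingerprints = {
        path: hashlib.sha256(content).hexdigest()  # content given as bytes
        for path, content in data_files.items()
    }
    return {
        "model": model_name,
        "code_version": code_version,
        "trained_by": author,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "data_sha256": data_fingerprints,
        "intended_use": intended_use,  # e.g. region/purpose restrictions
    }

# Hypothetical usage: names, commit id, and file contents are invented.
record = training_record(
    model_name="mortgage-screening-v1",
    code_version="git:3f2a9c1",
    data_files={"applications_2018.csv": b"zip,income,approved\n"},
    author="jane.doe",
    intended_use="retail mortgages in one region only; no re-use elsewhere",
)
print(json.dumps(record, indent=2))
```

The design choice worth noting is the `intended_use` field: it operationalizes Rao's point that a model built on one region's biased data should carry an explicit statement that it cannot be used anywhere else.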
I just want to say: totally right, but there are a lot of companies using machine learning that, again because of this anthropomorphism, aren't following the good practices that other industries that use AI, like the automotive industry and the medical industry, do follow; and yet the AI companies themselves, like Facebook, may not necessarily use the basic skills of software engineering that they should have. So this is another division here. Sorry, just one point: I think that is a very valid point. You need to take the notion of governance of AI out of the data scientist's view and into the broader business view. That is critical: the data scientist has a certain view, but it's the broader view that we want.

On that note, let's also bring in the government perspective. I was listening with great interest, especially to Professor Bryson, and I have a question. You mentioned many important things. From the position of government, do you see any particular additional rules needed for procurement of AI-related services or software? And in what way is international society to revise the rules for fully automated, AI-driven cars? As you know, in any of our countries it's not feasible, because we are all members of an international treaty under which, for automobiles driven on public ways, there must be a human in the car, and all the liability laws with regard to automobiles are made based upon that. That's a very important key issue that any government is considering.

Well, to quickly say something about this last rule first: I know in the UK we're partly relying on a rule that already said you couldn't let your horse take your cart home without a person in it, for exactly this reason, and this is one of the rules that people of course are arguing about. But anyway, back to the procurement question. In general, as I normally say,
there are not that many laws we need to change for regulation, for exactly the reasons you said. What we need to do is create the legislative and regulatory bodies that have the expertise to apply existing manufacturing rules in law to the software industry. However, for procurement it may be a little different, and honestly it is not my big area of expertise, but my main concern again is how you ensure the cyber security, and particularly the firewalls, of whatever you wind up using. So, traditionally, how do you control corruption? The model economists had was: you want a bunch of mid-sized companies and a government, and they all kind of keep track of each other and try to keep each other from becoming corrupt. Now we're talking about transnational companies that have assets nobody else has. So for example, going back to Google: they build their own chips; they don't trust anyone; they have their own fiber optic cables; and since they found out that the American government was hacking them, they now encrypt everything, all their internet traffic, even within the company, not only outside it. So they have resources that most countries could not afford and would never replicate. In fact, the EU cannot make its own chips, but Google makes its own chips.

So on that note: Alphabet's Sidewalk Labs, related to Google, recently released their proposal for a smart city in Toronto, and they previously came up with the idea of a data trust that would be owned, maybe, by the Toronto library. I wonder, Minister, what do you think of this? Is it a good way to avoid unintended consequences, or does it simply shift the risk somewhere else? I don't think anyone, or any company, or any government, can completely shift any risk with regard to the use of AI. That's why things are so complicated. But from your description, I feel that there is some resemblance to the product liability discussion.

So again, there are more than
30 countries looking at their national AI strategies, and one of the main questions there is: what do we regulate, and is there a need for new regulation? This is where I think we need to go back, look at the existing legislation and see how it needs to be modified. To give an example, autonomous vehicles: I think Germany came out with a list of principles on what autonomy really means in terms of liability, and I know that in Japan you have done something similar for robots. All of those economies are very dependent on, or very focused on, robotics and manufacturing, so there are very specific things that governments are looking at. Drones are another good example, where some of the aviation rules have not been adapted for drone flying. Should we completely ban them? Should we allow them in air-traffic-control areas? Probably not. So those are some of the more concrete things I think we need to be doing.

But there is also the broader question, which we mentioned earlier, of how some of this information is being used and personalised, and what additional changes in the rules we need. For example, the US Fair Lending Act has been there, and non-discrimination has been in the statutes for decades now, from the 60s. Those things still apply, but people are now using these algorithms in ways that mean we need to go deeper and be more explicit about what they can or cannot do. That's what we mean by governance: in some cases the regulators might step in, and in some cases the companies need to do it themselves, just because it's good practice. So again, we distinguish between ethical and legal. Legal, you have to do it; ethical, you had better do it, because it's the right thing to do. Don't wait for a law from the EU or the UK government to tell you what's right for your customers. And one thing that we haven't brought out is trust: AI will completely fail if consumers don't trust that either the companies or the governments
value what they are doing. In that sense, without trust it will all fail, so I think in order to earn that trust we need to follow those ethical principles in a way that works.

I want to take some questions from the audience, so please. Yes, do we have microphones? Just down here, please.

Thank you, everyone, for a very scintillating discussion. One quote kept coming to my mind as I was listening: my old law-school professor used to say that sunlight is the best disinfectant. As we talk about these proprietary data sets, what's really interesting is their inherent network effect, and also the inherent monopolies. Mr Rao, I love what you said about legal and ethical; may I add another dimension to that, which is commercial? How do we get these companies to come out with their data sets of their own volition? That's where I think it gets really powerful. One idea, as I have some really influential people here, is perhaps a regime analogous to patents, which we know has worked for centuries: companies possess a lot of proprietary information, and nevertheless we have created a regime where they come forward of their own volition and put it into the public domain. Perhaps also some data exchanges, where we could add that commercial layer to it. Thank you for a wonderful panel.

That's a fantastic point. One place where I think it really hits home is self-driving cars: if you've got these vehicles being tested on public roads with innocent people around, it's almost a public good for those companies to share their data so that everybody can improve their systems. So do you think that's possible?
The other thing, going back to what you were saying, is that I think we will arrive at a technology solution, specifically for data. You mentioned data exchanges: if you combine a data exchange with the blockchain concept, there are perhaps certain fundamental things we can get. We want the people who generate the data, or the companies who generate it (ultimately it is the people), to own the data, but then you want those owners to be able to give out aspects of that data: specific variables at a time, for a specific purpose, for a specific duration. All three need to be combined. So I can give my health data to my life-insurance company just for today, for them to give me a life-insurance quote; after today it vanishes, and they can't use it for any other purpose. That's the kind of thing we need, and then we can decide whether the government taxes it, or someone else benefits from keeping the data. But I think the technology solution will come, very similar to what you were saying; a number of companies are working on it, and I think the World Economic Forum is also looking at some of those areas.

I have to say I haven't heard somebody talk about data in terms of patent law before, or possibly copyright, with a fixed term. It makes a lot of sense to me. However, there is a big problem, which is of course all the personal data, where you have obligations. I have a slide in my more slide-heavy talks where I say that data is the new oil and storing it is dangerous, and I show lots of explosions. So you don't necessarily want too much of this oil in your company, and what you do have, you have a lot of obligations to protect. So it's going to be a little more complicated than that. And one of the things we were just talking about in one of the hubs is also revenue. It may be that you want exposure of data, as was just mentioned, for certain kinds of applications, but another thing you might want is to say: hey, you're making
money off of data that you've acquired and consolidated. A lot of people treat data as if it's just lying there, infinite, whatever. No: how you represent it matters a lot, and you put a lot into it. There is real computation involved, there is literally energy and the cost of storage, and creating a data set is not just like stealing a bunch of people's souls or something, like the old idea about cameras. So you've got this asset that is shaped specifically for your business. Maybe what you really need to do is just pay some tax, make sure there is some redistribution going back to the people from whom that data was derived, and not necessarily turn it into data that's usable for anything, because that might again open up the exposure of it being used for bad purposes of a commercial nature, which obviously companies want to commercialise.

Would you be willing, would IBM be willing, to share its data?

Let me make a few comments before I come to the data point. First of all, just now we talked about many of the risks associated with technology, but we also need to recognise that, in general, technology has significantly enhanced societal progress, increased working efficiency and improved our lives: from the invention of the wheel that made mobility possible, to medicine that has improved our living standards and longevity, to information technology that has significantly enhanced efficiency, and so on and so forth. Just now you were talking about how we are going to use tech commercially; on that dimension, I fully agree with you that commercial application is a major element of any technology, because otherwise there is no longevity. Can you think of any technology that has never been utilised by anybody and has survived? It can survive for one day, two days, but it cannot survive for long. Therefore the commercial application of technology is very, very critical. At IBM we say the last wave of transformation happened from the outside in, meaning that very often the disruptors came from outside the industry: people
who are not in the taxi business but who, thanks to a platform, became disruptors to the taxi companies. In China we call this Didi; in the US, Uber. There are companies that have used platforms to disrupt the hotel industry, and so on and so forth; there are many, many examples of this kind. But we also believe that the next wave of transformation will come from the inside, from within businesses. Inside each and every sector, the flagship enterprises will set up their own platforms and use their data better. 85% of data still sits behind firewalls today, and to go further, we have collected only 1% of the collectable data; lots of data we have not collected yet, or we are not using after collection. Let me give an example: wearables. Through wearables you collect lots of data, and maybe you can view it in the moment, but my doctor is not seeing my data. He does not know my daily exercise; he does not know my daily activities. So it's not being utilised. So much data is not being mined, which is why technological progress should play a crucial role, whether it's AI, big-data analytics, cloud, data security or anything else. I believe the commercialisation of technology has big potential, including in the health sector, in medical care, in industrial operations, even in the human-resources management of companies. Big-data analytics and AI can find lots of scenarios for application, so that the different sectors, and the technologies themselves, can enjoy longevity: not just invention after invention that you put on the shelf, but things that can be put to use and work in real life.

Another question? Yes, just wait for the microphone, please.

Thank you all for the session. Mohammad Musaf, CEO of TPN AI. Touching upon healthcare, and autonomous vehicles, where I work (I used to work for Google): the Googles, IBMs, Ubers and Teslas have tons and tons of data, and in healthcare too there are companies that have a
lot of data on genes, human behaviour and so on. In a pre-competitive market, where you don't have a standard product already out, that data is, as you said, the new oil, so it's very hard for these companies to convince themselves to share it and make it accessible to others, because doing so would reduce their competitive advantage. And from a regulatory perspective, trying to force them to collaborate is actually not relevant today, because in autonomous vehicles, for example, each car uses a different sensor suite, different cameras, different formats, so even if you share the data it's not going to be very useful. And if you create an entity that allows data sharing across heterogeneous formats, which is actually what I'm trying to do with my company, you risk becoming a monopoly, and then that needs another kind of regulation, another layer of complexity. So I'd love to hear your thoughts on how we can have both a carrot and a stick, so that the companies that have a ton of data actually have an incentive to share it without creating disadvantages for society. I know this might be a complex question, but I'd love to hear your thoughts.

Minister Katayama? You know, in our proposed legislation, I think we should require minimal interoperability among several cities, and between cities and government. The reason we require minimal interoperability is for the benefit of the whole country, indeed of everyone, because the more interoperability exists, the more good it can bring about; that is clear. But, you know, full interoperability is no longer necessary, because systems are already open: we are living in a world of open-source software and open data, so it's no longer necessary. That's why, in this law, I just want to signify the data-linkage platform in legislation. We think this tendency will continue. Another thing is that we have to be agile in setting rules, making progress, making something new in business
or legislation, because the speed of progress in each field is too rapid. For example, I visited Alibaba's main office in Hangzhou, and they very kindly told me that they can reduce traffic congestion by 20% with 4,000 cameras in the city. That's great. They do it with 2,000 servers from IBM, Hewlett-Packard and Fujitsu; it's not a quantum computer, just ordinary computers. That's good, because this is a public job they wanted to do. Then the next month I was told that a new software company in the United States can reduce congestion by 30%, using only 20 or 30% of the cameras in the city. And this morning, in this meeting, I was told by an emerging new company that with data from only 5% of the cars driving in a city, they can reduce traffic congestion by 50% in a city the scale of Los Angeles. So there is day-to-day evolution, and it is impossible to designate and verify the rules in fine detail. That's the difficulty that all agencies and governments are facing, so we try to stay neutral and not to stand against progress. That's why Japan proposed at the G20 the idea of data free flow with trust. That's the difficulty.

Anyone else? I wanted to comment on the scenarios that you just laid out in terms of the data and who wants it. The data network effect has been well documented. As you just said, very specifically for AI there is an inherent bias towards larger companies owning the data: when you have more data, the machine-learning algorithm gets better, which means you personalise your services better, so more customers come, bringing more data. There is a virtuous cycle playing out, and it has been well documented that you will create those monopolies. How it plays out is still very open. Again, people have talked about three potential scenarios the world could move into. We are very much used to governments, or larger bodies of government, legislating and controlling, and they could potentially break up large monopolies; that's one scenario, where governments still hold the power. But now what we are seeing is
the emergence of supranational bodies, as well as companies that are multinational, whose revenues are greater than the GDPs of many countries. That's where the corporates could wield that power and all that data. Or thirdly, and this is where I think we potentially need to go, at least in my view, the individuals own the data. Today we have neither the technology nor the economics for individuals to own their data and make it available to either corporations or governments; that's why the other two kinds of entity own it and hold the power. So which way the world will go, towards the traditional governments, the corporations or the individuals, is unclear in my view.

Sorry, Joanna, we are running out of time, and I think I just need to try to wrap up. So let me bring out a few key points. A really important one is to demystify AI, to demystify technology itself; it has been a particular problem with AI, and people need to understand more of what is going on. It's clear we also need a much more nuanced debate about technology, not treating it as inherently good or inherently bad, but thinking about its complexity. There was a very good point that this is not new: there are mechanisms for dealing with this, there are ways to think about it, and maybe we can look at some of those mechanisms. It is very clearly an issue for companies, which need to think about the ethical steps they are taking, and it is also very important for governments to be thinking about it. And lastly, quite clearly, as we ended on, it seems that data, and the ownership of data, personal data, is going to be really, really important to the future of technology and to the ethical discussion around that future. So, last of all, please join me in thanking our panellists and giving them a big round of applause. Thank you.