Two more minutes... one minute and a half... Yes. All right. Good afternoon, everyone, and greetings to the online audience, wherever you're watching from. Welcome to this very timely panel on generative AI at the World Economic Forum's Annual Meeting of the New Champions here in Tianjin, China. Today we gather to explore the profound impact of generative AI on our society and the limitless possibilities it holds for the future. As we embark on this transformative journey, it's crucial for everyone to understand the enormous potential that we see in this novel technology, but also the challenges and responsibilities that come with it. Today we have the privilege of hosting a distinguished panel of thought leaders, experts, and visionaries, and they're here to answer many of your questions on the challenges and opportunities brought by generative AI. Hopefully, by the end of the session, through their collective wisdom, we will have a better idea of how generative AI can be used as a force for good: a catalyst for innovation, for creativity, and for human progress. My name is Cathy Li; I'm the Head of AI, Data and Metaverse at the World Economic Forum. First of all, let me introduce our distinguished panelists. On my immediate left we have Minister Emilija Stojmenova Duh, Minister of Digital Transformation of Slovenia; welcome. Then, to her left, we have Zhang Ya-Qin, Chair Professor at Tsinghua University and Dean of the Institute for AI Industry Research. Then we have Pascale Fung, Chair Professor at the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology, and also Director of the Centre for AI Research. To Pascale's left we have Chen Xudong, Chairman and General Manager of IBM Greater China. And then we have Wang Guan, co-founder of Learnable.AI and Director of the Intelligent Finance Lab at the Shanghai Jiao Tong University Ningbo Institute. Welcome, everyone.
It's an honor and a pleasure to be here with all of you. Before we get started, very quickly, some housekeeping items: if you are keen to share your experience on your social media channels, the hashtag is #AMNC23. So let's jump in. But before we get started, I think first of all we need to ask the question: what is generative AI? And I'm looking at Pascale. If you could just give everyone an overview of what the technology is and what it can do, that would be great.

So, originally, the term generative AI means an AI system that generates content, whether text or images. It is different from, say, a classifier, which gives you the label of something when it sees an image, or gives you a prediction of some number when it sees some content. But today's generative AI is different from previous generations of generative AI systems in that these are large models. You've heard of large language models for text content; you've also heard of diffusion models for image content. These are huge models that have been trained on everything on the internet, whether images, video, or text, and they have a huge parameter size; these are humongous neural networks. And they were not actually trained to do any specific task; they were just trained to generate content, as a kind of pseudo-task. But what we have discovered is that when these models reach a certain size, a certain parameter size, they are able to perform many different tasks that they were not specifically trained on. This means these generative models are especially powerful, because we can use them for a multitude of AI tasks, and we can use them to build other types of AI systems downstream. This is why these models are very, very powerful, and in fact very creative, because they're generative. Of course, they also come with downsides, and I'm sure we'll talk about that later.
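The distinction Pascale draws between a classifier and a generative model can be sketched in code. This is purely an illustrative toy of my own (both functions are hypothetical stand-ins, not real models): the discriminative function maps content to a label from a fixed set, while the generative one produces new content.

```python
# Toy sketch of discriminative vs. generative AI (illustration only).
from collections import Counter  # not strictly needed; shown for extension
import random

def classify_sentiment(text: str) -> str:
    """Discriminative: sees content, returns a label from a fixed set."""
    positive, negative = {"good", "great", "love"}, {"bad", "awful", "hate"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score >= 0 else "negative"

def generate_text(corpus: str, length: int = 5, seed: int = 0) -> str:
    """Generative: trained only to produce content (here, a toy bigram chain)."""
    rng = random.Random(seed)
    words = corpus.split()
    bigrams = {}
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, []).append(b)
    out = [words[0]]
    for _ in range(length - 1):
        out.append(rng.choice(bigrams.get(out[-1], words)))
    return " ".join(out)

print(classify_sentiment("I love this great panel"))   # positive
print(generate_text("the model generates text the model generates images"))
```

The real systems discussed on the panel differ by many orders of magnitude in scale, but the interface contrast is the same: one returns a label, the other emits open-ended content.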
Thanks, Pascale. So it is a general-purpose technology, and in that case industries are usually the first to adopt it, but also to utilize it. So I wonder if you could speak to how, for example, IBM leverages the technology for its own operations, but also works with other players in the industry.

Okay, I'll switch from English to Chinese, so I'll actually be speaking in Chinese. IBM started looking at AI in 1956, and in the decades since, IBM has made some very important contributions, including in 1997, when our Deep Blue computer defeated the world chess champion. It was when AI started to learn to play chess that people really started to notice AI. And today we have an AI strategy at IBM; we rely heavily on AI. We have two types of products. The first type targets businesses: I believe you've already heard of Watson, and watsonx is our latest version, our new product for the era of generative AI. The second type is that AI is also integrated into our software products, including our process-automation products, and also into our mainframe systems. Today we have already integrated AI into our mainframes, because in bank transactions we really need AI to help us safeguard the banking system. In our internal operations we're also using artificial intelligence: in our company we have a chatbot, and if you have any question you can ask our internal chatbot. So you can see that, throughout our operations,
we're heavily using artificial intelligence.

Industries usually figure out how to leverage these technologies much faster than others, but I'm going to turn to the Minister. As a policymaker, what are your hopes and fears when you see industries adopting the technology at a much faster speed, a lot of the time, compared to the government? What do you think, even when it comes to, for example, protecting industries against certain risks, be they operational, business, or from a cyber perspective? What are the things that you, as a policymaker, are wishing to see happen?

Thanks for the question. So of course there are fears, but I believe that the fears should not hinder innovation, because AI brings huge potential, and if we speak all the time about the fears then we might lose this potential, and that is not what I would like to happen. Also, as somebody coming from the government, I believe that generative AI can really boost innovation, and I would like to see that in the government as well. Just now in our Slovenian government: you mentioned the chatbots at IBM; we are introducing new ways of communication with our citizens, establishing new channels of communication where we would like to use generative AI, and I see huge potential. So what are my concerns (not fears)? The competences of public servants, and then the competences of citizens. And my particular concern is introducing generative AI into schools. Why? Because I'm not quite sure whether the teachers know how to use it, whether they understand what generative AI means, how it can be used, and what the fears behind it are.
These are usually my main concerns, and the biggest one is bias, because we already have stereotypes and biases existing in the world, and AI can even amplify them. So we need to find a way to eliminate the stereotypes and the biases and to make sure that they will not cause additional divides in our communities.

Speaking of education, that is the perfect segue to Wang Guan. Tell us about Learnable.AI: what does it do, and what does it have to do with education?

Thank you for the opportunity. So we started Learnable at the Harvard Innovation Lab back in 2017, and we actually were doing large language models without calling them large language models; we were calling it cognitive intelligence. We realized that targeting a niche market is actually better for startups, because a general model is very expensive. So we chose education, because we believe that AI can replace a lot of human labor, both physical and mental. I believe that teaching cannot simply be replaced, because of the interaction between students and teachers; there's emotion inside. But the grading of students' homework and exam problems, and tutoring, can be done by AI in an amazing way. After, I believe, six years of research and development, our products are actually much, much more mature, and this year we have successfully applied them to the national college entrance exam in China and also the high school entrance exam. So after AI becomes mature, it is actually more productive than human labor: it's cheaper, it is faster, and it can operate at a much larger scale than human labor.
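The automated grading workflow Wang Guan describes can be sketched minimally. This is my own illustration, not Learnable.AI's system: compare a student's answers against an answer key and return a verdict plus feedback per question, so the marking step scales to any number of students at near-zero marginal cost.

```python
# Minimal sketch of automated exam grading (illustration, not a real product).
def grade(answers: dict, key: dict) -> dict:
    """Return per-question verdicts with feedback, plus an overall score."""
    report = {}
    correct = 0
    for q, expected in key.items():
        got = answers.get(q)
        ok = got == expected
        correct += ok
        report[q] = {
            "given": got,
            "correct": ok,
            "feedback": "well done" if ok else f"expected {expected!r}",
        }
    report["score"] = correct / len(key)  # fraction of questions right
    return report

report = grade({"q1": "42", "q2": "7"}, {"q1": "42", "q2": "9"})
print(report["score"])           # 0.5
print(report["q2"]["feedback"])  # expected '9'
```

A production system would of course need OCR for the photographed homework and a model that can judge free-form working, not just exact matches; the point here is only the structure of the feedback loop (what is right, what is wrong, and why).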
So during the COVID pandemic, a lot of schools faced lockdowns; the students were forced to go home, and that caused big trouble for their parents. Part of it is solvable by offering a phone or a tablet: just through videos, the teachers can teach the students. But when they have to do their homework, how can the parents solve that problem? For us it's very simple: take a photo, and all the rest can be done by AI. The students can know what is right, what is wrong, why it is wrong, and how to improve. So we believe such technology can help close the gap between wealthy families and poorer families, because after the model is mature, the cost of offering the service to a new client is very low. As a small Series B startup, we believe this is actually a good opportunity for a little company like us, rather than competing directly with big players like OpenAI. On the other side, for educational problems you cannot make mistakes: if you ask OpenAI's model 100 questions, it can answer 60 of them perfectly, 30 of them okay, and 10 of them probably wrong, and that is not acceptable for education. So once we fine-tune and retrain our model to be super good, it gives us very strong competitive power. That's what we do today, and we recently opened our API to all the educational players: we want to be the fundamental supporter for any education company, school, teacher, tutor, or parent, to make sure they can access very cheap but good services.

Indeed, specialized data seems to give a competitive edge over the general-purpose large language models. I do want to come back to that; I can especially relate to your point on getting through the national exam, and I think many of the participants here in the room can relate to that as well, as part of our childhood memory. But before we do that,
I wanted to turn to Ya-Qin. You are an industry veteran, with your experience at Baidu and Microsoft, and now you are working at Tsinghua University. Can you tell us a bit more about, in particular, the generative AI landscape in China?

Well, you know, it's quite interesting: we had a similar panel just about seven years ago in Davos, the winter Davos, and also here in China. The whole technology has just completely transformed the industry, including in China. I'll talk about China in a moment; let me spend one minute summarizing my observations when I look at ChatGPT and Stable Diffusion over the last couple of years. My first observation was to say: wow, this is the first piece of software that has actually passed the Turing test. For computer scientists, it has been a major endeavor to develop something that can pass the Turing test, and this is the first conversational AI software that has passed it. The second thing, to Pascale's point, is that this actually leads towards AGI. It's not exactly AGI yet, but it does provide AI a pathway towards artificial general intelligence, which is another goal we've been trying to pursue. The third thing, which I think is probably more important since you asked about industry: I consider this an operating system for AI. Back in the PC days we had Windows and Linux, and in the mobile days we had iOS and Android. So this is the new operating system for the era of AI, and it will completely reshape the whole ecosystem,
whether it's the silicon or the application ecosystem. For example, Professor Wang just talked about education: that is actually a vertical model built on top of the large "operating system" model. The data he used to train on exams is not the data used for training GPT, but it really works out, because you can have a large base model and then a number of vertical models for different industries, with applications on top of that. So the industry will be very, very different: all the apps and all the models will really have to be rewritten, completely restructured. Now, in China: if you look at all those years, China has been doing some terrific work in basic research, in algorithms, and in industry applications in every sector. Obviously ChatGPT was not invented in China, and we can still talk about that, but just in the last six months or so there are almost 100 new companies in the generative AI space. Some of those are working on large models, and others on generative AI for vertical models, generating not only language but also images and videos. I'll talk more later about the other spaces, robotics and also biological computing; there is just a tremendous amount of activity going on in China, and Professor Wang's company is one of them.

Indeed. Currently most of the large language models are in the US and China, with some in Europe as well. It does look like a paradigm shift, and we're looking at generative AI and large language models potentially becoming the next public-facing internet infrastructure, so it does have a huge impact on society. But I wanted to go back to Pascale. At the end of April, you co-chaired, and the Forum co-hosted, a Responsible AI Leadership Summit, where we convened some of the most prominent AI researchers in the world, and we discussed
extensively the guardrails that need to be put in place. It was fascinating to hear you speak about some of the surprising emergent properties that you see in large language models, and about how we put some of the guardrails in place. Can you elaborate a bit, please?

Yes, indeed. Actually, Ya-Qin was also invited, but you couldn't make it. So, yes: I have been in conversational AI research for close to 30 years, and I think people in my area share this common excitement and also surprise. A few years ago we talked about the Fourth Industrial Revolution, and that was when AI was taking off, but today I believe, along with some of my peers, that we are seeing a revolution beyond an industrial revolution. It is perhaps another quantum jump in human civilization. I am not somebody who likes to hype AI, quite the contrary, since I've been in this area for so long. But fundamentally these models are very different from anything we have seen before, because we did not develop them to do what they are doing today. We sort of discovered this amazing architecture; we stumbled upon it. I shouldn't say "stumbled upon", a lot of people worked on it, but if you talk to the leading neural network pioneers, such as Yoshua Bengio, they will tell you they never expected these neural network models to manifest this kind of capability. So we feel that, for the first time, we have discovered, not really built but discovered, a form of intelligence, in this case machine intelligence, that is comparable in terms of performance (I should emphasize: in terms of task performance) to human performance in many tasks. In many other tasks they actually surpass human performance. So again: machine intelligence.
They are nothing if you don't prompt them. That's the difference between machine intelligence and human intelligence: if humans don't prompt these machines, they don't do anything. Whereas a human being, if you don't talk to me, if I'm sitting alone in a cave for even ten days, meditating, I am still an intelligent being; I have an internal emotional life, internal turmoil, and so on. Machine intelligence is not like that. They perform cognitive tasks at a human level, but they don't have this kind of inner mental state or inner workings. That's a very big difference. But since we have discovered such an intelligence, we need to work with it; we need to learn to work with it. Even the people who build these systems don't actually know the full extent of their capabilities. So my team recently did a benchmarking paper on ChatGPT: we used about 23 standard, publicly available benchmark test sets to evaluate ChatGPT on standard natural language processing tasks, from machine translation to summarization and so on. I believe that was the first third-party benchmarking paper on ChatGPT, since OpenAI hadn't published their benchmarking at the time, so we did it. So it's a process of discovery: discovering the abilities of these models, and discovering how we can work with them. I think this is why we are all very excited, and we are cautiously optimistic. I mean, I understand the fear from the general public, and I think this fear in human beings is very natural; it's related to our survival instinct.
It's like when people saw, about a decade ago, CGI simulations of human actors: we know this phenomenon called the uncanny valley. If you see simulated humans that look very close to human but not quite, they trigger some fear in us, and scientists say that is because of our own survival instinct. Today we're seeing the uncanny valley of these generative models: they're performing tasks, and when you interact with them they are answering questions, telling you things like a very intelligent human being, and that triggers excitement and of course also fear. This fear is natural; we need to recognize that. But, as the Minister just said, we should not let it stand in the way of progress. On the contrary, we need to actually develop more guardrails. So again, with all these leaders in San Francisco, we discussed a framework for developing responsible generative AI models: not just adding safety layers, not just having a human in the loop, but fundamentally understanding these models and aligning them with what we humans want, at the data level, at the training level, and also at the generation step, so that they are more like partners to us rather than enemies. I don't think they're enemies. So this is my view.

Yeah. So, to summarize, is it fair to say that the fear about any novel technology is not so much about the machine gaining consciousness? It's the same way as any technology can be used for good or for bad; it's more about how you actually leverage it. So on that note, Xudong, I wonder, from a business perspective, both for IBM but also for the clients and other industry players that you work with, what's your advice in terms of protecting themselves against the potential risks,
for example cyber security, privacy, and IP protection? Because right now the reality is that we don't know what some of the data sets are that have gone into training the large language models. Would you have any advice on that?

Well, on the potential risks from AI, IBM has been working on this for a long time, possibly for the longest time, and we also abandoned research on facial recognition because there's too much potential for abuse there. After ChatGPT became popular, IBM released a product called watsonx, which helps companies look at their own AI process. Firstly, watsonx.data sorts out their historical data. The second part is watsonx.ai, which takes data from that set and puts it into both open-source and proprietary IBM models, to see what is possible with training, optimization, and deployment. And the third step, going back to the question you asked, is watsonx.governance: it helps you at every stage to ensure data and information security, because that's what a lot of people are really worried about, and to make sure that when you're going through this process it's also fully compliant and legal. So that's something that allows companies to set up their own internal AI capacity. Apart from that, we have also been working on other things. Every company will assume that its own AI models will never be as big as the big models out there, so we are looking at ways to integrate internal AI models with the big models, and there are interesting possibilities in using big models to train company models. We call it, literally, "the big training the small": we get the big model to train the little one.
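The "big model trains the small model" idea Chen Xudong mentions is commonly called knowledge distillation, and it can be sketched in a toy form. This is my own illustration, not IBM's method: the "teacher" stands in for a large model that we can only query, and a small "student" is fit to reproduce the teacher's outputs on unlabeled inputs, with no ground-truth labels needed.

```python
# Toy knowledge-distillation sketch (illustration only).
def teacher(x: float) -> float:
    """Stand-in for a large model: we can query it but not retrain it."""
    return 2.0 * x + 1.0

def distill(inputs, steps=2000, lr=0.01):
    """Fit a tiny student y = w*x + b to the teacher's outputs via SGD."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x in inputs:
            err = (w * x + b) - teacher(x)  # student vs. teacher, no labels
            w -= lr * err * x
            b -= lr * err
    return w, b

w, b = distill([0.0, 1.0, 2.0, 3.0])
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Real distillation matches a student network to a large model's soft output distributions rather than a line to a linear function, but the training signal is the same: the big model's answers replace labeled data.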
It has produced very interesting results; that's the last few years of our experience. So we allow companies to improve their own capacity by means of AI in that way.

We use our own model; there was no large enough model available for us to use. When we first started, we were facing a super-challenging problem, one that is still very challenging even for GPT: if you give it a mathematical problem, you want it to be one hundred percent accurate, you don't want it to make any mistakes, and you also want it to be able to do logical reasoning. We realized that, with the very limited resources of a startup, it's just not an easy deal. So we actually learned a lot from earlier players, MATLAB and Mathematica, and we learned a lot from existing classical approaches, and in the very end we mixed them together and made the whole thing work. We actually envy the followers a bit, because they now have much more powerful tools; for us, those were the dark ages we had to go through alone. But today people have started to realize that an AI model can handle these things; the public has somehow been educated to believe this is possible. There was a very long time when, if we told people our model could do the grading, they thought we were liars, until they actually tried our product. So, about the future: OpenAI actually invested in a robotics company, 1X, and one of the co-founders is a good friend of mine.
So there will be a day when robots and real human beings look super similar, and when that day comes, I believe most educational tasks can be totally done by AI. But that probably comes in 20 or even 30 years. As for today, we focus on very small but very challenging questions, so that even if the big players realize there's a big opportunity, it will take them three to five years to chase down where we are. I think that's the survival approach for small startups.

I'm not sure I want my AI tutor to look like a human, but I'll take the accuracy, that's for sure. So, again back to the guardrails question, Minister: a lot of the immediate impact that comes from AI is on jobs. As policymakers, what are your thoughts on that? How do you prepare for a future where AI could impact jobs significantly?

The only constant in life is change, and we all need to be aware of that. Technologies are changing the job market: for us it was automation, now it is AI. And I always like to say that once we finish university, that is not the point at which we stop learning; we need to continue learning all the time. This is something that the government needs to understand, that companies need to understand, and researchers too, so that they continuously update their programs, but also all individuals. So what can the government do? Okay, it takes some time to change study programs; of course it's not immediate. You can reskill and upskill workers in a shorter timeframe, and sometimes you just need to be patient to make sure you make all these changes. So again, there is a fear that AI will take our jobs. Yes, but it will also create new jobs. So maybe we should see this as a potential: what are the new jobs that will be created, and how can we approach those new jobs?
Absolutely. I recall seeing a survey, a poll, somewhere showing that the general public in China definitely has a more optimistic attitude towards AI compared to the rest of the world. I do think that has something to do with the population being significantly more digitally savvy, and probably having a good understanding of the technology, but we'll see what happens in the future. So I wanted to go back to you, Ya-Qin, in your new capacity as a professor at Tsinghua University and also the Dean of the Institute for AI Industry Research. Can you elaborate on how your research has integrated and incorporated generative AI, and what are some of the significant outcomes so far that you're allowed to share?

I started this lab when I retired from Baidu about three years ago. So, you know, we obviously do basic research, but a lot of our work is applying this research to real problems, and we use generative AI in almost everything we do. To give you a couple of examples: one of our research focuses is robotics and autonomous driving. Obviously, you need to collect a lot of data. We actually work with Baidu Apollo; we have hundreds of cars actually driving around in China collecting a lot of data, and we also have robots that collect data, but that data is still very small compared with the kind of data you need. So one thing we do is use generative AI to augment some of this data, and we also use generative AI to do simulation, because there's a dilemma when you put a car on the streets: you want to avoid accidents,
but it is exactly the accidents that you're trying to minimize with your training and your algorithms, so you don't have that data. So what happens is we use things like Stable Diffusion and NeRF to actually generate those long-tail cases. That has been super helpful, and it also allows us to do end-to-end connectivity from the real scenario to simulation, and from simulation back to the real scenario; I call it RSR. So that is one example. The second example is in biological computing, which is one of our major efforts. We built a GPT we call BioMedGPT. It is very similar to the education case, but this one is for the biological and medical space. It does not have a trillion parameters: one is a light model with about a billion parameters, the other is a 10-billion-parameter model. What happens is you use data about molecular structures in cells and genetic structures, but also the literature and patent data, as the training set, and then, like GPT, you have a big model. The good thing about this is that once you have the model, you can handle the downstream tasks quite easily: for example, protein structure prediction and generation, molecular docking, binding structures. You can do a lot of things with just that one particular model. And a lot more: we have people working on multimodal large models, which is the next step, and also on model-to-model interaction. Xudong just mentioned that you can use a large model to train small models; in fact, in the future, when you try to accomplish a task, you can use a federation of different foundation models, from different companies, from open source and closed source, and also different vertical models. That's another piece of research we're doing. And we also have
other people doing reinforcement learning, and also deploying large models onto the edge: your phones, your robots, and also your IoT devices. In fact, I see that as a huge risk; we'll talk about that later. When you connect the information world to the physical world and the biological world, there are a lot more safety issues and risks.

Yeah, indeed, and even the model-to-model interaction, because you want to make sure you always understand what the machines are doing. You don't want to leave it to the machines, because I think many in the audience remember the science fiction story, I think it came out in 1909, called "The Machine Stops": the machine works so well that human beings just decide to leave everything to it, and over time you no longer understand what the machine is doing. But we'll get there later. Exactly. Pascale, you're from research, and you've been a champion on this issue, because now a lot of the resources when it comes to large language models are concentrated in industry. How do you make sure that academia has access to the research, to the necessary resources that you need to conduct the research? Can you elaborate on that?

Yes, indeed.
Cathy, up until just about three years ago, I think academic research and industry research were quite similar. In fact, we used to think, we used to know, that academic research tended to be longer-term, so we would be doing more futuristic kinds of research, whereas industry research would be more applied research. But this has sort of flipped with these large models. You know, I mentioned these models came out during the pandemic, and I think there's a reason why: the companies made a lot of money, and they had a lot of money to throw at computing resources, which academic institutions do not have. Since the emergence of these large generative models... I remember just about two years ago my PhD students asked me: "Pascale, what are we going to do now? Everything, you can just prompt GPT-3" (at the time). "What is a natural language processing PhD student going to do? We don't have a thesis anymore." But that's actually no longer true, because with the emergence of first GPT-3, then ChatGPT and so on, we see a lot of problems with these models. Number one, they're not transparent: not just not transparent to the users, but not even transparent to the researchers. Which means that we, as developers of these models, cannot predict what they are going to do, their performance, as I mentioned earlier, even when we benchmark them. Recently Hugging Face did some benchmarking, and they realized that when you benchmark with different prompts, it will give you different results. So they're not transparent, and they're not quite controllable, and that is a big, big technical challenge. So in academia, among my students, one of the biggest tasks we are tackling right now is how to mitigate model hallucination.
I mentioned earlier that generative models are very powerful at generating content, but because they are generative and they are creative, the flip side is that, well, "they don't care", I shouldn't use those terms, but they generate whatever you want them to generate, whether or not it is factual. Okay, if you just want them to generate beautiful pictures: I also teach a course in design at the Central Academy of Fine Arts on using AI for design, and they can generate creative ideas that we haven't thought of. Great. But on the flip side, they generate certain things that do not quite fit physical law, a person with six fingers on one hand, or ChatGPT responds to you about some kind of historical event that never took place, and many people take it at face value. A lot of people, me included, can be very gullible: you ask those questions, you get the answers, and you have no idea that they are actually wrong. This is a phenomenon called hallucination, and how to prevent hallucination is a huge technical challenge. So my PhD students are very busy today working on how to mitigate hallucination in different ways, and also working on how to mitigate model bias: these models can generate toxic content, they can generate biased responses, as a reflection of the data
They've been trained on so so the imperfect human society Is reflected or represented in the imperfect Generative models so we actually have a lot to work on and to work on these models to to mitigate hallucination and mitigate Toxicity and so on we don't necessarily need to retrain the large model So we don't necessarily need this huge computing Computing source to retrain these models in order to find solutions Because in the future if you think about it any end user is not going to have the computing source to use it But we want the end users to have the ability of Controlling what they want to see so this is what we are trying to build right in academia institutions we need to look in the future and And look at you know developing smaller models more controlled Controllable models and more tools for the end user to you know control what they can use from the From the generate the large models So there's a lot for us to do on the other hand coming back to your original question Which is that today in fact these large especially the large language models are mostly trained from English data because that's the most predominant dominant language on the internet today by far because even Non-native speakers will create websites in English even if they have Website in their own language. There's always English. So the English abilities of these models Way more superior than the abilities of these models in other languages and we have found that via our benchmarking and The second to that I will say the abilities in Chinese language is also. They're also quite impressive Because you know, there's also a lot of Chinese content today on the internet and and the Chinese Internet use is very You know, it's it's very Very popular. So Chinese users create and consume a lot of internet content in Chinese And don't forget not just websites, but you know all the social media use and so on So there's a lot of Chinese language resource as well. 
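One family of approaches to giving end users this kind of control without retraining is to ground model output against a trusted source before showing it. The sketch below is not any panelist's actual system; `grounded_answer`, the fact store, and the question keys are all hypothetical, with a small dictionary standing in for a real retrieval index.

```python
# Hypothetical sketch: check a model's answer against a trusted fact store
# before surfacing it, one way to mitigate hallucination without retraining.

FACTS = {
    "capital of france": "Paris",
    "boiling point of water at sea level": "100 C",
}

def grounded_answer(question: str, model_answer: str) -> str:
    """Return the model's answer only if a trusted source backs it up."""
    key = question.lower().rstrip("?")
    fact = FACTS.get(key)
    if fact is None:
        # No evidence either way: withhold rather than risk hallucination.
        return "No trusted source available; answer withheld."
    if fact.lower() in model_answer.lower():
        return model_answer
    # The model contradicts the trusted source: flag it for the user.
    return f"Model said {model_answer!r}, but the trusted source says {fact!r}."
```

Real systems replace the dictionary with retrieval over documents and use a second model to judge entailment, but the user-facing control point is the same: the large model is never retrained, only wrapped.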
However, even for the next most popular languages on the internet, Spanish and the Indonesian languages, we don't have very good language models. Actually, for the Indonesian languages, despite the content in those languages on the internet, natural language processing is at about ground zero. I remember when I first started at HKUST, back in 1997, Chinese was considered a low-resource language, so I worked on Chinese language processing at the time as an underrepresented language. Today that is no longer the case. But there are still many underrepresented languages, Indonesian being one, and Indonesia actually has many different languages as well. In all, there are some 6,000 languages in the world today, and the LLMs are not performing nearly on par with English in any of those other languages. So we have a job to do collectively, in the public sphere, together with the companies, the academic institutions, and the governments' support, to enlarge the effort of building generative AI for every community, every language, and every economy in the world, so that we do not see an increase in the economic divide between the haves and have-nots, really the haves and have-nots of AI. That would be more terrible than we can imagine today, so we need to actively work against it by expanding research and development in other languages.

Well said. So when we talk about global access and participation, it's important. Chinese as a language has now caught up, but we need to make sure other languages and cultures are represented as well. I'm going to turn to the minister. Being at the forefront of policymaking in Slovenia, and also at the EU level, what are your thoughts on implementing this kind of anticipatory
governance, especially now that we have the EU AI Act coming up? It just passed the Parliament, but it will probably take another two years to come into effect. What are your thoughts? What are we regulating: the technology itself, or the use cases and applications?

On a national level, we haven't discussed regulation yet, because we are very much involved in the European legislation, that is, the AI Act, and our expectations also rest on the Convention on AI being drafted by the Council of Europe. As you mentioned, it takes some time, but if you want to regulate something, it's good to take some time and see exactly what you want to regulate. You don't want to make a mistake and miss an opportunity, or make the mistake of being too soft in the regulation. So what we are doing right now is watching the development of the technology, which is very hard for a government, even for research institutions; these developments are very fast, so putting them into legislation is even harder. And then we try to be as cautious as possible about the possible risks.

Yeah, absolutely. We are going to open up the panel to a Q&A session. I already see there are questions; this gentleman here.

Hello, Sangu Delle, YGL, from Ghana. Some quick questions. The first is, you mentioned something about how in education you can use AI that is just mostly right, but some applications are critical. If you're going to fly a plane and United Airlines tells you our pilots are safe 90% of the time, you probably wouldn't fly, right?
There are just some fields where we shouldn't take those sorts of risks. We're already using AI in the criminal justice system, and we're seeing how it is amplifying massive biases and creating issues there. So my first question is: given the fact that we don't have a hundred percent accuracy, are there certain fields where we should limit the application of AI? Second quick question: how do you think quantum computing and the advances in quantum computing will impact AI? And a final question: in the same way calculators impacted pedagogy, how do you think the way we learn in schools needs to change given AI? Do we need to take it out of the classrooms, or do we need to embrace it and reinvent pedagogy? Thank you.

Thank you for the questions. For the first one, I think there are certain fields where we have very low tolerance: if anything goes wrong, the side effects could be unacceptable, so it is going to be highly risky to apply any immature products, for example in surgery, or in your example of flying. Education, actually, is not that critical for daily homework, but for exams it's a different story. That's why we actually offer different services at different costs: for exams, the cost per problem is going to be much, much higher, to make sure the accuracy is very strong. For the second question, I don't want to say I'm an expert in the quantum computing domain; probably we have better experts here.
I don't know anything about quantum computing either. But as an educator, as a professor, and also as a mother of two teenage daughters, I actually have a bit of a contrarian view about the impact of AI on education. I believe that in the future, as I mentioned earlier, machine intelligence can take over a lot of the skills that we possess today. So what we need to train in humans, the future humans, is to be more human: to have more critical thinking, more humanities. So I advocate for a curriculum revival, with more of, for example, history, philosophy, ethics, and the arts, the creativity side, as well as mathematics and the sciences. I advocate for an education system where everybody receives this kind of holistic curriculum. Today our education system has been very much divided into silos. I can see this very clearly: you ask the engineers developing a system to figure out human value alignment, and it is a huge challenge; you ask ethicists to give us feedback on the systems, and they do not necessarily understand the algorithms. So in the future we cannot have these kinds of silos anymore; we need to go back to the basics and teach our younger generation to be really Renaissance men and women. So, back to the basics: more humanities, more sciences, more mathematics, and maybe, I shouldn't say this, but less of the skills that we are trying to teach them today, because those skills will be replaced by machines. That's my view on the impact of AI on education.
And I don't know anything about quantum computing, but I think it's promising. As for the first question, about which areas we shouldn't use AI in: I think in every area where we need decision-making, we should have humans make the final decision, using AI as a tool. I keep emphasizing this partnership: the machine as a tool, and humans making the final decisions, be it medical doctors, or tutors, or company CEOs. The decision-making step at the end should be made by humans, together with the machine as a tool.

And I think you answered the calculator question already. This gentleman in the back, and then this gentleman.

Well, I'd just like to come in a bit on quantum computing. I'm not an expert, but IBM is a world leader in quantum computing, and we've been taking various measures. The main areas of application for quantum computing are training AI models and also research on new materials; we think it could massively shorten the development period for new pharmaceuticals. Thank you.

I believe these models are capable of writing code as well, and alongside that, they are capable of building similar models like themselves, replicating themselves. Could the panel deliberate a little bit on that and help us understand the implications, especially in the tech industry? What does the future of code look like?

So the question is about the implications of generative AI writing code. Yeah, I think it's a good thing. Even my scientist friends at Microsoft, among the biggest developers of GPT-4.5 or 5, still write code, but much of it is now machine-assisted. So I think this will certainly enhance productivity very significantly in the next few years, not only in writing code but also in terms of architecture design work, infrastructure, and data centers. So I think that's all positive.
In fact, this trend started quite a few years ago. When I was writing code, we spent our time actually writing the code; in the last 10 years, most coders and developers use open source and some type of automated tool to help them write it. So I think this is a good thing, but there is one risk with machine-generated code.

I'd also encourage people to use generative AI as much as possible. For companies, outside of critical business, you can use it to help productivity: writing emails, summaries, and communications. But for financial transactions, anything that touches the core nature of the company, be careful.

Absolutely. We're going to take two more questions. This gentleman here, and any questions on this side? Anyone from this side? Good. Then there's one more there, and then, yeah, please.

Yeah, my name is Dinglong Huang, from the YGL community. Today both Professor Zhang and Dr. Wang talked about AI and education. Actually, this afternoon there is another session about education disrupted; it's not about AI, but still people talk a lot about AI. In that session I shared a recent case from Shenzhen, China: some junior school students used AI to design a beach park. Their work is quite fantastic; people who see it assume it must have been generated by graduate students, but it was actually created by several ten-year-old kids. People have different opinions about this. Some say, oh, this is cool, because it seems our students don't need to spend so much time learning so much knowledge to produce professional work. But another opinion is that this is unfair, even somewhat dangerous. So first, I have a question for Professor Zhang: as a Tsinghua professor, suppose Tsinghua students want to use AI in their studies.
Should they use AI to do their homework, or even use it in their research to write their papers? What would be your suggestion? And I also have a question for...

Sorry, we are running out of time; let's keep it to one question, and sorry about that. Maybe we take that question as well, and then we let the panelists answer them collectively. Thank you.

Hi, Brian Wong, also YGL. My question is to the whole panel: where do you see the biggest bottlenecks in the development of AI moving forward? Is it the cost of electricity, the cost of compute, access to data? And how will the geopolitical situation, particularly the U.S. export controls on China, impact China's development in the AI sector?

So the first question goes to Ya-Qin. Yeah, I love it. All my students use whatever AI tools are available, but they have to tell me. If they used the help of GPT-4, whatever, they have to tell me. The same goes for any synthetic content: it has to carry a clear declaration that it is from AI. Or if it's a bot, a digital person, we have to know it is digital. I think that is probably the first policy and law that we should make. And in the future, education obviously is about learning knowledge, but it's also about learning how to use the tools. So our kids right now, when they go to kindergarten, need to start to understand these tools, just like a computer, just like PowerPoint. This is more powerful, but it's not that different; it's just a new tool that will empower us.

So unfortunately we did run out of time, and we are a Swiss organization, very proud of being on time. I just wanted to mention one thing: the Forum launched the AI Governance Alliance just a couple of weeks ago, with the aim of championing the responsible global design, development, and deployment of AI, with a current focus on generative AI.
So we really hope to work with all of you, from industry, government, civil society, and academia, to pave a sustainable and inclusive pathway for the future of AI. So please do get in touch. And finally, a big round of applause for all of our panelists today. Thank you very much, and I hope you enjoy the rest of your time here in Tianjin. Thank you.