I'm Ravi Agrawal, the Editor-in-Chief of Foreign Policy Magazine. I'm also the host of FP Live, our video channel and podcast. We will be live streaming this discussion on FP's site and our podcast, so I welcome our viewers and listeners from around the world, and also those of you who are here in this room. Thank you all for being with us. Our session today is titled Protecting Democracy from Bots and Plots. And it's a fitting topic, because 2024 is a really, really important year for democracy. This is the year that more people than in any other year in the history of the world are heading to elections. You have five of the world's six biggest democracies heading to the polls. So that's India, the United States, Bangladesh, Pakistan, Indonesia. The only one that's missing from the top six is Nigeria. More people than at any point in history will exercise their franchise. All of this, by the way, is the subject of FP's latest print issue, The Year the World Votes. Now, elections are usually about hope, and there's a lot of hope involved with four billion people heading to the polls. But there's also a lot of fear this year. And I think there are fears about the health of democracy globally. We can be honest about that. There are fears about the rise of nationalism. There are fears about what that rise of nationalism means for liberal values. There are also fears about mis- and disinformation, which are age-old problems, but they've been turbocharged by technology, specifically now with the rise of artificial intelligence. So we'll be talking about that, too. I think the question I want to pose to our group today, and I want to bring all of you in on this as well in the audience, is how can we make sure that AI and technology are net forces for good rather than chaos, especially in a year like this one that is so important for democracy? And how do we do this globally, not just in rich Western democracies?
So let me introduce our panel. I have to my left Jan Lipavský, the Foreign Minister of the Czech Republic. To my right, I have Matthew Prince, the CEO of Cloudflare. Also on my left, I have Alexandra Reeve Givens. She's the CEO of the Center for Democracy and Technology. And then to her left, I have André Kudelski, the CEO of the Kudelski Group. We will also be joined momentarily by Smriti Zubin Irani. She's the Minister for Women and Child Development in India. I believe she's just stuck outside, but she'll be with us in another minute or two. So let her sneak in when she makes it. Alexandra, I'm going to begin with you. You run the Center for Democracy and Technology. How worried are you about the role that technology will play in elections, and in preserving or hurting democracy, this year?

So I think there's cause for concern. And there's a reason why it's been a dominant theme here at Davos this week, and why you covered it so well in the latest edition of the magazine. We already live in a fragmented information ecosystem, where there are echo chambers, where there are many different sources of information hitting your average voter, your average citizen, at any given moment. And now we have to think about how AI is layered into that. I think about the risks in a couple of different categories. One is just general mis- or disinformation, disinformation where it's intentional, about the state of the political environment, about the state of the world, misstatements about candidates. We've seen this already in the case of fake audio or fake video about candidates. We'll probably talk about some of those examples as we move on, whether it's about Slovakia or even, recently, former President Trump himself being the victim of fake images of him on Jeffrey Epstein's private jet. So there's that kind of concern about what is the truth, what is the ground truth, in the information environment today.
Then we also have to think about targeted messages to voters. So already in previous election cycles, we've seen robocalls, we've seen automated text message campaigns that are sending incorrect information to voters about their voting location, about whether or not their poll is open, or targeted manipulated messages that are designed to influence their behavior. Generative AI makes it easier than ever to target that and personalize it. We know that because of privacy leakage, it's easier than ever to come up with those tailored, personalized messages. So we have to think about that too. And then the final thing I'll touch on is the threats that are facing our election officials, who are underpaid and significantly overworked. Just to bring it back to the U.S. for a moment: we have over 8,000 jurisdictions through which elections are conducted, and often those are run on a volunteer basis or with a very light staff. And what we have is a world where it's going to be easier than ever for them to be the victims of either phishing schemes or doxing, where their private information is revealed. And so we have to think about the infrastructure that's supporting them as well. So of course that's a parade of horribles. There are hopeful things about how tech helps connect the world and get out the message. But these are threats we have to be really conscious of as we go into this year.

There are threats indeed. Matthew, let me bring you in. AI often gets cited in this way where we think of it as a threat multiplier. But whenever I speak to tech CEOs, the comeback I often get is that, well, tech could also fix these problems. So do you see AI as a whack-a-mole problem, or something that can get ahead of bad actors?

So I think, from a good-news perspective, companies like Cloudflare have fundamentally been AI companies from the beginning.
We sit in front of somewhere between 20 and 25% of the web, and the theory of the company has always been that if we could see enough of what was going on, we could use machine learning and artificial intelligence systems to predict new threats before our clients were attacked by them or vulnerable to them. And it's been amazing to watch, the same way that I think we as a society have watched OpenAI and other generative AI systems almost emerge on the scene over the last 12 to 24 months. Internally at Cloudflare, we've watched as our own AI systems are finding new threats, finding new vulnerabilities that no human has ever identified before, and surfacing those in a way that lets us protect against them. So I think that the good news here is that AI systems are fundamentally driven by, and are most successful for, whoever has the most data, and the good guys do have more access to data than the bad guys. But it does mean that we have to work together and coordinate.

Can I push you on that very quickly? Because when you say the good guys have more access to AI, the good guys where? The good guys in America, sure, I buy that. What about governments, for example, countries that don't have access to the chips, don't have access to the know-how, might be newbies when it comes to AI and can't catch up?

Sure, I think that that's where companies like Cloudflare believe it's incredibly important to make our technologies as available as possible. And so we have a number of different initiatives. In the United States, we do something called the Athenian Project. We've worked with NGOs around the world to take that same project and make it global, where we can say, even if you don't have the resources, we want to make sure that you have the infrastructure to have a trusted, reliable, believable election infrastructure. And I think that's incumbent on technology companies. It's not just us. Microsoft has done the same.
Google has done the same. And I think we all benefit from having a stable governmental infrastructure, and it's then incumbent on us around the world to make sure that we're making our technologies as accessible as possible to protect democracy wherever it's going on.

I'm going to come back to you on that. Minister Lipavský, your country had elections a couple of years ago. You're looking ahead to European Parliament elections this year. What are you worried about from a public sector, government perspective when it comes to protecting and safeguarding democracy from all of these threats that we're describing?

Thank you for your introduction, when you pointed out that four billion people will go to elections this year. We are living in a world where society communicates through different kinds of internet platforms. Most of them are global ones, and therefore what is happening in one country today might happen in another country tomorrow. So we need a really global solution, and a global discussion about the way we communicate as people, which content, and how we are presented. And I think the discussion has moved along quite well, moved in a good direction, in describing the issue. But now we need to be looking for solutions. And different actors are looking for different ways to solve it. I honestly like the European way, the European regulation on NIF, for example, which will tackle some of those issues. And definitely we will see more and more false content being used as something which will disturb the election process, which will disturb the way society makes decisions.

And just very quickly, do you think governments are prepared to deal with that?

I think governments need to globally agree on solutions to that. It was, for example, my country which proposed and co-sponsored a resolution in the UN in this regard.
And we should be thinking more about the right not to be manipulated, in the sense that a human-centric approach in AI, and also in this way of communication, is an absolute must. So I need to know if a photo or video of something happening is true or if it was created artificially. Twenty years ago, when the movie Jurassic Park was out, I as a child went into the cinema watching a Tyrannosaurus rex killing someone, and I knew that this was content created for fun. I was sitting in a cinema; it was obvious. But today these things can be produced quite easily, and they may disturb our society quite a lot. So I think we need to rethink our focus, too, and not try to put all the heavy burden on companies. We need to also be giving some guiding principles.

André Kudelski, you run the Kudelski Group. Do you think, from your perspective, that we have a sufficient framework of global laws to deal with cybercrime, and if the answer is no, how do we go about trying to create that?

I think one of the key issues that I see is that an election process is in one country, but as with cybersecurity, the people trying to be the bad guys can come from any place and from any jurisdiction. Take the example of regulation. Regulation is something that can be pretty useful and efficient if you have the same territory for the ones that you have to protect and where the bad guys may be coming from. And if you have this asymmetry, it's important to come with technologies that allow you to fight against this asymmetry. As an example, you should have more content traceability. You should be able to use solutions to trace and to identify whether a piece of content is fake or not. But even if you find out that the content is fake, you have to know if people are ready to hear this criticism; sometimes, for people who are interested in getting the content, getting the information, it will not really be interesting to know if it's true or not.
So assuming I had the curiosity to know whether a video was a real one or a deepfake. Let's say I have that question. Is it even possible to verify?

Yes, it's possible. But it requires using the technology throughout the full process. I'll give you an example in AI. Today, you just have some rules saying you should not manipulate certain elements, and you should say where it's coming from. You can have elements introducing traceability of content, so that even after going through AI you can find where the content is coming from. Said a different way, it's like for food: being able to know what the components are in the food that you are eating. And for a piece of video, that is something that you can achieve through a combination of watermarking and elements of blockchain, so fundamentally giving more traceability. That helps, but it does not solve everything.

Matthew, is that possible from where you sit?

I think for sure. This has been framed in this discussion very much as a technology question. You have two technologists, a technology policy person, two people from government. I actually think that one group that's not on stage answering questions, which I think plays a big role in this, is the group you at some level represent, which is the media. And so if I were running a media company today, I would be thinking about how we can use the role that we have traditionally played as reporters of what is going on, as the truth tellers in society at some level: how can we be working with, or developing ourselves, the technology to be able to say, this is something that someone actually said, this is something that was artificially generated, and I'm going to help you distinguish between the two. That to me feels very much like a natural role for media companies to play. And I think we as technologists would be happy to help facilitate that.
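The traceability Kudelski sketches, a cryptographic fingerprint attached when content is created so that any later edit can be detected, can be illustrated minimally. This is a toy sketch, not a real provenance standard (schemes like C2PA use public-key signatures and robust watermarks that survive re-encoding); the key and content bytes here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret held by the original publisher. Real schemes use
# public-key signatures so anyone can verify without a shared secret.
PUBLISHER_KEY = b"demo-publisher-key"

def sign_content(content: bytes) -> str:
    """Attach a keyed digest to content at creation time."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, "sha256").hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that content still matches the digest it was published with."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"frame bytes of an authentic video"
tag = sign_content(original)

print(verify_content(original, tag))         # True: untouched content verifies
print(verify_content(b"edited frame", tag))  # False: any edit breaks the chain
```

The same idea scales up: chaining the digests of successive edits (the blockchain element Kudelski mentions) lets each derived clip point back to its source.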
So I'd be curious what FP is thinking about in terms of your role around disinformation and finding automatically generated content.

I mean, we're less of a newspaper, more of a magazine. The amount of news that we do is fairly limited. We do a lot of news analysis, a lot of arguments and essays; our audiences in most cases read the news elsewhere and come to us for a lot more. But putting on other hats I've worn in the past, working for CNN or elsewhere, I can speak for those places and say that they'd love the technology you're describing. I mean, if the AP or Reuters had access to the ability to fact-check and verify in the way you're describing, they would take it. As long as it's affordable and sort of a customizable enterprise solution, they would run with it.

And I think that's the partnership, because trying to say that André's or my company is the one that says what is right or wrong, what is truth or fiction, that actually isn't the place where we're really good. We're good at saying, was this automatically generated or not. But saying, is this real information or disinformation, that's actually a role that goes way beyond AI, and it's a role that traditionally the media has stepped up to play. And I think that's the right place for a lot of these questions to be centered, with the support of technologists, to help us help you better tell those stories and differentiate truth from fiction.

I'm going to let both of you jump in. I just wanted to add, however, not all tech CEOs admit that they're not good at every single thing, so I thank you for that. At least one tech CEO who hasn't been invited here likes to think he's very good at media as well. Jump in, and then I'll come to Alexandra.

Just as a key element, one of the most important things is to allow the viewer or the reader to be able to form his own opinion by himself.
So fundamentally, to get some data where people can judge by themselves, and not just believe or not believe someone, because otherwise it's more religion than fact-checking.

I agree. People want to be able to make their own choices. Alexandra, I was going to come to you with a question, really in the spirit of trying to push us towards solutions. In this big election year, what kinds of safeguards and measures are you thinking that companies and countries should be trying to implement as they get ready for elections?

Sure. Well, I'll pick up on a theme that I think you were just getting to, which is that one of the most crucial intervention points is how we surface authentic, trusted sources of information. And you're absolutely right that the media has a critical role in this. Some of the tech companies do. If you're a search engine or a social media platform, I think it is your duty to help surface the trusted sources of information. But we spend a lot of time with election officials helping them understand how to navigate this new normal, and how to boost the trusted place to go for your polling information. How do they rapidly respond when there are mis- or disinformation campaigns going on in their jurisdiction? Really, that is a crucial element. And there's some low-hanging fruit there. Again, forgive the U.S. focus of the study I'm about to cite, but one piece of research that my organization did looked at the domains that election officials were using. Were they using a trusted .gov domain, or something like springfieldvotes.com? And only one in four election officials in the United States was using a trusted .gov domain.

Wow.

That is low-hanging fruit. And thankfully the Biden administration is focused on this as one of their areas to prioritize.
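The audit Givens describes boils down to a simple check over each office's published URL. A minimal sketch, using a plain hostname test (the real .gov registry also enforces eligibility rules, and these example sites are hypothetical, not CDT's actual dataset):

```python
from urllib.parse import urlparse

def uses_gov_domain(url: str) -> bool:
    """True if the site's hostname is under the restricted .gov TLD."""
    host = (urlparse(url).hostname or "").lower()
    return host == "gov" or host.endswith(".gov")

# Hypothetical election-office sites standing in for a real survey.
sites = [
    "https://vote.springfield.gov/polling-places",
    "https://springfieldvotes.com",
    "https://elections.example.gov",
    "https://countyclerk-elections.net",
]

share = sum(uses_gov_domain(s) for s in sites) / len(sites)
print(f"{share:.0%} of surveyed sites use a .gov domain")  # → 50% of surveyed sites use a .gov domain
```

Because .gov registration is restricted to U.S. government entities, a passing check is a meaningful (if not sufficient) trust signal, which is what makes this such low-hanging fruit.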
How do we shift election officials over to trusted domains, those trusted pathways so that they can be putting out this authoritative information and have an anchor point for reporters to turn to and the tech companies to turn to to make sure that good information is getting out there? So when we say what are the interventions, there are some places like that that really feel like we can be moving on them quite quickly. And then I would say when we look at the other points of influence in the ecosystem, legislation and regulation is certainly important as the minister was touching on, but given the time sensitivity right now, I will say as my theory of change, I'm very focused on what the companies can be doing. I think there's a conversation to be had around the generative AI companies, what their products are able to generate, whether they're automatically labeling as you were surfacing, whether they have usage and content policies that stop people using them for mass political targeting campaigns, for example. Do they detect that type of behavior and have a policy against it? OpenAI has announced such a policy, for example. But then there's also the social media platforms. It doesn't really matter if someone creates a manipulated image if it's not being distributed in a way that's actually going to shape the election. And I think one of the pieces that is underreported and so crucial to focus on right now is even as we're living in this heightened threat environment, a number of the social media companies have actually been scaling back their investments in trust and safety, in particular around elections. And those that are still keeping up the work are facing more political scrutiny and pressure to disband those efforts than ever before. So in the United States, for example, right now we have congressional investigations and lawsuits against people that study mis and disinformation about elections on social media platforms. 
There is currently an injunction in place stopping the Biden administration from communicating with social media platforms about interference threats on the topic of elections. That's actually going before the United States Supreme Court this year. So we're in this bizarre environment where, right as the threats are ticking up, the investments in actually doing the day-to-day work of online trust and safety for our information environment are being scaled back and are under attack. And those are all things we need to recalibrate right now to try and address the threats this year.

And just quickly, how?

Yeah, so we have to have the social media companies keep up the work. There are really important lessons that...

Is there a way to force them to do that, push them to do that?

You get them to places like Davos and you have them talk about the work. You know, right now, because a lot of it sadly is in the staffing and decisions of companies, it's making sure they're putting in those investments, making sure that they're sharing information, that they're doing it not just for the U.S. election but for the other elections around the world. That has to stay a key focus, even if there is political pressure. There are important lessons that we learned after 2016, right? Social media companies learned about how you track mis- and disinformation campaigns, what coordinated inauthentic activity looks like on a network, how you put brakes in when a rumor is flying, how you get people to check: have you read this article before you forward it? Fact-checking programs. That architecture hasn't been a silver bullet by a long shot, but at least that architecture has been in place, and there's an entire academic field now that studies this and analyzes what interventions might look like. We have to make sure that those interventions are still in place this year, as a bare minimum, for us to be able to navigate this landscape.
Just to move us into a related theme to what Alexandra was describing: how worried are you about the rise of nationalism and populism, in Europe but also around the world? And the reason why I bring this up is that a lot of that is often related to freedom of the press being curtailed. A lot of it is related to groupthink, related to election security as well, related to democracy itself being weakened, various pillars of democracy being weakened. How are you thinking about that?

So, those are not new phenomena in our world. But they are accelerating trends; that's what I wanted to point out. And definitely internet platforms, not only multimedia platforms, if they are focused on putting people in different groups which radicalize, a mechanism which has recently been quite well described, this can be accelerated. And typically Russia's information war plays with supporting both left and right extremists, just for the sake of splitting up societies. So it's obvious that any kind of lie works very well to support these malign processes, and therefore we need to be looking for solutions. Freedom of speech and free journalism need to be supported, but at the same time they should not endanger our democratic societies. So we need more resilient societies. We need companies to understand that corporate social responsibility doesn't only mean doing something nice in a local municipality, but also making sure that their tools are not misused for the sake of this. But also there needs to be some kind of accountability on the other side, which is not easy to define. Very often this development is faster than regulation goes.

So for all of these proposals you're describing, is there any sort of concrete measure that your government has been able to take to enact them?

I think the EU is quite on a good path, with the AI Act, with different legislation on this matter.
So I think it is not the perfect solution, but definitely this is one of the things where the EU delivers in a global way. I think we excel in that.

Matthew, I want to pick up on a strand of something I brought up earlier, about big and small. So you work a lot in cybersecurity and cloud computing, but if you think about it, the biggest players in the cloud computing space, it's basically three companies, which are among the biggest companies in the world. You think of the most cutting-edge fabs, it's either Nvidia, or you have TSMC in Taiwan that dominates the bulk of the market. When you have monopolies of that size and scale, isn't that in some sense dangerous when it comes to democratizing what they do, when it comes to how smaller countries, smaller companies, and people at the bottom end of the supply chain can plug into and play with what they do? Aren't they at a disadvantage?

Maybe this is more a therapy session on describing things I'm not sure I'm qualified to talk about. I will say that Nvidia actually uses TSMC, so they are not themselves a fab. I think, though, that what we have seen through history is that even what look like very stable companies get disrupted by technology all the time. Tangentially related: if you had asked me three years ago who was going to win in the AI race, I would have said China is going to be first by far, because they have invested significantly in it, they have the technology, and as a local company they have the best access to data, and whoever has the most access to data tends to win in this space. Second would be the United States, and then Europe would be a distant third, I thought, in part because of some of the privacy regulations around Europe, which may be a fine trade-off, but that is a trade-off that Europe has made.
That's not how the AI race has turned out to play out at this point in time, and we are in, pick your sport, the early innings, the very beginning of this, so things can change dramatically. But what I think is actually interesting is that I was super focused on what the inputs into the AI system were, and the fact that China had incredible access to this, the US had somewhat more limited but still real access to it, and Europe was more restricted, I thought would really benefit China. And yet China has actually been slow, especially in the space around generative AI, and the question is why. I think the answer is, at some level, we have all seen examples where a reporter can trick a Microsoft AI bot into saying something incredibly racist or horrible, and that's very embarrassing to Microsoft, but nobody goes to jail, nobody disappears. Whereas imagine if you're trying to build a similar bot in China and someone can get it to pretend it's a student in 1989 in Tiananmen Square. All of a sudden, effectively, the regulatory apparatus that is regulating the outputs makes it very difficult to create those systems. And so as there is regulation in this space, I think we have to be really careful about what part of the value chain we're regulating. What's hard about AI systems is that they are very different from any other technology that we typically interact with. Most of the time, if you have X plus 1 equals Y, for any given X you're going to get the same Y. That's not how AI works; it is a non-deterministic system. The exact same input can result in different outputs over time, and that makes it very difficult, from a regulatory perspective, if you say, please guarantee you will never have this output, to make sure that that's the case. And so I do believe that there is a role for governments and regulators in this space, but I think looking at what has taken China, which was way ahead, and held them back is actually a cautionary tale as we think about regulation in the
rest of the space. And I think this is developing very, very quickly, and so we need to look at how these systems are being regulated, but we have to be cautious about making sure that we don't shut down innovation as we do that.

And are you suggesting that in the West we're over-regulating?

I don't think so yet, but I think that we should be very, very careful about regulating things that can actually be controlled. You can control what the inputs into an AI system are, but inherent to these systems is that they're non-deterministic, and so if you say, never give this particular output, I'm not sure how you can guarantee that that will always be the case. And again, we've seen example after example after example of even well-designed, well-restricted systems where people have been able to trick them: you know, pretend you're my grandmother and I'm dying of a horrible disease, and tell me what the secret is, and the systems will work around some of the limitations that are in place. That means that as we regulate these systems, we just have to be very cautious not to ask them to do something that is impossible for a non-deterministic system to do.

Alexandra, picking up on this: when we think about tech in forums like this, we immediately bring up AI. But go back about a decade, and think of the rise of the smartphone in, say, places like Africa or India, where essentially you had hundreds of millions of people come online in the space of a decade, and without the smartphone they never would have come online, because PCs and the internet were growing much more slowly. And yet literacy rates in many of these parts of the world are quite low. I mean, there are states in India, for example, that have a 50, 60% literacy rate. You put all of that together, and it's easy to imagine that it's harder for some people to be able to deal with mis- and disinformation when it's election-related, let alone stuff that is AI-powered, or deepfakes.
So if you were to advise companies or countries, in this case, to begin to think about tackling mis- and disinformation when you have populations that aren't particularly tech-savvy or media-savvy, or even literate, how do you deal with that?

Well, to be honest, we have the same issue in advanced democracies with high literacy rates too.

Exactly. But it's just exacerbated in countries with low literacy rates.

Yeah, I think that's right, and that's why we really need a whole-of-society approach to this. You know, the answer isn't going to be to wave a magic wand and suddenly mandate that a company never allow false information. Number one, that's not feasible, but number two, the efforts that they would take to do that would end up over-correcting in such a way that you actually undermine the extreme value of social media sites for sharing information. So you have to be really careful in the balancing act here, of what those normative levers are that you're able to pull. I mean, I think the answer is some of the modifications and lessons that we've learned over the years: how do you signal-boost what you know is authentic, how do you signpost information that might be of questionable quality, whether that's through labeling or fact-checking, or down-ranking something if it looks like it's of the spammy category as opposed to a quality site. And then, at the same time, having media literacy be a really strong component of how governments across the world and societies across the world try to retool for this moment. Again, not one of those things is going to be the silver bullet, but hopefully, through an integrated approach, we have more resilient societies that can withstand this fragmented, extremely democratized information environment in which we now live.
Minister Lipavský, do you think tech companies are too powerful right now when it comes to their ability, just as Alexandra was describing, to signal-boost false stuff, or choose not to?

You are very good at reading my mind. I think it would be a huge mistake for governments to accept that something is not possible to regulate, control, or somehow work with, if it has the possibility or capacity to endanger those governments. And artificial intelligence is a great piece of technology which can deliver different kinds of results, so there needs to be developed such regulation, such a position of control over those technologies, that governments will be sure it is not going against the interests of the governments. This is a very important principle which basically every player will understand, and this is the reason why we pose these questions of how to work with them. And I'll give you one very specific example. My ministry is one of those responsible for the licensing process when Czechia sells weapons. There is a threat in the movement of weapons and ammunition, and we need to give a stamp that we agree with the transaction, that it is not going against the interests of Czechia. Produce a gun, and we will say, OK, it is not possible to sell it to Russia. A very easy example. And the same goes for dual-use products, like CNC machinery, which can be misused for the production of, let's say, parts of ballistic missiles. The same is valid for software; the same is valid even for AI systems. And we need to be sure that we are able to control this. Every state in the world solves the same problem, because of course this is a matter of national security, a matter of being able to control what the government is supposed to have in control. And I think there will be a solution. This is the reason why I am calling for a right not to be manipulated. It's a very easy principle. It is not a given set of checkmarks you need to fulfill. And I absolutely agree that the system is non-deterministic, but it's a
clear guidance for everyone what the results definitely should not be and if it's then you know that you need to go and let's say fix it and to have a capacity and understanding for that so we don't know it is a new new technology it's a new thing that you can chat with computer but if you chat with someone then you should know that you are chatting with computer or with a human person same goes for images same goes for video same might be valid for many other things let's say that the same technology which will give you the possibility to distinguish that your car is not a hitting person might be misuse for targeting purposes same piece of technology so how do we deal with that those are big questions they are indeed Alexandra quick and to when we're talking about AI as the arbiter of access to information I think there are real concerns with the world where the government gets to decide whether or not the AI is acting lawfully we do not live in a perfect world and there are plenty of governments around the world right now that are putting extreme pressure already on technology companies where the words of the opposition are misinformation that can no longer be shown so that's why this work is so hard the second you come in and you legislate that you really go into dangerous territory now do you want the CEO of a tech company deciding if information should be upranked or government minister deciding if the information should be upranked neither of those solutions is ideal right I think we can have or my community can but so instead I think we have to look at smart interventions that perhaps you know you can come down on different sides of this a place of ideas to solve out so that's where transparency mechanisms for reporting so you know how those decisions are being made is hugely important but a world where the government just gets to decide is this AI sanctioned or not yeah no your point is well taken I'll let you come back very quickly and then I'll bring in 
André. So, and therefore, this is the reason why, for example, my country joined forces with the Maldives, Mexico, the Netherlands, and South Africa to promote in the General Assembly the protection of human rights in the context of digital technologies, so that the same human rights we have now are applied to the digital world. And then you have countries like Russia and China which would like to create a new set of rules for the digital sphere, which I don't think is a good idea. We should work with the good old rules we have, so that we have a means to decide, maybe in a slower way, but we should be able to apply them to the digital world, rather than some kind of new set of rules.

André, please.

I would just like to come back to the question of innovation. Fundamentally, what keeps the overall ecosystem honest is innovation. You may have new initiatives, new elements, that will give you a different perspective. Fundamentally, I don't think that the government can just decide what is right and what is wrong; otherwise you come to a system that is in reality not the power of the people but the power of the government. Having said that, it is extremely important to have regulation that does not prevent trying new things, but that is able to step in when you have abuses. And it is extremely important to have this capability to come with new elements, out-of-the-box thinking, that can also challenge the way we see what is right and what is wrong, because if we do things too much by regulation, we cannot even really be sure whether something is right or wrong; the perspective may be biased, and in a way, through AI, we may suddenly find a different perspective. And then one of the most important elements is to educate people, not in a classical sense but in a way that makes them able to understand maybe not one single reality but different views, so that they form their own opinion and decide what they think is right and what is wrong.

Thank you. You know, as you were speaking, and we only have a few minutes left on
this panel, I'm going to welcome on stage one of our guests who's a little bit late. Minister, thank you for coming. We're almost out of time, so I think what I'm going to do is ask you one question. You haven't heard this discussion so far, but I'll ask you a question about India, which is going to the polls in a few months. It's the biggest election in the history of the world. For those of you who don't know, when India votes, it's spread over four or five weeks, because it's that big. We mentioned earlier in this panel that India is a country that has gotten online in a big way in recent years, partly through the smartphone, and yet it's also confronted with low literacy rates, especially in some states, and especially among women. How are you thinking, as someone who's in government right now, about combating misinformation leading up to the elections?

So I'm late to the party because there was a man driving me through Davos. We are from the world's oldest democracy. One of the aspects of the Indian political system that is possibly unknown is that it is not only about the fight, through government instruments, against disinformation, and not only about partnering with media at large to ensure that citizens are better informed; if democracy is to truly fructify, you need citizen engagement in governance and policymaking. One of our most celebrated digital instruments for engagement is MyGov, where citizens are requested to give their inputs as to how policy should be framed. Exactly two and a half weeks from now we will present the interim budget, a vote on account, and that budget also reflects inputs given by fellow Indians digitally through the platform. When you talk about our elections, it is notable that though we hold them in five to seven phases, from the last day of the vote being cast it takes us precisely three days to compute the results; it is done electronically. 945 million Indians today qualify as voters, of which 94% are already bio-authorized.
Seventy percent of these voters will cast their vote, and they come to vote not only through systems that are electronically and digitally enabled, but also better informed because of social media engagement. I'll give you a small example of how digital democracy is delivering. When you talk about, let's say, deepfakes: Prime Minister Modi speaks about watermarking each and every AI product so that we know its origins and can better inform our citizens about the information source from which, let's say, they form an opinion. We have the Ministry of Information and Broadcasting, which calls out fake news if we find it, or if it is brought to our attention through some media outlets, so the government can give a clarification with regard to the news at hand. I would also like to here state that when we talk about digitally empowering the voter in some way, we also have engagements which are not available only at the stage of the general election. As a woman, I proudly say this: we have 1.5 million women who are elected to office at our grassroots. So when we say that our digital democracy, and democracy per se, has delivered, it has not only delivered on the surface but has percolated to the depths and layers of administration and of our population. So India is not only digitally ready for the elections but is also a democracy that celebrates its achievements digitally and offers possible pathways for the future as AI grows and as we augment our processes. I can only say this, since time is running out.

So let me just push you very quickly, because time is running out, and then we'll conclude. Where are the checks and balances in what you're describing? I mean, doesn't this system you're describing, where you have MyGov and you have the media, make the government too powerful?

Because the Election Commission is a body which is de-linked from government, the judicial system is fair and independent of government, and we have a media that calls out any such anomaly,
so it's not as though democracy is served just through the pillar of government or politics. Democracy is equally served by other tenets which keep it alive: an independent media, a fair judiciary, and a robust democratic process, with an Election Commission that is de-linked from government and politicians.

Minister, if we had more time, I'd love to discuss how some of those pillars are being weakened, but we'll leave that for next time. I want to thank all of our panelists for joining us, and also our audience for bearing with us. Thank you, everyone, and we'll continue the conversation. Have a great Davos week.