So basically, like Xenof has said, we at Hasgeek and Privacy Mode have been working towards understanding the effect of the IT Rules on the tech community over the last two months. We have been conducting FGDs and interviews with members of the legal and policy teams of different tech companies. We've also talked to public interest technologists, and we've spoken to startup founders, CISOs and security tech teams as well. What we've collated so far is an understanding of how this works on the ground: how are companies planning on responding to the IT Rules, and more importantly, what are the problems they see or envision as of now? Broadly speaking, what we've found is that there are large aspects of compliance that require a little more discussion, which of course Udbhav will expand on from the legal side. But some things we'd like to point out: first, the difficulty of the personal liability clauses enforced in the IT Rules, predominantly around the chief compliance officer. Another aspect we've found to be highly confusing with regards to compliance is periodic reporting. The IT Rules mandate that an organization must report on all grievances and submit that report to an authority at periodic intervals. But there is no clarity on what such a report should entail, what its format is, or what they mean by periodic intervals. These are aspects that members of the community were worried about. Another aspect is takedown notifications and data retention and data deletion practices. As mentioned earlier, the grievance redressal mechanism is an important part of this legislation.
So what happens is the institution has a period of about 36 hours to remove any content that is seen as questionable or has been highlighted in a grievance. When such a situation occurs, is that period of time feasible for a lot of these entities, especially considering that we don't know the volume of grievances that come in during that period? Another huge aspect of this legislation that individuals are worried about is privacy. There is a clause known as the first originator clause that requires one to track down the first originator of any piece of questionable content, that is, who is the first Indian who created or sent it. Which raises the question: what degree of privacy are we talking about? How far that clause goes towards breaking encryption practices is also something we've discussed, as is how this might affect freedom of speech. A last point we want to talk about is oversight and monitoring. A lot of organizations do not know who enforces these rules. They know that MeitY does contact them with regards to grievance issues, but who do they contact when there are problems, that is, when grievances are not resolved in a specific way or format? So these are the large brackets we've observed while doing our FGDs on this law. In the recent past, we've also seen many large tech entities file petitions in courts. We've seen digital publications like The News Minute, The Wire and The Quint file cases in the Delhi High Court. We've also seen Google as well as WhatsApp file cases with regards to the IT Rules in the Delhi High Court.
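On the feasibility point about response windows raised above, here is a minimal sketch of how a compliance team might compute when each request falls due. It assumes the commonly cited 36-hour takedown and 72-hour information-request windows; all function and variable names here are hypothetical, not from any real compliance tool.

```python
from datetime import datetime, timedelta

# Response windows as commonly cited for the IT Rules 2021
# (36 hours for a takedown order, 72 hours for an information request).
# Everything else in this sketch is hypothetical.
DEADLINES = {
    "takedown_order": timedelta(hours=36),       # government/court takedown
    "information_request": timedelta(hours=72),  # law-enforcement data request
}

def response_due(request_type: str, received_at: datetime) -> datetime:
    """When must an intermediary act on a request to stay compliant?"""
    return received_at + DEADLINES[request_type]

received = datetime(2021, 6, 1, 10, 0)
print(response_due("takedown_order", received))       # -> 2021-06-02 22:00:00
print(response_due("information_request", received))  # -> 2021-06-04 10:00:00
```

Even this toy version makes the operational worry concrete: with an unknown volume of grievances, every incoming request starts a short, fixed clock.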
So without further ado, I guess we can speak to Udbhav. Udbhav, are you there? Hi, Bhavani. Thank you so much. Hi, everyone. So, Udbhav... yeah, sorry, go ahead. Sure, Bhavani. I'm happy for you to introduce me, or I could just spend 15 seconds talking about what I do, is that all right? Yeah, yeah, please go ahead. No problem. Sure, thank you. Hi, everyone. I am Udbhav. I am a public policy advisor at Mozilla, based in New Delhi in India. Over the last two years at Mozilla I have been working on various issues that impact the open internet: data governance, intermediary liability, content moderation, encryption, connectivity. Over the course of the next 15 to 20 minutes, before we really move on to questions, I'll try to do four things. The first is give you a high level overview of what intermediary liability is, some of its global history, and its Indian context and how it's developed over the last couple of years, largely in the interest of showcasing how the legal regime that governs intermediary liability in India has evolved, using real world examples rather than really getting into laws and statutes. Second, cover why intermediary liability is important and the different ways in which it has allowed the internet to grow and develop in the way that it has. Third, talk a little bit about the rules, some of their high level provisions, and what sort of impact they can have on the internet. And then finally have a dedicated section where I can answer any questions that you all may have as well.
In case you are in front of a computer or would like to have something to read along while I'm talking, I've also just shared in the Zoom chat a link to a blog post that Mozilla has put out about this that contains a high level summary. Post the talk, we can also leave it as a comment on the YouTube channel, or just Googling "Mozilla open internet India" should bring it up as one of the top results. Having said all of that, let me start off with what intermediary liability is. It sounds like a very legalistic term that seems to say complicated things, but at a fundamental level it simply means this: platforms, no matter what that platform is, everything from Google Search to Amazon Web Services to a blog that you post to, to the chat that runs on Zoom, are all intermediaries. Intermediaries are essentially entities that allow for content to be sent or communicated on their platforms, across either two users or multiple users, and sometimes not even just users but machine to machine communication as well. And intermediary liability is the idea that these platforms should not be liable for the content that is on them unless they specifically know that there is something illegal going on on their platforms, after which, if they don't act upon it once the government tells them that the content is illegal, they become liable for it. At the simplest level, it means that if I, say, send a text message that is considered seditious under Indian law, then the platform will not be liable for that sedition until somebody tells the platform that the message is illegal under Indian law. That is, a government agency or a court saying that it is illegal and therefore you should prevent it from being sent on your platform.
After that point in time, if the platform either completely ignores the request or doesn't act the way that the law requires it to, then the platform can be liable for that content being on the platform. Similar frameworks apply to hate speech online, extremist content online, CSAM, which is child sexual abuse material, and a wide variety of content on the internet. Broadly speaking, this is also called safe harbour: you as a platform or as an intermediary are safe from liability as long as you don't ignore the fact that you have been informed about content being illegal and refuse to act on it. Having explained that at a high level, how does it really operate in practice? To understand intermediary liability, there are two important things one needs to know. The first is a law in America called the Communications Decency Act. Many people call the CDA, and specifically Section 230 of the Communications Decency Act, the fundamental basis upon which the internet has grown and developed the way it has. What Section 230 essentially says is that platforms, or intermediaries, are free to regulate or edit the content on their platforms without being held liable for it in any way. The history behind this is that when the internet was really kicking off in America, in the early to mid 1990s, there were lots of conversations about platforms starting to be held liable for allowing certain speech on their platforms and not allowing certain other speech. Whether it was reviews, trolling or otherwise illegal content, people started saying that platforms need some sort of protection: the idea that if they act upon a piece of content, they will be protected from liability for it.
Otherwise they would be forced to either take down too much content, do over-censorship, something that I will talk about in a bit, or alternatively simply allow all sorts of content to remain on the platform, which would not be good for users and society in general. Which is what led to the first so-called intermediary liability law in America, Section 230 of the Communications Decency Act. You may have also seen in the news and in public discourse the idea that even in America there are many conversations around how Section 230 should be amended or reformed to account for the new ways in which platforms operate and the power that they fundamentally enjoy in society; the deplatforming of President Donald Trump in the United States is another instance that has sparked that debate. Now why is all of this relevant in India? India didn't actually have a full intermediary liability regime until the IT Act was amended in 2008, with new rules around it passed in 2011. The history behind why India did not have an intermediary liability law until that point, and why we did have one after that, is I think very interesting and really helps underscore why India needs intermediary liability. At that point in time, there was a website, which I'm sure some of y'all are aware of, called Baazee.com, that was sort of like an eBay for India, where you could bid on products on the internet and the highest bidder could actually get that product physically shipped or delivered to them. A very typical e-commerce model.
At that point in time, there was a sexually explicit video circulating around the internet, and somebody had put up a copy of that video on a CD for sale on Baazee.com, saying that the CD would be sent to the person who won the bid. What happened was that the police actually arrested the CEO of Baazee.com, saying that Baazee.com was playing a role in the propagation of pornographic content in India, which is illegal. At that time there was widespread outrage in the industry, because people said that Baazee.com did not have the ability to monitor each and every piece of content that users put onto its platform for sale, or to monitor who it was being sold to. Everyone agreed that they took the listing down immediately when they were informed of it, but because India didn't have an intermediary liability regime, the head of Baazee.com actually had to spend many years in courts trying to prove himself innocent, arguing that this is how online platforms work: at the scale at which they operate, it's impossible to hold individuals liable for illegal content being put up by users on a platform. In general, that is something the Indian government came to agree with, and that's why India has an intermediary liability regime. Because without it, and without safe harbour, it is remarkably easy to penalize individuals for content that they had nothing to do with, and that they even took down once told it was illegal; until they are informed, any harm that occurs in society because of that content is something they should not be liable for. That is what India's intermediary liability regime has said for a fair bit of time now. With all of this context, I'd now like to move on to the second issue, which is: why is intermediary liability important? What India's intermediary liability regime does is create due diligence obligations that intermediaries must meet to enjoy safe harbour.
So what it says is: if you are an intermediary, you have safe harbour, and I'm going to get into the different kinds of intermediaries, active intermediaries and passive intermediaries, in a bit, but in order for you to enjoy safe harbour you need to have and follow certain steps. Those steps can involve a wide variety of obligations, and unless you follow them you will not enjoy safe harbour. One example of the steps previously present under Indian law: there is a government committee under Section 69A of the IT Act, which is beyond the purview of this immediate discussion, but essentially, orders to take content down have to come from that committee after it evaluates the content. If that committee tells you that a piece of content should be taken down, then you have to take it down, and if you don't, you don't enjoy safe harbour for that particular piece of content. Similarly, there are other provisions under which platforms have to share data if law enforcement agencies make a demand for that information, which is also present in the latest rules. And there are many other such due diligence obligations in the intermediary liability framework that platforms need to comply with. Only if you follow those due diligence obligations do you enjoy safe harbour; if it can be proven you did not follow them, you do not enjoy safe harbour and could therefore be held liable for that piece of content. Given the technology sector in India, you can certainly imagine that every startup that involves an interaction between two entities, or multiple groups of entities, in some form or the other is an intermediary. The chat function inside Dunzo and Zomato is an intermediary. An IRC server that you or I may run is an intermediary. A chat server that we may run on Discord is also, at the end of the day, an intermediary.
And the scope of "intermediary" is so broad that other laws and regulations in the world actually explicitly split up intermediaries and provide different obligations for different kinds of intermediaries. For example, there is the e-Commerce Directive in the European Union, and what the e-Commerce Directive specifically says is that there are different categories of intermediaries that carry out different functions. Take telecommunications providers. Imagine you are on a phone call and you are currently plotting a murder that you are going to carry out with three of your accomplices, day after tomorrow, at this location. Now imagine if somebody held your service provider, hypothetically an Airtel or a Reliance Jio, liable as a party or an accessory to carrying out that murder because that conversation took place on their network, on a phone call run through the Airtel network. That seems like a pretty ridiculous idea, right? And that equally applies to so many services on the internet as well. Keeping this in mind, the e-Commerce Directive says that if you're a telecom services provider, these are the obligations you have to carry out to avail safe harbour. If you are a platform that sells or provides services, these are the obligations you have to carry out. If you're a communications platform, these are yours. And all of these categories have different levels of obligations within them. The reason I mention all of this is that India's regime does not have any of this sub-classification in the way that I've described, and because of that there are many issues, which I'll sort of summarize.
The first is that unless an intermediary is sure it enjoys safe harbour, it's very, very easy for it to simply comply with any request that entities send it to take down content, out of fear that it will end up losing its protection from liability. And there's a very interesting example I'd like to give specifically about how over-censorship and intermediary liability interact. There is a very important case under Indian law called Shreya Singhal versus the Union of India, which is also linked in the blog post that I've shared in chat; otherwise, just Googling "Shreya Singhal versus the Union of India" will bring up many useful summaries without your having to read the whole judgment. Before that judgment came out, in, I believe, 2015, the rules allowed three entities to ask platforms to take content down: the government, courts, and finally any user. Which means that if you and I got into an argument on a Facebook post, say, and you thought that I had defamed you, you could just write a notice to Facebook saying this person has defamed me, take this piece of content down. There was an appeals mechanism of sorts, but essentially anybody could report any content they discovered to the platforms, and that report was considered notice to the platform that the content was illegal. As you can imagine, people used to send thousands of such notices, sometimes just to annoy the platforms.
And there is this research institution in Bangalore, with an office in New Delhi as well, called the Centre for Internet and Society, that did a fantastic piece of research around this, around 2011. What they did was essentially a sting operation: they sent approximately a hundred fake takedown requests, for content that was perfectly legal but which they claimed was illegal, to different intermediaries, asking them to either take the content down or block it from being served on their platforms. And the vast majority of that content, despite being completely legal, was blocked by those intermediaries. What that showcased, and this is something the Supreme Court actually used and cited in the Shreya Singhal judgment, is that if you create an obligation where you lose safe harbour unless you comply with whatever request somebody authorized by law sends you, then in order to be legally safe, platforms will simply comply with that request and will not do the due diligence required to actually determine whether that content is legal or illegal. Because of that report and the research around it, the Supreme Court actually struck down that part. It said that users cannot report to platforms that content is illegal for it to mandatorily be taken down; platforms will have notice of a piece of content being illegal only when either a court or a government agency tells them so. And as you can imagine, with this entire framing, safe harbour and well designed due diligence obligations essentially prevent over-censorship, they allow platforms to sustainably scale and grow, and they also ensure that ideas like freedom of expression, privacy and security are things that platforms can continue to build features for, as long as they aren't held liable for them. Because imagine I am WhatsApp.
The year is 2013, and I'm deciding I am going to build end to end encryption and make it available by default for everyone. Or imagine I'm Signal, or any other messaging application that you trust that actually has good encryption. And suddenly there is a law that says if you do not have the ability to block content that the government says you should block, then you may be liable under Indian law for that piece of content. Now, as you can imagine, that's something that may actually make you think twice before implementing features like encryption, right? So bad safe harbour laws can not just have impacts on users; they can also have very tangible impacts on companies, both on feature development and, as Bhavani spoke about earlier, on compliance and the financial burden it can impose upon companies, especially the smaller ones. Those are some of the reasons why intermediary liability is fundamentally important. And I hope that in this broader context I've explained the different kinds of intermediaries: TSPs, which are more like, you know, dumb pipes, as well as active intermediaries that allow for active communication, such as hosting a YouTube video or hosting a blog, as well as other places where intermediaries also exist. I think for this audience that I'm speaking to, GitHub is an intermediary, because GitHub is allowing you to store code on the internet that anyone else can view if you choose to share it that way. And the same thing applies to IRC chat rooms, mailing lists, Discord servers, Slack groups: in all of these instances, the platforms providing the service are intermediaries, and are therefore bound by these rules and regulations.
So having explained all of that, I'll now very quickly go into some of the main ways in which intermediary liability is evolving in India and some of the big challenges there. I would say the first thing the rules have done, and this is something I've mentioned in the past, is create a new framework which says that for intermediary liability there are only two categories of intermediaries: significant social media intermediaries, and all other intermediaries. There is a set of obligations that apply to all intermediaries, and then a set of very onerous obligations that apply to significant social media intermediaries. There aren't any special obligations that apply only to plain social media intermediaries, even though that's a term that's actually used in the rules a couple of times. Now, the reason this distinction is very important is that some of the worst changes in the rules apply only to significant social media intermediaries. Significant social media intermediaries are those that have over 5 million, or 50 lakh, users in a year. But what is a user? Is it a daily active user, a monthly active user? None of those things are clearly defined in the rules, and the government also has the ability to notify a platform that it believes should otherwise qualify as a significant social media intermediary into that category. And once you become a significant social media intermediary, you're supposed to have compliance officers. So, as you can imagine: do more than 50 lakh people in India use Slack? Do more than 50 lakh people in India use services like Discord? You can imagine they probably do, across all their different services.
So everything that I'm saying right now, most people only talk about how Google has to comply with it, or Facebook, or Twitter, all of which is true. But much, much smaller companies, used by far more diverse audiences, also have to comply with these rules, and that's something many of us quite often forget. So what are some of the things that these significant social media intermediaries have to do? The first is that you have to enable traceability of encrypted messages, which means the government can give you a message, say this piece of text or this image, and you have to tell it when was the first time this message was sent in the country. It doesn't matter if it's a forward, it doesn't matter if it's been copy-pasted; all they want to know is the first time this message was sent in the country. And as WhatsApp, which has challenged these rules in court, has argued, that will essentially require end-to-end encrypted messaging platforms to store a hash of every piece of content sent on the platform, associated with the user sending it, each and every time it is sent. As you can imagine, that's very, very scary as an idea, not just for the infrastructure costs it imposes but also for the privacy and security risks it creates. The other things they are required to do: they're supposed to endeavour to use automated filtering, and they're supposed to ensure that if particular pieces of content are being paid for or promoted by people, the platform identifies them as such. And then there are also a bunch of obligations that apply to all intermediaries, both significant social media intermediaries and everyone else.
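Going back to the traceability obligation for a moment, the record-keeping it implies can be sketched in a few lines. This is a toy illustration of the argument described above (a hash of every message mapped to its first sender), with all names hypothetical; it is not any platform's real design.

```python
from __future__ import annotations

import hashlib

# Toy sketch of the record-keeping that first-originator traceability would
# force on an end-to-end encrypted platform, per the argument described in
# the talk: a fingerprint (hash) of every message body, mapped to the first
# user in the country who sent it. All names here are hypothetical.
first_originator: dict[str, str] = {}

def record_message(sender_id: str, message_body: bytes) -> None:
    """Remember the first sender seen for each distinct message body."""
    digest = hashlib.sha256(message_body).hexdigest()
    # Forwards and copy-pastes hash identically, so only the very first
    # sender is kept, which is what the rules would require reporting.
    first_originator.setdefault(digest, sender_id)

def lookup_first_originator(message_body: bytes) -> str | None:
    """Answer a hypothetical request: who first sent this exact content?"""
    return first_originator.get(hashlib.sha256(message_body).hexdigest())

record_message("user-42", b"example forwarded text")
record_message("user-99", b"example forwarded text")  # a forward; first sender kept
print(lookup_first_originator(b"example forwarded text"))  # -> user-42
```

Note what even this toy version implies: the platform must retain a fingerprint of every message every user has ever sent, tied to their identity, which is exactly why critics argue the requirement is incompatible with end-to-end encryption.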
So technically, everyone I've mentioned, telecom service providers, Slack, Discord, also has to do a bunch of other things, as does any intermediary, even one with just two users. You could set up a blog tomorrow, host it on your own server, and you would technically be an intermediary required to comply with some of these provisions: not the significant social media intermediary provisions, but the baseline ones. Now what are those provisions? And they're not really small at all. The first is that if the government asks you to take a piece of content down, you are required to do so within a period of 36 hours. If the government or law enforcement agencies tell you that they want information about users on your platform, you are supposed to provide that within 72 hours. And there are a bunch of other obligations around what sort of content you can and cannot allow on your platform. One of the kinds of content the 2021 rules say platforms should prohibit is information that is "patently false". As you can imagine, that's a very, very big term. Who determines what is patently true or patently false in the world in which we live is something I think we should all really think about and consider: whether we want the government to be able to require pieces of content to necessarily be taken down because they are patently false, where the government itself determines what is patently true or false, because that's how it would work under the current regime.
And the final thing that I haven't really gotten into yet is that there are also elements of legal and personal liability in these rules. What that means is that if you don't comply with them, then, very similar to the story I told you earlier about Baazee.com and why India has an intermediary liability regime, people can be held liable. Significant social media intermediaries have to have grievance and compliance officers, and in many cases notices and summons can be issued against those individuals, who are supposed to coordinate with law enforcement agencies. But even for all other intermediaries, it's very possible for the government to do exactly what it did with Baazee.com, just by saying that this is a piece of content that you haven't taken down within 36 hours. Now, a very commonly made argument is: why shouldn't there be harsh obligations to take down harmful content on the internet? There's anyway so much of it, and it's causing so much harm to society. But the true answer to that is that making platforms liable for not taking down that content is not going to prevent illegal content from being on the internet. The only thing it's really going to do is disincentivize innovation, penalize the entities and individuals behind these platforms, and ensure that these rules are used to selectively harass certain platforms or certain individuals. Because the problem of online harms and harmful online content is not that it isn't being taken down; it's a problem of digital literacy and far bigger, deeper issues that aren't really accounted for in these rules in any real way.
So with that, I'm not going to get into some of the other provisions, like content filtering and everything around digital media regulation, which, as Bhavani mentioned earlier, have been challenged in courts. But in general, this is a high level overview of why intermediary liability is important and some of the things that are deeply problematic in the intermediary liability rules. In the final five minutes or so of my individual talk, before we really get into questions, I'd like to very quickly talk about the current legal status of these IT Rules. What's happened is that these rules are now law. They were first notified in late February 2021, and platforms got three months to comply with them. (Thanks, Annabelle, I'll try to speak a little slower, and I hope y'all can hear me; if it doesn't work, please let me know and I'll turn my video off so that my audio is clearer.) So essentially, these are now law, and because they are now law, platforms are technically required to comply with them today. As you can imagine, there are so many examples we can instinctively come up with, things like traceability of end-to-end encrypted messages and automated filtering, that so many platforms we know about don't necessarily do. So it's an open question as to how these rules will be enforced, but the risk of non-compliance is very, very real, and that's something I think we should all keep in mind. And what groups all over the country have essentially done is that entities like Hasgeek, as well as so many others, are working towards spreading the word on these rules: telling people the ways in which they are bad, the ways in which they can be improved, and the different changes the government could make to how these rules
are enforced, to minimize some of these harms while still allowing the government to meet some of its legitimate objectives behind limiting harmful content. But there are also certain people who decided that, because the rules have now already become law, they need to be challenged in court. So what's happened is, I would say the challenges have centred around the digital media part of these rules, something we haven't really spoken about: these rules also give the central government the power to take down a single piece of news content on an online platform, or to ask an online news platform to amend a particular piece of news content. That is something that, as you can imagine, is very, very hard to do even for print publications, but it's much easier to do here for online news content, given the vast proliferation of online news. There are also the registration requirements these rules contain, and the fact that the government can refer a particular piece of content to a government committee. This committee doesn't have a judge, it doesn't have external experts; there are some self-regulatory mechanisms, but in an emergency, a government committee made up entirely of executive members can order certain pieces of news content to either be taken down or amended, as an example. So those are the parts that have actually been challenged most actively in courts all around the country: there have been challenges in Karnataka, in the Delhi High Court, in Kerala, and in a bunch of other places. And the challenge in Kerala in particular has been quite successful, where LiveLaw has actually managed to get an order that says that that part of the rules will not apply to LiveLaw, a legal news publication, until the case is essentially settled. But in general, there are also challenges that organizations like the
Internet Freedom Foundation have made themselves a part of and participated in across the country, both helping allies and actively being parties in some of the cases as well. But I would say all of these are at very early stages. Similarly, there is a challenge by WhatsApp in the Delhi High Court that specifically targets the traceability provisions in the intermediary liability rules. They have a blog post — if you search for "WhatsApp traceability FAQ" you should find the link — explaining why they filed it and why they think traceability is a problem. But as we saw last time — if you remember, I told you about the Baazee case and about Shreya Singhal — all of those cases took years to play out, at least two to three years. Which means that if these rules came into force in their entirety in May 2021, those are the sort of timelines we should keep in mind before courts can actively make a difference to how this legislation broadly operates. So what we ultimately need to keep in mind is that these rules are law, and platforms are required to comply with them. There are some good parts in these rules, such as the provisions that say that if you do automated filtering, you need to account for bias and discrimination in those algorithms; or the fact that if a platform voluntarily takes down content to enforce the law (the rules say the law, not community guidelines), it enjoys something called Good Samaritan protection — even though an intermediary is not supposed to exercise editorial control over its content, voluntarily taking down content that is otherwise illegal under Indian law does not cost it its protection as an intermediary. So there are some good things and changes in these rules
but the vast majority of them are very worrisome, both for privacy and security and for the average small startup that just wants to operate a consumer-facing or business-facing entity. The rules will definitely increase the compliance burdens those startups have to fulfil, and will also put them at risk of legal pressure in a way that makes it very hard for them to operate in this space. Having covered all of this, I would just like to end by saying this is a very pressing issue. There are many organizations I've mentioned in this conversation — the Internet Freedom Foundation, the Centre for Internet and Society, and others all over the country — that are working on it. Please do visit their websites to understand the issue better, and spread the word about them, because of how fundamental they are to ensuring that the Internet can remain an open space. With that I'll hand it back to Bhavani, and I'm happy to take any questions that may come up or anything else I can answer.

Thank you so much, Udbav — that was a very exhaustive explanation and covered a lot of points. We'd now like to open up the chat for any questions you might have, for Hasgeek or for Udbav. You can send your questions via chat or via the YouTube link — those are the two options. So while we wait, I was wondering, Udbav: you spoke at the end about a number of court petitions against the IT rules, and as far as I can tell there are two main types of players. You have digital publications, who are extensively petitioning in, as you said, Kerala, Karnataka and Delhi; and in addition you have larger intermediaries like Google and WhatsApp, who have also filed cases. Two things I wanted to ask while we wait for everyone to
start raising their questions. One: why do you think both these groups have been so vocal about it? And two: with regard to the Google case, I think there's another nuance about the law — scope creep — that we could expand upon, because as far as I can tell, that case is basically Google fighting to not be under the IT rules as a social media intermediary: it argues that a search engine shouldn't be governed the same way. Maybe you could expand on this while we wait for some questions to come in.

Sure. I think the Google case you described, and the scope creep in it, is just one example of how many aspects of these rules suffer from definitional ambiguity. Unless you define terms precisely and concisely, it's possible to subjectively impose them upon certain kinds of entities and certain kinds of obligations. What Google is trying to say is that its search engine is not a social media intermediary, and I'll wait for the courts on that, but I can assure you that under the definition of a social media intermediary there are many other entities that may be treated as social media entities but aren't — and that won't necessarily have the resources or the wherewithal of Google to fight it out. So this approach of fighting a case just for yourself, so that your particular product is excluded from the purview of the rules — it's not really scope creep, it's definitional ambiguity by design. If your definition is broad and vague, anyone can subjectively choose to impose it against entities in any way they really want, and that's what platforms are fighting against. But I think in the long term the sustainable
solution to this is really taking it to court and contesting it at the level of principle — where you don't argue only the sub-issues of implementation (not that they're unimportant, they're very important), but fix the underlying problem: the intermediary liability provision under the Indian IT Act simply does not empower the government to make rules as broad and as vague as these, covering everything from all intermediaries, which the previous rules did, to all digital news content, which many people are contending it does not. The argument being made is that if these rules are to do the things they do, there needs to be a new act that passes the Indian Parliament, and only after that can such rules be made. The Morning Context publication actually did a really nice story — I think it came out just yesterday — showing through RTI documentation that the Ministry of I&B, the Intelligence Bureau and a bunch of others also said that if such content has to be regulated, it's better to pass a law properly rather than just do so under the rules. So I think we need to keep that broader opinion and context in mind: while those debates are important, the solution lies in fixing this for everyone, and not just a few powerful companies.

Sure, thanks Udbav. Since we're anyway in this meeting to explain the legislation to the developer community that PyDelhi is a part of, could you expand on how you think this community might be affected by the IT rules? You gave a very broad, in-depth explanation about individual rights and overarching business issues, but as an individual in the developer community, are there things people should be worried about and should be thinking about?
I think there are some immediately obvious ones that I believe I mentioned earlier: things like code being hosted on GitHub, chat rooms that developers may run on Discord or Slack, IRC servers that they or other entities may host — all of these fall within the scope of intermediary liability and these rules. What that essentially means is, first, developers will have to be a lot more careful about what they say on such platforms, especially where it might run afoul of government laws; and second, they will have to be careful about which platforms they choose, because one could argue that if a foreign-based platform provides these services in India but doesn't comply with the entirety of these rules, it's very hard to imagine how that platform will be allowed to continue providing that service in India. Probably everybody on this Zoom conversation has a Signal account, right? The Indian government has already started signalling that Signal is not complying with India's intermediary liability rules, and if that escalates and goes the way it's going, it's very possible that an app like Signal will be blocked. So at a basic level, actions that developers are already used to carrying out via the internet are things they will have to be much more careful about, and content may be censored and taken down much more easily. In general, there is also the compliance aspect, which is very much a developer issue: most young technology startups are founded by developers, and the amount of obligations they will have to execute, and the amount of money they would have to spend to implement these rules
is massive. Because of that, it's very hard to imagine them operating the way they have in the recent past, before these rules came into force. So I think it will also impact innovation and other things developers may be able to do.

Someone has asked: how does the use of Tor services get impacted by these rules?

Technically speaking, anybody who runs a Tor service is, I would say, an intermediary. Depending on how many individuals in India use Tor — and these are publicly available numbers — it's easy to imagine Tor being treated as a significant social media intermediary as well, depending on how the government chooses to define that, and whether it chooses to enforce against it. It's quite obvious that Tor is a service built and designed to overcome government censorship. If, for example, China passes a law saying Tor should not serve certain people in a certain way, the Tor project is not going to change its features to comply with Chinese law; it will simply keep existing the way it already does. Because of that, if the Indian government really decides to go after Tor — to say that Tor is not complying with Indian laws and regulations and therefore has to be banned in India — then I think they are most certainly empowered under these rules to do so. Whether they will do so is another question; we haven't seen such broad-based blocks in the past. So one has to really think through the different ways in which platforms will resist these government requests, and to what extent they will change their features to comply with these rules — because some of them may choose to prioritize the privacy and security of their users, and if the government decides to do something about it
those services may be blocked, very similar to, say, Signal or Tor.

I think there was a follow-up question to that, where the individual said: but there is no way to track who runs a Tor service.

Yeah — in practice, what ends up happening is that countries like Iran and China actively degrade Tor services, and there are ways to overcome that: using bridges, accessing nodes that are privately hosted, and things like that. So it's very much a cat-and-mouse game. I agree completely that it's impossible to completely stamp Tor out in India. But there is, for example, a publicly available list of Tor nodes with their IP addresses that one can connect to, and if some of those IP addresses are in India, then after some investigation it's not impossible for the government to come knocking on somebody's door and say: here is evidence that you are running a Tor server, and therefore we would like to do something to you because of it. So it is definitely a cat-and-mouse game, with a lot of back and forth, but whether the government chooses to enforce it in a particular way is what will determine how much the rules impact these things in practice.

Sorry, Bhavani, I think —

Hi, sorry. Somebody else had another question: how are strategies to deal with the rules different for content and social media platforms versus transaction intermediaries like payment portals, etc.?

I would say that payment portals in general already have a lot of onerous obligations that apply to them anyway — guidelines from the Reserve Bank of India, criteria and best practices enforced by the banks they deal with, standards like PCI DSS. So I think the risk for payment portals is much lower, largely because payment portals usually take payment information and pass it on, and are very rarely used to
actively host expressive content — you don't express your thoughts in a payment, though you can make payments for someone else's thoughts, which may or may not be controversial. So I can completely imagine a payment portal getting a request — if it is an intermediary in the first place, and there is an argument to be made under Indian law and financial law that some of them may not be, because they do know who the user is and where the transaction is going; they have a lot of visibility into the process that takes place. But presuming they are intermediaries, I think it is possible for the government to make a request that says: tell me who did these transactions on your platform in the last 72 hours. And given the nature of finance, payment portals will be happy to comply, banks will be happy to comply, and the government will be able to get that information. In India we anyway don't have too much privacy from the government when it comes to financial transactions. So the risk for payment portals is certainly present, but it's lower than it would be for other companies.

Someone raised a question about Tor again, and it can be read as a broader question: people can always use Tor via a VPN. This is also something that was raised when we spoke with other experts in our group discussions — if you are a bad-faith actor, you will find loopholes like VPNs to enable your practices. In that sense, is a VPN really something the government can trace or govern adequately?

No — I mean, I think that's the whole point. I completely agree with the broader idea that banning good technology will only make it easier for bad people to use it; that's something I completely believe in. So everything that people are saying
both on YouTube and here is true. But in practice, law and regulation should not be judged only by that: there are a hundred thousand ways in which people can break the law and circumvent regulation; those things are always possible, and I absolutely agree that none of these blocks will ever be complete. But there is a larger point: it will certainly make legitimate users of these technologies feel much more unsafe about using them, because they will think they're doing something illegal when they aren't really doing anything illegal. No matter what — very sophisticated blocking and surveillance regimes like those in China and Iran exist, and people are able to get around them and access the rest of the world's internet — not too many people do so, because there is an environment of fear, and a chilling effect on what thoughts and expressions you can openly talk about. Because of that, one needs to keep in mind that it's fine that people have means to get around the blocks imposed by these rules, but the average user — the person who is not as technically adept as many of the people in this conversation — should not feel threatened by using those technologies. The fact that laws and regulations like this make it very easy for that to happen — that they vilify technology and turn legitimate ways to protect your privacy and self-sufficiency into something surrounded by an environment of fear — is something that must be opposed. So I agree that there are technological solutions to get around all of these things, and determined people will use them, but that does not mean these laws and regulations are okay, or that circumvention
should become the base and the standard, where you're required to use such services just to be able to access blocked websites.

Piggybacking on that, with regard to the vilification of tech: somebody has given an opinion that I want to articulate as a question. They say that laws always lag behind tech — and who is "the user" in a mesh network? Amazon is providing mesh networks through Ring and other devices in the US, and ubiquitous networking is here through IoT. My question is: does this law seem to have any understanding of how tech actually works on the ground? As this person says, there are complex layers to this — how will it tackle all of those layers?

I think in the example of Amazon and Ring, the user in that case is not someone the government would really want to go after, because it's a mesh network. Hypothetically, presuming something like Amazon Ring were made available in India, they would simply go to Amazon and say: we got this message from this IP address that we think originated on your network — tell us what you can. Amazon will either tell them at what time it happened, or whatever other information they may have about it, or they may say: we don't have this information. And then the government may either be okay with that, or may force Amazon to make changes to its products so that it starts tracking some of this information. That back and forth between technology and law — how law lags behind technology — is very true, but I think that coming up with good, high-level, principle-based regulation that is regularly updated is the only real way to account for it. That lag is not a reason to not regulate technology at all; neither, however, is it an excuse to make very excessive and openly burdensome laws, as we have in India, that attempt to regulate
technology in a way that doesn't really understand it. So there is definitely a principle-based ground that needs to be followed here.

We had one question where somebody on YouTube asked: will it have an impact on adult content? I think because you were talking about Baazee.com, they're probably asking about that — but in general, not just adult content, how do you think content in India will be affected?

On the adult content part of that question: the consumption of adult content is not banned in India, but the distribution and creation of adult content is banned and regulated in India, so it's the same cat-and-mouse game — say a site is blocked, alternatives appear. Does it increase the possible risk or the compliance obligations around this? Probably. For example, search engines do make adult content discoverable, and if a law came out saying all adult content is illegal in India and therefore you're not supposed to display or showcase any adult content at all, that's one way in which it may end up having an impact. But immediately, apart from websites being blocked and the things that are already happening in India, I don't see legally why it would change anything — apart from making it slightly riskier to operate in this space. In terms of content availability otherwise, I think we will definitely see over-censorship: the consequences of not complying with government requests are quite problematic, and if you don't comply with a government request to take a piece of content down, the odds are that the government will be able to come after you much more effectively. If that ends up happening, your employees may get arrested, you may have to pay hefty fines, you may be dragged to
court — and most people would rather take a piece of content down than deal with all of that trouble. So I think we're only beginning to see the impact of these rules, because the most problematic parts have been in force for literally just over a month now; but in the longer term, if they stay the way they are, I definitely see freedom of expression and general content availability taking a hit in India.

I think we're getting a lot of questions about traceability. Could you elaborate on the traceability requirement — how it came about as part of the IT rules, and broadly, how is it even possible to implement traceability?

Yeah — I also saw some questions about traceability in Ring-style mesh networks and things like that. The thing is, the government didn't really care about whether something is technologically possible or not when it came up with traceability; they just wanted it. So if you have a piece of technology that doesn't allow for that outcome to exist, you have to change the technology or not use that technology — which applies equally to tracing people on mesh networks, and equally to things like end-to-end encryption. At a high level, that's the answer: the government did not make these rules on the basis of how technology works in practice and what its pros and cons are, but by wanting certain outcomes to exist in the first place. Why is traceability part of the IT rules? Because back in 2018–2019 there were lots of messages going around that were apparently causing lynchings, driven by rumours of child kidnapping and things like that, and the Indian government said it was revamping the intermediary liability rules to clamp down on those things. And what they said was that because WhatsApp
is end-to-end encrypted, if they ask WhatsApp who first sent a message before it went viral, WhatsApp can't tell them — and they wanted WhatsApp to be able to give them that answer. That's why they came up with the traceability provision they have, and that's how it became part of the IT rules. In practice, however, I would say it is not possible to implement traceability without either significantly breaking encryption or massively increasing user surveillance and visibility. That's not to say it's impossible to do some of the things traceability may aim at: for example, big companies like Google and Facebook do scan your email to see whether there is child sexual abuse material in it, and if there is, they block your account and report you to the police. WhatsApp can't do that for end-to-end encrypted content, but it addresses the problem in other ways — looking at what groups you're a part of, or whether you've been reported by users for sharing CSAM. So there are many different ways of dealing with the problem of extremist and CSAM content on both normal platforms and end-to-end encrypted platforms, but what the Indian government wants — tracing who sent something first — seems impossible to do, and the WhatsApp post that explains this does a fairly good job of it as well. In terms of implications — I see some questions about content censorship and tracing, and what the implications would be for private conversations and chats — the thing there is that nothing in the traceability provision says it will only apply to public conversations. Traceability will in fact apply to private conversations, and they will be its biggest use case, because most WhatsApp conversations are
private conversations, from person to person or among very small groups of people. So companies like WhatsApp and Signal have two choices: either they implement traceability, by massively increasing data collection, or by breaking encryption and associating message metadata with users; or they don't, and because of that eventually either get banned by the Indian government or get taken to court. In practice, I would say that's the end game we're moving towards. Which of those steps is taken first is what we have to see, but unless courts step in, I think it's a matter of time before that really ends up happening.

You spoke about the Internet Freedom Foundation and their petitions and how they've worked on this, but one question raised is: what are the avenues for individuals and communities to advocate for freedom of speech and privacy under the IT rules? How should they go about it?

I would say that the IT rules are already law, and it's quite hard to change a law once it's become law. To give you an example: in the current government's terms, I would say there are two big laws that have been delayed or changed because of public opposition — the first was the Land Acquisition Act, and the second the farm law reforms that have temporarily been pushed out. That's the level of sustained public engagement required to make everyone aware of a law, so that people actively start thinking about it. So for public engagement on digital rights in general, there are many great avenues — the Internet Freedom Foundation, research organizations and entities I mentioned earlier like CIS in Bangalore, and Hasgeek as well for some issues — in terms of having a community of
users who understand these issues that people can be a part of. But in the specific case of these rules, it's very hard to imagine them changing unless either something really, really big happens or the courts step in. I know that's a slightly pessimistic view, but realistically speaking, engaging with digital rights in general — understanding the importance of privacy and security, spreading awareness, and making sure people know why these things are important — is much more crucial than hoping to change these rules through engagement alone, because that is not really the modus operandi of how rules operate in practice. There were many problematic things with the last version of these rules as well, and they didn't change — until they were changed for the worse.

One more question, which I think you've partly answered, but I want clarity in case you haven't: a great deal of policy in public tech gets away without peer review and consultations. So what is the scope to push for this with MeitY under the IT rules? Are there examples from international contexts to borrow from? You have just said that it is law — so what are the avenues, considering it's been passed, and how do we work on it?

I think there's a big difference between doing a public consultation — which was done on an older version of these rules — and following the outcomes of that consultation. There is more and more peer review and consultation happening: it happened for the data protection bill, and it happened for the intermediary liability rules, and it is important to give entities an avenue to comment on them, which has started to happen more and more in India's case. So I think what we should be asking for instead is meaningful consultation — which means that if well-meaning people are telling you that certain things are
good or bad ideas, you don't just ignore them and do whatever you wanted to do anyway, but engage with them and improve laws and regulations as a result. Internationally: all laws in the European Union go through a very extensive public consultation process, but that doesn't mean all laws in the European Union are good — they have a pretty horrendous Copyright Directive that is very bad for the internet, took years to pass, and also went through lots of public consultation and peer review. So public consultation and peer review are important and should be done meaningfully, but they shouldn't be looked at as the one key solution to all problems, because the prerogative and the onus of making the law finally and always rests with the government. The US government, for example, ran public consultations on net neutrality before rolling it back under the Trump administration. So there are many examples all over the world where, even when consultations are done meaningfully, they aren't necessarily followed. Making sure that public consultations are followed through and lead to changes in law is an act of public activism and of general public engagement, and that requires the understanding of these issues to significantly change and improve all over the country.
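[Editor's note] Since several audience questions circled around why traceability is hard to reconcile with end-to-end encryption, here is a minimal toy sketch of the point made in the discussion above. This is emphatically not WhatsApp's or Signal's actual protocol — the XOR-keystream "cipher", the function names, and the keys are purely illustrative assumptions — but it shows the structural problem: hashing what a relaying server sees cannot identify a "first originator", so tracing would instead require clients to report plaintext fingerprints for every message, i.e. fresh metadata collection on private chats.

```python
import hashlib
import os

def toy_encrypt(message: bytes, recipient_key: bytes) -> bytes:
    """Toy stand-in for per-recipient E2E encryption (NOT real crypto):
    XOR the message with a keystream derived from the recipient's key
    and a fresh random nonce, so every send yields a new ciphertext."""
    nonce = os.urandom(16)
    keystream = hashlib.sha256(recipient_key + nonce).digest()
    # Extend the keystream to cover the whole message.
    while len(keystream) < len(message):
        keystream += hashlib.sha256(keystream).digest()
    body = bytes(m ^ k for m, k in zip(message, keystream))
    return nonce + body

msg = b"the same viral forward"
alice_key, bob_key = os.urandom(32), os.urandom(32)

# The relaying server sees only these two opaque blobs.
ct_for_alice = toy_encrypt(msg, alice_key)
ct_for_bob = toy_encrypt(msg, bob_key)

# Hashing what the server sees cannot even link two copies of the
# identical message, let alone reveal who sent it first.
assert hashlib.sha256(ct_for_alice).digest() != hashlib.sha256(ct_for_bob).digest()

# Tracing an "originator" therefore needs every client to compute and
# hand over a plaintext fingerprint per message -- identical across
# senders, i.e. new metadata collection on private conversations.
fp_sender_1 = hashlib.sha256(msg).hexdigest()
fp_sender_2 = hashlib.sha256(msg).hexdigest()
assert fp_sender_1 == fp_sender_2
```

The design tension the sketch illustrates is exactly the one described in the talk: either the server learns nothing (and traceability is impossible), or clients must ship message fingerprints to the server, which amounts to the increased data collection and weakened privacy discussed above.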