It is my distinct pleasure to bring on Max Tegmark, a dear friend and somebody I look up to tremendously, who has been extremely active for many years now in helping us avert global catastrophic risks and existential risks: things like climate change, nuclear war, AGI risk, and so much more. Max is actually the impetus for this event and many more that we'll have in the future. Max and I were talking somewhat recently about how Orwellian systems are potentially possible in the near future to a degree that has never been possible before, using technologies that are available now, and about how we really need to work on this as fast as we can. So this is a call to you builders, and to all of us, to work on the technology. I'll hand it over to Max in a moment to kick off the discussion, and then I'll return for a fireside chat. Max, thank you so much for joining us today.

Thank you so much for having me. Orwellian systems: this is a topic very, very dear to my heart, and I'm so delighted to get to discuss it with you all. I'll start, just to stir things up a bit, by sharing some slides and some thoughts, as provocative as I can make them, and then I look forward to the fireside chat and a spirited discussion. It's fascinating, I think, as the little Homo sapiens that I am, to think about how technology is really empowering us. After 13.8 billion years, most of which didn't see much life at all here, and then a long period when we humans were just running around trying not to starve to death, we have started to become more and more the captains of our own destiny. We're developing science and technology that let us understand the world and shape it the way we want, which gives us amazing opportunities. We can use them to make life flourish like never before, not just for the next election cycle but for billions of years, and not just on Earth but throughout much of our universe, if we get it right. Or we can use the same technology to drive ourselves extinct through some really stupid use of it. This month probably carries the greatest risk of creating an accidental nuclear war, or a not-so-accidental one, since the Cuban Missile Crisis, and that would not be humanity's finest moment. So we're getting more empowered. How can we make sure that we as a species make the smart decisions, not the dumb ones? Part of it has to be to have a really healthy discussion, so we can consider our options and make the wisest choice. If we end up with the Ukraine crisis escalating into a global nuclear war between the US and Russia, for example, and, as in a recent Nature Food paper that came out the other month, 99% of all Americans die of starvation, and similarly around 99% of Russians and Europeans, then you have to ask yourself: how did that happen? It's pretty obvious that nobody really wanted it. So the reason it happened would be that we failed to have a functional conversation; we failed to use our collective intelligence as a species to make the wise choices. I think of it a little bit as if I'm on some beautiful cliff walk, walking along the edge of the cliff on a little path. In fact, I went on a beautiful hike yesterday. If I go too far to the left, I'm going to just fall off and die. It's fine as long as I can see where I'm going and make the right decisions.
But suppose my brain were sliced up into little pieces somehow, or suppose there were a little censorship mechanism in my brain so that whenever I was trying to decide whether to turn a little to the left or to the right, all the thoughts about turning right were censored and prevented from happening, and I could only ever have impulses to go left. That would bias my decisions, and I would be more likely to go off the cliff. It's not a healthy situation for Homo sapiens to be in a very Orwellian state where we cannot have a good, real conversation about important topics. So what has that got to do with Orwellian systems? Everything, of course, because if you have a very Orwellian system where this sort of free and healthy debate is impossible, it becomes much more likely that we as a species are just going to make really dumb decisions and squander this amazing potential that we have. That's why I care so much about these issues. So I put together some slides to stir things up a little bit. Can you see a slide? Now we can. Yes. Can you see a different slide? Correct. Yes. Great. So, a little bit about how artificial intelligence in particular is enabling Orwellian systems, enabling 1984, not just in the future but right now. And then, since we have the pleasure of having so many people here who love to build stuff, I want to talk about how we can use these same technologies to fight back against 1984, to expose it and make it easier for everyone to circumvent it, so we can have these useful, helpful discussions across our planet about making the right choices. If you live in China, you're probably very aware of how Orwellian the system already is. When I visit China and talk to tech people, intellectual people, university people, they all know how censored they are. It's not hard to notice. You go on your browser and you search for Human Rights Watch. Or you don't even search; you just go to hrw.org, and you get a screen that looks like this. So it's pretty in your face. In the West, I find most people I know are less aware of how censored we already are. That's probably because most of the people I know work at universities and hold the same opinions that the tech companies currently feel are acceptable. But if you talk to people in the West who don't hold those opinions, they're often much more aware of how Orwellian it has gotten. If you want to do a quick experiment on your own, just take out your phone right now and go to one of the biggest Iranian news sites, presstv.com. You'll see that the US, with its leverage over the internet, has seized its domain. So you'll get this. I tried it a few minutes ago; it should work for you. But more generally, I think in the West censorship is of much higher quality, in the sense that it's harder to notice that it's there. I love this Baudelaire quote, that the devil's finest trick is to persuade you that he doesn't exist. If you're going to build really good Orwellian systems, one of the definitions of good is that most people in your population aren't even aware of how Orwellian it is. But if you look a little more closely, it's pretty obvious that most of the ways we communicate with each other today are quite Orwellian, even in the West, not just in China: the legacy media, mainstream newspapers, social media, search, and also email. So let's talk just a little bit about where we are in terms of Orwellian systems in the West.
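To make this do-it-yourself experiment concrete, here is a minimal Python sketch of a reachability check like the one Max describes: it probes a few of the news domains mentioned in the talk and reports what comes back. The domain list, timeout, and interpretation are illustrative assumptions, not a rigorous censorship test.

```python
# Minimal sketch of the do-it-yourself check described above: probe a few
# news domains and report DNS/HTTP behavior. A failure here can have many
# mundane causes; this only surfaces raw evidence for you to interpret.
import socket
import urllib.request

DOMAINS = ["hrw.org", "presstv.com", "rt.com"]  # sites mentioned in the talk

def probe(domain: str, timeout: float = 5.0) -> str:
    try:
        ip = socket.gethostbyname(domain)          # does DNS resolve at all?
    except socket.gaierror:
        return "DNS resolution failed (possible block or seizure)"
    try:
        req = urllib.request.Request(f"https://{domain}/",
                                     headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return f"resolves to {ip}, HTTP {resp.status}"
    except Exception as exc:                        # reset, timeout, TLS error...
        return f"resolves to {ip}, but HTTPS failed: {exc}"

if __name__ == "__main__":
    for d in DOMAINS:
        print(f"{d}: {probe(d)}")
```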
Legacy media: I'll come back and talk quite a bit about how much is just omitted, how many important things you just don't hear about because someone doesn't want you to hear about them. Social media: well, just to make this a little more fun and personal, I'm opening up my web browser, and I did not particularly prepare this for this talk. This just happens to be the tab that was open, because we have this anti-radicalization project where we take a controversial story, make a little video, and show both sides of it, for this Improve the News project. That tweet is nowhere to be seen. It doesn't show up in my news feed, and in fact we tested it: it doesn't show up in anybody's news feed. Some AI algorithm decided that that video should not be shown to anybody, and I can only find it because I happen to know the tweet directly. Somehow they know best. As another example, let's try search. Let's just go to Google and pick some topic that might be a little controversial. Right now, as I just mentioned, I'm pretty concerned about us getting into nuclear war, so let's search for nuclear war, and let's click on News, because I want to see what's going on here. A lot of stuff. Can you see it says there are 31.6 million news articles here in Google about nuclear war? Are you able to see my screen? Yes. Let's look at some of these. That's kind of cool, right? There are over 31 million articles I have access to via Google; I can find them. So let's scroll down here. Some stuff. Oh, they have 10 pages. Great, let's go to page 10. Oh, there's more. Let's go to page 19. Great. 22. Oh, there are no more. Wait, weren't there 31.6 million articles out there? But Google doesn't actually let me find them. Somehow it seems like these are the only ones Google has decided I'm supposed to see. I don't know exactly what's going on here; my guess is that they are only showing me the ones that someone at Google had time to actually vet.

For those listening on the stream, Max will likely be back in a moment; the connection is going in and out. The joys of live streaming. He's connecting from New Zealand, so it's a long connection, and sometimes the internet hiccups. Or there might be some censorship in the works. Does anyone have a tinfoil hat? We'll give Max a couple of seconds to return, and if not, I'll just start taking some questions from the audience. Great, raise your hand if you have a question. Yeah, over here.

That was excellent, really, really nice to hear. This group of talks has been called Breakthroughs in Computing, but I kind of just see them as nightmare fuel, if I'm honest. When tools are being built, one imagines that they're being created for the greater good. But we can look at moments in history where tools have been given to the underdog, who later rises to become the oppressor; Ahmed spoke about the example of Al Qaeda, and that was the setup there. Or where such tools, built for good, find themselves in not-so-good hands. Do you feel those who are creating such privacy and anonymity tools are doing enough to ensure that those tools are not being used by the wrong people, in the wrong ways, or for the wrong reasons? One even wonders how that's at all feasible without implementing a level of moderation, which in essence is an antonym of privacy.
So, you know, is enough being done by these tool creators to ensure that the tools are being used well and for good causes? I think this varies an enormous amount across different tools. There are certainly many tools that don't take this kind of thing into account at all. There's a lot of work in the more privacy-oriented technologies to really establish some security guarantees and some privacy guarantees, and one of the main goals is to get to a point where you can't look at traffic moving around and tell who is talking to whom. Oh, there we are, Max is back. So Max, I'm finishing an answer quickly, and then I'll hand it back to you. Some ranges of tools are able to give you anonymity and privacy and so on, and not spy on you and collect your data, but those are few and far between. Most of the products you use on a day-to-day basis maintain vast treasure troves of data about you, used primarily for advertising but kept decryptable by those companies so they can use it in a bunch of ways, and that means all of it is accessible to the state. And that means many states, because the data is now getting stored in local regions; users' data is being fragmented and put in a bunch of places, so many states have that access. Okay, great, I'll hand it back to Max. And by the way, a huge thank-you to Max for joining from New Zealand, where it's both extremely early and connectivity is getting in the way. Max, we were joking that maybe this is just a little bit of censorship getting in the way, but we won't let them stop us. So I'll hand it back to you.

There we go. I think you're there. Good. Great. The bandwidth here in my Wellington hotel room isn't so great. Anyway, I was just giving you some little random samples from my own life of the Orwellian things I come across, and another thing that's been driving me absolutely nuts is this anti-radicalization project that we're doing. When we try to promote things, we get blocked all the time by Twitter, Facebook, and Google. We tried to promote our celebration of the guy who prevented nuclear war: this day in history, today, the 27th of October, is when Vasili Arkhipov prevented a Soviet nuclear strike against the US. We thought that was pretty cool. They blocked the ad because they said it was political. We tried a while back to celebrate the people who eradicated smallpox and saved hundreds of millions of lives; they blocked that because the ad had the word vaccine in it. These things are sort of borderline; it's hard to know whether you should laugh or cry. But you see it not just in social media and in search; we even have problems now with email really stifling us. I used to think that if I wanted to email Juan, he would just get the message, but that's no longer true. We got about 70,000 people signing up for this newsletter with more unbiased news, and it turns out we can only actually reach about 15% of them now, because for everybody else Gmail puts it into the spam folder, even though they said they want it.
There was a Stanford study that looked into these things recently and found that Gmail uses machine learning to pick out what goes into spam, and there are definitely some political aspects to it: for example, political emails from Republicans were six times more likely to get sent to the Gmail spam folder than those from Democrats. So all the ways in which we try to exchange information with each other are being challenged a little more now. So how can we help with this? It's a bit trickier to work on this particular problem than on most other tech problems. If I want to work on better cryptography, it's at least obvious what the goal is, what I should try to do. Here, even figuring out what my goal should be is a bit tricky, precisely because the truth itself is obscured; the nature of the beast is a bit more meta. So before we talk about specific tools, let's talk a little about a framework for thinking about what the problem even is. If you look at media bias, which is one aspect of Orwellian systems, there's a lot of talk about fake news, but that's not the only thing you should think about. Think of a Venn diagram: some stuff is true and some isn't, and then some stuff is mentioned or claimed in the news and some is just omitted. What we want is the green stuff in the middle, what's reported and true. There's a lot of talk about the stuff on the right side, which is claimed but false; that's the fake news, the disinformation. But in our experience, the part that actually dominates is the part in yellow on the left: stuff you never even hear about, even though it's true. For example, how many articles have you read recently about the enormous tragedy happening in Yemen right now, where a child is starving to death every ten minutes or so? There are all these topics that, for some reason, we just don't get to hear much about, and that's usually why these bad things persist. Sunshine is the best disinfectant; if you let people find out about something bad, they usually stop it. This is a talk about Orwellian systems, so I would be remiss not to throw in a picture of George Orwell here. It's really interesting to think about what exactly the problem is that we're trying to combat. One axis is where you draw the line between fighting disinformation and censorship; it's a trade-off between reliable information and freedom of speech, of course, and where should you sit on that trade-off? Another really interesting trade-off is between fighting disinformation and propaganda, because, as George Orwell himself wrote, basically the first thing any government or organization will do if they actually want to spread propaganda is to mask it as merely fighting disinformation. Fossil fuel lobbyists say there's a climate hoax and that the climate change people are just spreading disinformation. It's the oldest trick in the book; Cambridge Analytica even admitted doing this pretty openly. And I'm a scientist, so from my perspective, one of the key things we've learned in science is how important it is to have a free and open, uncensored discourse. If Galileo had put out a tweet saying, hey guys, I think the Earth actually revolves around the sun rather than the other way around, and Twitter had existed back then, he would probably have gotten flagged: this violates the community guidelines on disinformation; you should get the correct facts from Pope Urban VIII.
So I think what we've learned from this as scientists is that we should never, ever give powerful entities like governments, big companies, et cetera, special influence over fact-checking, because figuring out the truth is just really hard. This article here argues that the real pioneer of fake news in the US was arguably the tobacco industry. Another tricky thing we have to be very mindful of, even when asking ourselves what the problem is, is to distinguish symptoms from actual root causes. Suppose a woman shows up at the doctor and says, I have a fever, a cough, and a headache, and the doctor says, okay, I'm going to treat you with ibuprofen. Do you think this is a fantastic doctor who fills you with confidence? What would you say, Juan? I would expect a lot more questions. Yeah, even some kind of diagnosis first, rather than just treating the symptoms. Maybe she has COVID-19; maybe she would benefit from getting some steroids or something. Now look at our democracy. I think there's pretty broad agreement that our democracy is also not doing so great at the moment, feeling a little unwell. We have a lot of symptoms: a profusion of disinformation, filter bubbles, growing polarization. People hate each other more and more, both within countries and between countries. We have growing income inequality and a lot of anger at the establishment. So what do we do about this? A lot of people say, okay, great, let's just block disinformation with machine learning and ban the hate speech. That's exactly like that lousy doctor who treats the symptoms without asking what the diagnosis even is. I would argue that in this case we're being hacked by AI, very much in an Orwellian system; that's a key part of what's actually causing these problems, and it's only once we see that that we can figure out how to treat it. Let's look at this word, disinformation, which is so often used to silence critics. I looked it up in the dictionary for you, and it says that it's false information, and blah, blah, blah. But how do we know what's false? The number one most important thing I feel I've learned in my entire career as a scientist is that figuring out the truth is really hard; I really have to be humble. Sometimes it's easy to figure out that something is false. Sometimes it's not, you know. Facebook had to backtrack later on and say, oh, sorry, we kept banning these posts; maybe we shouldn't have. Jack Dorsey himself said, oh, sorry, we blocked this New York Post tweet about their own article; that was a total mistake. It's not easy to know immediately what's true and what's false. Even when it has nothing to do with politics: we physicists spent 300 years believing in the wrong theory of gravity after Newton, until Einstein found the errors. So humility has to be at the root of this. In the first part of my talk, I've mainly spent ten minutes just talking about the problems; I want to spend a little of the rest on strategies for making things better, tools we can build, et cetera, so we don't get swept away by the Orwellian system. In particular, the best method we have so far as a species for figuring out the truth is science, and one of the key things we learn from it is, again, humility.
Figuring out the truth is hard, so never, ever let authorities tell you what's true, whether it's some government committee or some company. One of the key things we do in science is that we do not listen to people more just because they are rich, or have a fancy hat, or are a minister. We judge them by their track record of making correct predictions in the past; trust has to be earned in science. And anyone can say anything they want at a science conference; we don't believe in censorship at all. Those of you who are parents also know that if you're too strict, it can often backfire and have exactly the opposite of the effect you want. I personally find it incredibly patronizing now when I go back, for example, to Sweden to visit my mom. Just for fun, I wanted to see what sort of spin and propaganda the Russian media were presenting to their own people, so I went to RT.com, and the Swedish government had decided: no, Max, you're not allowed to see that, because your brain is so feeble that you're just going to believe everything Putin says and everything RT.com says. They just blocked RT.com, the biggest Russian news site, across the entire European Union. So pathetic. And this is just the same mistake this woman is making with her daughter here. So there are a lot of cool tools you can build where you take all the things that science does right and bring them into the media ecosystem, making them accessible and less nerdy and boring for everybody. I'll tell you a little about what we've done, and then we can brainstorm about things we can all do. A bunch of MIT students and I, for example, started building some free tools. We built a little free news aggregator, which I'll show you with a couple of minutes of video: improvethenews.org, a free news aggregator that lets you make up your own mind by reading a range of perspectives. I'm Max Tegmark, an MIT professor working on machine learning and physics. I had the idea for improvethenews.org because I agree with Einstein's quote that everything should be made as simple as possible, but no simpler, and I feel we should apply this to our news. Instead, media sometimes oversimplify and report things like a fairy tale where one side is 100% good and the other side is 100% bad. And machine learning comes along and gets us compulsively clicking on these oversimplified stories, trapping us in hyperpartisan and hypernationalistic filter bubbles, creating an increasingly polarized world at a time when we instead need the nuanced understanding that enables working together on great challenges. We can see this polarization for many of the hundreds of topics tracked by improvethenews.org. For example, let's scroll down to social issues, click on its header to see its subtopics, and scroll to immigration. Although there's much talk about fake news, an even bigger problem is oversimplifying by omitting key facts. The best way to catch those omissions is to read both sides, which Improve the News makes very easy: you simply slide a slider. So that's one tool we've had fun building. We also use machine learning to read about 5,000 articles per day from about 100 newspapers and figure out which ones are about the same thing. Then we separate out the facts that the articles across the controversy agree on from the narratives where they disagree, so you can easily rise above the controversy.
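The Improve the News pipeline itself isn't shown in the talk; as a rough illustration of the grouping step just described, here is a minimal sketch that marks two articles as covering the same story when their TF-IDF vectors are similar. The sample snippets and threshold are made up.

```python
# Minimal sketch of the article-grouping step described above: embed articles
# with TF-IDF and link any pair above a similarity threshold as "same story".
# An illustrative stand-in, not Improve the News's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [  # hypothetical snippets from different outlets
    "Protesters gathered downtown demanding election reform.",
    "Rioters clashed with police downtown over the election.",
    "Central bank raises interest rates by half a point.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(articles)
sim = cosine_similarity(X)  # pairwise similarity matrix

THRESHOLD = 0.2  # tuned by hand here; a real system would calibrate this
for i in range(len(articles)):
    for j in range(i + 1, len(articles)):
        if sim[i, j] > THRESHOLD:
            print(f"articles {i} and {j} look like the same story "
                  f"(similarity {sim[i, j]:.2f})")
```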
We've also been doing some academic work where we use machine learning to see if we can measure bias and find it more easily, and that's been a ton of fun. For example, Samantha D'Alonzo and I published a paper where we took a million articles from 100 newspapers and asked whether, with no human punditry at all, the machine learning could automatically find bias. And it could, spectacularly. For example, we took all the articles about Black Lives Matter, and it automatically put all the newspapers on a spectrum which, when you look at it, separates left from right, even though we never told it anything about which newspapers were left or right; the machine learning didn't even know what that meant. How did it do it? It just noticed that the word frequencies were very different: some newspapers talked a lot about demonstrators, where others talked about rioters. When we looked at abortion articles, some talked a lot about fetuses, and others talked about unborn babies instead. And it found that the biases it discovered were highly correlated across topics, and it automatically produced a dictionary of emotionally loaded synonyms. So you can see, for example, that on immigration some newspapers talk a lot about asylum seekers, whereas others talk about illegal immigrants or illegal aliens. Some talk about assault-style weapons, while others talk about semi-automatic firearms. All of this just popped out of the data itself; we didn't put any of it in. So machine learning is really powerful for discovering when people are trying to hack you and manipulate you through word choice. We also found that there was a whole other axis of bias, which we interpreted as pro-establishment versus establishment-critical. Some newspapers, like the New York Times, will always talk about the defense industry; others will instead talk about the military-industrial complex, a phrase you won't find much in the New York Times or on Fox News. And it's not just about whether you're critical of the government, but also of powerful companies: the New York Times and Fox News will typically talk about oil producers, whereas smaller indie newspapers will talk about Big Oil, sometimes even with a capital B and a capital O. So it was fun to see how machine learning, with no human input at all, could place all these hundred newspapers in a two-dimensional plane, with left-right bias on one axis and pro-establishment versus establishment-critical bias on the other. These are just a few examples of tools we've made with our little improvethenews.org nonprofit. I would love to chat with you about other tools we could work on together, because I feel that technology is not morally evil or good; it's a tool, and you can use it either way. So far, I would say that machine learning has mainly been used to make our society more Orwellian: most of it is used to analyze users in great detail and figure out how to press their emotional buttons and hack them. But you could just as well turn those tools around and put them in the hands of the users, to analyze the media itself, to analyze what the big corporations and governments are doing, and make that freely available to everybody.
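As a loose illustration of the approach described above (not the paper's exact method, which used a generalization of principal component analysis over phrase statistics), here is a sketch that trains a classifier to predict which outlet wrote each article and then projects each outlet's learned word weights into a low-dimensional "bias plane". The outlets and texts are toy placeholders.

```python
# Loose sketch of the bias-mapping idea: learn to predict the outlet from the
# text alone, then place each outlet in a plane via PCA over its learned
# word-weight vector. No left/right labels are ever supplied.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

texts = [  # toy articles; a real run would use ~1M articles
    "demonstrators march peacefully for justice downtown",
    "activists rally for justice and reform",
    "rioters loot stores as police respond downtown",
    "mob violence forces police response",
    "officials debate the budget quietly",
    "lawmakers pass the budget after debate",
]
outlets = ["A", "A", "B", "B", "C", "C"]  # which outlet wrote each text

X = CountVectorizer().fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, outlets)

# clf.coef_ has one word-weight vector per outlet; project them to 2D.
plane = PCA(n_components=2).fit_transform(clf.coef_)
for outlet, (x, y) in zip(clf.classes_, plane):
    print(f"Outlet {outlet}: ({x:+.2f}, {y:+.2f}) in the discovered bias plane")
```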
I firmly believe that tech can be incredibly democratizing, and I'm super excited about the opportunities for building anti-Orwellian systems and giving them away for free, not just making society more pleasant to live in, but enabling humanity to make much better decisions so that we can create the amazingly inspiring future I mentioned at the beginning. So thank you so much for having me on.

First of all, can I get a quick round of applause for Max's words? And thank you, Max, for joining us. So let's jump right in. Anti-Orwellian systems: that's a great phrase, and we should be rallying the group toward that. What kinds of properties would such systems have? For many of the things you just mentioned, you can just invert them: unbiased things, things that really get at the truth and, as much as possible, give the evidence and the sources; you can get into making sure there are no omissions, and so on. But maybe deeper than that: when you imagine using computers, using systems, reading media in a great anti-Orwellian world, what would you expect to see? Paint us a bit of the vision.

That's a great question. Let's go back to basics and ask why we even want this. Why is it better to live in a non-Orwellian society? At a very practical level, you make more correct predictions about what's happening. Similarly, I think humanity will make much better decisions, and produce a happier future, if we have a non-Orwellian society. So that's the first, very practical reason: make good decisions. That's why I mentioned that if your brain were weirdly restricted, you as a human being would make worse decisions, even just for yourself. The second reason, and here I'll just speak for myself: I personally really like the idea of democracy, because if what we're trying to accomplish is a world where people are happy rather than unhappy, we want to make sure people actually have a say, that people can wield influence and push things in directions that make them happier. That means you want to shift power from the top to individual people; that's of course the core selling point of democracy. To me, Orwellian is exactly the opposite: you take power away from individuals and put it all in some central authority, whereas anti-Orwellian systems should instead shift power to individuals. And in a democracy, an absolutely crucial thing you need for that is to make sure ordinary people have access to correct information, because if people believe the world is completely different from what it actually is, they're going to make decisions that are often very much against their own self-interest. So that's my second guiding principle: we should always look at our technology and ask, does this technology lead to more concentration of power, or does it lead to more decentralization of power? In the news and information space, that means creating technical tools that make it really easy for people to find out the truth for themselves, and very easy for them to see when the powers that be are trying to manipulate them.
A lot of the people here, and a lot of our communities, are working pretty hard on decentralizing power structures, including economic structures. The communications layer is one of the extremely important ones to lock open, in a way. One of the topics we often talk about is establishing digital human rights: not just freedom of speech, but freedom to communicate and freedom of association on the internet. And I'm keenly aware that it's not even just the drowning out of signal by a lot of noise; there are also states overtly shutting down complete access to communications infrastructure, which these days means losing communication with your family and your loved ones, and in a crisis you can't even interact or coordinate. We've become so intensely dependent on our products and tools and systems that, as a human, if you're cut off from the ability to message or call anybody, or even to use a map, you can't operate nearly as well as you're used to. And that can happen overnight.

Yeah, you see it happening today in Iran, for example, where people are not even able to text each other, because the government doesn't want them to talk about the protests going on. And in a small way, I was talking about how that's happening to our Improve the News effort, where 85% of the people who said they want our newsletter can't get it, because somehow Google's algorithm doesn't want them to. It's extremely frustrating. In a way, we're more vulnerable to this now than we were 200 years ago, because back then we had more face-to-face interaction; we didn't have as many friends who lived far away, and it was a little harder to actually block that communication. Now that we make friends who are not physically located where we are, we are very vulnerable to being cut off from them.

When you think about access to information that may be super biased, and about showing everybody all of it, how do you think we should be equipping people not to be swayed by it? We have very clear examples of social media manipulation, where you could systematically push certain memes, certain images and articles, through people's channels and feeds and, in the style of a behavioral experiment, cause certain actions and beliefs. So we have a lot of evidence of that being possible. How do we inoculate people against these kinds of memetic hacks, and do it at massive scale? We have the internet, we have massive access to all human knowledge, and yet people can get manipulated by easily detectable fakes and extremely biased arguments. So if we're going to, not necessarily open the floodgates, but really give access to everybody, how do we do it in such a way, or how do we equip people with the right cognitive tools and critical thinking, to be able to distill all this?

Great question. I think we need cognitive tools, but we also need good old-fashioned technical tools, software.
Cognitive tools won't be enough. The basic cognitive thing we want to convey to people is just to make them understand that they are being censored, and that society is very Orwellian right now, not only in China and Iran and Russia but even in the West; it's just that in many ways the propaganda is of higher quality in the West. There's this beautiful quote that propaganda is to a democracy what violence is to a dictatorship. You don't need to do good propaganda if you are a dictatorship: it doesn't really matter much whether people like you or not, because they still can't get rid of you as the leader. Whereas in a democracy, you have to up your game and make people really believe that they're not being manipulated. Frankly, just saying we're going to build better cognitive tools is naive; human beings are so easily hacked. My wife, Meia, whom you met, does psychology research, and it's very clear that humans are just incredibly hackable; we don't stand a chance against an AI system that knows everything we've clicked on in the last five years. The one basic thing that is useful to do cognitively is to make people aware of the fact that they are being manipulated, so that they actively seek out the technical tools that can cut through the bullshit for them. So let's switch to the technical tools we also need to give people. First of all, I would love to see more technical tools that people can use for themselves to verify: yes, I'm being censored, I'm being manipulated. The number one thing I always teach in science courses is the importance of trusting your own eyes and ears. When I teach astronomy, I actually have the students go out and look at the sky and figure out for themselves how the moon and the sun and the Earth are moving. I always tell them: don't take my word for it, because if you do, you're missing lesson number one about science. And I would like to have tech tools like this. That's why I started today by showing you some experiments you can repeat yourselves; if you don't believe that presstv.com got censored, you can try it in your own web browser. I would like to see more of that. I even bought the domain censored.org, so if any of you have cool tools and want to spend some time working with me on free tools where people can go see the censorship for themselves, just shoot me an email at tegmark@mit.edu. We should build tools so that anyone who wants to know, am I really being censored, can go to a site and be given a little list of tools, making it really easy for busy people to get their own unfiltered lineup of what's happening in the news. I think there's a huge range of cool possibilities here. Another one I've been working on with Anthony Aguirre, whom you know, and others, is a project where you measure trust scores of people, not based on whether the government says you should trust them, but simply on their past predictions, just like in science. You can go to metaculus.com, and we're doing a much bigger thing now where we combine it with machine learning to try to make trust scores for newspapers, politicians, and others.
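As a minimal sketch of the track-record idea behind these Metaculus-style trust scores, here is one way to rank sources by the Brier score of their past probabilistic predictions (lower means better calibrated). The sources and records are invented.

```python
# Minimal sketch of track-record-based trust scoring: rate each source by the
# Brier score of its past predictions, exactly the "earned trust" idea above.
def brier_score(predictions: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probability and actual outcome."""
    return sum((p - float(happened)) ** 2
               for p, happened in predictions) / len(predictions)

track_records = {  # (stated probability the event happens, did it happen?)
    "Source X": [(0.9, True), (0.8, True), (0.3, False)],
    "Source Y": [(0.9, False), (0.7, False), (0.6, True)],
}

# Sort sources from most to least trustworthy on past performance.
for source in sorted(track_records, key=lambda s: brier_score(track_records[s])):
    print(f"{source}: Brier score {brier_score(track_records[source]):.3f}")
```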
So you can actually hold people accountable for what they've said, and people can go see for themselves and have something on which to base whom they should trust. Maybe that's more sensible.

Max, how bad do you think the current manipulation really is? I have a hard time right now giving much credit to the social networks: they seem to be primarily advertising-driven, with other corporations and states using them, and I mostly fear that in the near future all those treasure troves of information are going to get systematized and put into some kind of feedback loop. But the question is, how much is that happening already, and how might we tell? Is there a way we could experimentally detect what kind of manipulation is already happening? You could have a very insidious situation where not only do you have the excellent censorship you're describing, sorry, the excellent propaganda, where the censor has to be much more sophisticated; there could be such an extreme degree of sophistication, so light and subtle, that not even we can detect that our beliefs and perspectives are being drastically shaped by some systems. This could be happening a lot more deeply today than we might guess. How would you know where to draw the line?

So I think machine learning is an incredibly powerful tool here too. That's why I showed you that little example of what I did with Samantha: if you just look at a big dataset, all these subtle things pop out very clearly. I would love to collaborate with people listening to this on building other machine learning tools. One thing we've started doing is what we call Project Overton, after the Overton window, which defines what's acceptable discourse right now, what isn't censored. It varies a lot over time and from place to place; it's very different here in Lisbon than in other countries today. The goal is to use machine learning to actually put the Overton window on a map, by looking, for example, at what fraction of tweets got deleted in different countries as a function of time. Very doable, because the information is out there. I don't have time to look at five billion tweets and see which ones disappeared, but machine learning does. These companies won't tell you what they're doing, but I think it would be very valuable to have a dashboard where you can say: okay, right now here is what they don't want you to see, here's what's okay to see, here's what's blocked in this country and in that country. Again, in the spirit of empowerment: let anyone who wants to, for free, see what tricks they're trying to pull on you. That I think would be quite helpful. And that way, even the more subtle things you mention, which on their own are hard to figure out: if you see that someone is trying to push you in some direction, you start getting a bit suspicious, and then you start seeking out the information that can give you a clearer view.

What sort of shields do you think are buildable against these kinds of social-credit-style feedback mechanisms?
My expectation here is that the social credit system is working so well in China that it's going to start getting exported to other countries, because it offers this very smooth gradient toward order and stability, it'll be very accessible, and the arc of it turning very bad is going to be so long that it'll get exported to a lot of other countries before those populations realize, and then it's too late. So what sort of shields can we build right now to try to prevent that kind of thing? How do we orient people toward being able to combat it? From our perspective, we sometimes think of it as: establish secure and private communications everywhere, so that states are simply unable to spy on all communications, and enshrine that both technologically and in rights across states and the UN and so on. But are there other things you're thinking of that could be extremely useful, maybe not about communications but about other ways of coordinating?

Great, great question. So, social credit score systems: to what extent are they coming, and what kind of shields can we build against them? First of all, you talked about it in the future tense, about preventing them from coming from China to the West; I would argue that they have already come to the West to a significant extent. There's another kind of bias here. I just wasn't allowed to say that, or my credit score would go down, Max. Yeah, you know, this bias is a little more harmless. The reason we underestimate the extent to which it's already here, I think, is just that many of us hang out in tech circles and at universities, where the people we know mostly hold the opinions that are considered acceptable right now. That's why we don't hear a lot of anecdotes about social credit scores. But people at universities should ask themselves how many Trump voters they've talked to recently and heard their experiences. For example, I was just reading what the guy who founded Gab, a social media company that a lot of Trump voters use, posted about what happened to him. It starts by saying: hey, let me tell you, there's a social credit score system in China, and we also have one here. When he started the company, Visa and Mastercard decided, first of all, that his company couldn't take credit cards for payments, and then decided that he and his wife were also banned for life from using credit cards. So if he wants to take a flight from Minneapolis to New York, he can't pay with a credit card. His wife can't either, because she's married to a guy who did something considered outside the Overton window. That is very much a social credit score, actually. I think many of us in the West have a misconception about what actually happens in China, and what used to happen in the communist dictatorships of the Eastern Bloc. My wife grew up in a communist dictatorship, under Nicolae Ceaușescu in Romania, and she points out that they didn't typically go and shoot people who complained about Ceaușescu. They didn't need to.
They had a social credit score system. The way it would work was, for example, that her mom wanted to be a teacher, but they wouldn't let her become one, because her parents were a little too bourgeois. So she didn't get that particular job. At MIT, we also cancel people now sometimes, when they say things that are considered politically not quite right. You just don't get admitted to that university. You just don't get that promotion. Oh, you can't get that credit card. Normally, 99% of what's done here, and what was done in the old Soviet Union and in communist Romania, are these little nudges, because they've discovered that's enough. That's all you need. And there is no shortage of things like that happening in the West. The second thing I want to say is that you don't even have to go out and look at the world much to get some indication of where Orwellian stuff might be going on, because human nature hasn't changed very much. You can read Machiavelli, or look at the shenanigans they pulled in the Roman Empire. The basic incentive structure has always been the same; there's nothing new under the sun. The only thing that's new is the technology. People with power have always tried to skew the debate and the public discourse to make people feel that they should have more power. The Swedish king...

Swedish propaganda getting in the way. Wow, fascinating. Max, that was really funny: we lost you as soon as you started talking about the Swedish king; it all went away. The Swedish media is clearly getting in the way.

So the point I'm making here is just that you would naturally expect any entity with power to try to control the narratives and do Orwellian stuff, to whatever extent technology permits, to cement that power. This is the way things work. So is it surprising today that all governments are trying to use more modern tools to do that? No, of course, that's what you'd predict. Is it surprising that the tobacco industry tried to do some Orwellian stuff to silence scientists who talked about lung cancer too much? No, it's completely natural. And it's not new; what is new is just the technology. My key point is that any way in which powerful entities can use technology to make things more Orwellian in a way that serves them, you should predict it's probably happening; they're trying the best they can. If they haven't done it, it's probably just because they haven't figured out the tech yet, or because it was too illegal for them to get away with. But in the same way, we should ask ourselves: how can we use technology, how can we build tools, to expose that and fight back against it?

On the credit score point: one of the things we could do, and which is very common in our communities, is to say, hey, we need access to digital cash and a significant amount of economic freedom. You could combat the social credit system by making sure that people have the ability to manage finances and access basic economic tools and products without any kind of association: fully anonymous structures, some kind of baseline credit system. And you could do that with crypto, right? You can build that kind of stuff today with zero-knowledge proofs, which are pretty good. We have things like Zcash and others that give you a little bit of digital cash.
Now, it's still very policed around the edges, so it's very clear what goes in and out, and you certainly can't yet use crypto payments in a lot of places. But maybe if we solve adoption and get crypto finance and crypto economics there, we'll be in a better spot. What are some of the other things? What about access to infrastructure? I think there's this pernicious problem around making everything on the internet identity-based. If you suddenly have the ability to detect who everybody is, you could land in a structure where, every time you use a service or buy something, a system immediately checks against some centralized social credit database whether you deserve the right to use that thing. We could establish constraints against that right now. At least in the West, we don't yet have that degree of checking, so it seems like a policy win we could get. I wonder if this is worth a policy video, where we show how bad and pernicious this could get, and we try to get some bans on this kind of behavior, so that anonymous access to basic systems and basic utilities is guaranteed, close to a human right, in a bunch of places.

Music to my ears. I think it's a really good idea to make effective videos making clear to people how awful things can get if we build them in certain ways. Let's geek out a little, though, and go through a wish list of empowering tech, of anti-Orwellian systems. Orwellian systems want to limit people's ability to communicate with each other. So what do we have against that? First of all, cryptography is great for making sure people can't read what you do; even in the West, France, for example, has been trying hard to ban strong cryptography, but so far it hasn't succeeded. So that's a great one. Then there's anonymity: any technology that lets people communicate anonymously is super valuable. Another thing Orwellian systems do is social credit score systems: once they find out you've been doing naughty things, they punish you by making it hard for you to just live your life, blocking your ability to transact, for example to buy train tickets, to buy stuff. DeFi I think is a great anti-Orwellian system there, in that you can still use cryptocurrency and make your transactions. Cryptography today is of course quite effective if you're communicating with one other person. But if you want a public square where people can discuss more openly, there I think there's a lot of room for better anti-Orwellian systems. Whenever a system gets very popular and large, like Facebook, governments and other powerful entities see that and start putting pressure on those companies not just to make money, but to selectively limit things to favor the powers that be. I would love to see more innovation in that space. One anti-Orwellian strategy there that I find very interesting is adversarial interoperability. That's quite the mouthful, and for those of you who haven't heard of it, maybe we should take a minute to unpack what it means. An old-fashioned way of fighting against Orwellian systems is to try to make laws to protect people against them.
And that's usually pretty doomed, because it takes 10 years to pass the law, and by that time they've figured out five other ways of screwing you over. In Massachusetts, for example, we once passed a law saying that auto mechanics should be allowed to fix cars that have computer chips in them, by actually reading out information from them. And by the time the law finally passed, the auto companies had almost immediately changed their protocols so that they weren't covered by it. Whereas if you instead just pass a law saying that the default is that any innovator or entrepreneur is allowed to build whatever products they want and sell them to auto mechanics, then if some private company can figure out how Ford encodes the fact that there's a problem with cylinder three in the engine, they're free to go ahead and fix it. That way, innovation isn't illegal by default, and you don't have to keep writing laws: the market will take care of it. Some MIT students will figure out how to do it and sell a chip to all the auto mechanics. With social media, I would love to see something like this too: a law saying that, by default, people are allowed to write software that logs in as them on Facebook and presents their own Facebook feed to them in whatever way they want. There was actually a company that started out doing this, where you could log into Facebook and all your other social media and have it all presented on a single unified page, and you could post wherever you wanted; they were sued into a smoking crater by Facebook. But if you have a law saying that the default is that these things are legal, rather than illegal, it becomes a market opportunity for tech startups to create new things. That's the best way to break up the monopoly power of the social media giants, and I would love to see it.

You're the master of coming up with good phrases that have "de-" in them. This is sort of the decentralization of social media: you don't have to break up Facebook or anything like that, you just let people innovate.

And I would much prefer having my own software, bought from some company, that logs into all my social media and gives me control, the way email works. If I don't like Gmail because it puts all the messages from you in my spam folder, I can switch to a different email provider; there's an SMTP protocol, and I don't lose the ability to email just because I leave Gmail. Whereas today I would lose access to all my Facebook friends if I left Facebook. With adversarial interop, it wouldn't be that way: just as I can use many email clients, I could use many social media, see them all in the same place, and if Facebook's user interface sucks, use a different one.

I think we'll get Web3 social networks that do just that; we have the beginnings of a few. They're of course not yet scalable enough to handle large volumes of traffic, but my guess is that we'll get there in the next two years. Two to three years is my estimate for when we'll see Web3 social networks with massive-scale adoption, meaning tens of millions to hundreds of millions of users. And from there, going from tens of millions to a billion is not as hard as going from one to ten million. For this one, though, to unleash this wave of tech innovation, I think you also need a legal victory, and the European Union is probably the most likely place to get it.
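As a purely hypothetical sketch of what adversarial interoperability could look like from the user's side, here is a client that would log in as the user, fetch their own feed, and re-present it under the user's rules. The fetch_feed function and post fields are invented placeholders; no such public API exists today, which is exactly the point of the proposed default-legal rule.

```python
# Hypothetical sketch of the adversarial-interoperability idea above: a client
# that acts AS the user, pulls their own feed, and re-presents it under the
# user's rules (here: engagement bait dropped, strictly chronological order).
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    engagement_bait: bool  # however the user's own filter defines it

def fetch_feed(session_token: str) -> list[Post]:
    """Placeholder: would log in with the user's own credentials and retrieve
    their feed. Stubbed with fixed data purely for illustration."""
    return [
        Post("alice", "OUTRAGE! Click now!", 1700000300.0, True),
        Post("bob", "Picnic photos from Sunday", 1700000100.0, False),
        Post("carol", "New paper on media bias", 1700000200.0, False),
    ]

def my_feed(session_token: str) -> list[Post]:
    posts = fetch_feed(session_token)
    calm = [p for p in posts if not p.engagement_bait]  # the user's rule
    return sorted(calm, key=lambda p: p.timestamp)      # plain chronology

for post in my_feed("user-supplied-token"):
    print(f"[{post.author}] {post.text}")
```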
If the European Union passes a law saying that by default these things are legal rather than illegal to do, then, yes.

Are you familiar with fully homomorphic encryption? There's this set of cryptographic methods that let you encrypt programs and encrypt data, and then run the encrypted program over the encrypted data on anybody's computer, but the computer can't tell at all what program it ran, what the data was, or what the output is. It's outrageously expensive right now, which is why we don't use it yet, but the predictions are that within the next five or six years it will become practical to run these kinds of things at large scale. Do you think we could mount a very strong campaign in Europe about this, and use the very privacy-sensitive climate there? Oh yeah, so I was mentioning fully homomorphic encryption. Let me know if you can hear me. Is it good? Now I can. Great. So, fully homomorphic encryption: this cryptographic method for encrypting programs and data, and being able to run the program over that data on anybody's computer, without the computer being able to tell what it is. We could try to get a policy victory in Europe to push all the social networks toward computing with these kinds of methods. They're too expensive right now, but within the next five or six years we should be able to do this.

I think that's fantastic. Whenever we talk to politicians and policymakers, it's important that we explain to them how pro-democracy and anti-Orwellian it is to have these laws where, by default, tech innovation, startup companies that empower the individual, is legal; it should not be illegal by default. Once you have that, I think the marketplace will be way more efficient than a bunch of little laws here and there trying to regulate what's going to happen. Continuing down the wish list: you asked what we can do to prevent our society from getting more Orwellian. We've talked about privacy and encryption; you brought up homomorphic encryption; we talked about being able to do things anonymously. Then we started talking about the ability to function in society, making it harder for Orwellian systems to prevent us from going shopping and things like that, where crypto is obviously great. But still, if you do a lot of stuff with your crypto wallets, it's not very hard for people to figure out who you are.

Yeah, a hundred percent, you're on the blockchain. Yeah, yeah, it's still in the clear right now. We totally need to encrypt all of that. In the talk before this one, there was someone from Nym talking about mixnets and getting full privacy. So there are a number of these chains that will become fully private, but it'll be a big fight with policymakers, because the moment large amounts of money can move in full secrecy, you get into all the questions around sanctions and so on. So it's going to be really important to press the case for why this is extremely critical to support for the future of democracy.

Even a partial victory would help. Suppose you just said: well, large amounts of money would freak out governments, but small amounts of money, you know? Yeah, exactly. If you do $10,000 of something in a year, at least let them not block that, right?
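Full homomorphic encryption, which can run arbitrary programs on encrypted data, is far beyond a short sketch, but the core idea of computing on data you cannot read can be shown with the much simpler Paillier scheme, which is homomorphic for addition only. This toy uses tiny hard-coded primes and must never be used for anything real.

```python
# Toy illustration of computing on encrypted data. This is NOT full FHE; it is
# the additively homomorphic Paillier scheme: E(a) * E(b) mod n^2 decrypts
# to a + b, so a server can add numbers it cannot read.
import math
import random

p, q = 1789, 1861                 # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                         # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)
# mu is the modular inverse of L(g^lam mod n^2), where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)    # fresh randomness: same m encrypts differently
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 20, 22
c_sum = (encrypt(a) * encrypt(b)) % n2  # multiply ciphertexts = add plaintexts
print(decrypt(c_sum))                   # prints 42, computed without seeing a or b
```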
Because I think most of the ways they make life hell for ordinary people are actually the little things. You just can't buy that little air ticket you needed, which cost $200. WikiLeaks: people were prevented from giving them a $5 donation because the credit card companies decided you couldn't. Even a small victory like this, where a limit is hardwired into these anonymized blockchain systems so that small transactions are allowed, maybe with a cap on how many you can do per year and on total dollars, whatever, would be a huge win for the anti-Orwellian side.

Let me source some questions from the audience and from Twitter. If you're on Twitter, just ask a question with the hashtag PLbreakthroughs and I'll check it out. And here, raise your hands and we'll try to build a queue. All right, first question. And please say your name. Although actually, for privacy reasons, if you don't feel comfortable, don't say it.

Well, my name is also Max, to make it easy. Fantastic. One quick question. You talk about Orwellian systems. Do we have a measure, a metric, of how Orwellian a system is? Let's say Orwell's book is 100% Orwellian and some hunter-gatherer society in the jungle is 0% Orwellian. How do we measure it, so we can say Europe is currently at 70%, China at 80%, and some other country at only 40%? And how much Orwellianness is still okay or acceptable, versus where it becomes critical for society?

I love this question. They say that sunshine is the best disinfectant, and a trusted, transparent orwellometer that ranks countries, companies, and other systems this way could be very, very valuable. My t-shirt, you know, would get a pretty high Orwellian score. There are a number of factors that go into Orwellianness that you can measure separately. One is everything to do with information flow: levels of censorship. We talked about how I think there's a real opportunity to use machine learning to quantify that in a very objective way that people can verify for themselves. Privacy is another; you can build benchmarks there too, where you're very transparent and specify: this is what we're counting, and this is how the Orwellian score comes out. That's the key thing if you build the orwellometer site where people can see how Orwellian different things are: the first thing you have to make sure is that your own site is not Orwellian, that it's not just Max's Orwellian committee that got together, gazed into their belly buttons, and ranked people. You have to take the scientific approach, where there are references, and you can click on each number and see exactly how it was computed, so people can reproduce it themselves. So: information flow, with sub-aspects like the extent to which information is blocked, censored, and kept private. Then you can look at finance: to what extent is finance Orwellian? Does the country allow cash, or is cash banned? And so on. And then, for each aspect of Orwellianness: to what extent do your political opinions affect your career advancement?
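One way to make those sub-scores concrete is a transparent weighted index along the following lines. Everything here is invented for illustration: the factor names, weights, values, and URLs are assumptions, and in a real orwellometer each score would link to a reproducible measurement.

```python
# Hypothetical "orwellometer" index: transparent sub-scores (0..1) with
# published weights, so every number traces back to how it was computed.
# Factor names, weights, values, and URLs are all invented placeholders.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    score: float   # 0.0 = not Orwellian at all, 1.0 = maximally Orwellian
    weight: float  # published, debatable, but out in the open
    source: str    # link to the reproducible measurement behind the score

def orwell_score(factors: list[Factor]) -> float:
    total_weight = sum(f.weight for f in factors)
    return sum(f.score * f.weight for f in factors) / total_weight

example = [
    Factor("information blocking", 0.40, 3.0, "https://example.org/blocking"),
    Factor("feed/search censorship", 0.30, 3.0, "https://example.org/feeds"),
    Factor("financial surveillance", 0.50, 2.0, "https://example.org/finance"),
    Factor("career penalties for opinions", 0.20, 2.0, "https://example.org/career"),
]
print(f"Orwell score: {orwell_score(example):.0%}")  # prints "Orwell score: 35%"
```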
Take that last one, career advancement. For example, I'm very embarrassed that at MIT, my own university, we recently had a guy who was going to come and talk about the climate of extrasolar planets, and all of a sudden he was canceled because he had written an article in Newsweek about university admissions systems, which had nothing to do with extrasolar planets. He wasn't even going to talk about politics at MIT at all. This reminded me a lot of my friend Alex Vilenkin, one of the pioneers of inflation in big bang cosmology. He said some things when he lived in the Soviet Union that they didn't like, and as a result he got canceled: his grad school admission got canceled. So trying to quantify that too is actually a really good exercise. You're tempting me now to spend a bunch of time writing down a scoring system for the different kinds of Orwellianism. Let's build it. Yeah, email me. If anyone listening to this wants to spend some time volunteering on it, let's do it. All right, next question.

Hi, Max. On your research on what is left and what is right: the machine learning detected which subjects pertain to each side, right? Did it discover what is left and right by itself, or was it a label that we supplied?

What we did was something extremely simple. We took a million articles from 100 newspapers and trained the machine learning to predict which newspaper each article was from, just from the text. That was the task; it never mentions left or right or anything. And we discovered it was very good at it. So we started to wonder: how is it doing it? If you look at the actual paper, we were able to do some machine-learning transparency work and figure out how. We noticed it was doing it in a way you could easily visualize, with a sort of generalization of principal component analysis, by plotting things in the plane I showed you. And when we humans looked at it, we were like, wait a minute, this is left versus right. The way it had sorted the newspapers, that axis just popped out. It had no idea what left or right was. But that spectrum simply emerged from the data, and so did the pro-establishment versus establishment-critical axis, purely from the task of trying to predict which newspaper wrote each article. This gives me a lot of hope, because that was only a million articles, and there are far more tweets than that, of course.

That suggests we need articles and publications in the negative space, right? You spot gaps in that chart and you're like, oh, that's interesting, there are no articles or pieces in some of this. All right, next question, back there.

Yeah. So the American elections are coming up, and I noticed in the past week that a lot of emails I get from Democrats that usually go to the Promotions folder got automatically moved to my main inbox. Really? And I remember the same thing happening in 2016 when Bernie Sanders was running, except all of those emails went from my main inbox to my Promotions folder. And you mentioned similar things. So what I'm wondering is, in the face of such blatant social engineering and manipulation, even if you might lean toward the people who are doing it, how do you handle people who might have reactionary tendencies to this kind of behavior? What do you do, from a moral perspective?
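Backing up to the newspaper study for a moment: a minimal sketch, under assumptions, of that kind of pipeline. The six-article corpus and outlet names below are invented stand-ins for the million real articles, but the structure is the same: predict the outlet from text alone, then inspect the geometry of what the model learned.

```python
# Minimal sketch, with a toy corpus, of the study's structure: predict the
# outlet from text alone, then look at the geometry of what the model
# learned. Outlet names and articles are invented; the real study used
# about a million articles from roughly a hundred newspapers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

articles = [
    "cut taxes and set the markets free", "shrink the state and deregulate",
    "unions win fair wages for workers", "expand public healthcare now",
    "officials praise the smooth policy rollout", "the ministry hails the bold reform",
]
outlets = ["OutletA", "OutletA", "OutletB", "OutletB", "OutletC", "OutletC"]

X = TfidfVectorizer().fit_transform(articles)
clf = LogisticRegression(max_iter=1000).fit(X, outlets)

# Each outlet ends up with a weight vector over words. Projecting those
# vectors onto principal components is where axes like left/right or
# pro-/anti-establishment can emerge, with no labels for them anywhere.
coords = PCA(n_components=2).fit_transform(clf.coef_)
for outlet, (x, y) in zip(clf.classes_, coords):
    print(f"{outlet}: ({x:+.2f}, {y:+.2f})")
```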
Thank you for sharing that spectacular inbox example. This is another great thing that should go into the orwellometer we can build together, right? It's very easy to set up a large number of Gmail accounts, have them subscribe to different things, and then automatically monitor what goes into the Promotions folder, what goes into the Spam folder (which has now been hidden under the three dots, where you click "More"), and what goes into the main inbox, and quantify how it changes over time. Then have that as a website. I would check that website from time to time; it would be quite entertaining. And once it becomes scientific like that, a site that's trusted and reputable and that people can reproduce themselves, it starts to become embarrassing for Google. They'll start getting questions, and they'll probably dial it down, tone it down at least a bit. And now things are a little bit less Orwellian. So this is another example of how the best disinfectant is sunshine: you shine light on the Orwellian behavior and you actually get less of it. You never even have to make any laws; you just make it easy for people to see. All right, next question.

Great conversation. Vincent's my name. My question goes more in the direction of the asymmetry we've seen in misinformation and generated influence, especially around elections, from counterparties or whatever you want to call them. Wait until all the GPTs get onto this next election. That was actually the direction I wanted to ask about, because this will only increase. You already have millions of bots creating a lot of misinformation, leading to things like the explosion in anti-vax sentiment around vaccination. On the constructive side: how would you protect against this in an anti-Orwellian, more decentralized media age, with an explosion in bots, in misinformation, and in large language models like GPT generating increasingly well-argued misinformation? You can see it in things like the debate work OpenAI is doing. How do we protect against that increasingly well-orchestrated, asymmetric information warfare?

We've talked about various tools already that we could try to build. I think we should also protect our own brains a little by being very careful about the words we use. I showed you those examples of word use that the machine learning discovered to be biased, and "disinformation" is one of those words, right? Do you call something disinformation, do you call a policy anti-disinformation, or do you call it censorship? They don't mean exactly the same thing, but if someone is trying to perpetrate censorship, they are going to call the things they're censoring disinformation. So I really don't like that word at all. I think it's very, very loaded.
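Returning to the inbox experiment described at the start of this answer: a hypothetical monitoring harness could look like the sketch below. The account credentials and sender addresses are placeholders, and the category searches assume Gmail's X-GM-RAW IMAP extension (which accepts Gmail search syntax such as category:promotions); treat the exact query mechanics as an assumption to verify.

```python
# Hypothetical harness for the experiment above: a pool of test Gmail
# accounts subscribed to the same senders, checked periodically to see
# which tab or folder each sender lands in. Credentials and senders are
# placeholders; category searches assume Gmail's X-GM-RAW IMAP extension.
import imaplib

ACCOUNTS = [("testaccount1@gmail.com", "app-password-1")]        # placeholders
SENDERS = ["news@campaign-a.example", "news@campaign-b.example"]  # placeholders
CATEGORIES = ["primary", "promotions"]

def count_messages(conn: imaplib.IMAP4_SSL, sender: str, category: str) -> int:
    if category == "spam":
        conn.select('"[Gmail]/Spam"', readonly=True)
        _, data = conn.search(None, "FROM", f'"{sender}"')
    else:
        conn.select("INBOX", readonly=True)
        _, data = conn.search(None, "X-GM-RAW", f'"category:{category} from:{sender}"')
    return len(data[0].split())

for user, password in ACCOUNTS:
    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login(user, password)
    for sender in SENDERS:
        counts = {c: count_messages(conn, sender, c) for c in CATEGORIES + ["spam"]}
        print(user, sender, counts)  # log these over time and publish the trend
    conn.logout()
```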
Even "conspiracy theory" is another one of those loaded words. Of course there are people who believe the world is flat and whatnot, but the phrase "conspiracy theory" actually first came into common use shortly after the Kennedy assassination, and from some declassified documents you can see that it was actually the CIA that came up with the phrase and started pushing it, as a way of shutting down certain kinds of arguments: instead of saying "I disagree with your argument, you're wrong because of X, Y, Z," you would just say, "This is a conspiracy theory; I don't talk to conspiracy theorists." In science we've seen this happening throughout the ages. In the Middle Ages you didn't call someone a conspiracy theorist if you wanted to silence them; you called them a heretic. "I don't talk to heretics. So shut up and go away." I would encourage all of you to stop using these very emotionally loaded words. And I don't even feel that disinformation is the biggest threat we face. All of you listening to this are very good at calling bullshit on things. If I start telling you some random bullshit about how China is the greatest democracy on earth, does that mean you're going to believe it? Of course not. If someone tries to sell you snake oil, are you going to buy it? Of course not. You're not morons. You're used to calling bullshit on things, and I don't think the biggest problem is that we have to protect you from seeing bullshit. The much bigger problem is that important information is withheld from you entirely, so you only ever see one side of the argument. So the question you asked is a very good one: what can we do to fight back against the Orwellian systems? If we had a super simple answer, we would have won this battle already. So it's obviously hard. But we did talk about a number of tech tools that we can build, and Juan, you mentioned some more ideas; Vincent's question touched on it a bit too. Basically: let's work together and figure out the main things that should be built, then build them and make them free and cool and easy to use, so that the same tools being used right now to make systems more Orwellian get used for anti-Orwellian purposes. That's the basic game plan I would advocate.

Yeah. And Max, what about the deluge of information that's coming? When you think about GPT-3 just generating vast amounts of this material, including deepfakes and so on, how do we develop some good antibodies for that, or equip people?

Well, yeah, the manipulation and faking of things is going to get very strange this election cycle. Oh, for sure. Deepfakes are getting so good. I think we should use all the tools we can. On one hand, it's still good to keep pushing for some legislation. I would love to see a "bot or not" law at the federal level and at the European Union level: simply a law saying that if you're shown a deepfake, or you get called up by something that pretends to be a person even though it's a robot, there has to be some little message informing you that it's actually fake. In addition to that, of course, you want to build technology tools for it.
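One such tool, sketched minimally under assumptions, dovetails with the camera idea that comes up next: hash content at capture time, publish the hash to an append-only log, and check files against the log later. Here the "log" is just an in-memory set standing in for a blockchain, and a real system would also sign the hash with a key baked into the camera.

```python
# Minimal sketch of capture-time provenance: fingerprint the image when it
# is taken, record the fingerprint in an append-only log, verify later.
# The set below stands in for a blockchain; a real camera would also sign
# the hash with a device key so you know which camera took it.
import hashlib

published_log: set[str] = set()   # stand-in for an append-only public ledger

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def publish_at_capture(image_bytes: bytes) -> str:
    h = fingerprint(image_bytes)
    published_log.add(h)          # in practice: a transaction on a chain
    return h

def verify_later(image_bytes: bytes) -> bool:
    return fingerprint(image_bytes) in published_log

original = b"\x89PNG...raw image bytes..."   # placeholder image data
publish_at_capture(original)
print(verify_later(original))                # True: matches the capture-time hash
print(verify_later(original + b"edited"))    # False: any alteration breaks it
```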
And you could imagine future cameras having something built in where they take a hash of the image and put it on a blockchain or whatever, so that later on it's always possible to verify what's actually real. There are a lot of commercial opportunities for startups in these technologies. I think the market for tools that tell consumers what they can actually believe is just going to keep growing with everything that's happening. And not just for spotting what's fake, but also for letting people know whether a system is being loyal to them. If I have an Alexa in my house, I would be happy to pay a little extra for it if I knew that it was actually loyal to me, in the sense of serving only my interests, not Amazon's. If I'm using some tool to help me navigate when I drive from A to B, I would pay a little extra to know that it's not routing me past McDonald's whenever it thinks I'm hungry, but actually taking me on the shortest path, loyal to me, not to some company. So I think this is a real profit opportunity for companies right now. Just as some food products are certified organic and people willingly pay a premium for less pesticide in their food, if you have a certification system, run by some nonprofit you set up, that can certify that this photo is not fake, or that this personal assistant is actually loyal to you, I think people will pay a premium for those certified non-Orwellian things, which means marketplace innovation can do the rest.

Val from Twitter asks: do you know of any model country or government that is right now on the way to reducing its censorship at home?

Countries that are reducing censorship? Sadly, I don't. Do you? You know, I'm spending a lot more time in Europe and in Iceland, and I've definitely felt the huge difference between spending a lot of time in various places in the US versus spending time in Europe: the amount of political content on all of my systems has gone way down. So it's not quite a country reducing its censorship, but there's definitely a huge difference. On your orwellometer, at least that particular feature would not score as high in these other places. So it definitely differs from place to place, but over time it feels like it's getting worse. Yeah, everything is ratcheting up. And often they use the others as an excuse to make themselves worse too. They're like: Russia is so bad that, to make sure you're protected against it, we are also going to become more authoritarian by banning Russian newspapers. Actually, America is less censorious than the European Union now, which was very surprising to me. In America you can read RT.com just fine; the American government trusts people to read bullshit from Russian newspapers. There may also be some self-interest there. No, I think it's more that the First Amendment is actually quite strong, and there is nothing quite that strong in European legislation. But I would love it if you could tell me about one role-model country we can point to, one running the experiment of having less censorship, as a data point to see what happens. Otherwise we'll have to beat it with better tools.

Max, we've arrived at 8:30, which is our time to go. Thank you so much for joining us for this conversation.
It's been tremendously enlightening and fun, and hopefully super useful to a lot of the people building. I look forward to making a lot of these things with you. Let's build that orwellometer and many of the other tools we touched on. Again, thanks for joining us, and goodbye.

Thank you so much. This was really, really fun. If you want to work with me to build stuff, or want more ideas, email tegmark at mit.edu. I'd love for us to work together. I don't want to whine about stuff; I want to build stuff.

Great. Take care. Have a great day.