Hello, everyone. Welcome back from your lunch. So in this room, we all believe that we are working towards a better world, with more privacy. The motto of this conference, as Fabrice said, is shaping the future of privacy. My observation is that because we all believe in what we do, we are no longer really questioning whether we are doing the right things. And this may actually limit our possibilities to successfully shape the future of privacy, because whatever system you choose, there will always be weaknesses to be critically reflected upon. And let's think back on ourselves. When was the last time you sat down just to reflect on whether what we are doing is the right thing? So in this panel, I will attempt, in a very short time, to discuss some potential weaknesses of our system that are often raised by critics of my work. So I am trying to use the opportunity to get fresh perspectives on thoughts that I struggle with myself when I think about the future of technology and the future of privacy. To do so, I welcome my awesome panelists, Renata and Frans. You haven't met Frans yet, but he's from Fairphone, the tech lead of Fairphone. They make a very nice, ethical smartphone that I coincidentally use myself. And I also welcome Gaël, whom you have already met, from the e Foundation. My Fairphone runs the e Foundation's operating system, so Frans here, and Renata, I have watched so many interviews with you, so I have a lot of questions. Okay, so if we wonder, are we doing the right thing in this room? Obviously, the first question should be: should we be doing code in the first place? Because, Renata, I sometimes wonder who has more power, policymakers or open source software, because I see that Nextcloud is sometimes encouraged, but also limited, by existing policies. And I wonder, should all these passionate people go into politics? What's your opinion? I think neither, and that's the problem.
I think that we have a big problem before that, which is: who's funding the political parties, and where are the resources allocated for free software projects? And so I think that it is not about making the people who know code become the next politicians. That's a potential approach, but it's not the ideal approach. I think that we need to become more aware of what is behind politics and what is missing for building more resilient free software projects. And I think that there's a lot of money in politics. One of the watchdogs in the European Union found that Microsoft is the biggest lobby company in Europe. And then you see where the agenda of politicians goes, because who's funding politics? So we should all get rich and then invest in politics. Shall we fix the system of how political parties are funded? We can also fix capitalism. How do you see this in practice? Is the potential of the Murena project currently limited or encouraged by politics? And have you sometimes wondered about going into politics instead? That's all, I think. Going into politics naturally depends on what you call politics. I mean, everything that we are doing every day is kind of politics, actually. The choices we make, the values we have or want to share. So what was the first part of the question? How do you see this in practice? Is Murena encouraged or limited by policies? It's changing. I think there is some hope, and we need hope all the time. But I see that today there are all those topics about privacy, but not only; it's also about strategic independence. It's about values, ethical values, these kinds of things. And what we see today, I think, at least in our Western countries, is that more and more people care about their impact and what they are doing. There are more and more people who would prefer to use an electric car, maybe. We see that Fairphone has had huge success because it cares about how the hardware is built.
And we of course know about climate change and the problems with carbon impact, energy, etc. And I think that there is a growing awareness about all those things in the general public. So I think that we are encouraged by these trends today, because there are more and more people who would like to have something better in their pockets, in their smartphones. Something that wouldn't leak all their personal data all the time, something that would have less impact in terms of network activity and energy and so on. Okay, so don't become a policymaker, don't become an open source person; let's fix the systems instead. Renata, it's always super fascinating to discover privacy news and privacy perspectives. As I said, I read many of your interviews, and in one of your interviews, with Not Only Tech, you stated that we need a global ban on surveillance technologies at all levels, both commercial surveillance and governmental enforcement agencies. However, in another interview, with Stanford University, I noticed you were advocating for developing AI and using AI to massively correct societal problems. So the following question occurred to me immediately: isn't it a problem exactly that we reduce societal problems to algorithms, maths, logic, automation, data, and massive spreadsheets, while these problems are about people? I think that the problem goes back to the talk that I gave earlier: it is how we design the technology. I think that we fix the design of it and we make it participatory, multidisciplinary, and so on. If I look at a specific example, you know, we could be monitoring in real time what is going on in the biosphere and in the Amazon to prevent destruction. Once it is destroyed, there is no way back. For some problems, you really need sophisticated real-time information, like, for example, on the destruction of the biospheres in Latin America.
Sometimes technology, technology designed with purpose, with principles, and designed for it, can really help. I mean, I'm not anti-technology, and I know that it has the potential to scale and prevent the situation from getting worse. It's not a magic solution, and it has to be combined with policies on the ground, of course, and people who know the local realities inputting into it. But that's different, you know: we need surveillance, maybe, but the sensors shouldn't be on the people. They should be in these situations where, you know, we need speed and coverage at a level that humans alone would not be able to deploy permanently. You know, 24 hours a day measuring, like, changes in water, changes in patterns of vegetation. I mean, I'm not an environmental expert, but I have talked to some who are confident in a combination of policies on the ground and technologies to accelerate certain issues. Another example is on distribution of benefits or identifying. For example, we are doing something, and that's very grassroots, with our Nepalese colleagues, trying to cross-reference, using citizen-deployed technologies, measurements of the quality of air in Kathmandu, and to map it and cross-reference it with the public health data of people dying from, you know, respiratory diseases. That's the kind of surveillance that we need, you know: we need to surveil the air, or the elements in the environment, instead of surveilling movements of people and patterns of behavior and so on. So some of the examples of an AI genuinely solving a societal problem could be distribution of resources, or health, or environmental issues. I think that if we can translate many of the, you know, Paris Agreement commitments into well-designed and sustainable technologies, and it doesn't need to be AI, that can be efficient at scale, efficient at results, and shaped locally. Okay, good.
I mean, we cannot ignore the fact that we have limited capacities, but some technologies, deployed, can increase our ability to do something. So what's your opinion, then, on the fact that most of these technologies require massive computational and energy resources? So if we try to solve climate change through AI, we might be causing climate change. I think that we need to fix that massive consumption; I think that it's part of the design process. There's not only one way to do things. And for example, at the Open Knowledge Foundation we have been exploring a lot of experiments with small data, where specific technologies are deployed to analyze a specific situation and lead to good results. Not everything has to be big and massive and massively consuming. Small data. Yeah, it can be designed at scale, locally deployed, and based on clean energies. The problem that we have is that the design of those technologies didn't consider sustainability in the first place. Yeah, they just ignored it. So in my academic research, which I do next to my Nextcloud job, if I still have the energy, which doesn't happen all the time, but sometimes, I state that the most effective way to preserve privacy is to follow strict data minimization standards. So I advocate for not collecting any personal data, and if you do, to only collect the bare minimum and to delete it nearly immediately. And reviewers who are critical of my work critique that I throw the baby out with the bathwater, by which they mean that if no data is collected, no AI can be developed. And they say that I'm killing the opportunities of AI before AI has had the chance to mature and show its true potential. Counter-arguing, I always say, well, it's causing more problems than it's solving right now, like discrimination, bias, et cetera.
But, Gaël, I'm curious about your perspective on this, because let's take this as a thought experiment, because your organization's product is effectively limiting surveillance on citizens. Imagine that, ten years from now, everybody is using a phone with your operating system, and Nextcloud, obviously, which will certainly happen. A public debate arises that because everyone is using privacy-friendly, non-surveillance technologies, we have effectively prevented the possibility of developing an AI that can solve climate change. Can you imagine that scenario happening? Have you ever thought of this? I think this is a very interesting question that makes me think about big data and AI. Because what we have heard for years is that if you want to do interesting things with AI, you have to collect a massive, huge amount of data. And it justifies everything that we see today with personal data collection, et cetera, et cetera. But to me, it's just a sign that AI is not effective enough. It's just that it's not working well. Because take a very simple example: I have children, and I know how you train an AI to recognize letters. You have to make it read 100,000 examples of the letter A, the letter B, and the letter C, just to have it learn that this is an A, a B, a C. But my children, when they learn the letters, they just see one once, twice, and they know that it's an A, it's a B, it's a C. They don't have to see it 100,000 times. So I think the thing is that AI has to make progress first, to be able to train on a smaller set of data. So there is some progress to be made. And that's it. Let's go to you, Frans. So Fairphone has very moral ambitions, but there must be moments, also in your organization, where either technological, economic, or moral compromises have to be made. Could you share with us a story of when you struggled because technological progress at Fairphone triumphed over ethics and moral compromises had to be made?
There certainly are these situations, definitely. I mean, we are constrained. We are constrained by priorities that we take on ourselves, that we decide on ourselves. And obviously, we are constrained by the money that we make, the revenue that comes in; we can't just throw money out by the bucket. So I earlier talked to an attendee here about compromising our own values, maybe, because we want to make phones last as long as possible, and we were not always able to provide the means to the users to make this happen. And so the example was that we sell these modules, and the modules are used as spare parts to repair a broken module in an existing phone. So the existing phone doesn't need to be thrown out; only a very small part of it needs to be replaced, thus saving resources and the environmental impact of the phone. And that didn't always work. And it didn't work because of two different constraints, I would say. One is the monetary constraint: we didn't have the resources, in a monetary way, to buy the spare parts, to provide our users with the spare parts necessary to actually make the thing we want them to do happen, to use the phone for as long as possible. And the second one is a systemic limitation as well: the spare parts might not even be there. The component that we use might be discontinued. And so, yes, we have to compromise on our own values in very different ways. And that's, yeah. Frans, you are also a great fan of open source. But as far as I know, for Fairphone, okay, open source would probably apply mostly to Fairphone in its hardware. Open source hardware is apparently also a thing. But the Fairphone is not open source hardware. Can you tell us a bit more about why Fairphone didn't opt for an open source hardware strategy? So with open source hardware, you mean that we share specifications, that we share the CAD drawings, or the specs for our components? Yes.
So why don't we do that all the time? It is really, maybe it is exactly the question you were asking earlier. So there are priorities that we have, and we don't always align on the same priorities that could really further the agenda that we have, because we need to maybe look at a different angle for the time being. And we try to be as open as possible with what we provide, and we try to facilitate and enable our users to repair the phones themselves. And I totally agree: it would probably be even better if we also enabled the repair shops, or maybe a different manufacturer, to produce an alternative that fits into our phones and can be used with them. We did do that once, with the Fairphone 2. There used to be, for instance, external connectors that you could connect peripherals to; not just a USB connector, but we also had, behind the back cover, essentially USB, different kinds of these examples, which the community in our own forum picked up enthusiastically. And they played with it. Can I quickly finish this? I want to ask a follow-up question on that, no worries. Okay, then please go ahead. Do you think the potential of open source for a hardware company is then the same as for a software company like Nextcloud, because we also have a community who can pick up these types of projects, and we benefit massively from that? Yes, so I think the benefit really stems from a healthy community, right? There is only a benefit to it if there are people who actually pick up on these opportunities and who actually utilize the resources that you give them. And that is what I just wanted to follow up on: that didn't happen with the Fairphone 2. There were some individuals who were enthusiastic about it and played with it and made amazing projects with it, but there wasn't really an ecosystem building around it, also because we didn't have the time and resources to actually build one. It didn't happen, and that kind of burnt this topic internally a little bit.
So that's why we don't focus on this so much anymore, because there are so many things we want to fix and want to work on, and this is kind of not so en vogue anymore. Yeah, priorities come and go. Will the hardware of the Murena phone be open source? I saw in your presentation that you are planning to provide a phone, a Murena phone? Yes, the new Murena phone is our own hardware that we have introduced this year. The thing is that we have not designed it from scratch. We bought a reference design, and we don't have the opportunity to make it open source, but I agree that it would be very beneficial for everyone, and if we have the opportunity to do this, eventually we will. I can see, if you take the example of the Raspberry Pi, that it's a hugely successful project, and that building an ecosystem around this kind of open hardware is really something that is beneficial to everyone and that can trigger new projects that we wouldn't have imagined at the beginning. So I fully support this idea of open source hardware. Sounds promising. Renata, I saw that you were making some notes, so I have to assume you have something to add. I have a provocation. I think that I will be lynched by my co-panelists, so I'm ready to run. I like you already. Do we need mobiles? Actually, I was thinking about the planet and thinking about the future of privacy. Do we need a tracking device in our pockets, following us everywhere, and with a one-per-person logic, even? I remember the first computer that we had at home. Everybody used it. One computer. One computer, an entire family. Okay, we were fighting over access sometimes. But it was one computer sitting at home. Then we had lives outside. We could go out. If we wanted to go to a place, we would check the map before going, and we would exercise our brains to get to the next place. Maybe we would see nature on the way. As kids, we were playing in the streets in Guatemala.
We had very fun games. It was a very different situation, more connected to the human and less connected to a screen. Think of the massive amount of resources it will take to give updated phones to everybody for the next 20 years, critical years for the future of the climate. Can we share our devices? Can we go back to the logic of public telephone boxes, and go back to anonymity, and just having to carry our numbers? I don't know. I'm thinking about that a lot, because many of the problems of privacy come from the permanent data collection, one device per person. I had this thought and provocation, and I'm thinking: does everybody need a telephone, and can we conduct our lives without it? Now, Frans, do you want to reply? I didn't have time to prepare. No, no, I like the provocation. I mean, I do agree. I also have fond memories of my childhood. I didn't have a smartphone. We had a computer at home. I played outside. But don't children play outside anymore? I mean, so I don't disagree. Let me start with that. I don't disagree at all. I think having these trackers in our pockets is a big problem. What I want to focus on first is: do they need to be trackers? So can we disentangle that, maybe? Can we maybe first, like Gaël is doing, approach the problem of the device tracking us, and then disentangle that question from whether we actually need the computing device in our pocket? Because that's the fundamentally broken thing about it for me: that other people are exploiting us using this device, and not only using it for our benefit or for the benefit of society. And the second question, which I think we should ask separately, is: do we need these devices at all? And I think, yeah, I totally agree, and that's also one of the major focuses of Fairphone. We need fewer of these devices to be produced and sold. And one obvious way of doing that is having fewer of them around.
Just not buying one for everyone. If all of us share ten, maybe, and we scale that all over the world, we have fewer devices that need to be produced, less extraction of the earth's resources, and less waste in the end. That would be a solution. I don't see how that practically works; it's just so freaking convenient, and I'm attached to my phone as a communication device with my family, for instance. So, yeah, but what we do is make them last longer. So that also tackles the environmental impact that you mentioned earlier. And I think we can work around these things. We can tackle these problems individually. And then maybe also detox a little bit every now and then. If you go out with your friends, maybe you don't need the phone, because you can converse with them anyway. If you are around your family... I feel so guilty about that. I came home, having picked my child up from daycare, and then I sat there with my phone in my hand. Yes, I should not use my phone in that situation, definitely. But that doesn't mean that I shouldn't have the phone at all. I think there's nuance to that that wasn't really captured by your question, by your provocative question. In the end, it's probably a little bit like my grandmother, who doesn't want to go off Facebook because she doesn't want to miss out on the bingo party. In the end, we all want to play Flappy Bird. Let's move to a topic of security. Gaël, I tried to find an interview with you from a very long time ago that I remember, but I couldn't find it, so I hope I didn't imagine this question. The interview, as I remember it, was about how a privacy-friendly phone is not necessarily the same as a hardened, secure phone suitable for when you have something to hide. Can you explain... So it was a real interview, I didn't imagine it. Yes, that's true.
Can you explain a bit more about what type of security concerns could be exploited if a Murena phone were used by someone who does have something to hide, like an activist? Yes, that is a real question. Thank you for raising this topic, because it's a question we are often asked: why aren't you doing more security hardening on your smartphones? The thing is that if you want some privacy on a smartphone, you have to have good security, but it's just good security, not hardened security. Hardening is not very useful for us, actually, because on the other hand, you can have very hardened security devices with just one purpose: to securely send all your personal data, in a safe way, to Google. And that's what they keep explaining all the time. So they are two different things, actually, and the purpose of what we do with /e/OS and Murena smartphones is to give users and people the opportunity to escape the massive, global, permanent data collection for business purposes by Google and many other companies. But yes, it's true, we are not doing this project for people who could be targeted, like people working for governments, or activists, or criminals, maybe. We are not doing this project for them. We just want to make a product that will be easy to use, attractive for the largest audience, but I mean common people, common usage. And there are other projects for security, I don't know. Okay, so it's a matter of the focus of the project. Frans, I think I read that you are also very passionate about security, but if I think of Fairphone, then I can't really imagine what kind of security concerns you are dealing with. So could you tell us a bit more about the type of security issues you have encountered at Fairphone? So yes, I do care about security, definitely. I think our involvement is on a similar level to what Gaël just explained. We really want to have a security system for the everyday user.
So we sell a phone that we want to sell to as many people as possible, and we want to make them use those phones as long as possible. And that means making these phones secure on a day-to-day level, too. It doesn't need to be... We also don't address... Again, I had this nice conversation with an attendee, thank you, Gaël, about activists who might complain about the lack of hardening of our phone. That is not who we cater to. That's not the threat model that we envision. That's not the focus we have. The level of security that we deal with is really fixing the known vulnerabilities in the operating system and in the firmware and applying those patches. We don't do the research ourselves to find these; that's the job of other people. And, yeah. Renata, after hearing all of these security concerns, are you comfortable using these devices or software? I'm much more confident. And I don't want to sound like an infomercial, please. But it is different. I mean, I'm for options, decentralization, and knowing who's building the technology I'm using. So it's reassuring to be sitting next to real humans, not egomaniacs who are trying to fly off the planet, as the people building our technology. And it goes back to what I was trying to say in the keynote. If it's not working, I know where their offices are. I know them. And I can engage in a dialogue, or organize my consumer association in my locality, or I can get a group of users together. When the actor on the other side is a giant, your voice is not heard, because they're up in the sky with their head in the clouds. And they only listen to big investors, and they have money up there. And they are at a level up, higher than even policymakers, than even presidents sometimes. So it's about bringing the technology down to earth. Maybe those telephones do not have many of the things that are convenient or exciting or whatever.
But I think that this closeness, engaging with providers that are on a human, approachable level, is far better when we are trying to exercise rights and we want to have technology that works. It's a little bit like when you buy fruit at your local market versus buying GMO fruit at the massive chain. A little bit like that. It's a healthier choice, with a closer level of accountability, that still feels human. So when are you going to become a Nextcloud user? No, just kidding. I am a Nextcloud user, so... Oh, look at that. And again, I know where you live. I have friends high up in the cloud. So let's slowly reach a conclusion. My last question to all three of you is: what are important considerations or appeals that you would like to add to today's discussion? Frans, do you want to start? Yes, thank you. Your initial question, and your conclusion to those first few questions, was that we need to change the system, and that we should become neither politicians nor technologists. And I think that doesn't really capture it. I think we are changing the system to some extent. We are within the system, but working from within the system, and we don't completely walk out of the house, sorry, Renata, but we are striving to make a difference, to change, in our case, the electronics industry, bit by bit, by example. Just showing that it's different, that it's possible to do something different, that it's possible to be responsible, and that we shouldn't just hide because it seems impossible to change capitalism. So we can change this stuff, and that's what I want to finish on, maybe. Renata, do you want to continue? Yeah, going back to reimagining the way we communicate, and the multi-user experience.
Imagine how amazing it would be to bring back the idea of the public phone on the corner, but done with technology similar to these, to rethink a public device available to everybody, with anonymous users, that you can use passing by, you know, so that you are not conditioned to ownership of one device that is tracking you all the time, and so that if you are out of money, or you lost your telephone, or you are in a precarious situation, in many circumstances, you are not completely disconnected from the digital sphere, and this public infrastructure supports your connectivity. I think that we lack that. We saw it in the COVID crisis: we lack that infrastructure. We used to have it, at least a way to reach the other end, and we stopped having it. It would be amazing to rethink it, and with free software and open hardware create a version that enables us to think about technology, one device used by multiple people, in a different way. Thank you. Do you want to close? Yes, maybe I wanted to quickly raise one topic. I think the main issue today is probably that for all the big techs, their purpose is not to make good products, actually. Their purpose is to make money, as much money as they can, and everything is organized for that purpose. In particular, I am thinking about social networks: they really know what they are doing to capture everyone's attention all the time, and as much brain time as they can. For what?
Just to sell advertising. And I think this is something that needs to be fixed, because there are people like you and me who are aware of this, but there are a lot of people, and I am thinking about young people, children, teenagers, who are not aware of this, and they are betrayed in some way. They don't know that the whole system is organized to capture their brain time for some business purpose. And I think that for this, the only way today is to improve the regulation. And I am pretty optimistic, because things are changing now. We see in Europe, in the US, that many people are really supporting the fact that we need more regulation; it shouldn't be the Wild West anymore. So that goes back to my initial question: should we be doing code? No, just kidding. Thank you very much for your attendance at this panel, and I would like to hand the microphone back to Fabrice.