Here at Congress we hear not only talk about technology; we also talk about social and ethical responsibility, about how we can change the world for good. The Good Technology Collective supports the development process of new technology with ethical engineering guidelines that offer a practical way to take ethics and social impact into account. Yann Leretaille, and I hope I pronounced that okay, will tell you more about it. Please welcome him on stage with a very warm applause: Yann Leretaille.

Hi, thanks for the introduction. So before we start, can you show me your hand if you like working in tech, building products as designers, engineers, coders, product managers? Okay, so that's like 90, 95 percent. Great. So today we'll try to answer the question: what is good technology, and how can we build better technology?

Before that, shortly something about me. I'm Yann, I'm French-German, kind of a hacker, a member of the CCC for a long time, an entrepreneur, co-founder of a startup in Berlin, and I'm also a founding member of the Good Technology Collective. The Good Technology Collective was founded about a year ago, actually a bit over a year now, by a very diverse expert council, and we have three areas of work. The first one is trying to educate the public about current issues with technology, then to educate engineers on how to build better technology, and then, long term, hopefully one day, to be able to work on legislation as well.

Here is a bit of what we achieved so far. We have 27 council members now. We have several media partnerships and published around 20 articles; that's the public education part. Then we organized or participated in roughly 15 events already. And we are now publishing one standard, actually kind of today. If you're interested in what we do, sign up for the newsletter and we'll keep you up to date, and you can join events.

As I said, the expert council is really, really diverse. We have everything from people in academia, to people in government, to technology makers, to philosophers, authors, journalists. And the reason for that is that a year ago we noticed that in our own circles, as technology makers or academics, we were all talking a lot about troubling developments in technology, but no one was really getting together and looking at it from all angles.

And there have been a lot of very weird and troublesome developments in the last two years. I think we now really feel the impact of the filter bubble, something we have talked about for five years, but now it is really deciding elections, people become politically radicalized, and society gets more polarized because they only see a certain opinion anymore. We have situations we only knew from science fiction, pre-crime: governments overreaching and trying to use machine learning to make decisions on whether or not you should go to jail. We have more and more machine learning and big data and optimization going into basically every single aspect of our lives, and not all of it has been positive. Literally everything from e-commerce to banking to navigating and moving through the world now goes through these interfaces that present us the data, a slice of the world at a time. And then at the same time we have really positive developments, right?
We have things like this, you know: space travel, finally something is happening again. We have huge advances in medicine. Maybe soon we'll have self-driving cars and great renewable technology. And it kind of begs the question: how can it be that good and bad uses of technology are showing up at such an increasing rate, at such extremes?

Maybe the reason is just that everything got so complicated. Data is basically doubling every couple of years, so no human can possibly process it anymore. So we had to build more and more complex algorithms to process it, connecting more and more parts together. And no one really seems to understand it anymore, it seems. And that leads to unintended consequences.

I have an example here. Google Photos, this is actually already two years ago, launched a classifier to go through all of your pictures and tell you what is in them. So you could say, show me the picture of the bird in summer at this location, and it would find it for you. Really cool technology, and they released it to the entire user base, until someone figured out that people of color were consistently labeled as gorillas. So of course it was a huge PR disaster, right? Somehow no one found out about this before it came out. But now the interesting thing is: in two years they didn't even manage to fix it. Their solution was to just block all kinds of apes, so they are simply never found anymore, and that's how they solved it. If even Google can't solve this, what does that mean?

And then at the same time, sometimes we seem to have intended consequences. I have another example here, Uber Greyball, I don't know if anyone heard about it. Uber was very eager to change regulation and push their services globally as much as possible, kind of starting a fight with all the taxi laws and regulations and taxi drivers in the various countries around the world. And what they realized, of course, is that they didn't really want people to be able to investigate what they were doing or find individual drivers. So they built this absolutely massive operation, which was pulling data from social media profiles and linking your credit card and location data to find out if you were working for the government, and if you did, you would just never find a car. It would just not show up, right? That was clearly intentional. So at the same time they were pushing on the lobbying, political side to change regulation, while heavily manipulating the very people they were pushing to change the regulation. Which is really not a very nice thing to do, I would say.

And the thing I find really troubling about this, no matter if it's intended or unintended, is that it actually gets worse: the more systems we interconnect, the worse these consequences can get. And I have an example here. This is a screenshot I took of Google Maps yesterday, and you notice there are certain locations that are highlighted on this map. I don't know if you knew it, but this map and the locations that Google highlights look different for every single person. Actually, I went and looked again today and it looked different again. So Google is already heavily filtering and highlighting certain places, like maybe this restaurant over there, if you can see it. And I would say, from just opening the map, it's not obvious to you that it's doing that, or that it's trying to decide for you which place is interesting for you.
However, that's probably not such a big issue. But the same company, Google with Waymo, is also developing this, and they just started deploying them: self-driving cars. They're still a good couple of years away from actually making it a reality, but of all the companies trying it at the moment they are, I would say, the farthest along, and in some cities they have started deploying self-driving cars.

So now just think 5, 10 years into the future. You have signed up for your Google self-driving account, and probably you don't have your own car anymore. So you get in the car and it's like: hey, Yann, where do you want to go? Do you want to go to work? Because, I mean, obviously that's where I probably go most of the time. Do you want to go to your favorite Asian restaurant? Like the one we just saw on the map, which is actually not my favorite, just the first one I went to, so Google assumed it was. Do you want to go to another Asian restaurant, because obviously that's all I like? Then McDonald's, because everyone goes there, and maybe the fifth entry is an advertisement. And you would say: well, Yann, you know, that's still kind of fine, because I can still say, no, I don't want these five options, give me the full map. But now we're back here. So even though you're seeing the map, you're not actually seeing all the choices, because Google is actually filtering for you where it thinks you want to go.

So now we have the car, the symbol of mobility and freedom that enabled so much change in our society, actually reducing the part of the world that you see. And because, I mean, these days they call it AI, I think it's just machine learning, because these machine learning algorithms all do pattern matching and basically just recognize similarities: when you open the map and you zoom in and you select a random place, it will only suggest places to you where other people have been before (a small illustration of this effect follows below). So the new restaurant that opens around the corner, probably no one will even discover it anymore, and it will probably close. And the only ones that will stay are the ones that are already established now. And all of that without it being really obvious to anyone who uses the technology, because it has become kind of a black box.

So, well, I do want self-driving cars, I really do. But I don't want a future like this, right? And if we want to prevent that future, I think we have to first ask a very simple question, which is: who is responsible for designing these products? So, do you know the answer? Say it louder. Yeah, we are. That's the really frustrating thing about it: it's actually us. As engineers and developers, we are always driven by perfection. We want to create the perfect code, solve this one problem really, really nicely, chasing the next challenge over and over, trying to be first. But we have to realize that at the same time we are working on frontier technologies, on technology that is right at the edge of the values and norms we have in society. And if we are not careful and just focus on our small problem and don't see the big picture, then we have no say on which side of the coin the technology will fall. And it will probably take a couple of years, so by that time we've already moved on, I guess.
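To make that pattern-matching point a bit more concrete, here is a minimal sketch of the feedback loop described above: a recommender that only ever suggests places other people have already visited. Nothing here is Google's actual algorithm; the data and the ranking rule are made up purely for illustration.

```python
# Minimal sketch of a popularity-based suggester (illustrative, not any real product).
# It only surfaces places that already have visits, so a brand-new place with zero
# visits can never be recommended and therefore never accumulates visits: a feedback loop.
visit_counts = {
    "Asian Restaurant A": 1200,
    "McDonald's": 50000,
    "Established Cafe": 800,
    "New Place Around the Corner": 0,  # just opened, nobody has been there yet
}

def suggest(places, top_n=3):
    """Rank places purely by how many people have already been there."""
    ranked = sorted(places.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, count in ranked if count > 0][:top_n]

def simulate(rounds=5):
    """Each round, the suggested places get more visits; unseen places stay at zero."""
    for _ in range(rounds):
        for name in suggest(visit_counts):
            visit_counts[name] += 10
    return visit_counts

print(suggest(visit_counts))   # the new place never shows up
print(simulate())              # ...and the gap only widens over time
```

The point is not the code itself but the dynamic: any ranking that feeds on its own outputs tends to entrench what is already popular, unless exploration or novelty is deliberately added.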
So, it's just that technology has become so powerful and interconnected and impactful, because we are now building stuff that affects not 10 or 100 people in our city, but literally millions of people. We really have to take a step back and not only look at the individual problem, the challenge, but also at the big picture. And I think if we want to do that, we have to start by asking the right questions. And the first question, of course, is: what is good technology? That's also the name of the talk. Unfortunately, I don't have a perfect answer for that, and probably we will never find a perfect answer.

So what I would like to propose is to establish some guidelines and engineering processes that help us build better technology; that ensure, the same way we have quality assurance and project management systems and processes to structure tasks within companies, that what we build actually has a net positive outcome for society. We call it the Good Technology Standard, and we've been working on it over the last year. We really wanted to make it practical, and what we realized is that if you want to make it practical, you have to make it very easy to use and, what was actually surprising, mostly just ask the right questions.

What is important, though, is that if you adopt the standard, it has to be applied in all project phases. It has to involve everyone, from the CTO to the project manager to legal (today legal has this interesting role where you develop something and then you're like: okay, legal, now make sure we can actually ship it, and that's what usually happens), and down to the individual engineer. And if it's not applied globally and people start making exceptions, then of course it won't be worth very much.

Generally, we identified four main areas that we think are important for defining, in an abstract way, whether a product is good. The first one is empowerment. A good product should empower its users. And that's kind of a tricky thing. As humans, we have very limited decision power, and we are faced with, as I said before, this huge amount of data and choices. So it seems very natural to build machines and interfaces that try to make a lot of decisions for us, like the Google Maps one we saw before. But we have to be careful, because if we do that too much, then the machine ends up making all decisions for us.

So often, when you develop something, you should really ask yourself: in the end, if I take everything together, am I actually empowering users, or am I taking responsibility away from them? Do I respect individual choice? When users say they don't want something, or give you their preference, do you actually respect it, or do you still try to figure out what you think is better for them? Do my users actually feel like they benefit from using the product? That's a question not a lot of people ask themselves, because usually you think in terms of: are you benefiting your company? And what's really interesting in that respect: does it help the users, the humans behind it, to grow in any way? If it helps them to be more effective, or faster, or do more things, or be more relaxed or more healthy, then it's probably positive. If you can't identify any of these, then you really have to think about it.
And then, in terms of AI and machine learning: are we actually impairing their own reasoning, so that they can't make proper decisions anymore?

The second one is purposeful product design. And that one has been kind of a pet peeve for me for a really long time. These days we have a lot of products that are kind of like this. I don't have anything specifically against Philips Hue, but there seems to be this trend of making "smart" things: you take a product, you put a Wi-Fi chip on it, just slap it on there, label it smart, and you make tons of profit. And with a lot of these new products we've been seeing around us, everyone is saying, oh yeah, we will have this great interconnected future, but most of them are not actually changing the actual product. The Wi-Fi-connected washing machine today is still a boring washing machine that breaks down after two years, but it has Wi-Fi, so you can see what it's doing in the app.

And we think we should really think more in terms of intelligent design: how can we design it in the first place so that it's intelligent, not smart, so that the different components interact in a way that serves the purpose well? The intelligent-by-design philosophy is that when you start with a new product, you try to identify its core purpose, and based on that, you use all the technologies available to rebuild it from scratch. So instead of building a Wi-Fi-connected washing machine, you would actually try to build a better washing machine. And if it ends up having Wi-Fi, that's fine, but it doesn't have to. And along each step, you actually try to ask yourself: am I actually improving washing machines here, or am I just creating another data point?

A good example of this is the watch. Of course it's very old, analog technology; it was invented a long time ago. But back when it was invented, it was something you could have on your arm, or in your pocket in the beginning. And it was kind of a natural extension of yourself. It enhances your senses: it's never in the way, you barely notice it, but when you need it, it's always there, and you can just look at it and you know the time. And that profoundly changed how we humans actually work in society, because now we could meet at the same place at the same time.

So when you build a new product, try to ask yourself: What is the purpose of the product? Who is it for? Often I talk to people and they talk to me for an hour about the little details of how they solved the problem, but they can't tell me who their customer is. Then: does this product actually make sense? Do I have features in here that distract my users and that I maybe just don't need? Can I find more intelligent solutions by thinking outside the box and focusing on the purpose? And then, of course: what is the long-term product vision? Where do I want this technology I'm developing to go in the next years?

The next one is societal impact, which goes back to what I talked about in the beginning, with all the negative consequences we have seen. A lot of people these days don't realize that even if you're in a small startup and you're working on, I don't know, some technology or robots or whatever, you don't know whether your algorithm or your mechanism or whatever you build will be used by 100 million people in five years, because this has happened a lot.
So already when starting to build it, you have to think: if this product were used by 10 million, maybe even a billion people like Facebook, would it have negative consequences? Because then you get completely different effects in society, completely different engagement cycles and so on.

Then: are we taking advantage of human weaknesses? This is arguably something that is just bad technology. A lot of products these days try to hack your brain; we understand really well by now how engagement and addiction work. A lot of things like social networks have actually been built by engineers focusing on getting some little number from 0.1% to 0.2%, and with extensive A/B testing you end up with an interface that no one can stop looking at. You just continue scrolling, you just continue, and then two hours have passed and you haven't actually talked to anyone. This attention-grabbing is an issue, and we can see that Apple has now implemented Screen Time and actually tells you how much time you spend on your phone. So there are definitely ways to build technology that even helps you get away from these things.

And for everything that involves machine learning, you really have to take a deep look at your data sets and your algorithms, because it's very, very easy to build in biases and discrimination (a short sketch of what such a check could look like follows below). And again, if you apply it to all of society, maybe people who are less fortunate, or more fortunate, or just different, you know, they just do different things, fall out of the grid, and suddenly they can't get through these systems anymore, or use Uber or Airbnb, or do financial transactions, or just live a normal life.

And then, as I said in the beginning: don't only look at your product on its own, but also at whether, if you combine it with other upcoming technologies, there are certain combinations that are dangerous. For that, I recommend doing the Black Mirror litmus test: just try to come up with the craziest scenario that your technology could entail, and if it's not too bad, then you're probably good.

The next area is sustainability. I think in today's world it really should be part of a good product. The first questions are kind of obvious: Are we limiting product lifetime? Do we maybe have planned obsolescence? Or are we building something that is so dependent on so many services, which we're only going to support for one year anyway, that it will basically have to be thrown in the trash afterwards? So maybe it would be possible to add a standalone mode or a very basic fallback feature, so that the product at least continues to work, especially if we're talking about things like home appliances.

Then, what is the environmental impact? A good example here would be cryptocurrencies, which are now using as much energy as certain countries. And when you consider that, just think: there may be an alternative solution that doesn't have such a big impact. Of course we still live in capitalism, it has to be economically viable, but often there are alternatives; often it's, again, just really small tweaks.

And then, of course: which other services are you working with? For example, I would say, as European companies, and you're in Europe here, maybe try to work mostly with suppliers from Europe, because you know they follow GDPR and strict rules, rather than staying with suppliers in the US. Or check your supply chain if you build hardware.
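Going back to the data-set point for a moment: here is a minimal sketch, in Python and with purely hypothetical field names, of what a very first bias check could look like. It is not part of the GTC standard itself, just an illustration: it compares error rates and positive-decision rates across groups in an evaluation set, and a real fairness audit would of course go much further.

```python
# Minimal sketch of a per-group bias check (hypothetical field names).
# Idea: compare error rates and positive-decision rates across groups;
# large gaps are a signal to investigate the data and the model.
from collections import defaultdict

def audit_by_group(records, group_key="group"):
    """records: dicts with 'label', 'prediction' and a group field."""
    stats = defaultdict(lambda: {"n": 0, "errors": 0, "positives": 0})
    for r in records:
        s = stats[r[group_key]]
        s["n"] += 1
        s["errors"] += int(r["prediction"] != r["label"])
        s["positives"] += int(r["prediction"] == 1)
    report = {
        g: {"error_rate": s["errors"] / s["n"], "positive_rate": s["positives"] / s["n"]}
        for g, s in stats.items()
    }
    rates = [v["positive_rate"] for v in report.values()]
    report["max_positive_rate_gap"] = max(rates) - min(rates)
    return report

# Example: a large gap in positive-decision rates between groups is a red flag.
sample = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 0},
]
print(audit_by_group(sample))
```

Even a crude comparison like this, run on a sufficiently diverse evaluation set as part of regular testing, can surface problems like the Google Photos failure before release.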
And then for hardware specifically, because we also do hardware in my company, I found this interesting: we're in a world where everyone tries to squeeze the last little bit of money out of every device that is built, and often the difference between a plastic and a metal screw is like half a cent. At that point it doesn't really change your margins much. So maybe, as an engineer, just say no, say we don't have to do that, the savings are too small to redesign everything, and it will impact the product quality so much that it just breaks earlier.

These are the four main points; I hope that makes sense. Then we have two more additional checklists. The first one is data collection. Especially in IoT, everyone focuses on collecting as much data as possible without actually having an application. I think we really have to start seeing that as a liability, and instead try to define the application first, define which data we need for it, and then really just collect that. We can still start collecting more data later on. And that can prevent a lot of these negative cycles you have seen, which come from just letting machine learning algorithms run on the data, kind of unsupervised, and seeing what comes out.

Then, also really interesting: I've found many times that a lot of people are so fascinated by the amount of data; they just try to have as many data points as possible. But very often you can realize exactly the same application with a fraction of the data points, because what you really need is trends (a small sketch of this follows below). And that usually also makes your product more efficient.

Then: how privacy-intrusive is the data we collect? There's a big difference between, let's say, the temperature in this building and everyone's individual movements in here. And if it is privacy-intrusive, then we should really think hard about whether we want to collect it, because we don't know how it might be used at a later point. And then: are we collecting data without people realizing it? Especially if you look at Facebook and Google, they're collecting a lot of data without really explicit consent. Of course, at some point you all agreed to the privacy policy, but it's often not clear to you when and which data is collected. That's dangerous, and so, in the same way, are dark patterns built into your app that fool you into sharing even more data.

I have an example someone told me yesterday. I don't know if you know Venmo, which is this American system where you pay each other with your smartphone, basically to split the bill in the restaurant. By default, all transactions are public. So there are something like 200 million public transactions which everyone can see, including the description. So some of the maybe not-so-legal payments were also very obvious, right? And it's totally unobvious when you use the app that this is happening. So that's definitely a dark pattern being employed here.

And then the next point is user product education and transparency. Is the user able to understand how the product works? Of course, we can't ever have a perfect explanation of all the intricacies of the technology. But these days, for most people, almost all of the apps, the interfaces, the technology around them are a complete black box. And no one is really making an effort to explain it to them.
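To make the "trends instead of raw data" idea from a moment ago concrete, here is a minimal sketch, assuming a stream of timestamped sensor readings; the interval length and field names are invented for illustration, and this is not prescribed by the standard. Instead of storing every reading, the device keeps only one aggregate per interval.

```python
# Minimal sketch: store aggregated trends instead of raw, privacy-heavier samples.
# Assumes readings arrive as (unix_timestamp, value); the interval length is illustrative.
from collections import defaultdict

INTERVAL_SECONDS = 3600  # keep one data point per hour instead of every reading

def aggregate(readings):
    """Reduce raw (timestamp, value) samples to per-interval min/mean/max."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % INTERVAL_SECONDS].append(value)
    return [
        {"interval_start": start, "min": min(vs), "mean": sum(vs) / len(vs), "max": max(vs)}
        for start, vs in sorted(buckets.items())
    ]

# Example: hundreds of temperature samples collapse to a handful of trend points,
# which is usually enough to answer "is the building getting warmer?".
raw = [(1700000000 + i * 60, 20 + (i % 10) * 0.1) for i in range(600)]  # one sample per minute
print(len(raw), "raw samples ->", len(aggregate(raw)), "trend points")
```

The same question (is the trend enough, or do we really need every raw, privacy-heavy sample?) applies to most data-collection decisions.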
Most companies advertise their technology like this magical thing, but that just leads to this situation where you just look at it and don't even try to understand it. I'm pretty sure that hardly anyone these days is still opening up a PC and looking at its components, because everything is a tablet now, and it's integrated and sold to us like this magical media-consumption machine.

Then: are users informed when decisions are made for them? We said under empowerment that we should try to reduce the number of decisions we make for the user, but sometimes it's a good thing to do; but then, is it transparently communicated? I would be totally fine with Google Maps filtering the points of interest for me if it would actually tell me that it's doing that, if I could understand why it made that decision and why it showed me this place, and maybe if I also had a way to switch it off if I want. But today we seem to assume that we know better for people: we found the perfect algorithm that has the perfect answer, so we don't even have to explain how it works, we just do it and people will be happy. And then we end up with these very negative consequences.

And then, and that's more of a marketing thing: how is it actually advertised? I find it, for example, quite worrisome that things like Siri and Alexa and Google Home are sold as these magical AI machines that make your life better and are your personal assistant, when in reality they're still pretty dumb pattern matching. That also creates a big disconnect, because now you have children growing up who actually think that Alexa is a person. And that's kind of dangerous, and I think we should try to prevent it, because for these children it blurs that line and the machine becomes humanized. And that's especially dangerous if the machine then starts to make decisions and suggestions for them, because they will take them as if a human had made them.

So, that's about it; these are the main areas. Of course it's a bit more complicated. We just published the standard today in a first draft version, and it's basically three parts: the introduction, the questions and checklists that you just saw, and then how to actually implement it in your company, which processes to have, at which points you basically should have kind of a feature gate. And I would ask everyone to go there, look at it, contribute, share it with people. We hope that we'll have a final version already in Q1, and that by then people can start to implement it.

Even though we have this standard, and having such a standard and implementing it in your organization, or for yourself, or for your project would be great, I want to make clear that it doesn't remove your responsibility. This can only be successful if we all accept that we are responsible. If today I build a bridge as a structural engineer and the bridge breaks down because I miscalculated, I'm responsible. And I think equally we have to accept that if we build technology like this, we also have to assume that responsibility.

Before we move to Q&A, I'd like to read you this quote. It's from an executive at Facebook in the really early times; around a year ago, when we actually started the GTC, he said this at a conference: "I feel tremendous guilt. I think in the deep recesses of our minds we knew something bad could happen, but I think the way we defined it was not like this.
It now literally is at a point where we have created tools that are ripping apart the social fabric of how society works."

And personally, and I hope the same for you, I do not want to be that person who, five years down the line, realizes that they built that technology. So if there's one takeaway you can take home from this talk, it's to start asking yourself: What is good technology? What does it mean for you? What does it mean for the products you build? And what does it mean for your organization? Thanks.

Thank you, Yann Leretaille. Do we have questions in the room? There are microphones, microphones number one, two, three, four, five. If you have a question, please speak loudly into the microphone, as the people in the stream want to hear you as well. I think microphone number one was the fastest, so please.

Thank you for your talk. I just want to make a short comment first and then ask a question. I think this last thing you mentioned, about offering users the options to have more control over the interface, is also a problem in that users don't want it: when you look at the statistics of how people use online web tools, only maybe five percent of them actually use the options, so companies remove them, because to them it seems like something not so efficient for the user experience. So this was just one thing to mention, and maybe you can respond to that. But what I wanted to ask you was: all these principles that you presented seem to be very sound and interesting and good, and we can all accept them as developers, but how would you propose to actually sell them to companies? Because if you adopt a principle like this as an individual, based on your ideology or the way you think, it will work; but how would you convince a company which is driven by profits to adopt these practices? Have you thought of this, and what's your idea about it?
Maybe to the first part first, that giving people choice is something they do not want and that's why companies removed it: I think if we look at the development process, it's basically a huge cycle of optimization and user testing geared towards a very specific goal, which is usually set by leadership, something like bring engagement up or increase the user count by 200%. So I would say the goals today are mostly misaligned, and that's why we end up with interfaces that look a very certain way. If we set the goals differently, and that's where we have UI and UX research, I'm very sure we can find ways to build interfaces that are just different, still engaging, but also give you that choice.

To the second question, it's kind of interesting. I wouldn't expect a company like Google to implement something like this, because it's a bit against their business model at that point, probably. But I've already met a lot of high-level executives who are actually very aware of the issues of the technologies they built, and there's definitely interest, also on the more industrial side and so on, especially for something like self-driving cars, to actually adopt this. And in the end, I think if everyone actually demands it, there's a pretty high probability that it might actually happen, especially since as workers in the tech field we are quite flexible in the selection of our employer. So I think if you give it some time, that's definitely something that's very possible. The second aspect is that if we look at something like Facebook, I think they overdid it: they optimized so far and pushed the engagement machine of triggering your brain cells, to never stop going on the site and to keep scrolling, that people got too much of it and now they're leaving the platform. And of course Facebook will not go down, they own all these other social networks, but for the product itself you can see that long term it's not even necessarily a positive business outcome. And everything we are advocating here still allows you to have very profitable businesses by tweaking the right screws.

Thank you. We have a question from the interwebs. Yes, there's a question from the internet that goes in a similar direction: in recent months we had numerous reports about social media executives forbidding their children to use the products they created. I think these people know that their products are deliberately made addictive. Do you think your work is somewhat superfluous, because big companies are doing the opposite on purpose?

Right, I think that's where you have to draw the line between intentional and unintentional. If we go to intentional things, like what Uber did and so on, at some point it should probably become a legal issue. Unfortunately we are not there yet, and regulation usually lags way behind. So I think for now we should focus on the more unintentional consequences, of which there are plenty, and kind of appeal to the good in humans.

Okay, microphone number two, please. Yeah, thank you for sharing your ideas about educating the engineer. What about educating the customer, or the consumer who purchases the products?

Yeah, that's a really valid point. As I said, we at the GTC actually think that part of your product development, and of the way you build a product, should also be how you educate your users on how it works. Generally we have a really big technology literacy problem: things have been moving so fast in the last years that most people haven't really caught up, and
they just don't understand things anymore. And I think, again, that's a shared responsibility: you can't just do that within the tech field, you have to talk to your relatives, to people. That's why we are doing this series of articles and media partnerships, to explain and make these things transparent. One thing we just started working on is a children's book, because for children the entire world just exists behind these shiny glass surfaces and they don't understand at all what is happening. But it's also the prime time to explain to them how really simple machine learning algorithms work, how filter bubbles work, how decisions are made. And if you understand that from an early age on, then maybe you'll be able to deal with what is happening in a better and more educated way. But I do think that is a very long process, and the earlier we start and the more work we invest in it, the earlier people will be better educated.

Thank you. Microphone number one, please. Thanks for sharing your insights. I feel like, while you presented these rules along with their meaning, the specific selection might seem a bit arbitrary, and for my personal acceptance and willingness to implement them it would be interesting to know the reasoning, besides common sense, that justifies this specific selection of rules. So it would be interesting to know if you looked at examples from history, or if you just sat down and discussed things, or if you just grabbed some rules out of the air. So my question is: what influenced you in the development of these specific rules?

It's a very complicated question: how did we come up with this specific selection of rules, and with the main building blocks of what we think good technology should be. Let's say first what we didn't want to do: we didn't want to create a value framework and say this is good, this is bad, don't do this kind of research or technology, because that would always be outdated, it doesn't apply to everyone, and we probably couldn't even agree on it within the expert council, because it's very diverse. Generally we tried to get everyone at the table and we talked about issues we had: for example, for me as an entrepreneur, issues in developing products with our own engineers; issues we've seen in terms of public perception; issues we've seen on a more governmental level. We also have futurologists in there, so we looked at that as well. And then we made a really, really long list and started clustering it, and a couple of things did get cut. Based on the clustering, these were the main themes that we saw. And again, it's really more of a tool for yourself as a company, as developers and designers and engineers, to really understand the impact and evaluate it. That is what these questions are aimed at, and we think that for that they do a very good job.

Thank you, and I think microphone number two has a question again. Hi, I was just wondering how you've gone about engaging with other standards bodies that perhaps have a wider representation. Looking at your council team currently, it seems there's not necessarily a lot of engagement outside of Europe. So how do you go about getting representation from Asia, for example?

You're correct, at the moment the GTC is mostly a European initiative. We are in talks with other organizations who work on similar issues and regularly exchange ideas, but yeah, we thought we should probably start somewhere, and Europe is actually a really good place to
start a societal discourse about technology and the impact it has, and also to exchange ideas. I think, for example, compared to Asia or the US, where they have a very different perception of privacy and technology and progress and the rights of the individual, Europe is actually a really good place to do that. And we can also see things like the GDPR regulation, which, even though it's kind of complicated, is also a big step forward in terms of protecting the individual from exactly these kinds of consequences. Of course, long term we would like to expand this globally.

Thank you. Microphone number one again. Hello, just a short question: I couldn't find a donate button on your website. Do you accept donations? Is money a problem, like, do you need it?

Yes, we do need money. However, it's a bit complicated, because we want to stay as independent as possible. So we are not accepting project-related money; you can't say, we want to do a certain research project with you, it has to be unconditional. And the second thing we do is, for the events we organize, we usually have sponsors that provide the venue and food and logistics and things like that, but that's only for the event, and again, they can't change its program. So if you want to do that, you can get in contact with us. We don't have a mechanism yet for individual donations; we might add that.

Cool, thank you. Did you think about Patreon or something like that? We thought about quite a few options here, but yeah, it's actually not so easy to avoid falling into the trap that other organizations in this space have fallen into, where Google at some point sweeps in and says, hey, do you want all this cash, and then very quickly you have a big conflict of interest; even if you don't want it, it just happens.

Alright, number one, please. Hi, I was wondering how you unite the second and third points in your checklist, because the second one is intelligence by design and the third one is to take into account future technologies. But companies do not want to push back their technologies endlessly to take into account future technologies, and on the other hand they don't want to compromise their own design too much.

Yeah, okay, I got it. So you were saying that if we always draw up these future scenarios, worst case and everything, and incorporate every possible thing that might happen in the future, we might end up doing nothing, because everything looks horrible. To that I would say: we are not technology haters, we all come from areas working in tech. So of course the idea is that you just take a look at what is there today and try to make an assessment based on that. And the idea, if you look it up and read the standard, is that over time, when you add major new features, you look back at your assessment from before and see if it has changed. So you create a snapshot of how it is now, and this document that you end up with, which is part of your documentation, evolves over time as your product changes and the technology around it changes as well.

Thank you. Microphone number two. So, thanks for the talk. Just to echo back the question that was asked a bit before about starting with Europe: I do think it's a good option; what I'm a little bit worried about is that it might be the only option, and that it might become irrelevant rather quickly. It's relatively less hard to implement in Europe now, but the question is: it might work in Europe now, but if Europe doesn't have
the same economic power, it cannot bargain as much politically with, let's say, China or the US and Silicon Valley. So will it still be possible and relevant if the economic balance shifts?

Yes, I mean, we have to start somewhere, right? Just saying, oh, the balance will shift anyway, Google and then the singularity, and that's why we shouldn't do anything, is I think one of the reasons why we actually got here: this assumption that there's this really big picture working against us, so we all do our small part to fulfill that kind of evil vision by not doing anything. I think we have to start somewhere, and I think, having operated for one year, we have actually been quite successful so far and made good progress. And I'm totally looking forward to making it a bit more global and to start traveling more. I think we had one event outside Europe last year, in the US, and that will definitely increase over time. We are also working on making our ambassadors more mobile and expanding to other locations. So it's definitely on the roadmap; it's not like we're just staying here, but yeah, you have to start somewhere, and that's what we did.

Nice, thank you. Number one, please. One thing I haven't found was how those general rules you formulated fit into the more general rules of society, like constitutional rules. Have you considered that and is it just not clearly stated, and will it be stated, or did you develop them more bottom-up?

Yes, you are completely right. We are defining the process and the questions to ask yourself, but we are actually not defining a value framework. The reason for that is that societies are different; they have widely different expectations towards technology, privacy, and how societies should work, all around the world. The second one is that every company is also different; every company has its own culture and things they want to do and don't want to do. If, for example, we had put in there that you should not build weapons or something like that, that would mean that all the companies that work in that field couldn't even try to adopt it. And while I don't want them to build weapons, maybe in their value framework that's okay, and we don't want to impose that. That's why, as I said in the beginning, even though we are called the Good Technology Collective, we are not defining what good is, and I think that's really important. We are not trying to impose our opinion here; we want others to decide for themselves what is good, and to support them and guide them in building products that they believe are good.

Thank you. Number two. As engineers, we always want users to spend more time using our product, right? I'm working at a mobile game company. We want to make a game that users love, so we want users to spend more time in our game, so we can make lots of money. But when users spend their time playing our game, they may lose something, you know? So how do you think about that balance in mobile games?

Yep, it's a really difficult question. So the question was, specifically for mobile gaming, where is the balance between trying to engage people more and basically making them addicted and having them spend all their money. I personally would say it's about intent. It's totally fine to have a business model where you make money with the game; I mean, that's kind of good, and people do want entertainment. But if you actively use research into how the brain actually works and how it gets super engaged, and basically build in
gamification and lotteries, which a lot of games have done, where basically your game becomes a slot machine, so you always want to see the next opening of a crate and see what you got, making it a luck-based game: I think if you go too far in that direction, at some point you cross a line. Where that line is, you have to decide yourself. Some of it can be a good game dynamic, but there are definitely some games, I would say, where they pushed the limit quite a bit too far. And if you actually look at how they did it, because they wrote about it, they did use very modern research and very extensive testing to really find all these patterns that make you addicted, and then it's not much better than an actual slot machine, and that we probably don't want. So it's also an ethical question for each and every one of us.

I think there is a light, and I think this light means the interwebs has a question. There's another question from the internet, about practical usage, I guess: are you putting your guidelines to work in your company? You said you're an entrepreneur.

It's a great question. Yes, we will. We kind of just completed them, and there was a lot of work to get there; once they are finished and released, we will definitely be one of the first adopters.

Nice, and with this I think we're done for today. Perfect. Yann, people, a warm applause!