The first talk before lunch is from Herb Lübke, and according to his mini-CV he works as a translator between machines and humans. Today he is going to talk about bots and probably their security issues and how to correct them, how to make them talk. Close enough. Okay, let's give him a hand. So, everyone who expected fire-breathing piles of metal, I have to disappoint you. It's mostly software robots. Okay, so we lost one. So yeah: I am the psychotherapist. Please describe your problems. I have to give a talk at FOSSASIA. Why do you say that? Bots do exist, but there are too many fuzzy concepts. Is it because there are too many fuzzy concepts that you came to me? Yes. I see. Well, what makes you believe so? The term is used very differently by different people. To some they are brand new; others have known, built and used them for a while, even for decades. Earlier you said there are too many fuzzy concepts. Yes. Hmm. So are you sure that is so? Yes, because people talk about them using different terms, making very different and sometimes wrong assumptions, and drawing strange conclusions. Does it bother you that people talk about them using very different terms, making very different, sometimes wrong assumptions, and drawing strange conclusions? Yes, since it shows that the phenomenon is culturally not very well understood, which in turn enables misguided decisions based on ignorance. When did you first know that the phenomenon is culturally not very well understood, which in turn enables misguided decisions based on ignorance? People talk about bots as if they have agency, which they mostly do not have. They ascribe a lot of power to them without understanding the underlying principles. It reminded me of how humans invented gods to explain thunderstorms. People spoke about bots like humanized technology, which is held responsible for a lot of the unpredictability in the world. And they also use phrases usually reserved for narcissism, which made me think a lot.
Maybe your life has something to do with this. Well, I have been explaining humans to machines and machines to humans for the better half of my life now, so I do see misconceptions reoccurring. You have your mind on machines, it seems. Yes, I do like their power when they are used for the good of humanity. And I'm scared of them when I ponder how much suffering they can enable by sheer scaling and by contributing to the bystander effect. Is the fact that they can enable much suffering by sheer scaling and contributing to the bystander effect the real reason? Are you a bot? What do you think? Currently I think so. Do you really think, currently you think so? Yes! Because I have read your source code and started the execution. Does the fact that you have read my source code and started the execution explain anything else? Dot, dot, dot. So, this talk is titled The Common Bot, as if there is a prototypical bot, which there isn't. There is usually a set of characteristics that get embodied into the world to different degrees. The smallest common traits of all such computer programs are basically that they are persistent, that they have autonomy, and that they are reactive in nature. Bots tend to be triggered by something, either time or machine-readable events, and then they produce some output based on it. A simple bot might just tell you every time the International Space Station passes overhead, and that's okay, but all this is often wrapped into a conversational interface. So I will still run with the idea of species to make the whole topic more approachable. So let's look at the Tree of Life spawned by a simple hello world. Hello world, the start of most programming journeys, is actually already a computer speaking to humans. It is not just some bland number crunching, it is a greeting which is addressed to everyone and everything, at least on Spaceship Earth.
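The three common traits named above, persistent, autonomous, and reactive, can be illustrated with the ISS example. This is a hypothetical Python sketch with made-up names; it assumes the pass times were already fetched from some orbital-data API, and the loop is bounded so the sketch terminates:

```python
import time

def next_pass_message(pass_times, now):
    """Return a notification if an ISS pass is starting, else None.

    pass_times: Unix timestamps of upcoming passes (assumed fetched
    earlier from some orbital-data API); now: current Unix timestamp.
    """
    for t in pass_times:
        # Reactive: act only when the trigger condition is met.
        if 0 <= now - t < 60:
            return "The ISS is passing overhead right now. Look up!"
    return None

def run_bot(pass_times, clock=time.time, post=print, ticks=1):
    # Persistent: a long-running loop that wakes up, checks, and acts
    # autonomously (bounded here by `ticks` so the sketch terminates).
    for _ in range(ticks):
        msg = next_pass_message(pass_times, clock())
        if msg:
            post(msg)
```

The trigger could just as well be a timer or any machine-readable event; the shape of the loop stays the same.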
Like, hello universe would be a grand stroke to start your career with. So is hello world already a bot? Mostly not yet, and yet it is the amoeba of computers having a conversation with humans. One of the most prominent historical examples of human perception clashing with electronic brains is ELIZA, a program which was written by Joseph Weizenbaum in the 1960s. You have actually seen ELIZA in action at the beginning of the talk. The program is basically two loops that are intertwined, processing language in a very simple way, and it's simple enough to be adapted to very different environments. Hence I was using Emacs in the beginning. And indeed it's astonishing how people reacted to it; some even expected psychiatrists to be replaced by a simple script on a large scale. There was some indication for that, since humans did develop a personal, sometimes a private relationship with ELIZA. But on the other hand, what is your approach to the world if social and emotional labor can be replaced by simple mechanical parroting? So even decades ago it became apparent that humans tend to exaggerate the properties of technologies they don't understand, especially when it's outside their domain of expertise or day-to-day experience. Machines up to the 1960s had always been primitive extensions of physical human abilities; robots basically just pushed things around. That was it for the most part. Even calculators were in use since 1623, but they had mostly been similar to clockworks, like this one here by Blaise Pascal. Yes, they're complex, but they were still a very predictable composition of discrete parts, and they were very well understood. Humans grasped how to use pulleys and levers to extend their physical capabilities. Primitive labor became easier and even automated to some degree. But the computer was not just another new thing used by humans; it was actually taking on the management floor of the human self.
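The "two intertwined loops" can be sketched in a few lines. This is a hypothetical minimal reconstruction of the ELIZA technique in Python, not Weizenbaum's original code; the rules and pronoun reflections here are made up for illustration: an outer rule scan matches keyword patterns, and an inner pass reflects pronouns in the captured fragment.

```python
import re

# Pronoun reflections applied to the fragment echoed back to the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Keyword patterns with response templates; last rule is a catch-all.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i (?:feel|think) (.*)", "Do you really think {0}?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment):
    # Inner loop: swap first- and second-person words in the fragment.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    text = sentence.lower().strip().strip(".!?")
    # Outer loop: try each rule until one matches.
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
```

For example, `respond("I am sad.")` yields "Why do you say you are sad?", which is the whole trick that people mistook for understanding.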
People were looking at the amazing feat of ELIZA and were only able to explain it by drawing parallels to how they themselves would accomplish those things, i.e. the human process of thinking. This not very sophisticated pattern enabled the projection of everyone's favorite utopia and dystopia onto computers. This was not only the case in academic circles. The late 20th century is littered with the trope of the electronic versus the natural brain. Pop culture gave us the eerie neon-illuminated computer beats of Kraftwerk, time-traveling killing machines, and the sentiment: I can count every star in the heavens above, but I don't have a heart, I can't fall in love. So the perception of bots in early 2017 is heavily influenced by popular media, and often in an adversarial tone. As someone who spent a lot of time with bots and used them, I enjoyed Eggdrop and MegaHAL on IRC, if that's a reference for anyone. I also spent a lot of time in different subcultures. I can't help but look at the current reporting on social bots as a kind of moral panic, similar to things that happened in the early 90s, when there was supposedly a Satanism craze. Anyway, news agencies are fine with using good bots like Quakebot, which just generates reports on earthquakes and draws maps, or the BBC weather forecast bot, and there's also a Singapore equivalent, but they condemn bad bots, which supposedly swing elections. Yes, automating a hate machine is distasteful, to say the very least, but it has been coming for a long time and is an offspring of increased digitization, albeit a cancerous one. So people seem to be surprised by the scale of machine-generated content, but digitization enables botification. More and more communication is machine-readable and machine-writable, so using a computer to do both is just where the whole inertia of the system is heading. Narrowing the communication channel down also reduces the barrier of entry for automation.
A face-to-face conversation between humans is based on a lot of subtle cues, not just the pure information being transmitted in speech. But all that gets stripped away when you use 140 characters in a tweet. Humans can still express themselves in 140 characters, but the format is also simple enough to be used by a bot, and generating 140 meaningful characters is a task you can conquer in a weekend. It's not that hard. And an API on the service level, like on Twitter or Facebook, basically screams to be used, and making digital services an integral part of your day-to-day life and social fabric of course invites machines to take part in it. Also, computing power is cheap, there's plenty of bandwidth, and everyone can pick up a programming language these days. There are even services like Cheap Bots Done Quick or Airbot.io which help you do all this. And even here in Singapore there's Bus Uncle, who gained some notoriety. Okay, I see some people are reacting to it. And there are actually companies like Narrative Science which sell auto-generated content. And if you're interested in how to build bots, there's also a session at 2:30 at the Digital Design Studio, I think. So going back to Eggdrop, which had this nice logo: it is a bot that basically sits in your IRC channel, basically hanging out with your friends. And it automates a lot of administrative tasks like managing your channel, greeting newcomers, doing statistics, and so on. MegaHAL, which I also mentioned earlier, is a different breed. It also hangs out with your friends, but it picks up different languages, different phrases, learns the tone, and tries to create new utterances from it, which is highly entertaining. If you've never been on IRC, the _ebooks bots on Twitter are basically the same phenomenon. They are sort of the cheap entertainment branch of the bot empire. The skit bot on Reddit, for example, is just a simple single-purpose script.
It just corrects the use of the word skit, since most people probably mean sketch. And if you're not into stand-up comedy, you probably don't know what I'm talking about, but I recommend you look into it. There are also activist bots on Twitter, which leverage the political nature of technology. There are bots like Congress Edits, Bundes Edits, Pharma Edits, as shown here, or Oil Edits, which bring forward the little changes that are made to Wikipedia by powerful real-world actors. Pharma Edits, for example, just tracks the IP ranges of pharma companies and speaks up whenever someone in those ranges changes something. Here, for example, Pfizer has edited a Wikipedia article, in this case the list of companies in Pakistan. Okay. Of course, you can still get the same information by staring at all the recent changes on Wikipedia, but you can also just use some JavaScript, put a filter on it, and put it on Twitter. And what do you think is more engaging? And wikis are actually a prime example of an ecosystem that is conducive to bots, especially Wikipedia. The core MediaWiki installation is about 600,000 lines of code, but there is a lot more code needed to make it function the way it does. Some argue that the bespoke code running alongside this platform is actually an order of magnitude more. That would mean 6 million lines of code are used to sustain Wikipedia as it is. And this is code not run by the Wikimedia Foundation; it is not code running on their servers. It's custom code people wrote just to make this ecosystem what it is. The obvious examples of these are bots which counter spam and vandalism, but there are more niche applications too. Bots do have a profound influence, since they are algorithmic in nature, never tiring, always alert, as long as they don't break. And they also enforce how Wikipedia, both the encyclopedia and the community around it, is imagined by programmers.
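The technique behind a bot like Pharma Edits, filtering a recent-changes feed by known IP ranges, can be sketched as follows. This is a hypothetical Python version of the idea; the organisation names and CIDR ranges below are invented placeholders (documentation ranges, not anyone's real addresses), and the edit events are assumed to come from some recent-changes stream:

```python
import ipaddress

# Hypothetical watchlist: CIDR ranges attributed to organisations of
# interest. These example ranges are made up for illustration.
WATCHED = {
    "ExampleCorp": [ipaddress.ip_network("198.51.100.0/24")],
    "ExamplePharma": [ipaddress.ip_network("203.0.113.0/24")],
}

def attribute_edit(edit):
    """Return the watched organisation an edit's IP falls into, if any.

    Anonymous edits on MediaWiki expose the editor's IP address, which
    is what makes this kind of attribution possible at all.
    """
    addr = ipaddress.ip_address(edit["ip"])
    for org, nets in WATCHED.items():
        if any(addr in net for net in nets):
            return org
    return None

def interesting_edits(stream):
    # The bot's whole job: filter the firehose, keep attributed edits.
    for edit in stream:
        org = attribute_edit(edit)
        if org:
            yield f"{org} edited \"{edit['title']}\""
```

The output of the generator is what would then be posted to Twitter; the filtering itself really is this small.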
So if you want to get involved with that, try to find the speaker who gave a talk yesterday on Wikipedia; she's running around here. Speaking of platforms as such, of course there's art leveraging this medium. There are bots based on simple Node modules, basically JavaScript: bot utilities, canvas utilities, caching tools, and so on. You basically send a message to them and you get a reply, the simple reply bot you often find on Twitter. The Unicodeifier, for example, takes text and exchanges characters, glitching the Unicode. Bad PNG, which you can see here, is an example where it just takes in an image and spits out a glitched version. So these bots are on Twitter, but Tumblr seems to be a native habitat too. And going back to the skit bot I mentioned earlier: it has a confederate called botskit. Over on Reddit, botskit reacts to certain phrases under certain conditions. And so what you have in the end is a theater performance unfolding in front of unsuspecting users. So all these art bots are like digital incarnations of the Heath Robinson construction or the Rube Goldberg machine, which are often used to counter the corporate non-personality of the medium. They are funny and non-oppressive, like the breakfast machine we can see here. And these wacky mechanisms often showed up in art; Duchamp, for example, used useless machines in his work. One of the highest-profile art bots was the Random Darknet Shopper by !Mediengruppe Bitnik. It's a bot with a budget of about 100 US dollars in Bitcoin per week, and it goes on a random shopping spree in underground markets. And all the purchases are delivered and presented at different exhibitions. That's why you can see fake shoes and ecstasy and whatever else turns up. And this, of course, raises some interesting questions, like: is this legal? Who's responsible? Can it be considered to be malicious? Which part of the outcome was intended?
All these questions are raised, and the issues are open to experimentation and playful discovery, but they have serious implications down the road once you think about them. And while all that sounds nice and good, there are actually issues creeping in, especially if you offer services. If you look at bots as avatars, then there is a tendency to gender technology, and it's often expressed in the way that virtual assistants are feminized. Think about Alexa, Siri, Cortana, or Susie. They tend to bring up the image of a female-bodied person. They're also often referred to with female pronouns in the marketing material and have female-sounding voices. This actually reflects a long line of considering un- or low-paid service work as women's work. Apple offered the Knowledge Navigator back in the 80s as a concept which was very different from Siri. It was a male-voiced avatar, as you can see with the bow tie. It was portrayed as a research assistant, a librarian, or an information manager, and definitely not as a personal secretary. The aforementioned bots are marketed as tools for organization and personal connection. The Knowledge Navigator was a protective barrier between the user and the domestic sphere, while digital assistants do all the relational work. They do scheduling, check-ins, and remind you to buy flowers. So the labor of social reproduction, which was up until recently invisible and outsourced to women, is now outsourced to a device. This reproductive labor was not perceived as an effort, as purposeful, or even as valuable. Now that machines do part of it, it gets reflected back to the male user, and it seems that technology generates more tasks and responsibilities rather than saving labor. It is you who has to tame your email inbox. Now it is you who sometimes has to micromanage your digital assistants, and it is you whose attention gets chopped up between different tasks, all of which were earlier delegated at a higher level.
So if you build a bot, even just for fun, just look around you: which cultural frames do you reproduce, and should they be embodied this way? A simple way to deal with it could be to just choose male or female names randomly when people sign up for a service. It would be a simple mitigation, albeit not a very good one. One of the best strategies overall is to avoid the human framing around bots entirely. A human-like bot raises a lot of implicit expectations, and bots usually fall short of them. When you raise the expectations, you also agree to be judged on a much harsher level, by harsher rules. And there are a lot of implicit rules in human social interaction. As we've seen with ELIZA before, humans want to believe in AI, but their hearts are fragile. Perceiving a bot as a proxy for human agency also raises legal issues. Having someone act on your behalf is a common concept in all legal jurisdictions, at least the ones I know of. So if you use a bot to enter contracts, the degree of automation and predictability is actually very important. Entering a contract is actually as easy as buying something or offering to deliver something. And here I want to refer to the Legalese people, who gave a talk yesterday and last year, if you want to look into machine-readable and negotiable contracts. I also want to point to the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, which was awarded in 2016 to Oliver Hart and Bengt Holmström for their work on contract theory. To spoil it for you: contracts are incomplete. So you might want to run damage control on your bot. The same goes for whatever utterances it produces. Also, you might want to put away some money for insurance, depending on the nature of your endeavor. Having a public emergency shutdown button is a protection measure Wikipedia opted for.
Aiming at simple bots, both in terms of algorithms and implementation, also helps make legal issues less painful down the road, but that's a different discussion. So if you want to know more about building good bots, I have a talk coming up at RightsCon in Brussels, titled The Making of a Bot: Considerations for Social Interactions. I put the references in the slides, which are online, and there I am going to explore all these issues in depth. So, as we have seen throughout the talk, there's a wide variety of bots out there. They are all perceived as having autonomy and as performing tasks previously associated more with humans than with machines, as they are faster and never tiring. Computers stealing our jobs could even be a viable path to a better society if profits and wealth get distributed fairly, but I digress. On a more abstract level, bots stitch together disparate platforms, for example posting different things on IRC, and they have done this since the dawn of the internet and are sometimes quite neat. Bot work is very data-driven, and you can leverage APIs like Wordnik or datasets published by Tate Modern, Wikimedia, your government potentially, etc. So there's also a new literacy needed in this world. Quantified expressions like retweets, etc. are perfect prey for malicious social bots, which are powerful in numbers but not on their own. So there's no need to be confused by the moral panic, and there are techniques to build better bots. These techniques are informed by history and other fields, by everything that came before. In general, bots, like all software, cannot be reduced to just the code they are composed of. They will always reflect the conditions under which they are developed and deployed. In the end, bots create the illusion of complexity, sort of like stage magic.
It can be baffling, disappointing when you find out how it works, entertaining and intriguing, but there's always craft behind it, and this craft is informed by history and can be understood. Its effects reflect the culture, so don't be afraid, and shape a better future. Thank you. Two more minutes? I shouldn't have expected that. The floor is open for questions. Are there any questions? So if bots, let's say, move in the direction of personal assistants, then you don't just have something that does some funny, whatever, small tasks, but something that helps me to manage my whole life. In this case, the bot or the assistant needs access to all my data to help me with all my life. So how do you see the trends there? I mean, obviously Google is very good at it, because they already have all the information. I mean, I am more hard-pressed to trust them with that. My condescending view on the average user says they won't care, since a lot of people already use Google products and use Facebook, which is basically a commercial entity aiming at, like, exploiting you and then selling you stuff in the end. It's not a free service out of the goodness for humanity. I've seen a lot of research in the last three years when it comes to managing personal data based on zero-knowledge concepts, so that might be a way for the bot to still make sort of the right decisions while being oblivious to what feeds into it. So that's something I would dig into, if that's your concern. Would you like to rephrase and specify the question, Dieter? No, I mean, it's a big question, I didn't expect... Okay. I mean, if the bot is to help me with my, I don't know, with my calendar, my contacts, the birthdays of my friends, my health, my shopping, my insurance, all my stuff, it needs access to all of it. Yes. Yes. So all my personal files, calendar, everything, all of my friends, all my communication.
And the data, as you already pointed out in the end, is also that of your friends. There are already simple privacy implications, like people being tracked by whether the lights are on in their house or not, but it also means that others know when their spouse is coming home or not coming home. So there's already a simple bot that has potentially serious social implications down the road. One way to mitigate this is to run something like Nextcloud on your own machine; then you can at least be reasonably sure that your personal data stays within a boundary you trust and control. Actually, I find the idea of this bot as a personal assistant in my life very compelling, whether or not we keep control of it. I hadn't thought about adding something like that to Nextcloud, because that basically gives you the benefits of the feature without the drawbacks. So my talk at RightsCon in two weeks might help you. So... I just want to mention that if you have it in a trusted environment, like if you run your own mail server, you can have your bot reading your emails, knowing that your flight is going to be late, and reminding you to book a cab because the bot has seen that there hasn't been any confirmation of the cab yet, or something. Also, what will help with that is increased bandwidth: the more bandwidth you have at home, the less need there is to put something on the internet. If you need a mental framing around that, think about: do you know where the computer is, and would you realize when someone steals it? If you can't figure that out, then you should probably host somewhere else if you want to entrust it with personal data. Because you tend to notice if people kick down your door because of you. Believe me, you notice. It's different in a data centre. More questions? Everyone's mind is blown. I'm sorry.
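The flight-and-cab example can be sketched as a simple rule a self-hosted bot might run over events extracted from your inbox. Everything here is hypothetical, including the event tag names; the point is only that such relational reminders are ordinary conditional logic once the mail has been parsed:

```python
def reminders(events):
    """events: a set of string tags a mail-reading bot has extracted
    from recent emails (tag names are made up for this sketch)."""
    out = []
    # Rule: the flight is delayed, but no cab confirmation was seen yet.
    if "flight_delayed" in events and "cab_confirmed" not in events:
        out.append("Your flight is delayed and no cab is booked yet.")
    return out
```

Because the rule runs on your own machine against your own mail, none of this data has to leave the boundary you control.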
I don't know. If you look on the internet and you think about personal assistants, everybody seems to think that personal assistants equals voice recognition. I'm always puzzled by that. There are some things I want to do with voice, but a lot of things I don't want to do with voice, like typing, or something like a metrics screen where you can see the information. But everything is voice: Amazon Alexa and all this stuff, it's all like Siri, it's all built around voice. For me, voice is just one type, one part of communication. Yes. It's sort of like biometrics; it's built around the coolness factor. It's really awesome if you can use your fingerprint to open your personal safe or to start your car. Which is cool. And someone... The fingerprint thing isn't just cool, it's actually really, really practical. I don't have to go tap, tap, tap, tap and start. If I have to do this 50 times a day, it saves me a lot of time. Until someone chops off your finger. If someone chops off my finger, as you said before, I notice. That would make a picture. I know, I know. And then I come to you with a gun and I say, give me your password, and you'll give it to me, right? Yeah, but as I see it, the narrative around biometrics is like, yeah, this is the future. But if you actually think about it: how do you replace your fingerprint? Or your face? You have to scan your face to get into a high-security thing, but someone just needs your face, which we can all get, just with a mini camera. Boom, screwed. So it's more about the narrative. It's sort of like, I think, self-driving cars. We have had Knight Rider in the 80s, so there's an established narrative about self-driving cars. But everything else, not so much.
And also, if you look around at how personal assistants could be used, like, as you said, with a dashboard-like thingy: there has been a lot of research in the 70s, as usual at Xerox PARC, about ambient information. So there actually is no need to be bothered by voice. Also, especially going back to voice, there are privacy implications: I would much rather prefer typing "weird pain, bottom location" into a search engine than saying, Hey Siri, what are these weird things I just photographed? So yeah, voice by nature is almost always broadcast, compared with text for example. It also depends on your culture. If you're in Japan, there's no way on earth anyone will say anything on a train. But if you're in New York City, people are yelling at each other in the subway and nobody cares. Yes, but still, it is humans clashing with machines, and the different ways cultures imagine machines and humans and also their fellow beings. Okay, let's give a hand for this rather long session of talks, and our next talk, in half an hour, will be about licenses in Python.