It's been like this for the last five days: I was constantly working in the evenings, and my family over there can confirm that instead of taking a vacation, I was revising my slides, because every day I got new input and said, I have to change my slides. Also my title, unfortunately, has changed a lot over time. So it's a living title, and in the end I said, it doesn't fit anymore, so I decided to quickly change it to this: how smart are swarm intelligence, group intelligence, and social intelligence, really? Of course that's a provoking title, but I think it fits, because it's 9 o'clock in the morning on the last day of a long conference, so it will probably wake you up a little bit if you get a little provocation and a challenge here from the stage. Usually we look at the bright side of swarms. We tell journalists: it's so smart, it's so super, it's flexible, robust, intelligent, whatever. This is how we usually sell swarm intelligence and swarm robotics. Today we will probably also see the other side of that. But let's start with the usual program, looking at the good things. To investigate, for honeybees for example, how intelligent they are, we made a little, let's call it intelligence test. It's more like a stupidity test, because this is a really, really simple intelligence test. We placed a group of young honeybees into a cold area, where they don't like it; they like it warm. Then we set up one heat lamp with one heat spot where it's warm, where they like it. And we just wanted to observe: are these guys capable of approaching the nice spot, the sweet spot, the warm spot? You see, it's so easy. Would you call this an intelligence test? For a tiny little bit of intelligence, maybe. So let's have a look at that. And surprisingly, and this was already quite some time ago, the first surprise was that not every honeybee is equal to all the other honeybees. So it's obviously not a class-free society. They are very, very different.
You see the composition of the group here, the diversity. One type of these honeybees was rather smart: starting here, with the warm area over here, it found the warm spot perfectly, approaching it more or less directly. So that seems to be, I don't know, the Albert Einstein of the honeybees or something, solving the task very, very quickly. Some other of these bees, about 7%, do nothing. They just hang around, lazy, not caring for anything. And you probably also know such people. I don't want to give names for that, but it seems to be common. But there are also these guys, and there are far more of them; they don't care about the set goal at all. You set them a goal and they do something else: they care about the wall. They don't care about this temperature spot at all. We call them the wall-followers. So very, very strange bees. And then finally, the majority of the bees was doing that. How would you call them? I call them crazy bees, yeah? They run around like wild. They don't care about the temperature spot, they don't care about the wall, they care about nothing. They are just running like crazy. And as you see, interestingly, two out of three bees behave like that. If you look at the total group composition, you see that only 7% of the bees are capable of actually solving the task. 93% of those bees are not capable of solving this very, very simple task. So, swarm intelligence: do you expect swarm intelligence from that? Crazy bees. I would say it's a perfect example for swarm intelligence, because obviously bees are very, very stupid when they are alone. So there is huge hope that they are much smarter when they are in a group, and that would be swarm intelligence. And this is what we look at next. So we made another experiment, but this one was very, very difficult compared to the first one. We have a global optimum here and a local optimum here, and all the bees as a group quickly aggregate at the global optimum, the warmest spot.
And then we switched off this warm spot, so it got cold there. And you see the group is even able to re-decide, to revert its prior decision and abandon this formerly good choice, and it now starts to aggregate on the other side, where it was a little bit colder. They had totally neglected the colder side before, but as soon as we turned off the global optimum, they switched to this now best solution. So they can avoid local optima as a group and always pick the global optimum. That is, from my point of view, swarm intelligence at its best, because the individual single bees failed in a very, very simple task, but as a group they were successful in a much more challenging, let's call it optimization task. Do you agree? Smart guys, right? So usually I would tell you now: oh, we developed an algorithm from that. We looked at the behavior, we deciphered how it works, we deciphered swarm intelligence, and this algorithm, which we call BEECLUST, we then put into a swarm of robots, and they also run in a temperature field. They only execute this little algorithm that we extracted from the bees, and they also find the warmest spot, which is unfortunately on the other side here in this video. But as a group they can solve this. And while we cannot look into the real program of the bees, we know what we have programmed into our robots. This was our model that we derived from the bees, and clearly we have extracted the swarm intelligence of the honeybees. Yeah, all of that would be true if we were looking at the bright side of the swarm today. But unfortunately, we aren't; I'm too far away from this, obviously. I have to stand up here, because today, as I said, we are also going to look at the dark side of the swarm, and there is no free lunch, you know that. So if there is a bright side, then there is also a dark side, and we want to look into that.
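As an aside, to give an idea of how small the extracted algorithm is: here is a minimal sketch of the BEECLUST-style rule in Python. The sensing interface (`sense.obstacle()`, `sense.temperature()`) and the exact shape of the waiting-time curve are my own illustrative assumptions, not the original implementation:

```python
import random

def waiting_time(temp, t_max=60.0, theta=25.0):
    # Higher local temperature -> longer stop (saturating curve).
    return t_max * temp**2 / (temp**2 + theta**2)

def beeclust_step(robot, sense):
    """One decision step for a single robot.

    `sense` is a hypothetical sensing interface:
      sense.obstacle()    -> 'wall' | 'robot' | None
      sense.temperature() -> local temperature reading
    """
    hit = sense.obstacle()
    if hit == 'robot':
        # Meeting another robot: stop, and wait longer where it is warmer.
        robot.wait(waiting_time(sense.temperature()))
        robot.turn(random.uniform(-180.0, 180.0))
    elif hit == 'wall':
        robot.turn(random.uniform(90.0, 270.0))  # turn away from the wall
    else:
        robot.forward()  # otherwise just keep moving straight
```

No robot measures gradients or communicates explicitly; longer stops in warm places make clusters there more likely to grow, so aggregation at the warmest spot emerges from the group.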
And one thing that we did is: we took a cage and put just a few bees into that cage on the colder side of the arena, the side the bees usually don't choose. So we forced some bees to stay in the suboptimal decision. You can see that here from on top. Here we have the caged bees; this is the colder temperature spot. Normally they would go to the other side. And you see, just a few bees, three, four, five bees in this cage, are enough to trick the swarm and to draw the collective decision making to the suboptimal side over here. That's very bad. If these were robots, these could be robots with failures. If you are dealing with a large society system, these could be just some strange people that make different decisions. And you see the comparison here: without a cage, 80-something, 90% go to the global optimum and only a few go to the local optimum. But as soon as we cage a few bees on the local optimum, they are evenly spread, if you look across all experiments, and in the one run that I selected here, even the whole swarm switches to the suboptimal decision. So that's very bad for swarm intelligence. And if you think a little bit about our society, you will see special customer programs, like the Genius Bar at Apple where you get special treatment, or, I don't know, those gold, black, or platinum customer cards at your airline or whatever. You get special treatment. You're a special customer; we take special care of you. Yeah, you have to think about that, because probably you are just one of the bees in this cage. You receive this special treatment because the rule seems to be: if you get some of them, you get all of them, if it is a swarm-intelligent group. And that might be true: if you look at how many Google hits you find for "special customer program", that's one billion Google hits.
So this is obviously a hot topic, and all of these hits are about: how can I get the whole population if I have a special treatment for just a few of them? So we thought, okay, but how do the bees know that there are some special VIP bees that we put into this cage? Because this is what the bees see in the hive and also in our experiment: they see nothing. It's absolutely dark for them. They can of course smell the bees, but they can also feel the ground vibration of the bees moving in the cage. Vibration is a very important communication channel in the dark for the bees. So we thought, maybe we can exploit that and just put some vibration motors there. We put a vibration motor here on the cold side; one time we turned it off, so it's doing nothing, just as a control, and one time we turned it on, so it's vibrating. And if you look at the videos, you see that if the vibration motor is off, the bees choose the optimal side. And, I have to look over here, if the vibration motor is turned on, they first also aggregate here at the warm spot, but due to the vibration they start to relocate the group and go to the suboptimal spot. So just by vibrating the ground, without really putting bees there, we told the other bees, so to say, that there are some bees on the other side. And that was enough to trick almost the whole society into the suboptimal solution. You can see that here: if we have vibration, the majority aggregates at the local optimum, and if we have the vibration turned off, the majority is at the global optimum. So actually what we did is: we introduced fake news into the society, and it worked very, very well. But it was like permanent fake news, and fake news is only good if the lie is credible. If you permanently tell the same thing again and again, you will just look weird.
So it's better to have something like a dialogue, where you really react to the responses. The lie is even better if it adapts and takes on a life of its own. So actually what we are looking for is a little robot that can react to the bees, not just a vibration motor vibrating all the time. We are looking for chat bots, fake-news chat bots, that we could introduce into the animal society. And of course they have to be real robots in this case, because the bees don't look at the internet, they're not on Twitter. So we have to present them a physical device. And what you see here, if it starts, is exactly such a device that we built in the project ASSISI. You see the bees. You see a whole array of robots. They can vibrate, they can heat, they can produce air flows. Most of the stuff is below the table, like an iceberg. And the bees are running around. The robots can see the bees, they can sense the bees and react to them, and of course the bees can also sense what the robots are doing. And we run special programs to attract them, to draw the bees into specific configurations that we would like them to form. So really to obtain crowd control over the honeybees. And as you can see, this is typically a non-random choice of the bees: this is what we wanted them to do. This is ongoing research; we are currently investigating what we can do, which information can trick the swarm, and how we can maybe also make it a little bit more robust against that, to support its own resilience. So why is this important? Because it's not only about honeybees. I mean, we humans are also animals that interact socially. And as you can see in this picture, we have new gadgets for social interaction. And these gadgets host autonomous software robots; they run their own algorithms. It's information that we feed into these systems, and of course they exchange information amongst each other.
And then they feed information back into our society. So you can clearly see that we have a feedback loop here. But unfortunately, we are not only communicating via these devices. That is how these social network algorithms are sold to us: that they help us to interact and to communicate. But we have to be aware that these are software robots that run their own program. And Facebook has shown that it can modulate even the psychology of its users by presenting them filtered, specific messages. They published this like two or three years ago. So it's not just connecting us and giving us a tool; it's manipulating us. And you have seen how that works in the honeybee society. And yeah, I'm playing devil's advocate here: they will use it for getting our money, our votes, whatever you want. For sure, I'm very negative on this point. So let me show you that it's not just honeybees; you can trick almost every organism like that. In another project, Flora Robotica, we built these funny things together with architects. They are robots: these braided structures are robots that interact with plants. And on these braided things we again have such nodes, similar to the honeybee example that you have seen. They can see the plants growing towards them, and they can affect, with light for example, how the plants grow. And here you see a plant that is sort of controlled, tricked, to grow in specific directions, to reach specific targets that we have preset. So we can already grow the plants along specific trajectories. And we didn't even program that: it was machine learning, observing six plants that were growing. After six plants, the machine learning algorithm had found enough knowledge, a good enough model, to apply control through such devices and guide the growth of the plant to specific targets. So it didn't take much to learn about these organisms. And another example: this is a robot swarm that we are currently developing in Venice, in the project subCULTron.
You see several types of robots; there are also videos here. Three different types: some of them will go to the ground of the Venice lagoon, some of them will swim around like fish in the middle of the water, and some of them will stay on the surface. Next year it should be 150-plus of these robots, all together as a swarm, not in a laboratory but out there in the Venice lagoon, monitoring the environment. So that will be a huge application. I guess it will be the largest autonomous underwater robot swarm that was ever operated outside the lab in a real environment. You have to imagine: there are boats there, there are people there, there are fishing nets there. Working there is like hell. We had divers getting our robots back. It's very, very challenging, but it's also very interesting. So what can you do? How applicable is a robot swarm, and where are the problems? When the robots are down at the bottom, they are more or less isolated into smaller groups, like you see here, hopefully you can see that. And the problem is that these smaller groups exchange information all the time and might drift away. They might believe their own picture of the world and become like a sect or something, with a totally different belief from other robot groups that are isolated somewhere else in the Venice lagoon. So we also wanted to see how these dynamics operate. This is about the emergence of subcultures, and this is why the name "cult" is in subCULTron, because this is what we are actively going for. If the swarm is large and running for weeks, very surely there will be subgroups of the swarm that have other beliefs than the rest of the swarm, and that also gives you new dynamics. And for investigating this, I produced a model where I thought: maybe let's look at something which is also interesting for society. This model looks at a network of people and the emergence of fundamentalism in this network of people.
And why did I choose that? Because I observed, in my personal environment for instance, that there is a growth of fundamentalism of all kinds. People that usually saw the world with all its colors, all its different aspects, for some reason started to degrade their view of the world into a grayscale picture, or in the end, if that runs for too long, probably even into a black-and-white picture. So it's either this or that and nothing in between. And this is something that obviously happens quite often. If we let that happen for too long, we probably end up with a world that looks like this: only black and white, and only two or three groups of people, I'm speaking about Apple and Microsoft obviously, for example, fighting each other or something like that. So how does this emerge? How does this appear? Of course it's cultural learning, and cultural learning is easily explained. If you look at the literature, one aspect of cultural learning is innovation. Innovation means you invent something new; even if there is something around that works perfectly, you try to invent it again. Maybe this is crazy, maybe this is funny, but after some time there might be real innovation that really changes the world. So trying from time to time to find new ways of working is, I think, a very important task; we should do that at least once every month. Of course, if everybody only tries new ways, nothing gets done in the world, so we should also exploit what's around, what has really proven to work. And we like to copy across generations: kids learn from their parents, students from their teachers, or the other way around. So copying in both directions is a very, very important task. And these three things together, if they run in parallel, lead to cultural learning. Yeah, so that's the easy part. Now comes the really, really difficult one.
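As an aside, the three ingredients just mentioned, innovation, exploitation, and copying, can be sketched as a single per-agent update rule. This is a minimal sketch: the innovation probability, the step size, and the idea of scoring strategies by a "success" value are my own illustrative assumptions, not the talk's actual model:

```python
import random

def cultural_learning_step(strategy, neighbors, success, p_innovate=0.05):
    """Update one agent's strategy (a number in [0, 1]).

    neighbors: list of (strategy, success) pairs of social partners.
    success:   how well the agent's current strategy works for it.
    """
    if random.random() < p_innovate:
        # Innovation: try something new, even if the old way works.
        return random.random()
    # Copying / exploitation: look at the most successful partner,
    # and imitate only if that partner is doing better than we are.
    best_strategy, best_success = max(neighbors, key=lambda n: n[1])
    if best_success > success:
        # Move a step toward the model rather than copying outright.
        return strategy + 0.5 * (best_strategy - strategy)
    return strategy  # otherwise keep what has proven to work
```

Run in parallel over a whole network, rare innovation keeps variation flowing in while copying spreads whatever currently works, which is the cultural-learning dynamic described above.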
I'm not from psychology, I'm not from sociology, I'm just a stupid biologist. And when I looked into fundamentalism, I wanted to learn from the literature what it is about, and it was really a mess to find out how fundamentalism is defined. But I found one interesting source, Nagata, and she said it's a set of irreducible beliefs. So: I know it's right, it's like this. And it forestalls any further questions. Logic, asking why and how, doesn't matter; we know what's the right way and what's the wrong way. And because we know what's right, we know that all the others are wrong. So we start to alienate the others and demonize them, to sort of isolate our own sect, our own belief, against these others. And after some time that might lead to special cultural developments, like having special clothes, a special language, special greetings; it might be meeting every year to exchange how much we like it in our group. And you clearly know such people, right? You have seen such people. They are dangerous; you have to know, these are really dangerous people. They have strange clothes, strange greetings, they meet on a yearly basis, and you cannot even understand them when they talk to each other; you don't know what they're talking about. And they are violent. They are violent against the others. You don't believe that, but I can tell you, there is violence, very, very strong violence, against the others, whom they don't like much. So we have to be careful with those fundamentalists. And if you look into that, you often cross the border to something else: extremism. It's very hard to discriminate fundamentalism, which is more about having your own beliefs, from extremism. Extremism means you don't necessarily have your own beliefs, but you take extreme measures for propagating the little beliefs that you probably have. So extremism is about the behavior, about the measures that you take.
And clearly my model is about those fundamentalists and not about those extremists, because the model would probably have to be different if it were about that. So fundamentalism is a tricky problem to model. I'm pretty sure there are some Trekkies here in the group, and they would get offended and say, oh, he was talking badly about us or something. So you have to avoid politics, you have to avoid sex, you have to avoid religion, because you don't want to step on somebody's toes, and some people have very long toes, so you easily step on them. So I had to find something else, and I thought maybe these things are not so difficult to transport. And in the end I said: eating. Everybody knows about eating. Eating is a topic I can place my model upon and have a look at it. And then I found a special area of eating where fundamentalism is an issue, and that's the eating of meat: vegetarianism, veganism and so on. These are clearly religions that you can study with a model like that. And yeah, it's about the dietary spectrum. You can eat like this, at least for some time; you can eat like that, also for some time; or you can eat like in the middle. So there is a spectrum ranging from a lot of meat to some meat to no meat. And it's not about health, because what is the most unhealthy food here? The most unhealthy is quite surely this one in the middle. So it actually has nothing to do with being healthy; it has to do with beliefs, with what you want and what you think is right and not right. So where is the problem here? The problem is when you have to make choices: you're not free to choose, because of your fundamentalism. The world initially looks like this. That's a normal menu from an American restaurant that I took, and if you have an allergy, and I don't call people with allergies fundamentalists, I myself have a food allergy, it already restricts your choices. So it gets hard to choose the right menu.
You see some of the items are already kicked out. I did this for my own case: I have a soy allergy, so I could not eat those things, because of soy flour or soy sauce and other stuff like that. But if I am a vegetarian, other choices are gone, because there is meat inside. And if I am even more fundamentalist, if I'm vegan, then of course all other animal-derived products are gone. And if I'm lucky, I can eat these two side dishes that you have seen there, but only if I'm lucky; it depends on the oil and so on that they used. Maybe nothing from this menu works for you at all. So even in a free market, and this goes maybe to people from the US and other capitalist countries who believe the free market solves everything: no, a free market does not give you all the choices. It does not solve all the problems, because fundamentalism of any kind narrows things down. And it doesn't matter whether you avoid petrol cars or have special food choices: there will always be a narrowed-down market because of your own set of beliefs, and the free market alone doesn't solve that. So why did I do that? Here comes my main hypothesis. I thought: fundamentalists have to struggle all the time, because it's difficult to stick to the fundamentalist rules. If you struggle all the time, you're probably more unhappy, because things don't work out. Oh, there's a typo on the slide: that should be "limited", not "imitated". And fundamentalist behaviors will be limited, because you cannot freely choose. So my hypothesis concerning meat was: if you eat some meat from time to time, you will be happier, you will struggle less, and people will copy that over from you, because that's an important aspect. And because people copy that over from you more, you will in fact save more animals by eating some meat. That was my main hypothesis. But if you're a super fundamentalist, everybody will look at you and say: nah, I'm not going to try that.
And then actually more animals are eaten across the population. So here is the model; it's a super simple model. It only has four variables. M is the meat content of the food, which changes over time. Psi models the dietary strategy that people have: how much meat they eat, or how much meat they would tolerate in their food. Then we have omega, which looks like a horseshoe; that's the measure of happiness. The higher omega, the happier the agents are. And finally we have mu, which is the meat content of the market, because the market will also react and offer you more or less meat, depending on these choices. And on this slide you see the whole model in one slide. This is one of our agents, i, with its own happiness level and its own dietary strategy. And this agent has to live, and life is nothing else than an endless stream of food choices. They come every day, three times a day, and the next day you will again be confronted with food choices. This is how life looks. And at this moment in time, this agent is confronted with a menu of about 50%, 48%, meat content. The question is: will the agent eat this? That depends on its dietary strategy, on whether this is too much meat or not. If the agent rejects the food, it will of course stay hungry; its happiness level will go down, it will become more unhappy because it stays hungry. But if it eats the meal, then its happiness level will go up; it will be satiated. But this leads to more consumption of meat, and this affects the market. And the market then changes the stream that will approach this agent in the future. So here we have a feedback loop. But if more meat is consumed, then global meat consumption also goes up; the meat industry, for example, will grow. And probably our agents will see some TV documentaries about how these animals are treated and will say: oh no, I shouldn't eat so much meat.
I should probably reduce my meat consumption. So you see the next big feedback loop arising here. And this agent is not alone. It has other social partners, coworkers, neighbors. They have their own strategies, which lead to their own happiness, and of course the agent is influenced by them. So if one of the neighbors is super happy, our agent will copy over, or at least develop a little bit in that direction and try to do the same. So this is the whole model in one picture. Very, very simple. And this is how it looks at runtime. It's a randomized graph of agents, and they interact within a certain radius. As starting conditions, the dietary strategies are randomly distributed in a bell-shaped curve, and all the agents start at 50% happiness, so they are exactly in the middle, half happy, half unhappy. Then we have the same data shown here as a phase diagram, happiness omega here and food strategy on the Y axis, and also a time plot of the global market, how much meat is produced over time. So let's run the model. It runs slowly at the beginning. You see the agents making their first choices. The network is rewiring, so they are not always interacting with the same partner. And you see most of the agents get quite happy; they drift to the happy side. But some guys over here seem to be unhappy and drift to the other corner of the phase plot. Some of them escape; they give up, they say: oh no, no, that's not my club. But after some time, those guys are stuck here, while the others are over here and are all super happy. Also, many of them are half-vegetarians, as you see. So I was very happy to see that, because I saw my hypothesis somehow predicted by this very simple model. And here comes the first take-home message: yes, fundamentalists do emerge, in very small numbers, just as we observe in reality. No, it does not make them happy; that's probably good.
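As an aside, the agent-and-market loop described above can be sketched in a few lines of Python. Only the four variables (M, psi, omega, mu) and the accept/reject logic come from the talk; the update rates, the Gaussian menu draw, and the exact happiness dynamics are my own illustrative assumptions:

```python
import random

def agent_step(psi, omega, mu, eta=0.1):
    """One food choice for one agent.

    psi:   dietary strategy -- maximum meat fraction the agent accepts
    omega: happiness in [0, 1]
    mu:    meat fraction of the market's current food supply
    Returns (updated omega, meat actually eaten).
    """
    M = random.gauss(mu, 0.1)          # meat content of today's menu
    M = min(max(M, 0.0), 1.0)
    if M <= psi:
        # Acceptable meal: eat, get a bit happier (saturating toward 1).
        omega = omega + eta * (1.0 - omega)
        eaten = M
    else:
        # Too much meat: reject it, stay hungry, get a bit unhappier.
        omega = omega - eta * omega
        eaten = 0.0
    return omega, eaten

def market_step(mu, mean_eaten, rho=0.05):
    # The market slowly tracks what is actually consumed (feedback loop).
    return mu + rho * (mean_eaten - mu)
```

Iterating `agent_step` over all agents, then `market_step` on their average consumption, closes the two feedback loops from the slides; the social copying between neighbors would sit on top of this, adjusting each agent's psi toward happier partners.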
They are not totally isolated in the network; that was a surprise. Some fundamentalists give up after some time, as you have seen. Even the moderate vegetarians are super happy. And fundamentalists do not grow after some time: there is an initial phase, but in this model they don't grow in numbers. And here are some snapshots of the final picture, how the world looks. The brightness of these agents indicates their dietary strategies. I picked three isolated ones, and you see they range from bright to dark. So whether the agents adopt this strategy or the other is not a matter of isolation. Even when they are in isolated pairs, they can come out very differently, depending on how things developed over time. And here we see that even when they are closely embedded in society, there are very different pairs that interact very frequently but still have picked different choices and stuck to them. So these were the rough results that we saw at the beginning. And now, if we still have time, some detailed analysis. Not everybody lives in the same situation, in the same configuration. Some people live like this and some other people live like that. The density of the population and the size of the population can be very different, and also the connectedness of the population can be very different. But I have to say: even in those buildings the density can be very high while people are sometimes very isolated, and people can live very remotely here but be very well connected. So these are two different aspects. So I ran the model with these two settings, which led to four different types of runs: a small interaction radius and a doubled interaction radius here in the columns, and a small population and a large population here in the rows. So I get four combinations of these two parameters, and the result looks more or less similar, but we have to look here at this side, where the radius is small.
So if you don't interact with many of your neighbors, you see that more fundamentalists emerge, while if you just spread a little bit wider and interact with a few more neighbors, then obviously the number of fundamentalists goes down. So maybe an important message: you don't have to be super socialized, just a little bit more, and that can probably change the whole society. But maybe it's not just the environment that we live in; maybe it's ourselves, how we start, how we are raised. So you can start like this happy baby here, or you can start like that happy baby here. They start from very different educations from their parents about how they should eat, and they are probably raised in countries which have totally different food markets. There are some countries offering a lot of vegetable food and some countries offering a lot of meat-based food. Both the starting condition of the market and the starting condition of the players in the market might be important, and I can express this easily in the model: just the starting value of psi has to be changed to reflect the first two situations, and the starting value of mu has to be changed to reflect the other two. And this is now the final graph that I'm going to show, and it's an awfully complex graph, which is why I will talk you through it.
So the final run might look like this, the phase plot that you have already seen, and I now split it up. In the graph that you will see in a moment, we have three columns which indicate the market situation: the left column is the vegetable-rich market, this is the meat-rich market, and in the middle is the average market. Each of these graphs has an X axis going from agents which initially eat mostly vegetables to agents that initially eat mostly meat. When we look just at the fundamentalists down here, the lower 10% of the graph, we see that the market has a huge influence: this market here produces many more fundamentalists than that market over there. So the market that is there at the beginning has an influence on that. If we look at the other groups of the population, say most of the population here in the middle, and then the extreme on the other side, we see that the market has little effect, but the starting configurations do have an effect, as you can see, indicated by these two boxes. So whether most of the normal people develop this or that food strategy depends mostly on how the parents educate their children, how they start in the family with the dietary options. And that's an important message too, I think. So to sum this up: yes, connecting to others prevents the emergence of fundamentalism. Population density seems not to have a huge effect on that. Connecting to others makes agents happier, because they can copy over more favorable strategies. Unfortunately, it does not lead to less global meat consumption, so that's a bad thing. Starting preferences have a huge effect towards vegetarianism, and the market starting conditions don't have a huge effect on the majority of the population. So that's bad news, because my main hypothesis is gone, you know. People develop just as I expected, but the market is not really affected by it. So it's not the case that animals are rescued by eating meat; that does not work. But it
is nice to see that education has an effect, and that is maybe one of the most important take-home messages. And if you combine it, you can probably see that education has the first effect, but the market will react afterwards, and then society can probably drive into a configuration which is not fundamentalist but beneficial for the people and beneficial for the animals too. So you see, it is important: society is a super complex substrate, and super complex means there is a lot of room for emergent phenomena that are hard to predict. And not all of these emergent phenomena are welcome; it's not like "hooray" all the time. There might be emergent phenomena which we don't like, which are not good for society, at least as we see it. And it just got more complex, because we have social media and software agents around. For understanding those things we need models, and we need mathematical models, that was clear all the time, but I think we also need animal models. We have to have animal models, laboratory models, also for social phenomena. If you want to bring a drug to the market, you have to test it on cell cultures, on mice, on whatever, before you release it to people. Google, Facebook, Twitter were released without any lab experiment, and we have all seen what happened because of that. It is really urgently needed that we get lab experiments to study such social phenomena as well, and robots are a way of developing such lab models. So, these are my collaborators in these works, and thank you.