imagination activity, imagine that you could take all of the experiences you have made at this Congress home with you and share them with your friends and your family, everything. The first time you tried mate, the endless moments of going through the labyrinth of this Congress hall and getting lost for the hundredth time, trying to enter the club downstairs and feeling like you could cut the smoke with a knife, or being very surprised to see Snowden appear on stage. All of these wonderful memories, imagine you could record them and take them home with you. Today this kind of recording is actually possible; there is technology that could theoretically record everything 24/7. Kai Kunze will give us an overview of sensors and actuators and tell us what the future brings for augmented and virtual reality. He has experience with projects in this area and will give us an overview of how we push the boundaries of shared human experience today. Kai is a scientist in wearable computing and is also a founder of the Japanese Superhuman Sports Society. Thank you very much. Kai Kunze, enjoy. Thank you very much for the great introduction, and thank you for waking up early in the morning. It's always a special occasion for me to talk at the Chaos Communication Congress, and for the next 30 minutes I won't waste your time. The presentation will be about superhuman sports and expanding the human senses. As the introduction said, devices like Google Cardboard are at the beginning of general availability, and we are on the brink of consumer availability.
My grandparents have already tried the HoloLens. I always like to show pictures of my grandparents, because they try everything that I give them and then give me valuable feedback. But I have a big disclaimer to start with: I will disappoint you in terms of the introduction. Don't trust me when it comes to predictions. I have worked in this field for 14 years, and in 2004 I thought that today we would all run around like this. And I see that not one of you looks exactly like that, so I'm not so good at predicting. That was me at the time; I had a one-hand keyboard, and back then we didn't have smartphones, so we had to carry our computers around differently. And a short time ago I thought that Google Glass would be the next big trend. So don't trust my predictions. But you are probably wondering why you got up so early this morning and what I will tell you today. The hot topic in Japan is superhuman sports. The idea is to use technology to expand human abilities beyond what we can normally do. Since we are in Japan, we have of course founded a society, the Superhuman Sports Society, and I am of course one of the members. The idea came from Inami Sensei and Rekimoto Sensei and Nakamura. Normally I don't like to just list names, but if you are interested in human augmentation and virtual reality, these are the people to get in contact with. They are my personal favorites. Now we have a society that we are always trying to expand; for a few months we have also been working with TU Delft. I don't want to go further into that, but I would like to present another person, Robert Riener. Robert Riener saved 2016 for me. The year started with Brexit and Trump, but Robert organized the CYBATHLON in Zurich. I wonder how many of you have heard of the CYBATHLON? Oh, some. Nice. Has anybody been there? Yeah, even better. One was there. My favorite audience.
That was an international competition for people with disabilities who use exoskeletons and other extensions. It was great, and that's what I want to discuss with you today. Superhuman sports are extensions of our bodies and extensions of the field where you play. I will show you examples from scientists in the society. First of all, we expand our bodies. There is a lot of mechanical work here; it's not so exciting for us computer scientists, but it looks cool, and that's why I'm showing it. These are exoskeletons; they fabricate these exoskeletons and they are pretty cool. Another thing is the Bubble Jumper, which came out of a hackathon. It started with a first drawing, but the important thing about our hackathons is that you have to demonstrate it, you have to use it. You see the combination of a bubble ball and jumping stilts to make a very safe and fun sport, the coexistence of safety and power. I'm not so sure about the safety. And now, moving on to something a little more serious. This is work from Inami Sensei, and in this case it is really about expanding human vision. They work with an Oculus Rift DK1 and two cameras, one at the front and one at the back, and they overlap the two video feeds. As you can see here, at the beginning it looks really terrible, very scary. But after two or three minutes you are able to separate the two video feeds from each other, so you can consciously perceive your environment in both directions. The only problem is that, because you are wearing a display, there is an offset from the camera to the eye, and that makes it very difficult to interact with the environment. The next work I want to show you is the Synesthesia Suit, a haptic feedback suit. It was first made to bring a virtual-reality game to the PlayStation. What you see on the right side gives haptic feedback on the whole body.
So you can get a feeling sensation on your whole body. The idea is that you get visual, auditory and haptic feedback at the same time, and then you can get into a kind of trance. If you try this suit, it's a great experience; you get all kinds of input at once. Then it goes on with the expansion of sports. Now I'll show you a work by Rekimoto Sensei. You see a waterfall; you can swim in this tank here, but the swimmer or diver has 3D goggles on, and you can really dive in the Great Barrier Reef and see sharks and similar things. Next, it continues with training. I'll show you a slightly older work here; you may know the effect, but I think it's cool to show it again. This is galvanic vestibular stimulation. What they did in 2004 was apply a small current behind the ear, and this small current disturbs your sense of balance, so you lean towards the anode. The reason I'm showing this is that you can't easily do it in Europe or the USA. You see the subject here and the professor with the remote control, and he simply steered the person around. It's not clear yet that it's safe; we don't know exactly what the long-term effects are. It also depends on the user and the skin; some need more or less current. But it's an interesting thought, especially for sports where you need balance. Another work, by one of my students: glasses for peripheral vision. The idea is that you can get hints in your peripheral vision while I keep looking into your eyes. Because if you glance at your watch or so, the other person notices that you are not looking into their eyes. He started with this idea, and even if it sounds boring, it works for alarms or notifications. You can have eight different notifications, and the other person doesn't see that you are getting them. But where it really gets interesting is that he would like to influence movement.
Through the peripheral vision. There is also other research, by FUKAWA: at the Augmented Human conference they projected lines on the ground to steer the feet. Takuro asked whether it would be possible to do this with peripheral-vision glasses. This is a first test; it's not yet published, but I wanted to show you what he does. He uses a few lines that move like this while people walk. Think about what happens when the lines move faster. What happens? People walk slower. So it's possible to influence walking speed, and he also wants to try to influence direction. The next thing is the expansion of the face: facial expression recognition. You can do this in a simple way. You can see here the self-made glasses that he built. It works with photo-reflective sensors in the frame, which measure the distance from the sensor to the skin. With this you can recognize, in a user-dependent way, which expression the user is making right now. Here you can see the photo-reflective sensors and the distances they measure. Before it starts, he has to calibrate the sensors. For example, if you smile, your cheeks push the glasses up, and you can see that this is recognizable. Eight different facial expressions can be recognized. There is a normalization phase and a training phase. It is user-dependent, but it is very cheap, and above all, smiling is easy to detect: you just need one photo-reflector and maybe another one for control. So what you can do with this is install it in a VR headset, and then my VR character and the other participants can see my mood. Why doesn't it work here? Yeah, but I think you understand how it works. What is also interesting is that you can use this for science: in a CAVE you can see how people react to elements in a VR environment. And you can also use it for day-to-day activities.
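The pipeline described above (calibrate a neutral baseline, then recognize expressions in a user-dependent way from the sensor distances) can be sketched in a few lines. This is a hypothetical illustration, not the actual AffectiveWear code: the sensor count, distance values, expression labels and the nearest-neighbour classifier are all assumptions made for the example.

```python
# Hypothetical sketch: user-dependent expression recognition from
# photo-reflective distance sensors in a glasses frame. All values,
# labels and the classifier choice are invented for illustration.
import math

def normalize(sample, baseline):
    """Subtract the user's neutral-face baseline (the calibration step)."""
    return [s - b for s, b in zip(sample, baseline)]

def classify(sample, templates):
    """1-nearest-neighbour over per-user expression templates."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(sample, templates[label]))

# Calibration: a neutral baseline plus one template per expression,
# each a vector of skin-to-sensor distances (arbitrary units).
baseline = [10.0, 10.0, 10.0, 10.0]
templates = {
    "neutral": normalize([10.0, 10.0, 10.0, 10.0], baseline),
    "smile":   normalize([7.0, 8.0, 12.0, 11.0], baseline),  # cheeks push the frame
    "frown":   normalize([12.0, 11.0, 8.0, 9.0], baseline),
}

reading = normalize([7.5, 8.2, 11.5, 10.8], baseline)
print(classify(reading, templates))  # prints: smile
```

Because everything is relative to the per-user baseline, the same templates do not transfer between users, which matches the "user-dependent but very cheap" trade-off mentioned in the talk.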
You can also see what kind of activities you are doing, how much you are laughing and how often you are angry. And that is interesting in connection with the work of a colleague, who invented the happiness counter. In the morning, when your alarm goes off, you don't just turn it off, you have to smile at the alarm. That is a bit... creepy, or a bit crazy, right? The next thing is a little bit funny. If you don't smile enough, we can also make you smile. This is activated via electrodes, and as I said, it is a bit of a joke; I really don't recommend that you build this kind of technology. But students have played with EMS, electrical muscle stimulation. Most of the muscles we need to control the face can be reached from this side of the face, so you place electrodes on this part of the face and can activate the muscles. The students wanted to make me smile, but when they tried it out on me, it made me blink first; you can also trigger blinking with it. That's a bit of a joke. For more serious muscle stimulation, I refer you to Max Pfeiffer and his colleagues. They have a kit, if you want to build your own EMS setup yourself, and the code is on Bitbucket and GitHub under this QR code. And this is also just cool: movement control for pedestrians. You can use your body as an output device for computers. Now I'm coming back to the Superhuman Sports Society. The idea is to work towards the Olympics 2020 in Tokyo. We want to develop further sports and games, not just for the people who build these technologies, but also for the society around them. We organize workshops; this is an example of a workshop from the beginning of the year. You get together scientists, artists and local people who are simply interested, and you try out different technologies.
And the scientific output: you see the output of all four groups here. At the end we try to create these games together. Here are some pictures from the workshops. The idea here is, first of all, as I said before, that we want to create a proof of concept, to show that the concept works, and we do that by really operating and demonstrating the sport. Only talking about it so far didn't really work for me. And maybe in the future we'll try to create these expanded games at a larger scale. Now you might wonder, looking at these slides, what the purpose of this is and why I am interested in it. If you've heard my previous lecture two years ago, you know I'm interested in cognitive augmentation, and I concentrate on eyewear computing; this is my background. I worked with J!NS on glasses with sensors that can detect eye movement. And since December I have good news: I have support from the Japanese government, from JST, to establish a community open eyewear platform, and I'm looking for collaborators so that we can start now. I think it's very important that we start now. I don't want to get into a situation like the one we have today on mobile phones, where two providers tell us what we can do on our own devices. Eyewear, for me, is the next big platform in the long run, and I really want it to be an open platform. That's my background. For me, superhuman sports are a great place to experiment, not only with sensors, but also with expanding the human experience. Let me go a little deeper. The idea is that digital sensors are, in some respects, much better than human senses: they can achieve higher frame rates and cover a wider spectrum. What is missing on our side is the interface. Can we create new, expanded senses based on digital technologies? Think of the digital camera as an analogy. I want to show you a short example: if I were standing here, there's a sign in the back and I want to read the sign.
I cannot zoom in from over there. What do I do? I squint. With my normal eyesight, if I can't read something, I squint. So if I have smart glasses and squint, they could magnify what I want to read. Unfortunately, a word about technology: I wanted to show you a demo on the HoloLens, but I just couldn't get it to work on the HoloLens. I am happy that I have a student who was able to prepare a demo on a laptop. It gives you a feeling of how it works; in this case it is only on the laptop, but you can imagine the same in a headset. I use an eye tracker to track my eyes, and a camera with OpenCV to track my face. If I squint, for example, I can zoom in on a document: if I squint, it zooms in, and if I relax my eyes, it stops. That is a very simple type of interaction. I had hoped to show it to you on an AR system, but I will show it on the laptop with a desktop eye tracker. Let me show you briefly how it works. As I said, it is an OpenCV system; I think George uses the size of the eye opening relative to the face. If I squint, this ratio changes and it zooms; if I open my eyes again, it stops. This just gives you a small impression of what I am trying to do here: technology that is so easy to use that you do not realize you are using it, like your analog glasses. And not just for vision, but also for hearing. With that I come to the end of my lecture. There are a few obligatory slides: these are my colleagues who helped me with the research. I have a very special slide for the students who have done much of the actual work, especially George, who just made the demo, and Masai for AffectiveWear. This is the end of my talk; now it is time for questions. Thank you for your attention. You have a chance to talk to Kai right now and ask him questions. There are four microphones in the room; put yourself behind one of those microphones if you have a question.
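The squint-to-zoom interaction described in the demo can be sketched as a tiny per-frame state update: an eye tracker or OpenCV pipeline delivers an eye-openness ratio, and squinting below a threshold ramps the zoom up, while relaxing holds it. This is not the student's actual demo code; the threshold, step size and openness values are invented for illustration.

```python
# Hypothetical sketch of squint-to-zoom: each frame we receive an
# eye-openness ratio (e.g. eye height relative to face size) and
# update the zoom level. Thresholds and values are assumptions.

SQUINT_THRESHOLD = 0.20   # below this ratio we treat the eyes as squinting
ZOOM_STEP = 0.1           # zoom increment per squinting frame
MAX_ZOOM = 4.0            # cap so the magnification stays usable

def update_zoom(zoom, openness):
    """Advance the zoom level for one frame of eye-openness data."""
    if openness < SQUINT_THRESHOLD:          # squinting: zoom in further
        return min(zoom + ZOOM_STEP, MAX_ZOOM)
    return zoom                               # relaxed: hold the current zoom

zoom = 1.0
for openness in [0.30, 0.18, 0.15, 0.17, 0.30]:  # squint for three frames
    zoom = update_zoom(zoom, openness)
print(round(zoom, 1))  # prints: 1.3
```

A real implementation would smooth the openness signal over a few frames so that blinks are not mistaken for squinting, which matches the talk's point that the ratio, not a single frame, drives the interaction.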
Also the internet can ask questions, if you are already awake. Does the internet have any questions? Yes, one question: is the code for eye tracking available? Which code? For eye tracking I can recommend Pupil Labs. The eye tracker I use here is a Tobii EyeX; they have an API, and it is a desktop eye tracker. But if you want a mobile eye tracker, I would recommend Pupil Labs. It is open source; only the latest version of their hardware is closed source. The code is all on GitHub and it is really easy to adapt. They also have add-ons for VR headsets. You can build your own cameras and use their code if you don't want it to be too expensive. The Tobii EyeX is fine, but I don't like it that much because it is closed source: you can't access the raw images, so if you want to measure something yourself, you can't. I have a small question. How open are you to working with other universities on your technology and your research? I work in Genoa; we use a lot of motion tracking, and eye tracking would be very interesting for us. We also use a lot of facial recognition. How interested are you in working together, and how open are you to it being used for science? We are really open. I even have a little bit of money to pay for it, and I really want to find people to work with. Especially when you look at how much is being invested in Silicon Valley: the next thing we will see is eye trackers in VR headsets, and you will see how big that becomes. I would really like to work with you; meet me after the lecture or send me an e-mail. It would be great if we could stay in touch. Thank you for the lecture. How important would it be to combine eye signals and similar external signals with brain waves? It is important to combine them. I have also looked at EEG signals, electroencephalograms, but it is difficult to get a good signal.
Brain-computer interfaces take a while to set up, and the signal processing is also very difficult. I am lazy, so I started with the eyes; the eyes give you a lot of the information that you would want from the brain. But I would like to see a combination. For EEG it is important to find the right type of sensors so that it works in real life. For example, in the JST project, what we really want to do is find simpler sensors that give you some information about brain activity. It is a combination of different sensors. We have time for one more brief question. Number four, please. When I was wearing AR glasses, my brain adapted to them, but the perceived distance between the glasses and the object I wanted to touch was off. Is there still this problem? Yes, there is still this problem with depth perception, and I think you will always have it to some degree. Augmented reality like the HoloLens is pretty good, but you can't reliably touch things there, because the perceived distance to the object is off. I haven't seen anything that is really tangible and believable, where you have the right distance. You can rely on your brain adapting, which usually happens, or you can use other tricks, but I don't know of a full solution. That's the end of the lecture. Thank you very much, Kai.