Hi everyone, and thank you for coming out, because I know how cold it is. So Alex and I are going to talk a little bit about a project we've been working on for quite a while: Me++ is all about data ethics, biometrics and creating an augmented reality ballet. We've been working together since 2012. We are definitely, well, he says we're not domain tech experts; he is, I'm not. We've done three large-scale data-driven dance performances before, all about computer science theory, pain gateways and biometric data. This one has been a bit more of a challenge due to Covid: we announced back in 2018 that we were going to do this, the technology stack was very different, and we had to overcome, and are still overcoming, some challenges.

Yeah, I just wanted to say, about the tech stack: I'm a programmer, sure, but all the stuff we've picked up along the way we've just been learning for the sake of the project. Suddenly you have to learn a little bit about machine learning, and then a little bit about inverse camera projections, or whatever it is, so don't expect somebody who's really knowledgeable on stage today. We're doing this for the creativity and for the performance.

One of the things is that this is actually part of my PhD research, which is looking at new creative ways of raising awareness of data ethics in the computer science classroom for trainee teachers. Data ethics is not really covered in teaching practice; it used to be, but it isn't any more. We look at things like the new Children's Code, and, for those of you who are interested in the theory side, I have a philosophical framework that draws in things like post-humanism, with Karen Barad and Anushka Bailey. We also talk about Papert's constructionism, which is all to do with computing and education, and if you're interested in information and the ethics of information, that would be Luciano Floridi. I've tweeted out some of this already, and more will be coming through in the next half an hour, so you'll have all of those links if you're interested.

One of the things we have here is the Emotiv brainwave headset. It's a commercial headset; the particular one we use, which you can see on the dancer here, is a five-channel headset, whereas if you were doing this for neuroscience you'd need something like 32 channels. They're not really designed for dancing either: the amount of motion you get on the data, which I'll show you in a minute, makes it pretty much unusable for neuroscience purposes. But it's great for data arts, great for manipulating data sets, because it just exports to a CSV. We haven't quite fully anonymized those data sets yet, so we can't share them, because of the data ethics protocols of my PhD; once Cambridge signs it off, I can share the data sets with you.
For those of you who don't really know much about EEG: the headset is a passive device that captures the electrical activity of the brain, expressed in different frequencies, and a fast Fourier transform is used to process the raw signals. There are four main categories of signal. Beta waves are associated with your alert, conscious state and run from about 14 to 30 Hz. Alpha waves, 7 to 13 Hz, go with being relaxed and calm; if I had mine on now, which I left at home, right by my front door, very sad, it would not be reading relaxed, I can tell you that. Theta waves, 4 to 7 Hz, are often found in young children and we don't necessarily see them in awake adults. And delta is associated with sleep.

So this is what it looks like when you record it. The rows represent the positions of the sensor points; there's a standard called the international 10-20 system for labelling all of those points, so on a really big 32- or 48-channel headset every one of those points has a little label, and that's what it's showing. Hopefully this will run: this is me in a ballet class on Zoom, and as you can see, even though I was just doing barre work, the amount of motion you make with your head creates massive artifacts that you don't normally get in a neuroscience setting.

That links in with the data ethics side of it, with cognitive liberty, which is Dr Nita Farahany's area; she's a professor of law and philosophy at Duke University and has done some amazing research on that whole area since 2008. We also don't really have any frameworks for ed-tech companies taking these devices into education, and it's one of the things that we as educators are quite cautious about. There are a couple of frameworks that have come out or are coming out: the Australian neuroethics framework of 2019, and the IEEE Brain Initiative neuroethics framework, which is probably coming out next year.

For me it's really about how we can think about sharing our data, especially when young people are so datafied in the education sector; this is Sonia Livingstone's kind of work. Every time they go into a classroom they're registered on a system: whether they were there, what their behavior was like, whether they completed their homework. We as educators need to think about who owns that data and what happens with it later on, so that we can educate the students to think about their own personal data. That's the idea of the Me++: there's my human self and my data self, and a lot of the time we're not really aware of the data self and what could be done with that data. Obviously we could go and think about the really dystopian side of it, but let's draw that back and maybe think about the cool things we can do with these data sets. When we're using the Emotiv one it just exports to a CSV, which means we can use it with a Raspberry Pi to light up some amazing lights, you can connect it to stage lighting, you can do data sonification; the world's your oyster, really, because it's just a CSV. Something along the lines of the little sketch below, for example.
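Purely as an illustration of that last point, here is a minimal, hypothetical sketch in Python. The column name, the 128 Hz sampling rate and the file name are assumptions rather than the real Emotiv export format, and the real recordings are not being shared yet; it just shows the idea of taking a CSV of raw samples, running a fast Fourier transform, and turning the alpha band into a value you could send to a light.

```python
# Hypothetical sketch: estimate alpha-band power from an EEG CSV export and
# turn it into a 0-1 "brightness" that could drive a stage light or an LED.
# The column name "AF3", the 128 Hz sampling rate and the file name are all
# assumptions, not the actual export format used in the project.
import numpy as np
import pandas as pd

FS = 128                        # assumed sampling rate in Hz
ALPHA_LOW, ALPHA_HIGH = 7, 13   # alpha band, 7 to 13 Hz

df = pd.read_csv("session.csv")
signal = df["AF3"].to_numpy()

# Fast Fourier transform of the raw signal
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)

# Mean power inside the alpha band, normalized against the whole spectrum
alpha = power[(freqs >= ALPHA_LOW) & (freqs <= ALPHA_HIGH)].mean()
brightness = float(np.clip(alpha / power.mean(), 0.0, 1.0))

print(f"alpha 'relaxation' level: {brightness:.2f}")  # 0 = busy, 1 = very calm
```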
So one of the things that I've been playing with is how to humanize these data sets. This is a really fun website, Boring Avatars: you type in your name, and changing the capitalization changes the image; you can have it in different styles. It's just an image, and there's a GitHub repo if you want to repurpose it for yourself. My co-collaborators, and I'm not really going to call them participants, because they inform exactly what I do and how much information we transfer and make with the project, each created their own little boring avatar, and that's how I associate the data with them when I'm talking about each of them. So feel free to have a play with that, because it's quite fun. And now I'm going to pass over to the actual fun bit, which is Alex and all of the mocap stuff that we've done. So, Alex, over to you.

Thank you. This is a screenshot from OpenPose, which does two-dimensional pose recognition from an RGB camera, just an ordinary webcam like the one over there. In the top left there's a screenshot from MocapNET, which converts those points from 2D to 3D, so they're effectively posed in a 3D space. And this rather uncomfortable avatar here is us using that data to create an augmented reality model that you can watch perform and move around the stage. As part of all this we've developed quite an unusual tech stack, because most motion capture, certainly 3D motion capture, doesn't happen with an ordinary two-dimensional webcam, and this is hard, new technology. Because of the Covid regulations we couldn't actually meet in person, so we couldn't motion capture the dancers how we would have done normally; we had to find another way, and that took longer than expected. So that's the tech stack, and that was the reason, and also why I haven't checked my slides.

So yeah: that's an ordinary video; that's 2D points, here is my elbow, here's my arm; that's 3D points; and that's rotations. We don't store the data as points any more when we create an animated skeleton of the people, we just store things like the elbow is at 90 degrees, or 45 degrees. We create a model, which is usually done in MakeHuman and exported to Blender, and then we're using Unity to combine all this stuff together, to make it interactive and to be able to watch it on a mobile device.

A lot of this is based on machine learning, using TensorFlow. Basically it mimics how a human would recognize a particular pattern: if you saw a set of dots, dot dot dot, and they were marked as an arm, you'd realize it was a hand outstretched fully to the side. A computer doesn't have that intuition, but you can train it, and that's the principle we've been piggybacking on in order to develop a lot of this. That's an example of inside Blender: the outside is the representation, the 3D model, and the inside is the skeleton that the rotations and the animations are applied to. We use a specific model that MocapNET exports, the CMU body and face model, which is very detailed, you can see it's got a face and hands, it's got points for all those parts of the body, much more detailed than a computer game rig, which is kind of what Unity is expecting. So we have had to write our own custom BVH animation software, and that took a long time because, like I said, I'm not a domain expert, so I had to learn a lot about quaternions and global versus local rotation spaces, and it generally blew my brain just trying to get it all in; roughly the idea sketched below.
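To make the "rotations, not points" idea concrete, here is a toy sketch, not the actual MocapNET or Unity code, of a two-bone arm stored as per-joint local rotations, with the local rotations composed into global ones as you walk down the chain. The joint names and bone lengths are made up for illustration.

```python
# Toy version of what a BVH-style skeleton stores: a local rotation per joint
# (relative to its parent) plus fixed bone lengths, rather than 3D points.
import numpy as np
from scipy.spatial.transform import Rotation as R

local_rotation = {
    "shoulder": R.from_euler("xyz", [0, 0, 45], degrees=True),
    "elbow":    R.from_euler("xyz", [0, 0, 90], degrees=True),  # "the elbow is at 90 degrees"
}
bone_length = {"shoulder": 0.30, "elbow": 0.25}  # metres, invented for the example

# Walking down the chain: compose each local rotation onto the parent's global
# rotation, then move along the rotated bone to find the end of that bone.
global_rotation = R.identity()
position = np.zeros(3)
for joint in ["shoulder", "elbow"]:
    global_rotation = global_rotation * local_rotation[joint]   # global = parent * local
    position = position + global_rotation.apply([bone_length[joint], 0, 0])
    print(f"{joint}: end of bone at {np.round(position, 3)}")
```

Getting that composition order wrong, or applying a rotation in the wrong space, is exactly the sort of mistake that produces the broken-looking test data mentioned in a moment.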
That's an Amiga in the photo, by the way; the Amiga is not running any of this. And the idea is that by building this in Unity it can be deployed to a mobile phone or a tablet or something, so you can see something like this, where you are empowered to move around the scene and explore the ballet from whichever angle you want, which adds a layer of immersion and a sort of personal depth that you can't really experience from watching TV.

The challenge is that MocapNET was written by academics, not games software engineers, so things don't necessarily match up, and that's one of the challenges we've come across. We had one bit where we were testing the data and the legs were here while, within the 3D space, the torso was five miles away. So there's some really funny and also some very scary test data; we'll show you a little bit of the least scary stuff we've created later. It's not exactly how we wanted it to work out, so we're still working on it, but this is the challenge when you're trying to motion capture via basically things like Zoom, or recordings on your mobile phone.

We tested the RGB video data at 30 frames and at 60 frames a second, because dance has a lot of rotation and you need that kind of frame rate to capture a lot of the data; as you can see, it's definitely a duct-tape process. Motion blur is the big issue: if you're sending a blurred frame to what is basically a very simple artificial "human" that has to figure out what's going on in all that blur, is it going to recognize an arm? Maybe not. So keeping your capture rate as fast as possible minimizes the amount of motion blur, which is a real consideration.

Something else that we did, which has suddenly crept in on this slide, is YOLOv5. YOLO stands for "you only look once", and it's another layer of neural networks that detects objects, so we just say: give us where the dancer is and cut them out. The reason for doing this is that a lot of this processing is very CPU or GPU intensive, so normally you want to reduce the RGB video to a really low resolution so it can be processed in a reasonable time. But if you just do that to your high-resolution video capture, you end up with the dancer as a few pixels on the screen, which is no good when you reach the OpenPose stage, because it just says: well, all I got was a few pixels. So what we do is use YOLO to extract the bit we're interested in, the dancer, and expand that out to full size before we pass it to OpenPose, so that it has a more sensible resolution to work with; roughly the step sketched below.
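As a rough sketch of that cropping step, assuming the publicly available YOLOv5 weights from torch.hub rather than the project's actual scripts, and with made-up file names and an illustrative target resolution:

```python
# Rough sketch of "find the dancer, cut them out, blow the crop back up"
# before handing the frame to OpenPose. Uses the public YOLOv5 weights via
# torch.hub; file names and the target resolution are illustrative only.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # pretrained on COCO

frame = cv2.imread("frame_0001.png")                       # one frame of the dance video
results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
boxes = results.xyxy[0]                                    # [x1, y1, x2, y2, conf, class]

people = boxes[boxes[:, 5] == 0]                           # COCO class 0 is "person"
if len(people):
    x1, y1, x2, y2 = people[people[:, 4].argmax(), :4].int().tolist()
    dancer = frame[y1:y2, x1:x2]

    # Upscale the crop so the pose estimator gets a sensible resolution
    dancer = cv2.resize(dancer, (656, 368), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite("dancer_0001.png", dancer)
```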
Also, most of these motion tracking models and algorithms, like OpenPose or MocapNET, have been trained not on ballet dancers, who don't move in a "normal", everyday way. Take articulation of the ankles: in ballet your feet are in the "wrong" position for a start, nobody normally stands like this, and then there's actually going up on pointe. The models aren't trained on that, so they assume that no matter what you're doing, whether you're jumping or whatever, your ankle is always going to be at least slightly flexed. And when a ballet dancer jumps they tend to have very straight legs held close together, which is not naturally how we jump. So the training data goes, okay, okay, where's the arm, oh, there it is, and you end up with mistakes, especially if a limb is above shoulder height, because it's not expecting your leg to be up here. We've had legs coming out of heads, and arms lost for several frames that eventually pop back up again somewhere, which is funny, and brilliant for research, but terrible for trying to create an augmented reality ballet. We'll get there in the end. Okay, next slide.

So, if you're uncomfortable with unusual human movement, you might want to shut your eyes for 20 seconds: the limbs are not attached where they should be, or in the right orientation, so just warning you. The skin of the avatar was chosen by the performer whose data this is; this is what they wanted to look like. They didn't really want to look like this distorted version in the test data, but, you know, the aesthetic, shall we say. As you can see it's not quite working yet, but this is, technically, the first ballet performance we've displayed live in this way.

So, as you can see, this is highly complex and highly technical. It needs an understanding of machine learning algorithms, it requires a lot of installed software, and it's beyond me, but it's also beyond a lot of our computer science capabilities in the classroom generally, to do with networks, and a lot of the programming languages might not be familiar to computer science teachers. So I use something called p5.js, I don't know if you've heard of it, but you can edit it online, it's all web-enabled, and a machine learning library called ml5, which is based on PoseNet. So, Alex, you're going to dance; bear with me a second while I get this back.

Here we have a couple of the sketches that we're going to show you live, and never do a live demo, but anyway. I've tweeted these out, so you should be able to get the links, and I can share them later. There's one which is just a face mesh, and these work on mobile phones as well, which is really cool. The next one is with filters, and this is really talking about the Me++: it's got an offset of the skeleton tracking, it's got a mirror of me, this is me in my ballet lessons, no judgments, and then black and white, so we're using all the ideas of the filters. So first of all I'm just going to load these up. All of this code is available, and free, and there are a million and one training videos on the Coding Train. It takes a second, obviously. Here we are: here's me and Alex having a chat, and I'm just going to check, okay, and then I'm going to move it slowly so it can see Alex; it'll take a second or two, and you can see it's picking up the dots here. What you can also see is that there are no feet and no full hands, so for ballet that doesn't quite work, but from a teacher's perspective it's a really awesome tool. The code isn't too complicated if you're used to JavaScript, but I've also done this with people who've never looked at JavaScript before. What we do is go through and talk about, sorry, I need to get my stick, things like how the code's functions work, and if you go onto the ml5 site you can actually see how they do that. The main thing we look at, from the teaching perspective, is how we actually position these points on the screen: we're just taking a circle and doing a fill on it, and then we're drawing a line between the points, which is actually how it works.
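Just to show that idea outside the live demo, here is the same circles-and-lines drawing redone in Python with made-up keypoints; it is not the p5.js/ml5 code we use in class, which lives in the web editor, and the coordinates below are invented for illustration.

```python
# What the classroom sketch boils down to, redone in Python with made-up
# keypoints instead of the live ml5.js/PoseNet output: a filled circle per
# keypoint, then a line per "bone" connecting two keypoints.
import matplotlib.pyplot as plt

# Hypothetical keypoints in image coordinates. PoseNet gives around 17 of
# these, with no feet and no full hands, which is the limitation mentioned above.
keypoints = {
    "nose": (200, 60), "left_shoulder": (160, 120), "right_shoulder": (240, 120),
    "left_elbow": (120, 180), "right_elbow": (280, 180),
    "left_wrist": (100, 240), "right_wrist": (300, 240),
    "left_hip": (180, 250), "right_hip": (220, 250),
}
skeleton = [("left_shoulder", "right_shoulder"), ("left_shoulder", "left_elbow"),
            ("left_elbow", "left_wrist"), ("right_shoulder", "right_elbow"),
            ("right_elbow", "right_wrist"), ("left_shoulder", "left_hip"),
            ("right_shoulder", "right_hip"), ("left_hip", "right_hip")]

for x, y in keypoints.values():
    plt.scatter(x, y, s=80)                      # the "circle with a fill on it"
for a, b in skeleton:
    (x1, y1), (x2, y2) = keypoints[a], keypoints[b]
    plt.plot([x1, x2], [y1, y2])                 # the line between two points
plt.gca().invert_yaxis()                         # image coordinates: y grows downwards
plt.show()
```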
Then we can get them to play around with a lot of the different filters as well, so we're linking in with all of those other different aspects of the computing classroom, and you can draw data into these as well. It's a really lovely platform to use, because it doesn't need any installation, it just runs off the web. These are the PoseNet sketches if you wish to have a go yourselves, and then I'll just go back to my slides.

It's also kind of like how the OpenPose stage of the main pipeline we developed works, so it's a good way to get into that and to understand how much data you can get that looks like a human being, but in two dimensions.

So this one has the x-ray threshold on it, and different colors and things like that; there are different ways that you can do it. One of the challenges that we can set, either talking to you or in the classroom, is bringing in a little bit about what machine learning is and what ethical machine learning is: the fact that it makes assumptions about body types, that you do have two arms and two legs, and that it's not trained on ballet movement, so it's expecting very normal movement, maybe running, maybe a little bit of dab dancing; they've done some really cool stuff with breakdancing, but it still doesn't work with the way that our ballet dancer moves. And then the challenge, if you look at the dystopian side, is where facial recognition and body tracking are being used, because you've got to remember that your brainwave data, your brainwave fingerprint, is unique to you, the same as your gait. So there are a lot of implications on the negative, ethical side, in terms of how this might be used by, say, the police or the military. And then obviously there's the deepfake side.

If you're not familiar with deepfaking at this point, it's basically being able to represent somebody doing something they wouldn't normally be doing, using an ML platform to create their representation. What we're doing here is just replacing a fairly ordinary looking computer model and animating it with natural human motion, but it could be used to represent people in different ways, and I've not encountered a use for that, particularly.

This is a rant, and I'm getting it in now because I've been trying to debug stuff using some of these tools, which I will not name. Because of the weird open source, not quite open source, academic policies, some things are like this: they've published a paper, they've published the results as open source, but you can't actually get into the process of developing your own models, because some parts are kept hidden by university policy. You can go and talk to the people who developed it, and they'll say, I really want to share this with you, but institute policy; and it is such a pain. That's my rant over.

So, what we can also do with the data is data sonification. You can do this with the brainwave data, or you can do it with the skeleton tracking. In teaching we use Sonic Pi, but you can do it in Python, and there are lots of different ways that you can represent the data in terms of emotion and so on, for example along the lines of the little sketch below.
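We use Sonic Pi in class; purely as a Python illustration, here is one very small way to sonify a single column of data by mapping each value onto a pitch and writing the result to a WAV file. The input file here is hypothetical, not the project's data.

```python
# Tiny data-sonification sketch: map one column of values (say an EEG channel,
# or the height of a tracked wrist) onto pitches and write a WAV file.
# In class we use Sonic Pi; this is just the same idea in Python, and the
# input file is hypothetical.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
values = np.loadtxt("channel.csv", delimiter=",")   # one value per line

# Normalize to 0-1, then map onto a pitch range of 220 to 880 Hz
spread = np.ptp(values) if np.ptp(values) > 0 else 1.0
pitches = 220 + (values - values.min()) / spread * 660

# One short sine tone per value, joined end to end
t = np.arange(int(0.1 * SAMPLE_RATE)) / SAMPLE_RATE
audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in pitches])

wavfile.write("sonified.wav", SAMPLE_RATE, (audio * 32767).astype(np.int16))
```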
We will share the code, but again, the data hasn't been fully anonymized yet for us to share that with you. It's a really nice tool, and I'm a massive fan of Sonic Pi, and obviously of p5.js.

So, just to wrap up: we really just wanted to share with you that in the computing classroom you can bring in really complex subjects such as data ethics, and it can be really fun, in terms of how we can look at data manipulation, and things like the hidden nature of biometric data, through skeleton tracking and your beautiful brainwaves. Thank you very much, and thank you for listening.