Thanks for coming, everybody. I'm fortunate enough to have about a year at POV to experiment, build things, and try different approaches, so I'm going to show off some of what I've built and ask for your feedback. There are a couple of themes you'll see throughout the work. One is trying to take the expressive, aesthetic power we have with traditional media, whether through software tools like After Effects and editing tools, all the work you put into something offline to make it truly polished and expressive and get the artist's voice out there, and bring that same level of ability to interactive work, mostly on the web. My first couple of projects speak to that. The other theme is meeting the audience where they live, whether that's adapting to the device they're on, the time commitment they're prepared for, or the level of experience they may have with whatever the interactive components are. And finally, I'm interested in finding an analog to the conventions traditional media use to orient the audience, to let them know where they are in space and time. We don't quite have that yet in interactive, so I'm trying to develop some of that as well. The first project I'll show is one I've been working on since before POV. It's a few years old, but I've integrated it into some of my work at POV. It's a JavaScript library called Seriously.js, which aims to be a web equivalent of something like After Effects: it does video post-processing and effects. It starts with this basic library; this is just a color test pattern with different effects we can apply. So this is an example of a bleach bypass, right?
You can turn it up and down, and you can do things like an exposure adjustment: simple effects like that. I was trying to get the camera working here, but the camera is not working, so we'll show that off later. It gets more interesting when you start to apply it to video and combine the effects. So this is an example: a music video put out by the band OK Go. The actual video coming into the browser looks like this; this is what gets downloaded from the network. The band put out their raw green-screen footage, and I can process it in the browser. By combining different effects, I can create these different moods. What this is doing is all node-based. If you've ever seen software like Quartz Composer or Nuke, you create a graph of nodes, where each node is either a media source, some kind of effect, or an output point, and you can combine and chain these nodes. So this starts with video of the band doing things in front of a green screen. First it runs through a chroma key filter, then we composite that on top of this background, then convert it to black and white, and then run this TV glitch effect on it. Each of those is a separate effect, and they run one on top of the other. And if I change over here to a different look, it's similar: it takes that same chroma key node, puts it on top of a different background, and runs a different color effect, a kind of night-vision effect with some scan lines and some vignetting around the edges. When I go between them, there's that Star Wars wipe effect. As it's doing that, it's running the two different chains of effects in parallel and combining them at the end.
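The chaining idea can be sketched as function composition. This is a conceptual illustration of the node-graph model, not the actual Seriously.js API; the effect names mirror the demo, but their bodies are stand-ins that just tag the frame so the chaining order is visible.

```javascript
// Each node is either a media source, an effect, or an output; effects
// chain so the output of one feeds the input of the next.
function chain(...effects) {
  return frame => effects.reduce((img, effect) => effect(img), frame);
}

// Hypothetical stand-in effects (real ones would transform pixels on the GPU):
const chromaKey = img => img + ' > chromaKey';
const composite = img => img + ' > composite';
const grayscale = img => img + ' > grayscale';
const tvGlitch  = img => img + ' > tvGlitch';

// The night-vision look would reuse the same chromaKey node with a different
// tail of effects; both chains can run in parallel for the wipe transition.
const pipeline = chain(chromaKey, composite, grayscale, tvGlitch);
```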
This uses a technology called WebGL, which gives access to the graphics processor. That allows you to do a lot of advanced things in parallel, and it gives you the ability to basically edit high-definition video at 60 frames per second, in real time, in the browser. The idea is that rather than having to render this stuff offline (and if you can render it offline, you may as well), you can render it as it goes and react to what the user is doing, or bring in new data, new video clips, or new images right up to the second the person is watching. It's mostly experimental; it's a cool technology, and we're continuing to see what uses it has. So I'll show you another experiment we built with it at POV, which is these transition effects. This is running a video clip, again, all in a browser, and I can switch between one video and another and it runs this flash. Well, this should be playing. So it runs this flash effect; the video paused because of network issues, but you sometimes see these flash transitions on, I think, ESPN and reality shows and things like that. What this does is switch between the videos whenever the user wants to switch, or you can pre-program that based on preference. Say you want a short version of your video and a long version, and it adapts to what the user is doing: maybe you say, if they're on a mobile device, we show the short version, because it's likely they're on the go.
Or if they're on a desktop or a giant screen, maybe we show the long version. You splice it as you go, building in these transitions, and you can have the same level of expressiveness somebody would get doing this offline in Final Cut or After Effects, but online. There are a couple of other transitions built in here. There's the whip pan, which I like, and of course the channel change, which is always my favorite. The channel change uses the same TV glitch effect from the other demo. The flash, again, composites multiple effects: you start with the video you're transitioning from, ramp up a blur and an exposure over the course of maybe a quarter or an eighth of a second, then switch to the other video, and then bring the exposure and the blur back down to zero. It's the same kind of thing you would do in traditional editing, but we do it in code, which makes it much more adaptable. So that's a cool toy. The next one comes back to adapting to the user. This came from my frustration with the experience of watching video in a browser, and especially on a mobile device. We've all seen somebody who wants to show you a video they took on their iPhone, and they hold it vertically, and it's got these giant black bars at the top and the bottom. I'm one of those people who always has to grab the phone, rotate it, and say, please, just hold it right and fill up the screen. It drives me nuts. Or even if you're doing a web experience, and you have a video you want to fill the screen, the question of what to do with it is tricky: you either have black bars on the top and bottom or the sides, or you cut off chunks of the video.
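The flash transition described above can be sketched as a simple parameter envelope. This is a hedged illustration of the approach, not the project's actual code: the function names, timing, and exposure multiplier are my own assumptions.

```javascript
// "Flash" transition envelope, assuming a 0.25 s total duration: ramp blur
// and exposure up for the first half, swap video sources at the peak, then
// ramp back down. Time t is in seconds from the start of the transition.
function flashTransition(t, duration = 0.25) {
  const half = duration / 2;
  const clamped = Math.min(Math.max(t, 0), duration);
  const amount = clamped <= half
    ? clamped / half               // ramp up, 0 → 1
    : (duration - clamped) / half; // ramp down, 1 → 0
  return {
    blur: amount,                  // 0..1, fed to a blur effect node
    exposure: amount * 2,          // overexpose at the peak of the flash
    activeVideo: clamped < half ? 'from' : 'to' // swap hidden by the peak
  };
}
```

Each animation frame, you would read these values and push them into the corresponding effect nodes, so the cut itself lands while the frame is blown out and blurred.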
Now, if your subject is in the center of the frame, that's fine: you cut off the sides, or you cut off the top and the bottom, whatever it is. But the one thing that to me does not work is the assumption that whoever is going to watch my interactive video web page should close all their other browser tabs, maximize the thing, and show it on a big screen. I think that's disrespectful of the user's experience. Maybe somebody read about your work on a blog, and their browser window is super narrow, and you say, oh, well, sorry, you have to maximize it. I think that's rude at best. So I came up with this idea. I was remembering back years ago, with the little experience I have shooting film, to the time when we were transitioning from four-by-three TVs to widescreen TVs, and you used to have to shoot for both; you have the guides when you're shooting. And I thought, what if the guides could be different for every shot? Because in one shot your center of focus is on the left side, and in another shot it's on the right side. So I built this software where, for each shot, that data is specified along with the video, and the cropping of the video reacts to it. In this shot, for example, I thought this building on the right should stay in frame. If I turn off the effect, we get the traditional letterboxing: bars on the sides, and if I go too skinny, bars on the top. But if I turn on the effect, it fills the screen, and as I shrink the window, it keeps the right side of the frame in frame and crops on the left side, because the left side is extra, right?
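The cropping behavior just described boils down to a small computation. This is a hedged sketch under my own assumptions (the real project's data format and field names may differ): given the video frame size, a per-shot "critical region" rectangle, and the viewer's aspect ratio, compute a crop that fills the viewport while keeping the critical region in frame and as centered as possible.

```javascript
// video: { width, height }; focus: the per-shot critical rectangle
// { x, y, width, height }; viewAspect: viewport width / height.
function cropForAspect(video, focus, viewAspect) {
  // Largest crop of the requested aspect that fits inside the video frame.
  let cw = video.width, ch = cw / viewAspect;
  if (ch > video.height) { ch = video.height; cw = ch * viewAspect; }
  // Center the crop on the focus rectangle, then clamp to the frame edges.
  const cx = focus.x + focus.width / 2 - cw / 2;
  const cy = focus.y + focus.height / 2 - ch / 2;
  return {
    x: Math.min(Math.max(cx, 0), video.width - cw),
    y: Math.min(Math.max(cy, 0), video.height - ch),
    width: cw,
    height: ch
  };
}
```

For the building-on-the-right example, a vertical viewport pulls the crop toward the right edge of the frame, discarding the "extra" left side.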
And if I play into the next shot, there I thought this bridge part is the most interesting, so as I narrow the window, I keep the bridge in the frame. Or if I go tall like that, I figured this wall tells the story best. This video clip has a bunch of shots, and in each one we keep our subjects in the frame whichever way I resize it. Hang on, I lost my place. So we can do this shot by shot. And next, let's fast-forward a little, we can even animate it, which I think gets interesting. Here's a bit from a POV film, Cutie and the Boxer, which will be coming out in our next season; keep an eye out. He's going to walk across this frame, filling up this painting as he goes. If I watch this full screen like that, it's more or less the original shot; it's fine, it's great. But if I narrow it down, say I'm watching on a vertical tablet, it's actually going to pan with him as he goes. You can see it moves really slowly. Something interesting I learned from this is that I do have to pan really, really slowly; otherwise you can see that pan-and-scan effect. If I actually had the camera on a track and we were moving the camera, there'd be a very subtle parallax difference, and if I moved the crop quickly, you would really notice its absence. It's not quite as good as being on a track, but if you move slowly, those differences aren't going to jump out at you. So this adapts to a tablet. It adapts to not quite every phone, but it works pretty well. Now something's going to happen shortly; I don't know if you see the flashes on the side. This is the boxer, and Cutie is taking pictures of him, and in a second it's going to cut back to her. There it goes.
So it cuts back to her, and she takes the picture, and then it cuts back to him. Okay, great. But they're in the same shot, right? So if I had the frame narrowed like that and we cut back to her, it's not really enough of a cut; it would just jerk a little in the middle. So it's intelligent enough that I can tell it, for that shot, only apply that cut if the window is within a certain aspect ratio. If I go back and play it again narrow, it just continues to pan across. Again, I sometimes get some resistance from filmmakers on this. They say, look, I framed the shot the way I framed the shot; deal with it, watch it the way I want you to watch it. And I think that's the traditional approach: we assume people have nothing to do on a Friday night and they're going to go to the cinema, because what else are you going to do? But we know they have other choices. The statement I'm making is: yes, it is ideal if people watch it the way you want them to watch it, but audiences can be picky now, so if they're going to watch it the way they want to watch it, you should at least have some control over that. I think the worst scenarios are holding the phone vertically with giant black bars, or cutting off half of somebody's face in the shot. So I think this is a good compromise; at worst, it's a good compromise that serves both the audience and the artist. And I think it's best if you can shoot the film with this in mind. We happen to be lucky that we found a cool clip that it works with, but it doesn't work with every clip. Am I distracting from the film? No, no, this is the beginning of the film, and the filmmaker was cool enough to let me do it.
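The aspect-conditional cut can be sketched as a one-line decision per edit point. This is an illustration under assumed names; the real per-shot metadata format is not shown in the talk.

```javascript
// An edit point between two framings of the same scene is only applied when
// the viewer's aspect ratio is wide enough that the framings actually differ;
// in narrower windows the virtual camera keeps panning instead.
function resolveEdit(viewAspect, edit) {
  return viewAspect >= edit.minAspect ? 'cut' : 'pan';
}

// Hypothetical per-shot metadata; the 1.5 threshold is made up.
const edit = { minAspect: 1.5 };
```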
This is actually an example where it gets a little tricky, because we don't want to cut off the title. So for every shot, I essentially draw a rectangle around the most critical part of the shot, and it will keep that rectangle, or area, in frame and as close to center as possible; outside of that, it'll crop as needed. So there, yeah. That part is one of my favorites. Okay, do you have any questions while we're going? That's about three minutes; that's the longest one I've done. (Audience: I would love to see that. Would you do that to an hour-and-a-half or hour-long documentary?) I don't see why not. We haven't really built out the authoring tools for it; the authoring tool is basically me editing the JavaScript by hand. I had to go through that shot frame by frame. This was an experiment to see if people liked it, and if somebody wants to use it, then we look into building the authoring tools. But yes, you could do that. (Audience asks about automating it.) Yes, there are algorithms for taking an image and finding the parts of it that are of interest. The way that would work, I think, is you'd run it once offline as a first pass: here's my video, find all the interesting stuff, output a first draft, and then have a human go back and tweak it. I imagine some intermediate steps. If people wanted to do this, you would probably do it in an edit suite, with an editor doing it, maybe as a Final Cut plugin or something, and it would output the data file and you'd upload it to the web.
But ultimately, if this were going to become standard, I think you'd want to have it in mind as you're shooting, to whatever degree is possible. In the field on a documentary that's harder; it would probably be easier in a studio or on a fiction film, where you have a little more time in your shooting schedule. But ideally, if you were going to shoot something and you had time for a setup, I would keep this in mind when composing the shot, the same way they did when you had to shoot TV for four by three and they said, okay, but the DVD is going to be widescreen, so keep that in mind while shooting. If you're doing a talking head, for example, you say: I know somebody's going to watch this vertically on a tablet, somebody's going to watch it horizontally on a TV, and who knows what aspect ratio in a desktop browser. Do you have another question? (Audience asks about close-ups.) Yeah, that's interesting. Shooting a close-up that fills the whole screen would be challenging. I think it's up to the artist whether they want to do that. You can still have a shot that says: for this shot, you need to see the whole thing, so if somebody's watching while holding their iPad vertically, they're going to get the black bars at the top and the bottom, and if that's the best way to do it, that's okay. Or you say: I'm going to build in that extra space. Another thing I see all the time is on TV news, when they show video somebody shot on a vertical cell phone: they put it in the middle and blur out the sides, right? You've seen that.
Right, they have a second copy of the image behind it. And that feels hacky, but what are you going to do? It's primary source footage from somebody who holds their phone vertically, which I beg people not to do, all the time. You work with what you've got; but if you have the time, I think it's best to plan for it. (Audience: It would be a really interesting A/B test: push out a single film where half the users get the standard version and half get the smart-crop version, and get feedback on how much this actually, qualitatively improves things at the end of the day.) Right. Okay. Talk to my agent; he's over there, he'll set it up. Cool. I've also thought, though I haven't tried this yet, about taking the size of the device into account even when the aspect ratio doesn't change. For example, if a shot requires a certain amount of detail, on a big screen I can have a small thing in the middle with a lot of detail in it, but on a very small device, even though the aspect ratio is the same, you might want to zoom in more. So that might be an area for future research as well. Do you have a question? (Audience asks about subtitles.) I have not, but I think if you're going to do something like this, you would render the subtitles dynamically. This is something I've done in other work. This demo is totally custom software for managing the aspect ratios, but it uses Popcorn.js for the timing, which is a JavaScript library that comes out of Mozilla for running different effects at different times on a video, and I've used that for subtitles as well.
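Dynamically rendered subtitles come down to a timing model like the one Popcorn.js provides. This is a conceptual sketch in that spirit, not the real Popcorn.js API; the cue shape and the per-cue language tag are my own assumptions.

```javascript
// Cues carry start/end times and a language tag; on every timeupdate you
// render whichever cues are active at the video's current time, skipping
// subtitles for dialogue already in the viewer's own language.
function activeCues(cues, currentTime, viewerLang) {
  return cues.filter(c =>
    currentTime >= c.start &&
    currentTime < c.end &&
    c.lang !== viewerLang
  );
}

const cues = [
  { start: 0, end: 2.5, lang: 'en', text: 'Hello' },
  { start: 2.5, end: 5, lang: 'de', text: 'Hallo' }
];
```

In a browser you would drive this from the video element's `timeupdate` event and detect `viewerLang` from `navigator.language`; rendering the result as HTML is what makes the size, position, and contrast adaptable.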
Yes, if you're going to do something like that, I would absolutely render the subtitles dynamically in HTML, and that will allow them to adapt. It does get a little tricky with the contrast between the subtitles and the picture, so you have to do something adaptable there, and there are different approaches you can take. But I would not want to do this with burned-in subtitles. Dynamic subtitles have so many other benefits anyway. I did a project a few years ago where part of it people were speaking English, part German, and part, I think, French. The browser would auto-detect your language: if your language was German, we would only show the English and French subtitles; if your language was Spanish, we'd show all the subtitles. Or if your language was German but you turned on captions for the hearing impaired, we could turn them all on. So you get much finer-grained control. You could also adjust the size of the subtitles if the audience wants them bigger or smaller, and things like that. So there are a whole lot of other benefits as well. Cool, so why don't I move on to the next one? Let's see what we've got here. Okay. The next batch of stuff gets into virtual reality. This is something else we've been experimenting with over the last several months. I've heard some of you talking about it; is everybody fairly up to date on what's going on with this thing? Who's tried on one of these? Okay, we've got a few, not quite everybody. I recommend trying it.
You have one at home. So, virtual reality has been hot for the last year or two. The Oculus Rift is a work in progress, and there are other devices coming out at much lower prices than these things have historically been: hundreds of dollars as opposed to tens of thousands of dollars. And Cardboard, yes, we'll get to Cardboard, don't worry, that's coming. It's getting better and better, and a lot of work is going into the visual quality. But among the things being left out, one that people are really struggling with is: how do you interact with stuff? I can put this thing on your head and you can look around, and then you go, well, what do I do? People have been starting off with interaction models from things like first-person shooter video games, and I'm really frustrated with them; they don't work at all. So, the first experiment (come on, browser). While I'm dealing with this, I'll give you a bit more background. A lot of people are using video game engines to build work for virtual reality, and there tend to be two approaches. Either you build something like a video game, where you model a 3D world and put the user in the middle of it. Or you get these fancy camera rigs with many, many cameras arranged in a circle, shoot synchronized video in all of them, and then stitch it together in a horribly complex format that supposedly makes it 3D. That's interesting, and there's some good stuff happening there, but it has limitations, and the process for both is difficult. Do you have a question already? Oh, sorry about that. Okay, that's all right. Let me restart.
The reason I'm having so much trouble with this browser is that there's another approach to delivering this content: delivering it in a browser as opposed to a game engine. Now, the native game engines are pretty fast; you get really high frame rates and low latency. By latency I mean: as you move the headset, each frame needs to be less than about 20 milliseconds behind the actual position of your head. A little more than that and it starts to not quite look real; a lot more than that and you start to get sick. Which is a whole other interesting thing: I've been working with bugs in software for years, but this is the first time a bug in my software can make you carsick for two hours. That's a new thing I'm learning about the process. Anyway, the browser is an interesting delivery platform for a couple of reasons. Yes, the latency is a little longer and the frame rate is not quite as high. But it's a really great delivery form, because if you're getting something built in a game engine, you usually have to go download some file, 500 megabytes or a gigabyte, whether it's on your phone or your laptop, and wait, and then you run it, and who knows what software you're running on your computer: it could have security issues, it could be buggy, it could take over your laptop. Whereas in a browser, you load the page and it's there within less than a second, if you have the available bandwidth. That's the big issue, yes, because you have to boot. Right, but what you can do, and you could do this in a native app as well, though it's a lot harder, is build it to display gradually.
You can bootstrap your basic scene and your head tracking immediately, and then fill in the scene as it goes; we do some of that in another demo. The other thing that's good about it, and I'll get into this more in a bit, is that it's a lot more adaptable. A browser runs on so many different platforms. If you build some Unity game engine thing, it's geared to: you have to have the Oculus Rift, you have to have a fast computer, you have to send out the PC version, and oh, I didn't make a Mac version, let me see if I can make a Mac version. Whereas this works if I have the headset or if I don't, on a phone, a tablet, or a laptop; as long as it's a relatively recent browser, it works fine. So the first interesting experiment we built with this goes back to that question of how we navigate and control things. Let me make sure I'm on the Wi-Fi here. So this is a fun trick. I can look around, even in 3D, and that's all good. But how do I move around? The Unity game engine, for example, has built-in controls for moving around based on what you'd do in a first-person shooter, and the way it usually works is: whatever direction you're looking is forward, and you use the arrow keys on the laptop or desktop to move. Now, this is passable, but it stinks. The majority of virtual reality pieces out there that let you move at all do this. And it's totally unnatural, because nobody moves like that. So imagine Sean and I, right? Oh, you know who does move like that? Batman. If you watch any of the Batman movies before The Dark Knight, he can't turn his head.
So he always has to turn his whole body. Next time you watch any Batman movie, you're going to see it. Although Michael Keaton was, all right, anyway. So that's the thing: if Sean and I are walking down the street talking, I'm going to turn and look at him as we walk, checking in every so often to make sure I didn't step into traffic, but for the most part I'm turning my head. Now imagine Sean and I are walking down the street, and I look at him, and I just start walking into him, just like that. It's totally unnatural; it really throws you off. So I thought maybe we can do better. I figured, I've got this phone. It has an internet connection, a touchscreen, and an accelerometer. So what I do is scan this; I know QR codes are cliché, but bear with me. So now I'm connected; let me get centered. I can point at things, and it's tracking where I'm pointing. So I can grab stuff: I grab this cow, I just tap on it, and now I'm moving the cow as I move my phone. I tap it again to let go. I can grab any of these objects and manipulate them, which is pretty cool. And as I hover over each object, I get a tiny vibration in my hand, a little bit of haptic feedback, which is neat. And the other thing: how do I move around? I can just drive like this; wherever I point my phone, that's where I go. So I can move either by pointing my phone or by moving my thumb on the screen in any direction: if I put my thumb down and move it forward, I move forward; if I move my thumb to the right, I move to the right. But then if I rotate the phone, my thumb is still to the right, so it works whichever way you hold it.
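Two pieces of this controller scheme can be sketched in a few lines: rotating the thumb's swipe vector by the phone's yaw so "forward" always follows where the phone points, and the comfort-based speed falloff described next, where movement is fastest when you look where you're going. This is my own hedged illustration, not the project's actual code; the cosine falloff and the 0.15 floor are made-up values.

```javascript
// Rotate the thumb's swipe vector (touchscreen coordinates) by the phone's
// current yaw, so the same swipe means the same world direction however
// the user is holding the phone.
function moveDirection(thumb, yawRadians) {
  const cos = Math.cos(yawRadians), sin = Math.sin(yawRadians);
  return {
    x: thumb.x * cos - thumb.y * sin,
    y: thumb.x * sin + thumb.y * cos
  };
}

// Scale movement speed by the angle between look direction and travel
// direction: full speed straight ahead, a slow crawl when moving backwards
// (backwards and sideways motion are what tend to cause simulator sickness).
function comfortSpeed(maxSpeed, lookAngle, moveAngle) {
  let diff = Math.abs(lookAngle - moveAngle) % (2 * Math.PI);
  if (diff > Math.PI) diff = 2 * Math.PI - diff; // normalize to [0, π]
  const minFactor = 0.15; // floor so the user never fully stalls
  const factor = minFactor + (1 - minFactor) * (1 + Math.cos(diff)) / 2;
  return maxSpeed * factor;
}
```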
It's better if you're looking around as you do it, and, well, I think the alignment got screwed up, but it's pretty natural. Let me see. There you go. The other thing I do with it is kind of neat. There's this whole best-practices guide that Oculus put out that tells you what makes people sick and what doesn't, and one of the big things that makes you sick is moving backwards or sideways. So what I had it do is: if I'm moving forward (let me restart it here), I can move forward pretty quickly, but if I look to the side, it slows down quite a bit, and if I look backwards, it goes very, very slowly. It's always tracking the difference in angle between where you're moving and where you're looking, and it's fastest when you're looking straight ahead in the direction you're walking, slowing down as that angle increases. Which, if you think about it, is mostly how you move anyway; I encourage you to walk that way. Anybody doing a full sprint backwards, please be very careful. So I feel like this was a pretty cool, fairly successful experiment. And I took the same controller and used it in my next experiment, which I'm also psyched about. Let's see, I've got to get this set up again. Okay. This one gets into data visualization a little. Where this comes from: again, going back to the ways people are making virtual reality, you either have these expensive and difficult-to-use camera rigs, which limit you to one point of view, one spot, or you have to construct a 3D world more or less by hand. You can scan some things, but it's difficult: you need to get the lighting right, and you need artists really constructing these things, and that's difficult.
And I had neither an expensive camera rig nor a good 3D artist; I'm certainly not one. So I worked with data instead. OpenStreetMap has these giant databases of buildings and maps of most of the world, every major city. So I can drop you into Tokyo, London, wherever you want. This happens to be Manhattan, because that's where I live and it's awesome. We can fly around, and this works with the phone controller too, so let me get aligned. So yeah, I can fly like Superman all over New York City. And this loads up progressively as well; I didn't load the entire world up front, as much as I would like to. I've got to get my alignment right here; sometimes the orientation of the phone gets out of whack. But anyway, as you fly around, the city progressively loads, which lets you keep a pretty high frame rate. Sorry, here's a thing: if you're sitting in a chair, it becomes difficult to turn all the way around, so I made it so you shake the phone and it rotates you 180 degrees. That's kind of neat. What's that? Yeah, yeah, with boxes over their heads. Okay, so the first part was just getting the city rendered, and that was tough work, and it's pretty cool; flying around it is super fun and weird. But now, what kind of data can we bring in? The first piece I got was income data, from the United States census, so we can go anywhere in the United States and look around, and we charted income. Here on the Upper East Side we've got a lot of income happening, and if you look up in the distance, in the Bronx, some of these bars are not so high. So that's kind of cool; I'm excited about that. We did the same thing with bubbles.
So instead of bars, we've got bubbles — giant bubbles on the Upper East Side. It also happens that this is about the best spot in the world to see wealth disparity: uptown Manhattan. What else? Okay, here's where I think it starts to get really interesting. Let me go back to the beginning here. Each one of these is a personal injury claim against the New York City police department in 2013 — something like 3,800 of them from that year — and I plotted them all over the city, wherever they happened. They're from different databases: the buildings and the maps are from OpenStreetMap, the income data is from the US census, and this one is from the New York City Office of the Comptroller. So there are various data sources; the census was a good one, I got a lot of stuff from that. So here I am on the Upper West Side, and as we fly uptown we see, okay, a couple of incidents here. The Upper West Side is a pretty safe place, a lot of wealthy people, and I'm surprised there are even as many as there are. Then we get uptown, and of course when we get to the Bronx, there's a lot. There's a lot in Brooklyn as well, but the Bronx is just totally lit up. Where this gets interesting, I think, is up here. This view is flying up over the city a little bit, and I can do another view where I'm lower down, closer to the ground. We also move a lot slower when we're close to the ground, because going fast at that height just makes you more sick — you're crashing into buildings, which is super weird. But I played with this a little bit.
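To place census data like the income bars in the scene, each record's latitude and longitude has to be projected into scene coordinates and the value mapped to a bar height. Here's a minimal sketch of that step; the anchor point, the equirectangular projection, and the dollars-to-meters scale are all my assumptions:

```javascript
// Hypothetical sketch: project census lat/lon into local meters and scale a
// bar's height by income.

const ORIGIN = { lat: 40.7736, lon: -73.9566 }; // assumed anchor (Upper East Side)
const METERS_PER_DEG_LAT = 111320;

// Simple equirectangular projection — fine at city scale.
function latLonToXZ(lat, lon) {
  const metersPerDegLon = METERS_PER_DEG_LAT * Math.cos(ORIGIN.lat * Math.PI / 180);
  return {
    x: (lon - ORIGIN.lon) * metersPerDegLon,
    z: -(lat - ORIGIN.lat) * METERS_PER_DEG_LAT, // north is -z in many 3D scenes
  };
}

// Bar height: 1 meter per $1,000 of median household income (assumed scale).
function incomeBar(tract) {
  const pos = latLonToXZ(tract.lat, tract.lon);
  return { x: pos.x, z: pos.z, height: tract.medianIncome / 1000 };
}
```

The same projection would position the injury-claim markers; only the height mapping changes per data set.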
And I found that as I roamed around the city — and again, these are just big red cylinders, a quick-and-dirty visualization; I had in mind making them into fires or lit-up spaces, something more expressive and elegant, but for now it's a clunky example — I found myself trying to avoid them, trying to think: can I walk through a neighborhood without running into a spot where this violence supposedly happened, where presumably somebody was injured by a police officer? And you can't. Think about how many Starbucks there are, and multiply it by 50 — they're just everywhere as you walk around. And this is where I think the VR pays off, where it's different from plotting this on a map. Because somebody did plot this on a map — that's how I was inspired to try it. And okay, it's cool on a map, but this feels very different. And I discovered there's this balance. If you think of the map as the God's-eye view, from space, of any data set, you get tremendous overview: you see all of it, and you can say, okay, there's a lot going on here, not a lot going on there. Flying high up is sort of the middle ground. As you come lower into the space, you sacrifice that overview, because you can't see what's going on even three blocks away — there are buildings in the way, at least if you're in a city with tall buildings. But in exchange, you get a bit more of a feeling of intimacy and presence. So I think there's a lot of potential here.
What I would ultimately like to see, rather than just saying, okay, here's a city, go explore it, is a narrative project where we alternate: we're going to take you on a tour, we're going to fly you around, there's a voiceover presenting — here's what the city looks like, let's fly down — and when you've been through that, okay, now you can explore. I want to show you one more data visualization, which I think is also pretty interesting. This is population data sorted by race, also from the US census, also inspired by a 2D map somebody made. Let's get back up to the Upper East Side. Okay, here we go. This is what it looks like in 3D, and it actually looks even denser. The blue dots are white people — if anybody's ever been to the Upper East Side, this is not surprising. As you fly through, people are represented as — I think of them as — pollen in the air. And you can imagine what happens when we get uptown towards Spanish Harlem. Sorry, I'm crashing you through all these buildings, which is super weird, and through people. It starts to look a little bit different, and now it's pretty different. This comes from some of my own experience traveling the world. I'm spoiled living in Manhattan — we have quite a bit of diversity, though it does vary by neighborhood. And then when I went to Copenhagen a few months ago, it was a little bit different. So I was trying to figure out: can I replicate that experience a little bit? It's not quite rendering actual people — they're still abstracted — but I think it does give you the sense of being there. So that's kind of neat.
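The "pollen" effect described above could be generated something like this: for each census block, emit one floating dot per N residents, jittered around the block and colored by race category. The colors, the people-per-dot ratio, and the jitter heights are all assumptions for illustration:

```javascript
// Hypothetical sketch of the pollen-style population dots.

const COLORS = { white: 0x4477ff, black: 0x44ff77, hispanic: 0xffaa33, asian: 0xff4444 };
const PEOPLE_PER_DOT = 10; // assumed: one dot represents ten residents

// `rand` is injectable so the scatter can be made deterministic for testing.
function pollenDots(block, rand = Math.random) {
  const dots = [];
  for (const [race, count] of Object.entries(block.population)) {
    for (let i = 0; i < Math.round(count / PEOPLE_PER_DOT); i++) {
      dots.push({
        color: COLORS[race],
        x: block.x + (rand() - 0.5) * block.size, // jitter within the block
        y: 2 + rand() * 30,                       // float between 2m and 32m up
        z: block.z + (rand() - 0.5) * block.size,
      });
    }
  }
  return dots;
}
```

Because the dots are proportional to population, flying from one neighborhood to the next changes both the density and the color mix of the "air" around you, which is the effect the talk describes.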
Do you have any questions about that? No? Okay, so we can move on. The next thing. As I was presenting this work in Europe, I was on a panel — a couple of you were there — and there was a discussion of diversity in the field of virtual reality. We were worried that there isn't enough of it so far. It happened to be a panel of five men, and looking around at the industry and who's making what, it's not great. And that got me thinking, because somebody on the panel asked: how expensive is this stuff to make? Do you need to be super well funded to produce VR content? And I said, oh, well, this is only $350, it's not so bad. It used to be that it would cost thousands of dollars and you'd need supercomputers everywhere — it was really impossible. And now you can get this, or you can get Google Cardboard. Do you all know about Cardboard? This is fun. Oculus came up with their thing, and then Google came up with this: it usually sells for about $25, but you can get it for as low as four and a half dollars. You take your phone, drop it in, put the Velcro on, and hold it up to your face like that. It uses the gyroscope in the phone to track where you're looking. Now, the latency is longer and the frame rates are lower, but you can at least get into VR, and it's really not that bad — it's pretty good, pretty cool. So I started looking into it and asking: what kind of assumptions are we making when I say, oh, don't worry, this is only $350? I'm missing something here. I'm missing this — this is like a $2,300 machine. I'm missing that I have to know where to get this thing, and where to get the software for it.
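The gyroscope tracking that Cardboard relies on is exposed in the browser through the `deviceorientation` event. A minimal sketch of reading it, with a small low-pass filter to tame jitter (the smoothing constant is an assumption):

```javascript
// Hypothetical sketch of Cardboard-style head tracking in the browser:
// listen for deviceorientation events and smooth the alpha/beta/gamma angles.

let smoothed = { alpha: 0, beta: 0, gamma: 0 };
const SMOOTHING = 0.15; // assumed low-pass factor

// Blend previous reading toward the new one by factor k (simple low-pass filter).
function lowPass(prev, next, k) {
  return {
    alpha: prev.alpha + (next.alpha - prev.alpha) * k,
    beta:  prev.beta  + (next.beta  - prev.beta)  * k,
    gamma: prev.gamma + (next.gamma - prev.gamma) * k,
  };
}

function onDeviceOrientation(event) {
  smoothed = lowPass(smoothed, event, SMOOTHING);
  // In a real app: convert the smoothed Euler angles to a camera quaternion
  // each frame (taking screen orientation into account).
}

// Guarded so the module also loads outside a browser.
if (typeof window !== "undefined") {
  window.addEventListener("deviceorientation", onDeviceOrientation);
}
```

This is the raw-sensor path; real viewers add calibration and sensor fusion on top, which is part of why phone-based latency lags behind a dedicated headset.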
And if I want to build something for it, that's a whole other thing. So the first thing I did was some research into what it would take just to view this content, at a minimum. I found that something like 77 or 80 percent of people in the United States under the age of 30 and making under $30,000 a year have smartphones — not just flip phones, but actual smartphones. So it's not that bad. It could definitely be better. But I also found that it's possible to buy a used low-end smartphone — not this one, but a low-end one — for about $45 that would be just enough. $45, and one of these for $4: you can get into VR for under $50. So yes, it could be better, but it's a start, and that's good, and I think we need to be thinking about that. But that also raised the question: what if you want to make something? What if you want to build content? I still wasn't satisfied with that, because again, think about the assumptions. Instead of saying, okay, I can build something really easily because I have this fancy-pants MacBook Pro and hosting accounts and a computer engineering degree and time at POV to work on this stuff — what if I didn't have any of that? So I imagined a 14-year-old kid who's got, okay, maybe a smartphone, but no computer, no hosting account, no computer engineering background. What do they do? Maybe they go into a public library or the school library, and they can get on the public Wi-Fi so they don't have to use their data plan. That is an issue, by the way. The last time I was giving this presentation, a buddy of mine was downloading the Sundance set.
Sundance released a bunch of VR pieces, and he was downloading Clouds Over Sidra, which is one gigabyte. So he's talking to me with his Cardboard, saying, oh, this'll be cool, I'll set that to download. He puts the phone down, I give my presentation, and he goes out to take a walk. He sticks the phone in his pocket and gets off the Wi-Fi, right? And — he's from Canada — he comes back to a text message saying his five-minute walk cost him $100 in roaming fees and capped out his plan for the month. So this is something we try to be careful of; this is why we do the adaptive streaming stuff. So, thinking about all this: this kid can get on Wi-Fi and sit down at the library computer. Now, what can we assume about these library computers? You can't install any custom software on them. You can't put the Unity game engine on there. You want to build an Android app? You can't install all the Android development software. You can't even really save files on them: you can save something for five minutes, and as soon as you get up and log out, it wipes all the files. So even if Santa Claus shows up with one of these headsets, the kid can't use it — can't make anything with it. So what can we do? I started with JS Bin. Does anybody know what that is? It's a kind of HTML/JavaScript sandbox tool. So if I make a new one here at jsbin.com — well, it helps if it doesn't crash my browser; that I can't promise you. By the way, I'm running an experimental version of Chrome that interfaces with the Oculus Rift, and because it's experimental, it crashes all the time. Hopefully they'll fix that one day.
So, right, JS Bin. JS Bin is this sandbox: it gives me some HTML on the left, I can write "hello" and it shows up on the right, and I can put some tags in there. I type HTML, CSS, and JavaScript on the left and it builds a web page on the right. Pretty easy. So what I do is start with this one — let me zoom in a little so you can see it — and all I have to do is paste in this script. It's one script file that I built, and what it does is bootstrap an empty VR scene, with nothing in it, on the right side, and we can pop that out like this. Okay. So far I've got nothing going on. If anybody wants to bring this up on their own machine, you're welcome to: it's jsbin.com slash W-E-M-I-P-O slash one. We'll come back to that. So I've got the output, and I get rid of the HTML. Now, with that scene, I can start making some basic stuff. First, I'm going to make a box: I type VR.box, and there's my box. And actually, if I reload, it should show up on here — no, not so much. Oh, it's just number two. Yeah. So, JS Bin. So now I've got this box in my Oculus, in 3D. And this script that bootstraps the scene works in just a plain browser too: say I turn the headset off — I can look around with my mouse like that. If I don't have the device, I don't need it; if I have it, I can use it. I can put this on my phone and drop it into Cardboard, and it'll render in stereo 3D. If I don't have Cardboard, I can look around by moving the phone, or scroll around with my fingers. It works in any of those modes. So this 14-year-old kid we imagined in the library can bring their low-end smartphone, sit down, and build something. There's a bunch of commands.
So I did VR.floor, and I've got a floor. And I can take the box and say setMaterial, wood; then move it up two meters; then move it negative one meter in x to shift it over. So there's a bunch of commands for building simple things, and the point is you don't need a hosting account — you don't need anything except maybe the phone; you don't even really need the phone. You go to a library, look up the reference for these commands — which are much simpler than most of the 3D libraries out there — and you build something. You take that URL, post it from your phone, send it to your friends, send it out on social media, send it to yourself. As long as you have that URL, you can come back to it and edit again later, and anybody can fork it. So now this person — yes, it's really basic shapes; yes, it's lower quality because it's on this dinky little cheap phone and a piece of cardboard — but now we at least have this other range of voices making content and getting it out into the world. And if they do get it out into the world, and it reaches somebody like us who has one of these headsets, I can look at it in here, and now this person is building things in full, proper VR. So that's the latest project. And I'm finding, as a nice side effect of approaching it from the point of view of an audience very different from me, that it happens to be a pretty good prototyping tool as well. It's for people with very basic JavaScript skills — Codecademy level — and they can use it for a quick prototype. Or if I'm sitting in a café with somebody having a meeting and I left my Oculus at home: okay, what would this look like? Let's sketch it out. I can get a floor over here, a box over here, a sky over here. And there are animation commands to get things moving around.
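The chainable command style demoed above (VR.box, setMaterial, moveUp) can be implemented with a very small pattern: every command mutates the object and returns it, so calls chain. This is a sketch of that pattern, not the library's actual source; the class and method names beyond those demoed are assumptions:

```javascript
// Hypothetical sketch of a beginner-friendly chainable VR API.

class VRObject {
  constructor(type) {
    this.type = type;
    this.material = "default";
    this.position = { x: 0, y: 0, z: 0 };
  }
  // Each command returns `this`, which is what makes chaining work.
  setMaterial(name) { this.material = name; return this; }
  moveUp(meters)    { this.position.y += meters; return this; }
  moveX(meters)     { this.position.x += meters; return this; }
}

const VR = {
  box:   () => new VRObject("box"),
  floor: () => new VRObject("floor"),
};

// Mirrors the live demo: a wood box, two meters up, one meter to the left.
const box = VR.box().setMaterial("wood").moveUp(2).moveX(-1);
```

Keeping every command a one-liner that returns the object is what makes the API teachable at a Codecademy level: there's no scene graph or matrix math to explain before a learner sees a box appear.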
And it'll play audio in 3D space: I can look at something and it triggers the audio, or I can shake the phone and it triggers something else. You can get something really basic going in a couple of seconds. So I'm pretty excited about that, and hopefully we'll get some workshops going, teaching people to do it. So that's it for that one.
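The look-to-trigger behavior mentioned above boils down to a dot-product test: fire when the gaze direction points within some cone of the object. A minimal sketch, where the 10-degree threshold is an assumption:

```javascript
// Hypothetical sketch of a gaze trigger: returns true when the viewer is
// looking within GAZE_THRESHOLD degrees of the object.

const GAZE_THRESHOLD = Math.cos(10 * Math.PI / 180); // cosine of a 10° cone

function isLookingAt(lookDir, viewerPos, objectPos) {
  const dx = objectPos.x - viewerPos.x;
  const dy = objectPos.y - viewerPos.y;
  const dz = objectPos.z - viewerPos.z;
  const dist = Math.hypot(dx, dy, dz);
  // Cosine of the angle between the gaze and the direction to the object;
  // lookDir is assumed to be unit length.
  const dot = (lookDir.x * dx + lookDir.y * dy + lookDir.z * dz) / dist;
  return dot >= GAZE_THRESHOLD;
}
```

Checked each frame, a transition from false to true would start the positional audio; the same test works for any gaze-activated element.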