Thanks, Alex. I'm really grateful to be here today. Wow, this is more people than I'm used to speaking to. So, I am Ashi Krishnan. My handle basically everywhere is queerviolet. And in a past life, long, long ago, I used to do 3D graphics for the US government. This is the most photogenic project that I worked on. We also had some forecaster tools, which sound very impressive when you describe them, and then you look at the interface and it's like Photoshop and Maya got together and someone put the result in a blender, and it just looks terrible. So this is a museum exhibit called Science on a Sphere. It's a big carbon fiber sphere that hangs in the center of a room and has four projectors pointed at it. My first experience with projection-mapped art, done for the US government, surprisingly.

This was my introduction, at least, to some of the foundational problems in graphics. Namely, we were trying to project a 4096 by 2048 video onto this sphere at a time when that resolution was patently insane. Nobody was doing it. Half of the tools we were using couldn't handle it. At the time, to get FFmpeg to even decode that, you had to recompile it with certain flags. It was a nightmare, and it was the kind of nightmare that I've learned to love and associate with computer graphics.

There are three main fundamental problems that you have to solve with graphics. The first is buffer management. You have to deal with taking all of the data, usually quite a lot of it, describing a 3D scene, or in this case describing the pixels on a globe, and putting it where it needs to be. That's a problem that's common between computer graphics and supercomputers, actually. I later worked at Google, and it was surprising to me how many of the data locality problems we experienced working with very small, high-speed systems like GPUs were also applicable to very, very large distributed systems like Google Bigtable. They're essentially data locality problems, and they're basically the same; it's just a matter of time scale. Does it take minutes to move data from one region to another region, or does it take milliseconds to move it from one part of the computer to another part of the computer? Please tell me that's not gonna keep happening. No! Okay. So that's the first problem: moving your data around.

The second problem is telling the specialized hardware what to do with your data. To do that in OpenGL land or WebGL land, we write shaders. We'll take a look at what shader code looks like later. But the thing to remember is that whereas your CPU is one or two maybe very fast cores, your GPU is like 48 or 128 or some large number of really, really crappy ones. And so you can write little programs that run on these little crappy computers that do incredible things because there are so many of them.

And the last problem that you encounter with graphics is compiling the programs. The tooling around OpenGL was terrible, and WebGL has improved it substantially, but as we will see later, it continues to be terrible. It is just very, very hard, in ways that it seems like it should not be hard, to get the right packages and get them to talk with each other and encapsulate your code properly and not end up somehow linking against a very specific vendor driver. That last one is not a problem you have with WebGL, but that is the general scope of problems you have working in graphics. So these days I, oh, I can use this thing, right? Yay.
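Just to make problems two and three concrete: hand-rolled WebGL shader setup looks roughly like this. A minimal sketch, not code from any of these projects, with trivial placeholder shaders:

```js
// Sketch: compiling and linking the little programs that run on the GPU.
// Assumes a <canvas> element is already on the page.
const gl = document.querySelector('canvas').getContext('webgl');

function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    // Compilation happens at runtime, against the user's driver;
    // this is where the vendor-specific pain lives.
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}

const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, `
  attribute vec3 position;            // runs once per vertex
  void main() { gl_Position = vec4(position, 1.0); }
`));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, `
  precision mediump float;            // runs once per covered pixel
  void main() { gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0); }
`));
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  throw new Error(gl.getProgramInfoLog(program));
}
```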
So these days I don't work for the government anymore. I don't work for Google anymore, thank God. Instead I teach, and I teach primarily women to code at the Grace Hopper program at Fullstack Academy, as Alex mentioned earlier. We teach JavaScript, Node, and React, and I really like this combination. We started teaching React last fall, and I spent about a week learning it along with the students and hacking with it, and my first response was like, oh God, why do we need more? Why do we need JSX? Why do we need this? What's going on? And then by the end of the week I was loving everything. I was like, oh, it's like a new kind of literal, and you can just use all the functional programming tricks you like, and it's not Angular, which, apologies to anyone who loves Angular, but I was happy for the switch. And our students seem to take to it, and they invariably love to push the boundaries of what we're doing. I like to think I'm a little bit helpful in this regard too, because students will come to me and be like, we wanna make a React app, but we wanna use WebGL for everything, and we also wanna use Web Audio to do audio synthesis, and it's gonna be like a living visualization. And I'm like, yeah, you should totally do that. And so they did that, and here it is, with some obvious visual artifacts. They called it PGBVSU. This is a student final project, and it's a sequencer that you can go and throw samples around on. I'm gonna hit play on this, and what comes out might not be so great, or it might be nothing, because sometimes these projects don't keep working for very long after they're up. Yeah, nothing's gonna happen. That's all right though.

So they built this using React and using WebGL, which is still kind of the wild, wild west of frameworks right now. There isn't a really great way of gluing React to an alternate render target like WebGL; that is, there is no currently accepted "here's the library you should definitely use." There are a few techniques. One technique is to say, okay, we want to output a 3D scene, and React is very good at outputting DOM elements, so what if we just made some DOM elements that described a 3D scene? If you do that, then you'll end up with A-Frame, which is this pretty little library from the Mozilla Foundation that lets you describe 3D elements and a 3D scene using DOM elements. And it also gives you, let me see if I can pull this up, control shift, nope, not that. It gives you an inspector that lets you move the camera around, and it gives you a lot of nice tooling. The thing it doesn't give you, if you're coming from React land, is the ability to pass in props that you might reasonably expect. For example, in order to describe the position of this part of the building, we're gonna have to pass in a few data points, or in order to describe the movement of this cube cloud, I think that's supposed to be a cloud, we're gonna have to be updating a position prop on that object. And the way that you pass in a three-dimensional position prop in A-Frame is number, comma, number, comma, number. You literally take your vector and turn it into a string by joining it with commas. And after you've done that about 20 times inside your React code, I start to cry. I literally had a breakdown and was like, we're never suggesting A-Frame again.
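Concretely, every call site ends up doing something like this. A quick sketch of the pattern, not the students' actual code:

```jsx
// Sketch: driving an A-Frame entity from React props means serializing
// every vector prop into a delimited string by hand, on every render.
const Cloud = ({ position }) =>
  <a-entity
    geometry="primitive: box"
    position={position.join(' ')}  // [x, y, z] flattened into a string
  />;

// <Cloud position={[0, 1.6, -3]} /> renders position="0 1.6 -3"
```

There are other ways of handling this, of course, right?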
You could wrap all of the A-Frame tags in React components and have those components pass down props in a more reasonable way, or, if you give them, say, a function that corresponds to an event listener, they could attach the event listener after the component is realized, after the final DOM element is realized. But all of these are another layer of abstraction on top of the layer of abstraction that A-Frame already gives you. And so what the students who made PGBVSU, the synthesizer, did is they found this react-three library and kind of hacked it all to hell. That library was someone's science fair project, essentially. They ended up making a lot of changes, and in the process they became very familiar with some of the intricacies of obscure features of React, like context, which maybe isn't too obscure, but it is one of those features where, when you go and read the documentation, the first thing it tells you is: you probably shouldn't be here. Like, go away. You don't want to use context. But today we're gonna use context to do some stuff.

So before we dive into a simpler demo, here are some of the neat things you can do with WebGL, which maybe WebGL doesn't need someone being like, there's a bunch of cool stuff you can do with it. This is a part of Steven Wittens's presentation called The Pixel Factory, and I really highly recommend all of the math visualizations on his blog. Here he's demonstrating that we have this shape, it's like a sinusoidal shape, and we are running the entire scene through a vertex shader. So in the graphics pipeline, we have the geometry, which is the raw data: here are all the vertices in three-dimensional space. Then that gets run through a vertex shader, which is a little program that runs for every single point you've specified, which can be millions and millions of points, and turns it into a position on the screen. In that step, you can do whatever you want. When you run your geometry through a vertex shader, you are essentially warping space, and you can warp space very selectively to affect only certain objects. In this case, he's warping all the space in a random way in order to show that you can. And as you warp space, you can also attach additional data that then gets processed by the last step, which is the pixel shader, or the fragment shader, more properly, which goes and takes all of the pixels that are covered, even slightly, by a given piece of geometry, and decides what color that geometry should contribute to the final rendered image.

So I want you to keep in mind that when we are working with graphics, what we're really doing is writing a massive distributed program. There are a lot of libraries, and we're gonna be working with three.js today; there are a lot of libraries that conceal that a little bit, and instead say, oh, you're working on a set with a camera, and lights, and this, and that. But the truth of it is much grittier: you are looking at math, and you are writing a program to describe how math becomes pixels, and that's it. Okay, so that, of course, is Firebase. We'll look at that in a second. Just one last thing I wanted to show you. This placid lake is Evan Wallace's little swimming pool, and the swimming pool has a couple of neat features. It has a ball that you can drag around in it, and it has a surface that you can draw upon.
And anyone who's familiar with graphics is probably looking at these lighting effects and being like, holy crap, those are pretty hard. The way this visualization works is that the light front, the wave front, is actually being rendered as a geometry. So this cool pattern of light is itself a grid, and as it's passing through the water, which is another geometry, the light plane gets distorted based on the distortion of the water plane, and then with that and a few other tricks, a few screen-space partial derivatives, we end up with something that looks very much like a swimming pool. And we can also drop a ball down into our swimming pool. Yay.

So these are the kinds of effects that you can achieve with WebGL. They're all rather hard to do by hand, and so there are libraries like three.js, and ecosystems like stackgl, that aim to take care of some of the harder parts, like setting up a camera, and to spare you the pain of writing a bunch of code, running it, and seeing just a black screen. They aim to minimize the amount of time you spend not knowing what is going on. So let's look at a little demo here. This is something we're gonna be looking at the code for in a second. It's a pretty simple 3D demo of a camera orbiting around the origin, which is nothing, and there are all these platforms scattered everywhere that give us the illusion of maybe a game space that something can run around in. This is all written in React. Each one of these is an object that is driven by a data source, in this case a random assortment of points. So let's go take a look at the code for this. Halfway through. All right, so let's start here. Is that big enough? Yeah, it looks pretty big.

So this is the code describing the scene. We're importing some things from our three.js-to-React binding, which we're going to walk through in a second. And then we are requiring three, the three.js library itself, not importing it. Because remember how I said that the tooling sucks? The tooling has a lot of problems, and you can't import this because, of course, ES modules and Babel, and it doesn't have a .default, and this works, so we're going with it. "This works, so we're going with it" is also one of the mottos of computer graphics.

So three.js gives us an abstraction on top of WebGL, and in this abstraction, everything that's on screen, every object, is basically the confluence of three things. A geometry, which is an array of data that describes, in some sense, the vertices that are going to appear on screen. A material, which is a shader program that says, for each and every single pixel, what should that pixel look like; materials are where you get lighting and shading, and where things start to look shiny. And then a transformation: a position is typically how it's specified, but you can think of it more generally as the model matrix, a general transform that describes how the vertex shader should transform every point in the geometry. All of our platforms have the same geometry: they're cylinders. You may be like, these don't look like cylinders, but actually they are kind of cylinders, right? They're just skewed, and the various parameters to CylinderGeometry let us specify how many sides they have and so on. They use a MeshPhongMaterial, which is a material, a shader program, that uses Phong lighting and applies it to a mesh of some kind.
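In plain three.js, that confluence looks roughly like this. A minimal sketch; the parameter values here are invented:

```js
const THREE = require('three');

// One platform = geometry + material + transform.
const platformGeometry = new THREE.CylinderGeometry(
  2, 2.5, 0.5,  // top radius, bottom radius, height
  6             // radial segments: few enough to look faceted, not round
);
const platformMaterial = new THREE.MeshPhongMaterial({ color: 0x3377ff });

const platform = new THREE.Mesh(platformGeometry, platformMaterial);
platform.position.set(4, 0, -2);  // the transform: where its vertices end up
```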
The mesh is expected to have surface normals, which are vectors that point away from the surface and are what tell the program, hey, this should be dark and this should be brighter. And then we go and create 50 random platforms all over the place using the fill-and-map trick.

Okay, now here's the exciting part. Here we have our three.js scene described as JSX, which on the one hand is kind of a dumb pet trick, like, why not just write it as plain three.js? On the other hand, it lets us use the same data binding that React gives us, which we all love so much. In this case, we're mapping over some random positions, and maybe that's not so exciting, but you can certainly imagine how I could map over a Firebase data source, or something from the Redux store, really anything. It gives us all of the power of React, but within the scope of a three.js scene.

So there are a couple of layers of hierarchy we have to cut through. First, we need the canvas itself; we need the canvas and a renderer inside of it, and that's what the three tag gives us. It says, okay, put a canvas on the page and initialize a WebGL context inside of it, and we're going to fill the WebGL context with this stuff. This stuff is, first of all, a scene. We could possibly eliminate the scene tag, but three.js has the notion of a scene, and I think it would be nice to preserve it, because if we are building a general-purpose binding, as we are kind of edging towards doing, then we might want to have multiple scenes, each with its own camera, and be able to switch between them without destroying our context. So here we have our scene. The scene has fog. In WebGL, because OpenGL was set up this way, fog is kind of a global parameter that you set on the scene to describe basically how attenuated a given pixel will be based on its distance from the camera. There are a lot of weird things like that, which you might expect to be handled in some sort of general-purpose shader program, that are not handled that way yet in WebGL, although I believe in WebGL 2 they are. We need something to look at, or rather something through which we can look, so we have a camera in the scene, and the camera, sort of by convention, is going to be the thing that the renderer, which is this tag, goes and picks up on. So something you should be noticing is that we need this renderer to attach to this scene and this camera. There's a way that data needs to flow here, as we're implementing these tags, that is not the traditional React pass-down-props-and-live-with-it model. Although we could do it by passing props, we are going to do it through different trickery. So we've got a camera, and we've got a couple of lights: two of them are directional, so they're cones, and one of them is ambient, so it just describes a lighting value that's going to be given to all of the materials within the scene. And then we have all of our platforms, these meshes that come out of that map.

Okay, so this is nice, and we can imagine describing a more complicated scene, but let's look at how we implement all of these tags on the React end. By far the most irritatingly complex tag is the renderer. It was imported as three over here, and in its render method, we throw a canvas on the screen.
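Stitched together, the markup looks roughly like this. A sketch patterned on what I just described, reusing the geometry and material from the earlier sketch; the component names are illustrative, not the binding's exact API:

```jsx
// Sketch: the scene as JSX. Three puts a canvas and a WebGL context on
// the page; Scene, Camera, lights, and meshes describe what gets drawn.
const positions = Array(50).fill()  // the fill-and-map trick
  .map(() => [40 * Math.random() - 20, 0, 40 * Math.random() - 20]);

const App = () =>
  <Three>
    <Scene fog={new THREE.Fog(0x000000, 10, 80)}>
      <Camera position={[0, 12, 30]} />
      <DirectionalLight position={[0, 10, 5]} />
      <DirectionalLight position={[5, 10, 0]} />
      <AmbientLight color={0x404040} />
      {positions.map((position, i) =>
        <Mesh key={i}
              geometry={platformGeometry}
              material={platformMaterial}
              position={position} />)}
    </Scene>
  </Three>;
```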
The canvas, once it is realized, is going to call canvasDidMount, perhaps a terrible name, perhaps a brilliant name, not sure. In any case, this canvasDidMount function receives the canvas and then sets up a three.js renderer on it. It goes and instantiates a WebGLRenderer, and it sets the clear color, which should almost certainly be a prop that we pass to the renderer; some things are not currently as general-purpose as they could be. And then we set the state of the renderer, which starts with a width and height of zero, to the width and height of the canvas element. The reason we do this through React component state is that we actually want our state to get passed down as a prop and cause a re-render on our children in this particular case, because the width and height of the renderer influence things like our camera, which needs to know the width and height of the whole canvas in order to correctly perform the projection transform. Because the data is flowing that way, we're going to use props for this. And indeed, whenever the window resizes, we set our state again. We have to handle retina stuff, because nothing is easy on the web, and we also tell the three.js renderer about our new size.

Okay, now down to the dark magic. Our renderer passes down to all of its dynamic children two context values, setScene and setCamera. These are both functions that let any descendant say, hi, I'm a scene, or hi, I'm a camera, and tell the renderer about it, so that when the renderer renders a frame, it uses that scene and that camera. We are not very smart about this in this demo: right now, any descendant can call setScene, and that becomes the scene, and any descendant can call setCamera, and that becomes the camera. So if we look at our scene class here, when we mount, we call the setScene that was given to us through context. The reason we are receiving it at all, of course, is because the scene has specified that it wants the renderer's child context as its context, so it's able to attach in that way. If you don't set this, then you don't get context; it's one of the ways that React prevents spurious use of this backdoor data-passing functionality within the tree. So as soon as any scene mounts, it sets itself as the default scene, and likewise for any camera. If we had multiple scenes and multiple cameras, this would not work so well. Similarly, if we have multiple items that we want attached to the scene in a particular order, we'd have a problem, and that is actually something that's going to happen here. The scene passes the three.js scene down to its children, and then, we're going to skip over the camera because that's crazy, and then each entity, where a mesh is an entity that gets loaded into the scene, just adds itself to the scene as soon as it's constructed. So actually the stacking order of items in the scene is not going to be the same as the stacking order of tags in the JSX, which is probably not ideal. It's not as big a problem as it would be in other contexts, because we have a depth buffer, and so the order in which we render things only matters for certain classes of rendering problems. In particular, we generally want to render from back to front if we're rendering translucent things, but in the case of this demo it's not a big deal. We can just add things willy-nilly as they get thrown onto the scene.
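Pulled together, the whole pattern looks roughly like this. A sketch using the pre-React-16 childContextTypes / contextTypes API, with names patterned on the description above rather than the demo's exact code:

```jsx
import React from 'react';
import PropTypes from 'prop-types';
const THREE = require('three');

class Three extends React.Component {
  // Any descendant can announce itself through these two callbacks.
  getChildContext() {
    return {
      setScene: scene => { this.scene = scene; },
      setCamera: camera => { this.camera = camera; },
    };
  }
  canvasDidMount = canvas => {
    if (!canvas) return;
    this.renderer = new THREE.WebGLRenderer({ canvas });
    const frame = () => {
      // Render with whatever scene and camera most recently announced.
      if (this.scene && this.camera)
        this.renderer.render(this.scene, this.camera);
      requestAnimationFrame(frame);
    };
    requestAnimationFrame(frame);
  };
  render() {
    return <div>
      <canvas ref={this.canvasDidMount} />
      {this.props.children}
    </div>;
  }
}
Three.childContextTypes = {
  setScene: PropTypes.func,
  setCamera: PropTypes.func,
};

class Scene extends React.Component {
  scene = new THREE.Scene();
  // Announce upward to the renderer; pass the THREE.Scene downward.
  getChildContext() { return { scene: this.scene }; }
  componentDidMount() { this.context.setScene(this.scene); }
  render() { return <div>{this.props.children}</div>; }
}
// Without contextTypes, this.context stays empty.
Scene.contextTypes = { setScene: PropTypes.func };
Scene.childContextTypes = { scene: PropTypes.object };

// Each entity constructs its three.js object and adds itself to the
// scene from context as soon as it mounts.
const Entity = create => {
  class Wrapped extends React.Component {
    componentDidMount() {
      this.object = create(this.props);
      if (this.props.position) this.object.position.set(...this.props.position);
      this.context.scene.add(this.object);  // mount order = stacking order
    }
    componentWillUnmount() { this.context.scene.remove(this.object); }
    render() { return null; }  // nothing goes into the DOM
  }
  Wrapped.contextTypes = { scene: PropTypes.object };
  return Wrapped;
};

const Mesh = Entity(({ geometry, material }) => new THREE.Mesh(geometry, material));
```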
Okay, and this entity wrapper, you may have noticed, is a higher-order component, because we want to be able to take any given three.js object and create a React component out of it. So we abstract this idea of constructing a three.js object and attaching it to the scene into this higher-order component, and then we call it on all the various types of components that we want. So all that gets wrapped together, and we end up with a demo with a camera that we can drag around and a bunch of platforms on screen. Nice, right?

Okay, but that's only half of that sequencer, right? The other half was the actual sequencer part, and so here is another little demo that's not 3D but that might play sound. Oh my God. Okay, that's all I'm gonna subject you to of that. You know, I think WebGL works better than Web Audio, which on my computer, at least, sometimes crashes Chrome and sometimes sounds like a dying hyena. It's very strange. So this is perhaps a better demo of using a bizarre render target, but also of using React's cool data binding capabilities, because this demo, which, let's look at, where are you? I apologize, I had these laid out, but then Mission Control is bad at its job. Okay, because this demo actually gets all of its data from Firebase. This component receives a Firebase reference, which we pull in from here, and the Firebase reference gives us, let's see, a new array of measures. Yeah, it gives us this value on state, which contains basically true if a given note at a given time location should be filled and false if it shouldn't, and then we build that all out into a table. This is a very funny component indeed, because it actually blends the visual presentational and the audio presentational parts of this little demo. So there's a table inside of a div, this little input box tells us if we're playing, and then we also have these tone transport, voice, and sample elements, which, if we go look at them, use the same child-context-passing technique to give us access to Tone.js primitives, Tone.js being a library that sits on top of Web Audio and lets you declaratively describe a musical score. So here we're saying our score is going to move at 200 BPM; it starts off not playing, but that'll change when this input box gets checked; and we have only a single voice, and we have a sample everywhere the sample is true. And I don't see where I'm making that check, but it works, so I'm sort of assured that I am, somewhere. And again, if we go look at these components, we see a very similar pattern, where we're passing down data through context, and actually primarily passing down callbacks through context, so that, for example, the voice is able to schedule its notes on the transport by calling, say, tx.schedule, and then a note is able to schedule itself on the voice that contains it by calling voice.schedule. All right.

And so one of the things I tell my students is that React is a confluence of three things. We have the syntax, which is JSX; we have the semantics, which are the lifecycle methods, the idea that you're gonna get componentWillMount, componentDidMount, render, and all of the state management calls; and then finally the implementation, which is the reconciler. So something that I hoped I would have time to talk about here, but did not in the final analysis, was writing a new reconciler, which seems kind of like a good idea.
Like when I first thought about making a 3D binding for React, I was like, oh, clearly the thing to do is to not touch the DOM at all and instead just write a reconciler, something that basically does what ReactDOM.render does, but it's React3D.render. I think that might be a good idea if you work at Facebook, for example, or it might be a good idea if you really wanna get into React's internals. It turns out to be kind of a pain, though, because when you write your own reconciler, you're basically committing to either digging into React's internals and using their lifecycle management, or writing your own lifecycle management for React components, and that's just a very easy thing to get wrong. So changing the reconciler is not something I recommend, but changing the syntax is something that is actually relatively easy, and you can do it quite fruitfully.

So the last thing I wanna show you is this presentation itself, which is right here, and this is JSX, no it's not, this is React, and it's a variant syntax for JSX, the purpose of which is to make it easy to escape code snippets. You can sort of tell by looking at it that it's indentation-based, and it has these little tags on top of an indented block that say what JSX tag that block should be. By doing things this way, rather than by using JSX's interpolation, we make it very easy to add a code segment somewhere in here that includes JSX tags. In markdown land, I have found that the markdown parser tends to trip all over itself once you have, perhaps not this level of complexity, but once you have this inside of an HTML tag inside of a JSX tag, everything goes to hell. So in order to solve that problem, it turned out that the absolute easiest way of doing it was to get into the syntax layer of React and come up with a different JSX syntax, which in this case is called many minor matters: a mechanism for maintainably managing many markdowns. That is it. I'm happy to take questions, and I am very grateful to be here.

Thank you so much for that gorgeous talk, Ashi. I remember you said "this works, so we're going with it." It's like the graphics designer motto, and it's also my personal motto, so I really identify with that.

It's really all of computer science, I think. It's good enough, it's fine.

I have a few questions from the audience. One of the trickiest things in computer graphics is vendor-specific driver bugs. Would WebGL add browser-specific quirks on top of that?

Yes, absolutely. But it also takes care of some vendor-specific bugs. If you go read the Blink discussions on implementing WebGL, they're doing a lot of work to smooth over driver-specific bugs, or at the very least to blacklist drivers and be like, we're not even gonna try here, so that you don't get a browser crash. On the other hand, on top of that, they probably implement some things differently. I've generally found that WebGL compatibility between browsers is actually not so bad. Compatibility between mobile and desktop is, of course, a nightmare, as you would expect.

Where can we find the code that you showed us? Is it open source anywhere?

All of the code I showed you is on my GitHub. In fact, I think it's in a repo called Blast.

Fantastic. One of our attendees was wondering if you considered wrapping Babylon.js instead of three.js. If not, why not?

I considered wrapping a bunch of stuff. I first started with stackgl, because stackgl is not a particular library.
It's a coalition of npm packages that work together, and I ended up not doing that because it uses a loader that doesn't work anymore under Webpack, and I couldn't figure that one out, or I didn't want to try. And then I didn't try Babylon, but I would like to. The reason I picked three.js is because I've used it before and everyone kind of uses it.

What's some advice you share with your students when working with these technologies?

The biggest piece of advice that I try to give them is not even so much advice as a general attitude. I don't know if you can tell, when I'm standing up here and being like, oh yeah, that didn't work, this is terrible. You have to accept that everything is going to be slightly broken, especially when you're building stuff. We're all builders. We're all working at the forefront of new creation, of new things happening. And it turns out that while that sounds exciting, it is often an experience of things just not working, or not being nice, or not being the sort of cushy environment that we've come to expect from all the Apple products that we use all the time, for example. And so becoming very accepting of that jankiness, and having a sense of humor about it, is something I try to give them.

Well said. On that note, thank you so much, Ashi. Let's give her another round of applause. Thank you, everyone.