Let me just get my clock started here. So today we're going to talk about WebGL, GPUs, and math. There's a little thing in parentheses there that will become more obvious as we go on; get rid of that. So this is my website, in case you haven't seen it. It's generally known as the website with that header. People don't actually scroll down that much, which is a shame, but that's life. But it's made in WebGL. What's really cool about this effect is that it's kind of its own progress bar: as the effect animates, it's still doing a lot of the computation. That's why it's so instant: it doesn't actually do any work before animating, or barely any. So that's a nice trick. I was playing with this concept of streaming graphics into a GPU and found that it was really fast, because this object has something like 45,000 triangles in it and it runs fine. This is a modern machine, but even on an older machine this runs pretty well. So I was really happy with that and wanted to see where I could take it.

Now, I've been working on mathematical visualization, and there's a quote I ran into that I think is really inspirational and says a lot about what I'm trying to do. It comes from a mathematician named William Thurston, who died not so long ago, so I'd like to read it. He said: "I think a lot of mathematics is really about how you understand things in your head. It's people that do mathematics. We're not just general-purpose machines, we're people. We see things, we feel things, we think of things. A lot of what I've done in my mathematical career has had to do with finding new ways to build models, to see things, do computations, really get a feel for stuff. It may seem unimportant, but when I started out, people drew pictures one way, and I started drawing them a different way. There's something significant about how the representation in your head profoundly changes how you think. It's very hard to do a brain dump, very hard to do that. But I'm still going to try to do something to give a feel for this kind of stuff. Words are one thing. We can talk about geometric structures. There are many precise mathematical words that could be used, but they don't automatically convey a feeling for it. I probably can't convey a feeling for it either, but I want to try." And that is pretty much what I want to do too: to try and convey a feeling for things that are difficult and hard to think about.

So let's begin. We're going to start with pixels. It's not a strange concept. And oh, there's a little loading thing; never mind. Pixels. We all know pixels. They live on a grid, and there's an X and a Y coordinate system. Oddly enough, in WebGL, and in mathematics in general, Y points up. It's annoying that there are different conventions, but you just have to deal with it. Pixels have been around since the early '80s, and we've had lots of fun little algorithms to deal with them, like the Bresenham line-drawing algorithm, which you can imagine is just a simple for loop to fill in a couple of pixels and set the right color. Fat lines, vertical lines, diagonal lines: you can do those too. But you quickly discover that it isn't just about filling in pixels, it's about doing operations with pixels, like blending them, in "true color" as it was called initially. And this isn't going to surprise anybody, but colors are made of red, green, and blue primaries, which comes from the human visual system, right?
We have red, green, and blue cones in our eyes, and they're sensitive to specific wavelengths. That's how the colors get mixed and created. Now, on the inside, those pixels are numbers, usually eight-bit numbers from 0 to 255, one each for the red, green, blue, and transparency channels, and those create the colors. It's not that I think people in this room don't know this, but again, it's about conveying a feeling: a feeling of what happens when all these pixels are being animated in real time with the linear interpolation operator, which I don't have to explain because Courtney did that in her talk. So when you're changing a value in CSS, or changing a transparency in Photoshop or something, you're really directing thousands of pixels with a single slider. I think it's important to point out that this is what's actually going on.

So human color vision is additive. The confusion about primary colors stems from the difference between additive and subtractive color: some people say the primary colors are red, yellow, and blue, but actually it's cyan, magenta, and yellow. That's because with ink on paper, the more you add, the darker it gets, while with light, the more you add, the lighter it gets, so they're just opposites. That's what's going on there. So that's how human color vision works. But the RGB model that we use, sRGB (standard RGB), is actually not linear. That means 50% gray is actually not half as bright as white. It just isn't; there's a curve there. So you have to be aware of that, especially when you're doing computations with these numbers.

All of this is part of the general practice of rasterization: drawing things on a grid of pixels, like, say, a triangle. And again, this is decades-old stuff by now. But there's a problem. If you try to rasterize a triangle like this, it kind of skips around a little bit. The reason is that if you work with a pixel grid, it's natural to define your coordinates as the pixels, but that means you can only move things in integer steps, which is obviously not good for smooth vector graphics. So for that we invented something called subpixel accuracy, where the shape can be anywhere, and the goal is to figure out which pixels should be filled in and which shouldn't. That seems maybe difficult, but it's actually not that hard, because we use the principle of sampling: the color of a pixel is defined by the point exactly in the middle. If that point is inside the triangle, the pixel is red; otherwise it's white. So that's a very simple principle, and it seems like it solves a lot of problems.

And that means there are really two worlds, right? On the left-hand side, you've got the vector world, where everything is mathematical, precise, exact. You're working with shapes and algebra and geometry. On the right-hand side, there's the raster world, the pixels that have been sampled, and that's sort of a one-way final step. When you're doing computer graphics, you want to work as much as possible with the idealized representation on the left, so that you can ensure the quality of the graphics and all those things. Once you've turned something into samples, it's a grid of things. And the fact that we're drawing every sample as a square pixel is in itself a choice; we call that the nearest-neighbor filter. That's not that strange. But there are many alternatives.
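Before moving on to those alternatives, here's that earlier sRGB point in code form: a minimal sketch with my own helper names, using the common gamma-2.2 shorthand rather than the exact sRGB curve.

```javascript
// Rough sRGB <-> linear conversion. (Gamma 2.2 is a common shorthand;
// the real sRGB curve also has a small linear segment near black.)
const toLinear = (byte) => Math.pow(byte / 255, 2.2);
const toByte   = (linear) => Math.round(Math.pow(linear, 1 / 2.2) * 255);

// Naive 50/50 blend of white (255) and black (0), done directly on the bytes:
const naive = (255 + 0) / 2;                                // 128

// Gamma-aware blend: convert to linear light, average, convert back:
const correct = toByte((toLinear(255) + toLinear(0)) / 2);  // about 186

// 128 is what you get if you ignore the curve; 186 is the value that is
// actually half as bright as white. The gap shows up in every blend, fade,
// and antialiased edge computed on raw sRGB values.
```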
One alternative, for example, is to use gradients horizontally and vertically like this, which is called the bilinear filter. It looks a little bit better, because you don't get the sharp jaggies on the pixel edges, but it still doesn't look smooth. It's still skipping around a bit. So how do we fix that? Well, the problem is that in choosing to work with samples, we've kind of ignored the entire pixel. The fact that the shape is moving to cover a pixel and then uncovering it again is not taken into account. You could do that analytically, computing the actual coverage of the pixel by the shape, but that's a lot of work, and it doesn't work very well when, say, you have a million triangles, because you have to process every one of them individually.

So what we do instead is supersampling, where you put multiple samples inside a pixel, usually in this sort of rotated grid arrangement. That arrangement has some interesting properties: if a horizontal or a vertical line sweeps across, it's going to trigger each of those samples individually rather than several at the same time, like it would if they were all in a row. So that ensures you get smooth stair steps. Even just four-times supersampling gets rid of a lot of the objectionable rendering artifacts, because you get four levels of transparency (or sorry, three levels of transparency) between solid and completely invisible. But now you're doing four times as much work, which is generally not a good idea: if you're rendering four samples per pixel anyway, why not just render twice as wide and twice as high, at way higher resolution? So in practice we use multi-sampling, where we selectively apply supersampling to certain pixels, generally the ones on the edges, to make sure the objectionable jaggies are gone and mostly invisible. In the middle, where everything's solid anyway, we don't really need the supersampling; on the outside, we don't really need it either.

So again, it seems like we've solved a lot of problems and everything's going great. But no, it's still not good, because there's something really fundamental that relates sampling to mathematics, and that's the sampling theorem. If you go and study signal processing and deal with Fourier transforms, this is the bane of your existence. In this case, for example, I've got a sine wave that generates these patterns of white to black, white to black, and the pixels are sampling that color. It looks pretty good, right? It's moving from right to left, and the pixel representation looks like the curve, just sampled very roughly. Unfortunately, if I squeeze the frequency, there's a certain critical point here, which is the Nyquist frequency, where if you look at the curve, it's going right to left, but if you look at the pixels, they're not moving anymore. They're just alternating in place, up and down, up and down. You can't tell if it's going left or right anymore. And what's worse, if you keep going beyond this frequency, for example to the exact sampling frequency, now it goes back to DC, direct current (that's the electrical engineering heritage of this lingo), where all the pixels are the same and there are no bands, no more wave. It's completely wrong. But you can see why this happens, right? It's because the sampling is only looking at individual places, not at the whole.
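To put rough numbers on that, here is a tiny sketch of sampling a sine wave at one sample per unit. The values are picked purely for illustration; they aren't taken from the demo.

```javascript
// One sample per unit: the Nyquist limit is 0.5 cycles per sample.
const sample = (freq, n) => Math.sin(2 * Math.PI * freq * n);

// A wave at 0.9 cycles per sample produces exactly the same samples as a
// 0.1-cycle wave running in the opposite direction (note the minus sign):
for (let n = 0; n < 5; n++) {
  console.log(sample(0.9, n).toFixed(3), (-sample(0.1, n)).toFixed(3));
}

// And at exactly the sampling frequency, every sample lands on the same
// point of the wave: pure DC, no bands left at all.
console.log([0, 1, 2, 3].map((n) => sample(1.0, n).toFixed(3)));  // all ~0
```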
Now if we go beyond that, it gets worse, because now it's moving from left to right, the complete opposite of what we're trying to achieve, and the frequency is wrong too. It's just completely wrong. And if you move back to below the Nyquist frequency, in theory this works pretty well, but you're already starting to see this sort of left-to-right effect that is objectionable and going to be noticeable. So if you really want to avoid all sampling artifacts, you need to sample much lower, like this for example, a quarter of your resolution, to ensure that everything is going to look good. This is kind of why retina displays became so popular and made things look so much better: suddenly we have twice the amount of pixels, and all those effects we get when things aren't pixel-aligned kind of become irrelevant.

So the general problem of aliasing creates what are called moiré patterns, where, because I'm applying this pattern on a plane, the further into the distance you go, the more it eventually goes completely wrong. If you want to see that up close, I can show you: you can see there's a line of pixels that's moving in the wrong direction, right? This is the complete visualization of aliasing along many different frequencies, effectively. And this is already bad, but if we move the camera it gets worse. You get these really crazy patterns that keep shifting; they're distracting and look completely wrong. To fix that, you would actually need hundreds and hundreds of samples for some of these pixels, because one tiny pixel now covers a massive area in the distance in this 3D world. So in practice what you do is downsize it ahead of time: you make copies at quarter size, eighth size, sixteenth size, and you select the right one so you only need to take one sample, or maybe two samples, one from each level, and blend between them.

Another topic that's really important when you're sampling, especially when you're rendering 3D things, is what goes in front of what. Assuming we can make shapes look good, there's still the problem of what goes first, what goes in front. It might seem like all you need to do is sort your layers back to front, right? The red triangle goes on top because it's in front of the blue one. But that's not always the case, because shapes can intersect, and when shapes intersect there's no correct order to draw them in. You need something else. So for that we have a depth buffer, or Z buffer as it's called (or zed buffer, that's my Canadian giveaway), where we record the depth with every pixel. When you draw something, you can just check: is this closer or further away than what's already there? So in the case of these triangles, if I start rotating them in 3D, they can poke through each other on a per-pixel basis, and the depth is always resolved correctly.

So once again it seems like we've solved all these problems. No, things still aren't quite right, because this principle only works for solid things. If you have transparent objects, like in this case, they look solid from one side and transparent from the other, because the order in which they're being drawn isn't changing. We're expecting the Z buffer to fix this, but it can't, because once a pixel is there, it's considered solid. So when you're drawing transparent things, you have to manually sort them or apply other techniques.
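As a rough sketch of what that manual sorting looks like in practice: the mesh lists, drawMesh, distance, and cameraPosition here are hypothetical stand-ins, and only the gl calls are real WebGL.

```javascript
// Opaque pass: draw in any order; the depth buffer resolves it per pixel.
gl.enable(gl.DEPTH_TEST);
gl.depthMask(true);
for (const mesh of opaqueMeshes) drawMesh(mesh);

// Transparent pass: sort back-to-front by distance to the camera, keep the
// depth *test* so solid things still occlude, but stop writing depth, and blend.
const backToFront = transparentMeshes.slice().sort(
  (a, b) => distance(b.position, cameraPosition) -
            distance(a.position, cameraPosition)
);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.depthMask(false);
for (const mesh of backToFront) drawMesh(mesh);
gl.depthMask(true);
```

Even then, per-object sorting still breaks down when transparent shapes intersect, just like the triangles did, which is where the fancier techniques come in.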
And this is relevant for my kind of work because I'm trying to do everything on the GPU, so I can't sort; I have to deal with this.

So let's talk a little bit more about the actual GPU, the graphics processing unit that exists in your phone and your laptop and everything, and about what's actually going on here. Mathbox is a tool that I developed a while ago to do mathematical visualization, GPU-driven, in a browser. It had pre-built shaders that enabled you to do certain kinds of visualizations, like a polar coordinate visualization. The emphasis was not just on rendering things but on connecting them, being able to transition between different states, different shapes, but you still just had a couple of Legos to play with, and that was it. What I've been working on since is a way to enhance this so anything is possible, where I can just add a transform to completely mess this up and everything just works.

Now, what's going on here is that this effect has two parameters. One is time. It's very simple: time is just increasing, and I'm wrapping it around the graph just to show that, but it just keeps going up. That again seems quite obvious, but if time stops, nothing moves; time should go on. This kind of playing with time is one of those things that I like to do, because I think time as a dimension is a very interesting quality to play with. There's another parameter, which is the intensity, and we're going to start animating that a little bit: it goes all the way down, you see it's just a grid, and now we wait a little bit and it's going to shoot back up and get messed up. So that's fun. You've got these two parameters that can just be animated, and they control the whole effect.

And this is the code for that. This is the GLSL code that I'm injecting. I'm not really going to go into much detail, because it really is arbitrary: just a random mishmash of numbers and sine functions and some swizzles (that's the .yzwx, where I'm swapping channels and stuff). So it's just a random thing. But what matters is that this is 120,000 vertices being animated at 60 frames per second, so this one piece of GLSL code that I injected is being called 7.2 million times per second. It's not a problem.

What this really is, is a mathematical function that takes X, Y, and Z coordinates, plus the time and intensity I just mentioned, and produces a new point. In doing so, you're really distorting the entire space: if I put a curve in here, for example, that curve is going to shift, but it's really the entire fabric of the space that is warping continuously. So this is what you can do with mathematical functions. It explains, or at least hints at, how the theory of general relativity, for example, talks about how space curves, and how these things are modeled using bizarrely simple formulas. Because when you apply functions, you have the power to warp spaces and manifolds exactly like this.

And what's weirder is, if you put a surface in here (I've turned the intensity up all the way just because that's fun), it isn't just being distorted. The points aren't just being moved by the transformation; the shading is changing too. And that should be kind of strange. It makes sense in real life, but when you think about it from a purely computer graphics point of view, it's annoying that you can't distort things without having to relight them. So why is that happening?
Well, in order to light something we need something called the surface normal. A surface normal is quite simply an arrow that points up out of the surface. So at every point on the surface we need an arrow that points straight up out of it, and because the surface is warping, those directions are continuously changing. The lighting model being applied is very simple: if the normal points straight at the light source, it's brightest; if it points a little bit away, it's a little bit darker; and if it points 90 degrees or more away from the light source, it doesn't get any light, because it's assumed to be in shadow.

So how do we get these normals? Because again, the shape is being distorted in real time and there's a lot of stuff happening. Well, we need to see how it's constructed, and this gets into what's called UV mapping. UV is just a convention for coordinates, like X and Y; UV is just a different X and Y. We have a surface that is parametrized in two directions, U and V, and that's what those grid lines represent. That surface is being warped by the warp function we had previously, and out comes a point, P: the point on the surface.

To calculate normals, we need to look at the tangents. This is getting into calculus, where we take a partial derivative, and people's eyes are going to start glazing over. But the thing is, this isn't that hard. What you're doing is drawing an arrow between two neighboring points on the surface. That's one little grid square, and I've stretched out the arrow a little bit so you can see it better. That's all it's doing: connecting two neighboring grid points with an arrow and using that as the tangent direction. That's the epsilon in the formula, the step. In calculus, people like to take infinitely small steps, but in practice finitely small ones are just fine, mostly. We only have 256 color levels anyway; why would you need more? So those are the tangents in the U direction. We can do the same thing in the other direction with V, and that gives you this sort of metric of what's going on at every point.

Using these two vectors, which are just arrows, a direction in space with a certain length, we can apply operations like the cross product. The colors are very nice and there's a structure there that I won't go into too much, but it's a way of crossing two arrows to get a third one that's always perpendicular. That's where those normals come from: always at a 90-degree angle to the U and the V vector. And those are being used to light the surface. So when the surface first appeared on the screen, this was already happening. This is what it's doing on the inside in order to make it look good.

And now I've piled on even more visualization to visualize how it's visualizing, which is where this stuff gets really meta. There's a new shader here, because in order to calculate the normal we need this little footprint, right? Those three arrows. Which means we're actually warping points three times per vertex; on average it comes out to about 2.5, because some of the surfaces aren't lit, so they don't need normals. In total, this piece of code, warp vertex, is now actually being called 127 million times per second, including all the extra visualization that's going on. So that's quite a lot.
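For reference, the tangent-and-cross-product recipe from a minute ago looks something like this in code. It's a sketch, assuming a hypothetical warp(u, v) that returns the warped 3D position for surface parameters (u, v).

```javascript
// Tiny vector helpers on [x, y, z] arrays.
const sub       = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot       = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross     = (a, b) => [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]];
const normalize = (v) => {
  const l = Math.hypot(v[0], v[1], v[2]);
  return v.map((x) => x / l);
};

// Finite-difference normal: step a small epsilon in U and in V, build the two
// tangent arrows to those neighboring points, and cross them.
function surfaceNormal(warp, u, v, eps = 1e-3) {
  const p  = warp(u, v);
  const tu = sub(warp(u + eps, v), p);   // tangent in the U direction
  const tv = sub(warp(u, v + eps), p);   // tangent in the V direction
  return normalize(cross(tu, tv));       // always at 90 degrees to both
}

// The simple lighting model: brightest facing the light, darker as it turns
// away, and no light at all past 90 degrees.
function shade(normal, lightDir) {
  return Math.max(0, dot(normal, normalize(lightDir)));
}
```

That's the whole trick: finite steps instead of infinitely small ones.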
And we can do some more stuff here. If, instead of a normal, we look at the original up direction and distort that as well, it's no longer going to point up, right? It's just going to point in a completely random direction. Those three vectors basically tell you how the space is shaped and distorted. This explains how angles change, why the lighting changes: the angles of the space itself are going off in different directions, and it's no longer straight. This is called a Jacobian matrix. It's usually something that scares students and makes no sense to them, but I don't think the concept is that hard. You can look at it in action and sort of understand that when you apply an arbitrary transformation, your directions shift and they're no longer straight. That's the basic principle of what's going on.

Just a little side note about algebra: there's a three-dimensional matrix on the left. That's just a set of three arrows; that's all it is. On the right, there's a four-dimensional matrix, which is a set of four four-dimensional arrows, which we somehow use in computer graphics. What's actually going on there is that we've invented a new dimension as a sort of formalism to enable us to move things around, not just shift the directions in place, but also move stuff around. It's confusing, but it really isn't that hard; you just get used to it. Just don't be scared by the words "four dimensional". It's not that bad.

However, you've been had if you were impressed by my numbers, because calling something 100 million times per second is not that surprising in a world of 1080p displays. That's roughly 2 million pixels on a 1080p screen, and at 60 frames per second, just filling them all in once, touching each one once, is already going to be 124 million calls per second. So on your fancy hardware-accelerated desktop, with overlapping windows and shadows, you're looking at hundreds of millions of pixels per second being processed. And this is just always going on in the background.

Now, specifically, what was just happening here is this transition effect, right? That's done in the pixel shader. So let's actually go look at the code, because I've been showing you a lot of things, but this is a graph of simple functions. This returns color and opacity as RGBA, the color coming from a three-vector, with the opacity passed in separately because I want to animate those separately. There's also a mask being calculated from a per-vertex value that was set. Those are combined with a multiplication, and out comes the color that is written to GL. If you're familiar with GLSL, this shouldn't be that surprising. If you aren't, and you're not that familiar with C-type languages, this might seem kind of strange, but again, don't be afraid of it. It really isn't that hard.

The point of Mathbox, though, is that writing GLSL for everything is kind of annoying, because every GLSL program has to be written top to bottom. It has to be a main function, basically: no inputs, no outputs. There are tools like glslify that, let's say, browserify GLSL for JavaScript. It allows you to include other code, but it's still all statically defined. You're writing a program ahead of time to do one thing. Whereas what I want to do is compose shaders dynamically, so I don't have to think about the whole. I can just think about tiny pieces, like one transform that I want to slot in.
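Just to make that contrast concrete, here is a toy illustration (not how Mathbox actually does it): you write only the transform snippet, and some tool splices it into a complete vertex shader.

```javascript
// The only piece an author would write: one transform, as a GLSL function.
// (This one treats X as an angle and Y as a radius, a rough polar warp.)
const polarSnippet = `
  vec3 transform(vec3 p) {
    return vec3(p.y * cos(p.x), p.y * sin(p.x), p.z);
  }
`;

// A toy "composer" that wraps any such snippet into a full vertex shader.
// Real tools link many snippets properly instead of concatenating strings.
function composeVertexShader(snippet) {
  return `
    attribute vec3 position;
    uniform mat4 modelViewMatrix;
    uniform mat4 projectionMatrix;
    ${snippet}
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix *
                    vec4(transform(position), 1.0);
    }
  `;
}
```

A real system has to link many of those pieces together properly, which is what the graph coming up shows.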
And so that's the thing I've been working on and exploring: how to combine shaders dynamically. There we go. This is the vertex shader for one of the objects from before; I believe it's the arrows, the vectors. It's quite a lot more complicated. It's a functional graph where this part takes care of the masking and writes out the vMask value that the pixel shader was using, and this thing here at the bottom calculates the transformation. It starts off with two sets of coordinates, xyz and stpq, adds some padding, and does a resample. And here's the code that I injected. This is the only piece of code that I wrote myself to make this effect work; everything else comes from Mathbox, so to speak. So this is the polar transformation that does the circular polar warping, then the camera view, and closing off the chain. That then gets called by a sampler that leads to the arrow position, which orients the little arrow cones, and finally you get the final position. If you had to do all of this by yourself for every single 3D thing you had to render, that would be way too much work. And that's what this entire project was about: enabling composition on a GPU beyond statically defined things. Out of my frame again; there we go.

So the hardware that we have these days, especially with VR, as Jomay was talking about, is incredibly powerful. You've got the NVIDIA GTX 980: it has 336 gigabytes per second of memory bandwidth, it famously has 5 billion transistors, but the more interesting number, I find, is the 5 teraflops. Because if you actually go and look, this is the IBM Blue Gene/P (the picture's not 100% correct, my fault), and in 2007, one of these cabinets had 5 teraflops. Today, that's just one of these puppies. And you get numbers like 3.3 trillion bits per second. That's not an exaggeration at all; that's the actual capability of these things. It also means that your fancy smartphone is probably equivalent to a Dell box from the mid-2000s. Not as fancy.

And this stuff is what generally drives video games, right? Video game graphics. This is an example from Alien: Isolation. I really like this game, partly because it's gorgeous: not just the styling and the visual design, but the rendering is really, really good. The lighting is incredibly well done. You get these images that seem near photo-realistic; it's hard to tell if this is a picture from a set or a picture from a video game. It's actually from a video game. And that's a scary alien that you have to deal with. Two more examples are included because it's kind of interesting: this is Crysis 3, from the previous generation of consoles, and you can sort of see there's still a lot of fog in the background. Any time you see that in a video game, you know it's because they wanted to draw more but couldn't, because it was too slow. Whereas nowadays on the PlayStation 4, for example, that problem has gone away; you effectively get infinite draw distance. That's the difference between one generation of consoles and the next.

So there's an incredible amount of computational power here, which means supercomputing has kind of just become background radiation. It's everywhere: from the displays in your pockets to the laptops, to the consoles, to the TVs, they all have to do incredible amounts of computation to make this stuff work, and we're kind of blissfully unaware.
So finally, I want to quickly talk a little bit about practical stuff with WebGL, things I learned from this. The first is: pre-allocate as much as possible and avoid garbage collection. And, trigger warning, CoffeeScript, because it's a two-year-old code base and that was the state of things then. You want to allocate your buffers and your objects and your state ahead of time and only change values. You don't want to allocate new things, because doing that 120 million times per second is going to make Chrome's V8 cry.

Another thing that's a problem is text rendering in WebGL. This is the big problem that everybody deals with, and the good solution seems to be something called signed distance fields, where you create an image like the one at the top and then dynamically apply a contrast, like the Curves tool in Photoshop, to actually make it crisp for a certain size. So based on the image at the top, you can render text at a relatively wide range of sizes. You can also use this to do outlining, which is very useful for rendering labels in 3D on top of other things. Unfortunately, there's no way to generate a signed distance field easily in a browser, so I've had to come up with a technique to fake it with Canvas, which comes down to drawing progressively smaller outlines with strokeText and then doing a post-processing effect on that. It's a lot of work.

The other thing is that little loading spinner that I put in, which I don't like; I wish it was faster. Unfortunately, when you create a shader in WebGL, you have to compile it synchronously, and as you've seen with the graphs that I create, you can imagine all that code combined is quite big. It takes a little while to compile. It's not the end of the world, but it is annoying, so I have to deal with that.

A thing that is ridiculous: if you want to read floating-point numbers back from the GPU with WebGL, you can't. You can only read back bytes. So if you want to read back floating point, what you do is make a shader that encodes a floating-point number as an RGBA value. Luckily I didn't have to come up with this; somebody else already did the work, because IEEE floats are not your friend. This is the kind of ridiculousness you have to deal with sometimes.

Another problem is all this multi-sampling stuff I was talking about. You can use it, but you can't do anything with a multi-sampled result; you only get the final resolved output. So if you want to do post-processing effects, like cinematic effects, you kind of have to give up multi-sampling. There's also order-independent transparency, a relatively recent technique that solves the Z-buffer problem where transparent things aren't layered correctly, but you can't use it without rendering in two passes because, again, WebGL lacks certain capabilities. So that's, again, annoying.

Now, there's a website called WebGL Stats, by Florian Bösch, one of the people who's very active on the WebGL mailing lists, that tracks the actual support for this stuff. So despite the fact that I'm painting a bleak picture, things are getting better, and you have really good information on what you can and cannot use in certain situations. That's great; this site is a public treasure and does not get enough recognition. And another problem that Florian in particular has worked on is that live performance scaling with WebGL is quite difficult, because all you get is a requestAnimationFrame callback.
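About the only signal you get is the timestamp that requestAnimationFrame hands you, so a scaling loop ends up looking something like this. It's a sketch, with a made-up quality knob and an assumed render function.

```javascript
let lastTime = 0;
let quality  = 1.0;   // made-up knob: resolution scale, sample counts, etc.

function frame(time) {
  if (lastTime) {
    const delta = time - lastTime;  // milliseconds since the previous callback
    // If frames take too long, back off. But once you're hitting 60fps, delta
    // just reads ~16.7ms forever; it never tells you how much slack is left.
    if (delta > 20) quality = Math.max(0.25, quality - 0.05);
  }
  lastTime = time;

  render(quality);                  // assumed: your own render function
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```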
You can see when it's taking you too long to render, so you can scale down progressively. But once you're rendering at 60 frames per second, there's no way to know how much slack you have. Could you render twice as much? You don't know, because the requestAnimationFrame clock is just going to keep ticking. So actually making WebGL content that works well on a low-powered smartphone as well as a fancy laptop is really difficult. You kind of just have to aim low and scale up very conservatively.

So that's sort of the question of what WebGL is. Is it good? Is it bad? It's really good; I will say that. It's just that the problem wasn't only how to bring computer graphics into the browser. WebGL was the first time they actually took an existing API and tried to make it work on the web, and the reason there are these odd gaps here and there is that they had to invent a whole security model to make this stuff work, and that just didn't exist before. If you're playing a video game on your laptop as a native application, it's talking directly to the driver, and you can be sure there are bugs that a developer worked around that would be considered a security issue in a browser. So a lot of the work in making this stuff work was work about the work. That's why we ended up with what we have today. A good question that was raised this week (I forget who said it) was: where would WebGL be without Three.js? I think this is a very important observation. Three.js is one of the main drivers that made WebGL accessible.

And luckily, things are improving. There's work being done on WebGL 2, which will be an improvement that adds a couple of things. It still doesn't quite match what you get as a native developer, but there are definitely improvements. The other thing people are talking about is Vulkan, which is sort of the next-generation OpenGL, and which will maybe have a Web Vulkan at some point. This is basically the open version of APIs like Apple's Metal, where they want a graphics API that is more multi-core, more parallelizable, more natively threaded; it's basically what they already use when they program game consoles. The situation on a game console like the PS4 is that you have luxurious access to the hardware. You have one machine that you're targeting. It's kind of blissful compared to the stuff we have to do on the web. But, you know, reach versus accessibility is always a problem.

The thing I also want to bring up, which I find really interesting, is that there's work being done on something called SPIR, which is based on LLVM. This is in the same realm as things like asm.js and cross-compiling, and it might enable packaging shaders as binary modules that don't need to be compiled, but can just be handed straight to the driver and translated to the hardware's own instruction format really quickly. So we definitely don't have to compose shaders as source code forever; it's just that right now that seems to be the best way to do it.

Now, time-wise I'm pretty much at the end, so I'm going to leave it at that. I haven't released this code base yet. I will be putting the slides online and talking more about this stuff as I write the docs and explain it; there really isn't more right now than what I've shown you. So thank you to everyone who made WebGL happen. There are some people there on the mailing list, too many to name, too many to thank,
who did a lot of work to bring this to life, and it's what enables what you saw today. Thank you very much for listening.