As they said, we're going to talk about WebGL and Three.js, and this is something I'm really excited about. In Doug's talk earlier, he talked a little bit about the way that the web is evolving and the way that the web needs to change, and as far as I can see, technologies like WebGL are going to be a really big part of that. Another one of the speakers actually mentioned people doing DOM rendering in Canvas: giving you an abstracted DOM API that renders via WebGL on Canvas to improve performance. Doing direct pixel painting, which is what we're talking about here, is, I think, going to be a huge part of the future of the web. Again, the DOM, HTML, CSS: they weren't designed for building apps. And for building games, and I'll show some examples in a minute, it's incredible the kind of power that these technologies give us. So hopefully by the end of this, you'll have been bitten by the WebGL bug and you'll want to go away and learn as much as you can. If you want, reach out to me on Twitter, or reach out any way you like. If you have any questions after this, I'm happy to help you with any problems you run into, or any hard-to-fix bugs, and there are lots of them, as you'll see in a moment. I'm happy to actually sit down and talk you through these things. But first of all, a little bit about me. That's me on Twitter. I judge my worth as a human being by the number of people that follow me on Twitter, so if you want to follow me, it will help my ego a lot and I'd really appreciate it. Everything I'm going to share here is going to be available on GitHub. Unfortunately, that link to my website isn't actually live at the moment, annoyingly. I tried to fix it just now, but the Wi-Fi dropped out on me. But this evening, if you go to that link, I'll tweet about it and get the conference folks to retweet it so you can find it. All of the content will be there.
And there will also be a link to where it's on GitHub as well. But yeah, I'm on Twitter and on GitHub. Please follow me; my ego would really like it. Again, this is some of the weird stuff I was talking about. I work at McKinney Digital Labs, who are also sponsoring. Probably lots of the people organising this, and lots of people here, also work at McKinney Digital Labs; we've taken it over as much as we can. It's awesome: I get to do really cool stuff like this all day. As many people have said, we're always looking for awesome people, so if you want to come and hack on really cool stuff, MDL is a great place to do it. Talk to me about it, talk to Amit, talk to Sati, who's outside, who leads MDL worldwide and will be doing the keynote tomorrow. But fundamentally, we're here to talk about WebGL. So what I want to do is talk you through a little bit of the history of WebGL, the history of graphics programming on the web, and then give a general introduction to graphics programming, because there are some concepts here which are a little bit different from the regular day-to-day of building web applications, even complex web applications. So first of all, a little bit of crowd participation: can you put your hand up if you've ever written any graphics code? OpenGL, DirectX, that kind of thing. Awesome, awesome. I'm going to rely on you people, too; if anyone has any questions, ask those people. One of the reasons I ask is because there is, unfortunately, quite a steep learning curve when it comes to graphics programming. Hopefully we can make that a little bit shallower today and help you up that learning curve. But there are going to be some things that we're simply not going to have time to talk about, and we'll come to those in a minute.
But as I said, if you want any more information afterwards, please don't hesitate to reach out on Twitter or grab me afterwards; I'm happy to talk you through it in more detail. Fundamentally, WebGL is what we're here to talk about. This is the formal definition. I'm not going to read it out; you won't get any value from me doing that. But fundamentally, it's a way of rendering 2D and 3D graphics on the GPU in the browser. That's it in short summation. The key point is that 3D graphics on the web aren't a new thing. I'm not sure if any of you have ever had the horrid, horrid, horrid experience of writing Java applets and JOGL in the late 90s and early 2000s. Java applets generally are a technology that deserves to burn in a horrible, fiery death, but JOGL in particular was an API in the Java virtual machine for running graphics code, OpenGL code, via Java, and you could do that from the web. So about 10, almost 15 years ago, you could do complex 3D graphics on the web. Now, it wasn't good. It wasn't a nice API. It wasn't fun to work with. But it was possible. The kind of thing we're talking about here isn't brand new; these ideas have existed before, but they were bad for several reasons. Flash was a little bit better: Stage 3D was the same kind of thing. Stage 3D allowed you to send OpenGL-style code to the GPU via the Flash sandbox. It was a nasty API, but you could do it, and it worked relatively well. Well, I say that; it worked okay. But as I said, these things were hard to use, they were sandboxed, and they weren't open, which was one of the big problems. They were all stuck inside these impenetrable virtual machines, whether the JVM or proprietary stacks like Flash. They weren't nice to work with, they needed plug-ins, and they didn't run natively in the browser. All of the reasons Flash and Java applets are not good apply here. I'm not telling you anything you don't know already.
But then a guy called Vladimir Vukićević, and I'm sure I'm mispronouncing that, created Canvas 3D in 2006. He worked at Mozilla, so it was in Firefox originally. It was a full OpenGL ES 1.1 implementation, which was the standard; we'll talk a little bit about these standards in a minute. He implemented it as his vision of the future of 3D graphics on the web. Now, this was amazing. It's no insignificant engineering feat to do this, and he deserves a huge amount of credit. The awesome thing was that we had an API for the web that allowed you to do 3D graphics. But there was a fundamental problem: OpenGL as a concept was never really designed to work on the web. If you've ever written OpenGL, you'll know there are some nasty, nasty little funky corners to it. And there were a couple of other, more strategic, problems. The first is that while Vladimir was doing this, Opera had decided they were also going to do something like this, so they did it again themselves, and while both theoretically implemented the same OpenGL spec, there were subtle differences that made it very difficult to run platform-independent code on the two. The second, which I'll talk about in the context of security in a minute, is that OpenGL isn't a web API. It was never designed to be addressed from the browser, from untrusted code. It was always designed to run trusted, signed code in an operating system, with all the operating system checks and verifications that are implicit when you do that. But enter the Khronos Group. The Khronos Group are effectively the standards body, a consortium of companies, behind the OpenGL standard.
And when the browser manufacturers, the browser vendors, started to talk about doing 3D graphics on the web, the Khronos Group were the first people to put their hands up and say: this is an opportunity for us to do something web-specific, to take the lessons that we've learned over 15 or more years of OpenGL and actually build something that is built for the web and is a native citizen of the web. And they did. In 2011, version 1 of the WebGL spec was published. What was really awesome is that right from the very beginning, this was something a lot of browser vendors were very excited about: Safari, Chrome, Firefox, Opera, they all got on board very quickly. But like many things on the web, not so much with IE. And the most shocking thing is that they had a point. One of the fundamental problems with WebGL is that its security model and its execution model are fundamentally different to the rest of the web. I'll explain that in a little more detail in a moment. The other point is that OpenGL is a standard, and WebGL is incredibly similar to OpenGL, very, very similar indeed, so there are all kinds of legacy issues inherited from OpenGL. I'll talk about a couple in a moment that meant that, from a security point of view, it generally wasn't a good idea: things like shared access to memory, and not zeroing buffers.
I'll talk about those in a little while, but fundamentally, we have a little diagram here of what the execution model for WebGL looks like. At number one, you have a shader. I'll talk about what these things are in a minute: vertex and fragment shaders. They're compiled and run inside the browser sandbox. Now, the browser has done a very good job of isolating the untrusted code you run on the web from the rest of your operating system and preventing it from doing nasty things. They've built lots of sandboxes: process sandboxes, tab sandboxes, browser-level sandboxes. And they also operate within all of the user-mode protections in operating systems: process isolation, not being able to jump between pages in memory, all of the things that are inherent in running a regular process in an operating system. But the thing with WebGL, the reason it's so successful, is that you're addressing graphics hardware. You're putting stuff directly onto the metal, or as close to the metal as you can, to get as much rendering performance as you possibly can. And that means that when you put one of these shaders, number one, into the WebGL engine, number two, it actually executes in kernel mode. It doesn't execute in user mode anymore. That is a very fundamental part of the way WebGL executes, and it's a fundamentally different execution model; this never happens with any other kind of browser-addressed code. So you start to run into problems that people have had with OpenGL and graphics programming for a long time, in that graphics drivers are among the most buggy and worst-written pieces of software that you'll ever be unlucky enough to come across. They're the worst example of proprietary systems and systems coding, effectively.
They're buggy, they're out of date, and the people who make them have no financial incentive to maintain them beyond about six months. There's a great statistic from Steam: people who are on Steam, who have an implicit advantage of being the sort of users who keep things current, whatever card they have, only 20% of them are on the newest driver. To put that in terms of the developers in this room, that's like 80% of you still using IE6. So you can imagine the kind of upgrade cycles, update cycles, you're now reliant upon, the nasty bugginess in all of these drivers, and all the possible security exploits that implies. And if you can exploit something in kernel mode, that's a really fundamental exploit: you can write to pretty much any part of the operating system, steal any data you like. But there are a number of things that were done to address this. There's a project called ANGLE: if you ever want to look at really good C++ code, and that isn't a contradiction in terms, it does actually exist, take a look at the ANGLE project that's used in Chromium for WebGL. It's a transpiler, effectively a validation and transformation library. It takes WebGL's OpenGL-style code and dynamically transpiles it in real time into Direct3D code on Windows. It's an incredibly impressive piece of engineering, and it goes a long way towards validating shaders that try to jump out of their security context. That was a change that was made and has become more mature over the last couple of years. But fundamentally, some other things had to change. Things like out-of-range memory access. I'm not going to go through these one by one, but effectively these are things that were features, not bugs, in OpenGL, and were at the heart of a number of optimisations that people had made. For instance, that second one, access to uninitialised memory: OpenGL doesn't zero buffers.
When you allocate a buffer, it doesn't zero what was in that buffer, what was in that piece of memory, that memory page, before. So you can access what was in there. That's a huge problem on the web, right? Because if you've got one tab that has one thing in it and another tab that has another thing in it, and you effectively have shared memory between them, you can steal data and do all kinds of things. So that changed. Denial of service, too: when you first started doing this, denial of service was very easy to do. With OpenGL in the first versions of Canvas 3D, you could DoS yourself quite easily. That's now changed as well. But the fundamental thing is that all of this was fixed, and WebGL is now fundamentally one of the most awesome technologies that exist on the web today. There are some incredibly impressive things you can do in terms of dynamic 3D content, in terms of creating incredibly immersive experiences on the web, and these are just a couple of examples that I've pulled out. All of these things execute natively in the browser. From doing form validation with jQuery five years ago to doing this: it's a different world. Hopefully today we'll start you on the journey into this world, which is going to be awesome. Again, as I talked about a little at the beginning, to quote Carl Sagan, and I don't think any technical talk is right without quoting Carl Sagan: if you wish to make an apple pie from scratch, you must first invent the universe. I use this to say that there are a couple of things you need to understand before starting down this road of graphics programming. There's a lot of maths in graphics programming, and there's a different programming language you have to use called GLSL, the GL Shading Language, which I'll touch on a little bit. These things are where that steep learning curve I talked about comes along.
Simply because it's not necessarily as simple as just writing DOM-based browser code, for instance. The first thing I'm going to talk about is this idea of describing objects. Again, this is geometry 101, so I'm sure you probably know most of this, but I'm covering it just to make sure we have a shared understanding. Right at the core of graphics code is maintaining co-ordinates. You've got this 3D space, you want to know where things are, and you want to describe those things in it, and you do that with two concepts: points and vertices. Points are non-dimensional: if you've got a cube, a point is somewhere inside it that you represent by a series of co-ordinates. A vertex is a corner point of a shape, and vertices and points are how you describe the triangles, the tessellations, the polygons: how you describe a 3D program. Again, this is a very simple thing to understand conceptually. We've got an X and a Y axis, and we can say that this point here is at 6.2 on the X axis and 4.4 on the Y axis. That's probably not scaled right, but you get the point. It's a relatively simple thing to understand, and the same thing extends to 3D: we just add another axis. In this context, we're describing the shape of this box by literally saying what the co-ordinates are for each of these corners. We have our origin point down here, which is 0, 0, 0, and we can see that the lower right edge is 12 along and 0 up. Again, this isn't a particularly complex concept. You can imagine describing it like this: a two-dimensional array describing the box from the previous slide. It's a relatively simple data structure. The great thing about this is that it's what's called a matrix in mathematical terms. You may have come across it before.
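To make that concrete, here's a sketch of what that kind of data structure might look like in JavaScript. The 12-along dimension is the one from the slide; the other values are just illustrative.

```javascript
// Describing a box purely as data: each row is one corner (vertex) of a
// 12 x 8 x 6 box, written as [x, y, z] co-ordinates from the origin.
var boxVertices = [
  [0, 0, 0], [12, 0, 0], [12, 8, 0], [0, 8, 0],  // the four back corners
  [0, 0, 6], [12, 0, 6], [12, 8, 6], [0, 8, 6]   // the four front corners
];
```

A cube has eight corners, so eight rows of three numbers each: a two-dimensional array, which is exactly the matrix idea the slide is getting at.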
Matrices live right at the heart of graphics programming. They're how you describe objects and how you describe their positions, and manipulating these matrices is right at the absolute heart of graphics programming. I will make this a little bit bigger in a moment. You can imagine that scaling an object... can you see that? Is that big enough? Sorry? Light theme? I have a weird Sublime config, so unfortunately not; I broke my Sublime yesterday. But hopefully I can talk you through this. Effectively, what we're doing here is taking the matrix we just created and using map to iterate over it; that's how we're storing our objects. What we're doing is scaling it: we want to make that box bigger. Scaling is a relatively simple thing to do conceptually. I just map across my matrix, then map across each axis in the second dimension, and multiply each value by a certain multiplier, and therefore my box gets bigger across the different axes. Again, this is a bit buggy; it doesn't take into account all kinds of things you'd have to do when doing proper matrix multiplication, but it's an example of the idea. It's a simple mathematical concept; there's nothing difficult here. If you could do it fast enough, you could do all of this maths in your head. You'd have to do it billions of times a second, but should you be good enough at maths, you could theoretically do it in your head whenever you needed to. Other operations are very similar. Rotation is the same kind of thing, but using cosine and sine in order to do radial transformations. Scaling and perspective are the same kind of thing too: simple mathematical equations, and this is what GPUs are fantastic at. Those kinds of floating-point operations are the things that GPUs can do hundreds of millions of times in very short time periods.
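As a rough illustration of the map-based scaling and the sine/cosine rotation just described: this is my own sketch, not the speaker's exact slide code, and like the slide version it deliberately ignores proper matrix multiplication.

```javascript
// Naive uniform scale: multiply every co-ordinate of every vertex by a
// factor. (Real graphics code uses a 4x4 transform matrix; this is just
// the intuition.)
function scaleVertices(vertices, factor) {
  return vertices.map(function (vertex) {
    return vertex.map(function (coordinate) {
      return coordinate * factor;
    });
  });
}

// 2D rotation of a point about the origin, using cosine and sine,
// with the angle given in radians.
function rotatePoint(x, y, radians) {
  return [
    x * Math.cos(radians) - y * Math.sin(radians),
    x * Math.sin(radians) + y * Math.cos(radians)
  ];
}
```

So `scaleVertices([[1, 2, 3]], 2)` gives `[[2, 4, 6]]`, and rotating the point (1, 0) by a quarter turn lands it at (0, 1). Trivial on its own, but the GPU does exactly this kind of arithmetic millions of times per frame.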
This is right at the heart of what GPUs are great at. The thing I mentioned before, and that I'm going to talk a little bit about here, is this thing called GLSL. You've got your operations, your maths, that you need to do a lot of, because obviously if you're running at 60 frames a second, and you've got 50 objects in your scene, and each one of those objects has 200,000 points on it, you can see how quickly the number of calculations you need to do grows. It gets very large very fast, but that's what GPUs are fantastic at and can do all day without any problems. The way you use that is this thing called the GL Shading Language, a C-like language for running these maths equations on GPUs and rendering the results to screen, to create 2D representations of 3D objects. Effectively, there are two kinds of shaders, which I mentioned at the beginning: vertex shaders and fragment shaders, and they should hopefully be a lot more understandable to you by the end of this. To draw a 2D representation of a 3D scene, which is what we're creating, we need to know two things: where pixels are, and what colour they are. Those are effectively the two things the GPU needs to understand, and these two types of shaders are how you tell it. Vertex shaders tell the GPU where things should be on the screen, and fragment shaders determine what colour they should be. These are things that are done, as you'd imagine, thousands and tens of thousands of times a second. This is an example of a vertex shader, and as you can see, it comes in a script tag; it goes in the HTML, and it's addressed from WebGL code. You can reference it from your Three.js code. As you can see, it doesn't look much like JavaScript. This is the kind of thing where I could probably spend a week talking to you about GLSL and you'd still probably need me to talk a little bit more.
Unfortunately, that's the way it is, but there are lots of resources available to learn from, and again, you can get started without knowing any of this when it comes to WebGL; there are lots of abstractions on top of it. But this is the level at which all that complex 3D stuff you saw, like the troll game and the map stuff, works: it involves this shader language and using shaders like this. As you can see, it's a typed language. It's a C derivative, based on C syntax, so if you've ever written any C, this will look quite familiar to you. It has a main function that gets called every time. Then you expose things like this gl_Position: that's effectively a variable that the GPU uses in its calculations. This is the code it runs; it's almost like a global variable, in the JavaScript sense, that's exposed by this shader. The same goes for fragment shaders: this is the thing that determines colour, and as you can see, there's a lot of maths in there. This is the kind of thing that is right at the heart of graphics programming and the way it works. These two things in combination create something like this. It's not particularly exciting, but you can understand that doing this kind of thing without addressing the GPU directly, while trying to get a decent frame rate, trying to get 60 frames a second, is quite a challenge. You couldn't do this in raw JavaScript; there are too many calculations to do, and the inherently single-threaded model of the event loop isn't conducive to this kind of thing. Plus JavaScript sucks at numbers, as we all know, whereas a statically typed language like GLSL is inherently much better at them. That's unfortunately where I'm going to leave GLSL; I'll touch on it a little tangentially later on. If there's a next thing you're really interested in after this, GLSL is a great place to start.
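For reference, a minimal shader pair along the lines of what's on the slide might look like this, held here as JavaScript strings rather than script tags. gl_Position and gl_FragColor are the GLSL built-in outputs mentioned above; projectionMatrix, modelViewMatrix and position are uniforms and attributes that Three.js supplies to its shaders (an assumption on my part, not something shown on the slide).

```javascript
// Vertex shader: runs once per vertex; its job is to say where on screen
// this vertex lands, via the built-in gl_Position output.
var vertexShaderSource = [
  'void main() {',
  '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
  '}'
].join('\n');

// Fragment shader: runs once per pixel; its job is to say what colour that
// pixel is, via the built-in gl_FragColor output (RGBA, here opaque blue).
var fragmentShaderSource = [
  'void main() {',
  '  gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);',
  '}'
].join('\n');
```

In a page you'd more often see these embedded in `<script>` tags with a custom type and pulled out by id, exactly as the slide shows; the strings above are just the same GLSL in a different container.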
Again, it can be a little bit of a learning curve, but it's really rewarding; you can do incredibly cool things on the web. But fundamentally, this is one of the main reasons I'm here: Three.js is an awesome library that provides a fantastic abstraction layer over the top of what can be a very verbose and hard-to-understand WebGL API. Many people compare Three.js's relationship to WebGL to jQuery's relationship to the DOM API. I don't necessarily think it's a completely accurate comparison, but it's a reasonable way of framing it, and it should give you an idea of how these things work. Fundamentally, Three.js is like any other JavaScript library, right? I would point you to this idea of overflow: hidden on the html and body tags when you're doing a full-screen canvas; it makes life so much easier, especially when you're dealing with scroll movements and mouse movements or touch events. It just makes things much simpler for you. Some people go as far as turning pointer events off on body and html, but that can interfere with capturing and bubbling events into your scene, so I generally avoid it. But fundamentally, we have this here, and I'm going to talk through it a little bit. This is a very, very, very simple Three.js scene.
I'll talk through this a little bit line by line. Fundamentally, we have this concept of a scene. I talked before about the co-ordinate space, this theoretical 3D space that contains objects and references to points in those objects. Conceptually, that's what a scene represents: it's a conceptual space into which you put things, into which you address the shapes and colours that are rendered on the screen. The aspect is just about setting the aspect ratio, which matters for the camera operations, because obviously you've got this 3D space, but without a camera you're not projecting it anywhere; you're not able to look into it. Cameras in this context are pretty much what you'd imagine: the point of focus through which you view a scene. The camera is an object, not a static thing; you can move it, you can change its field of vision, you can change all kinds of properties about it. But for every one of these scenes, you need to create a way of seeing what's in there, and the camera is how you do that. I'm going to talk a lot more about cameras later on. As for the renderer: Three.js has a number of different renderers. It can render SVG, CSS 3D; I'm not going to talk about those today. They have lesser and greater capabilities, so it's not a completely transparent, completely portable API between them; there are things you can do in WebGL that you can't do in the others, for instance. In this context we're using a WebGL renderer, and just assume from now on, because I'm not going to keep putting that line in, that wherever you see a reference to a renderer variable, it's this THREE.WebGLRenderer. And then effectively we're just appending the renderer to the DOM: it creates a canvas element and puts it into the DOM.
This takes care of all of the getContext stuff; getting the canvas context is such a pain, and there's a lot of boilerplate involved in doing it. This is one of the great things about Three.js: that whole life cycle of bootstrapping a canvas and getting its context is all taken care of internally. It's an abstraction you don't have to worry about, which is really awesome. The next little section, from line 8 onwards, is where we actually start to create something; we're putting something into our scene, using one of the very simple abstractions that Three.js has, which is creating a box, literally just a cube with sides of one unit. Again, the units that Three.js uses are a little complex: they're not pixels, they're arbitrary units. I'll talk a little bit about units later on, but suffice it to say that we're creating a cube whose three dimensions are all the same.
Then we're giving it a material: we're saying what its surface should look like. Because again, that cube we've created is just a set of co-ordinates, like the matrix we talked about before; it's that same kind of data structure, just a representation. The material in this context is what skins the outside of the object: how you paint it, what it looks like on screen. The geometry tells us where it is and what size it is, but not what it looks like; that's what the material is for. And those two things, the geometry and the material, in combination create a mesh, which is the final thing, the summation of this box that I'll show you in a minute. We add it to the scene with scene.add, and then we also set a camera position: we take the Z axis and move the camera up a little bit, so it's looking down on the scene. And then we come on to this idea of a render function.
Now, the render function in this context is a thing that gets executed in a loop. The speaker before me actually talked about requestAnimationFrame, and I'm going to come on to it in a minute. Effectively what you do is you say render; it's like setTimeout, like setTimeout with zero, but with some important differences that I'll talk about in a minute. Effectively it says: render this thing in a loop. And this idea of a loop is right at the heart of graphics programming, no matter where you do it. You say: on every tick of the screen refresh rate, or whatever it may be, CPU clock rate or GPU clock rate, render my scene. So the whole scene is rendered 60 times a second, which is why it's so important that you address things directly to the GPU via the GLSL shader language. And so we use requestAnimationFrame, which in this context is effectively a much more predictable version of setTimeout from the browser: it uses the internal operating system and GPU APIs that expose clock rates and screen refresh rates. The previous speaker talked a little bit about this, so I won't go into too much detail. But effectively, you produce this. Isn't that exciting?
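Putting the last few slides together, a sketch of that whole minimal scene might look like the following. It assumes the Three.js of that era loaded globally as THREE (guarded here so the code also parses where it isn't), and the exact values, a field of view of 75 and the camera at z = 5, are illustrative rather than taken from the slide.

```javascript
// Plain helper for the camera's aspect ratio.
function aspectRatio(width, height) {
  return width / height;
}

if (typeof THREE !== 'undefined') {
  // The conceptual 3D space everything goes into.
  var scene = new THREE.Scene();

  // Perspective camera: field of view (degrees), aspect, near and far planes.
  var camera = new THREE.PerspectiveCamera(
    75,
    aspectRatio(window.innerWidth, window.innerHeight),
    0.1,
    1000
  );

  // The WebGL renderer creates a <canvas> we append to the DOM.
  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  // Geometry (where and what size) + material (what it looks like) = mesh.
  var geometry = new THREE.BoxGeometry(1, 1, 1);
  var material = new THREE.MeshBasicMaterial({ color: 0x0000ff });
  var mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  camera.position.z = 5; // move the camera back so the cube is in view

  // The render loop: re-draw on every frame the browser gives us.
  function render() {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
  }
  render();
}
```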
We've got a little blue square on the screen. Again, this is about as simple as Three.js gets; it's the equivalent of changing text with jQuery. It's very simple. But to make it a little more interesting takes two more lines of code. Remember the render function we had before that runs on every tick? What we've done is get a reference to this mesh variable, and on every tick we're changing the rotation. I mentioned the matrix manipulation you'd normally have to do for these things; Three.js takes care of a lot of that for you. It gives you an abstraction on top of it that allows you to just do things like this: set the x rotation and the y rotation, and increase each by 0.1 every time. And effectively it means you get this, right? It's the same cube, the same mesh, but we're rotating it, and on every frame it rotates. This, again, is maybe like changing a CSS class in jQuery: very simple, but it demonstrates the abstraction Three.js gives you on top of what could be quite a complex API. Further to this, there are all kinds of other abstractions it gives you for working with these meshes and objects. This idea of setting position is very simple: literally just setting a variable to minus 100 moves the thing. There's no matrix manipulation, no having to deal with GLSL shaders; it's a very high-level abstraction. Scaling too: that buggy implementation I wrote in JavaScript has a much less buggy, much more useful version here, which is scale.set, so I'm going from a scale of 1 to a scale of 2. Again, these things are very simple. They're not complex to use; they hide a lot of complexity behind a very high-level API. Rotation I showed already, but again, these rotation operations give you a high-level API for addressing what is quite a complex set of problems
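The per-frame updates just described can be sketched like this. The 0.1-radian step, the minus-100 position, and the scale of 2 are the values from the talk; the function name advanceRotation is my own, split out as a plain function so the arithmetic is visible on its own.

```javascript
// The pure part of the per-frame update: add a fixed step to both axes.
function advanceRotation(rotation, step) {
  return { x: rotation.x + step, y: rotation.y + step };
}

// In the real render loop (assuming the `mesh`, `renderer`, `scene` and
// `camera` variables from the earlier setup) this becomes:
//
//   function render() {
//     requestAnimationFrame(render);
//     mesh.rotation.x += 0.1;   // Three.js hides the matrix maths behind
//     mesh.rotation.y += 0.1;   // plain property assignments like these
//     renderer.render(scene, camera);
//   }
//
// And the other high-level manipulations mentioned:
//
//   mesh.position.x = -100;     // move: no matrix manipulation needed
//   mesh.scale.set(2, 2, 2);    // scale from 1 to 2: the robust version
//                               // of the buggy map() sketch from earlier
```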
underneath. So next up, we're going to talk about actually painting it, because you saw in the code that we created a camera. I've got my camera up here, created earlier, and I've given it these three arguments, and it will hopefully soon become clear exactly what they are. So fundamentally, sorry, let me click through these, we've got this idea of a camera, and here I'm using a built-in Three.js helper that I'll come back to. Effectively, we can see very easily what the field of vision is for this thing. You see this little square? It represents the 2D projection of this 3D scene; this little square represents this canvas here. So by looking down at it from this angle, we can see the exact projection we're going for. In this context we're using a perspective camera, so there's foreshortening: things get smaller in the distance. But what we can also see is that the field of view is quite narrow. The field of view is this first argument; we've got it set to 15, and it uses degrees for field of view. We can make it larger relatively easily: we just set the field of view to a different value and dramatically expand everything we can see in this scene. So this idea of being able to dynamically update what's viewable and what isn't is very important, and it's a big part of rendering these kinds of immersive experiences. The idea of far is also quite interesting. Effectively, it's an arbitrary distance for how far from the origin of the camera things should be rendered. In this context we've got it set to a thousand, which means we can see the edge of this grid here. If we change it to three thousand, you can see we're obviously rendering much further away into the distance, and that has performance implications, right? We're rendering
more polygons, we're doing more maths. So this is one of the real key areas for making your 3D as performant as possible. We've got a performance section where I'll touch on this a little more, but fundamentally it's one of the real levers you have for performance: deciding what you render and when you need to render it. A quick word here about orthographic cameras. Now, again, unfortunately three has inherited a lot of terms that are completely unhelpful and unintuitive; it's inherited them from the rest of the graphics programming ecosystem, so we're kind of stuck with them. But effectively this camera is something you can imagine as a proper 2D representation. If you look at games like Clash of Clans, you know that 2D view you have on everything? That's an orthographic camera. As you can see here, there's no perspective anymore; it's a proper 2D representation of a 3D scene. And this is very valuable if that's the view and the sort of aesthetic you want to go for in your scene. You can see that the viewing angle here is very straight; it's projecting onto this grid in a very flat way. And again, all of these helpers, I'll show you how to do all of this helper stuff; it's like three lines of code. It looks very impressive, but I can take no credit for it at all. So that's this idea of cameras. Having these arguments of left, right, top and bottom is probably as you'd imagine: it's effectively defining the rectangle on which this camera projection exists. Orthographic cameras and perspective cameras have different uses and create different effects. Most of the time you're going to go with a perspective camera, just because perspective is the way the human eye sees things. We don't view things in an orthographic way; we view things in a perspective way. So if you're going to
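The left/right/top/bottom arguments can be made concrete with a little arithmetic: an orthographic camera just linearly maps that rectangle onto the screen, with no division by depth, which is exactly why nothing shrinks with distance. A sketch (names are mine):

```javascript
// Map a world-space x/y into normalized device coordinates (-1..1) for an
// orthographic camera defined by left/right/top/bottom. No depth term is
// involved, so there is no perspective foreshortening.
function orthoToNdc(x, y, left, right, top, bottom) {
  return {
    x: (2 * x - (right + left)) / (right - left),
    y: (2 * y - (top + bottom)) / (top - bottom),
  };
}
```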
create photorealistic 3D, if you're going to do things that you want humans to recognise as not 2D, this is a very common way of doing it. Is it me playing that music? Oh, it was Jesus. I don't know why that was happening. So fundamentally, what's interesting here is that these cameras are part of this rendered scene, so as I talked about before, you can change these perspectives on the fly. In this thing here we're just changing the size of the canvas dynamically, and it's re-rendering the entire scene based on the aspect ratio of the thing it's displaying. We've effectively added in new left, right, top and bottom values, and it's dynamically updating the aspect ratio of the camera. Fundamentally it means you can do things like, on a mobile device, if you flip it, it can dynamically change the aspect ratio of the scene you're rendering. And I can't really explain to you how complex that is to do when you write the code directly yourself; if you've written any OpenGL or other 3D, you know that yourself. It's another example of three giving you a lot of really great tools, these high level abstractions. Then geometry, again, is a big part of what we're doing: the idea of just putting objects into scenes. The idea that drawing a cube, drawing a globe, drawing a cylinder, drawing a torus, these shouldn't be difficult things to do. They should be simple, and in three they very much are. In this context we've created a box already; you saw it in the code in the very basic example. Give it a width, a height and a depth and you're good to go, you've got a cube. Now, how useful a cube is going to be if you're building games is another question, but you can add them together, you can build with them; it's relatively simple. There's something like a 400-line three app
for implementing 3D Tetris, because of the high level abstractions that three gives you. In this context, it's the same thing with spheres, the same with cylinders, and the same with the torus (tori, the plural). And all of these here, these are objects, these are geometries that are just part of three; you get them for free, and you don't have to bother writing any of the GLSL shader code I talked about before. This gives you a very easy way of starting to reason about the way 3D graphics work: how objects link together, how you position them on a screen, all of those kinds of implicit ideas. They're a great learning tool, these high level geometries that three exposes. But what's important here, you can see that I've got just a basic grey colour with a wireframe on them. Starting to actually make things a little more immersive, a little more part of what you'd probably imagine as the 3D experience, is this idea of materials. This is actually what we talked about in the original example, with the multicoloured material that I added to the cube that spun. It's the same idea: there are lots of different types of materials you can create, and all of these are again just single function calls from the three library, none of the maths of having to create a Lambert or a Phong texture yourself, which I'll talk about in a minute. Effectively we've got this one, which is a basic material, and there's no shading to it. With a lot of these materials, what's important about them is what's called their specular profile, which is how light interacts with them. Light is right at the heart of rendering 3D, and right at the heart of how you interact with these materials; I'll talk about light specifically in a minute. This basic material has no specular profile at all.
All it has is a single shade. You can see it here: it has no shininess, no occlusion, none of the things you'd associate with a real world object. The other one, which again is just a different call, is a Lambert material. Now, there's a Lambert material and a Phong material. The differences between Lambert and Phong materials are probably another four hours between the two, but fundamentally they have different specular properties: different properties for the way light interacts with them. The way light interacts with objects is a big source of computationally expensive operations on GPUs, like how shiny something is, how light reflects and refracts off it. These different materials mean different performance implications, because if something has a more complex lighting profile, it's going to take more resources to render at a high frame rate. Finally, you've got this idea of a normal material. The concept of a normal, and this thing here, is fundamentally about how light interacts with something. Imagine you've got a flat surface: the classic reflection is that the light beam comes in one way and reflects off at exactly the mirrored angle. It's not complex, but if you have a different or weird angle, you get different reflections, and that's right at the heart of texture. It's what makes things look anything but smooth in the real world. Fundamentally, this is what projecting these normals allows you to visualise. Effectively, this is a smooth texture, so these little black lines are just representing how the light is interacting with this material from all around it. We're talking about material properties here: exactly how the properties are affected by different normals and different textures on their surface. Again, we have this idea of smooth shading.
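The "mirrored angle" idea, and the Lambert model's simplicity, both come down to a couple of dot products. A hedged sketch of the underlying maths, not the GLSL that three generates:

```javascript
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambert: brightness depends only on the angle between the surface normal
// and the light direction; there is no view-dependent highlight at all.
function lambert(normal, lightDir) {
  return Math.max(0, dot(normal, lightDir));
}

// Mirror reflection of an incoming direction d about a unit normal n:
// r = d - 2 (d . n) n, the classic bounce described above.
function reflect(d, n) {
  const k = 2 * dot(d, n);
  return { x: d.x - k * n.x, y: d.y - k * n.y, z: d.z - k * n.z };
}
```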
It's what we used in the previous two examples. It's completely smooth. It's actually a series of polygons, but they're angled polygons small enough in dimension that they appear to the human eye to be smooth, to be round. That's what's really interesting. One of the conceptual things a lot of people have problems understanding is the idea that however smooth something seems, no matter how textured or human-like or whatever, it's a conglomeration of millions of tiny polygons. If you ever play games, I'm sure you notice this as well as I do. Fundamentally, there are other shading models. There are flat shading models. What this does is it can actually really help you understand rotation and geometry. You can't tell that this globe is spinning, can you? It is spinning. Fundamentally, it gives you, A, a different look and feel, but it's also useful for debugging in a lot of contexts. If you have a smooth object and you quickly throw a different shading method on it, you can quickly see how it's interacting with the scene around it and how its specular profile, its light profile, has changed. Again, we can see the normals bouncing off this. We can see that the light is bouncing off the top in a uniform way. This is very useful. You wouldn't use this kind of normal material in production, because users have no need to see the normal faces of something; these are very simple things, a single line of debugging code in three. Fundamentally, though, it can be super useful for helping you understand how light is interacting with your particular object, your scene, or a thing in your scene. Again, what we've done here is we're just putting in those wireframes we talked about, to show that this is just a collection of triangles, of polygons, effectively. Again, this is vertex normals.
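Flat shading is exactly this: one normal per triangle, computed from the cross product of two edges, so every pixel of that face is lit identically and the polygon boundaries become visible. A minimal sketch of that computation (names are mine):

```javascript
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function cross(a, b) {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}
function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Flat shading's per-face normal: the cross product of two triangle edges.
function faceNormal(a, b, c) {
  return normalize(cross(sub(b, a), sub(c, a)));
}
```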
That's effectively the same thing, but as you can see here, this is a smooth rather than a faceted shading. You can see from the inside as well the rotation of these things and the tessellated polygons that make up these apparently smooth objects on the outside. Next, colour. Changing the colour of something is very simple. You use the hexadecimal representation; you can see the 0x in front of it, but these are hex codes. That's blue. This is something that's very useful, because, as we talked about with fragment shaders, the way that GPUs do colour probably isn't the way you're used to dealing with colour, especially from a web background, but three again gives you lots of transformations and abstractions to let you deal with colour in a really interesting way. I'll talk a little more about colour in a second. Then there's the shininess of these materials: I can make things more shiny, less shiny. These are all just properties. What this is showing you is that all of these components that make up these objects and these scenes effectively combine in the GPU, combine in three through these abstractions, to provide you a very high level API for changing the way you interact with these objects and environments. Opacity too: should you want to make something transparent, it's perfectly possible. All of these concepts are very familiar from the DOM. Changing colour, changing opacity, giving something an outline: none of these are new concepts, but they're actually quite complex to do without an abstraction like three, and that's the value it really provides. Fundamentally, let's talk about UV mapping. UV is, again, an unfortunately not very intuitive term. We talked about X, Y and Z in terms of axes. U and V are the axes of a 2D texture.
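The 0xRRGGBB notation is just three 8-bit channels packed into one number, which is why bit shifts recover them. A sketch of the unpacking (the helper name is mine; three has its own Color conversions):

```javascript
// Unpack a 0xRRGGBB colour, the notation used above, into 0-255 channels.
function hexToRgb(hex) {
  return {
    r: (hex >> 16) & 0xff, // top byte: red
    g: (hex >> 8) & 0xff,  // middle byte: green
    b: hex & 0xff,         // bottom byte: blue
  };
}
```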
They couldn't use X and Y again, because it would have been incredibly confusing to talk about X and Y in one space and then X and Y in another. So they use U and V for the horizontal and vertical. Again, it's not intuitive, and it's been inherited from the rest of the graphics programming world. But fundamentally, it's about how you map 2D textures onto 3D objects. This is a very challenging thing to do. How do you actually take this thing here, this map of the world, and wrap it onto this 3D globe? Because 2D to 3D is a conceptually big difference. What we've got here is this globe in the middle that's spinning, and on the right hand side, imagine we've taken that globe and spread it out: we've made a 2D square and unwrapped the globe onto it. What we can see is that when we project these things onto a 2D space, they're really distorted. It doesn't look intuitive from a human being's perspective. But the fundamental point is that these textures, these UV maps that you put onto these objects, are a very easy thing to apply with the abstraction that three gives you. It's literally a single line of code, which I'll show you in a moment, for creating these quite complex, very high quality graphical textures. I'm going to talk a little more about mapping, because there are lots of different types of mapping that give you different effects. But fundamentally, in order to achieve something like this, which almost seems like magic from the DOM world, I think it's something like five lines of code in total. And that kind of abstraction is something that, exploring three, you're going to find super, super useful. And we have the same thing without the wireframe. And this is how I did it: literally THREE.ImageUtils.loadTexture. It's literally that simple.
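The globe-to-square distortion is the equirectangular projection at work: each point on the unit sphere gets a (u, v) pair in the 0..1 texture square, and the poles get stretched across the whole top and bottom edges. A sketch of that mapping (my own helper, assuming a unit sphere; three's sphere geometry computes its own UVs):

```javascript
// Equirectangular UV mapping: project a point on a unit sphere onto the
// 0..1 texture square, the same projection that distorts a world map
// near the poles.
function sphereToUv(p) {
  return {
    u: 0.5 + Math.atan2(p.z, p.x) / (2 * Math.PI), // longitude -> horizontal
    v: 0.5 - Math.asin(p.y) / Math.PI,             // latitude  -> vertical
  };
}
```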
It deals with all of the loading, all of the event lifecycle for when the thing's loaded and when you put it into a canvas. This is also the reason why you often see loading screens when you do complex 3D in WebGL: it's loading all of these assets, because obviously you want all of your assets to be loaded before you render your scene, otherwise it's going to look really weird. Here we've just got some lights on it, but this is the same thing we had before. Exactly the same. There's no difference. But this is very much a texture; this is fundamentally a map. I realise it's a little confusing that it's a map of the world I'm projecting and the property is actually called a map, so forgive me; I only realised this five minutes ago and it was too late to change. But effectively, this idea of a map is: what are the pixels that sit on the surface of this object? What do they look like? What are their RGB values, et cetera? Next, we'll talk about this idea of normal maps. We've talked about normals before, but this is also about how light interacts with your thing. Obviously, if everything in 3D graphics was completely smooth, nothing would be very interesting; it wouldn't look real at all. What you want is for light to interact with your objects in interesting ways, to create shadow profiles, shadow maps, in interesting ways. That is exactly what this normal map texture is, and it's loaded in the same way you load the other map. You just set it to a different property when you initialise the material. I'll show you the code in a minute; you'll laugh at how simple it is to do and how impressive the effect can be. It uses this colour format, and this isn't something you create from scratch.
There are lots of 3D authoring programs, like Maya and Autodesk 3ds Max, that create these 3D objects and can export all of this stuff very easily. You can just load it in dynamically. Finally, we'll talk about specular maps. Think shininess. Nothing is uniformly shiny in the world around us: take the texture of a human face, where some areas are more shiny than others, and hair is less shiny than everything else. You want your objects to have these different specular maps. We talked a little about this before, setting the shininess factor on the materials; fundamentally, this is a much more fine grained level of specular mapping. How should light interact with my thing? All we're doing here is saying that water is fundamentally shinier than land. It's not a complex concept to understand, and loading this map is again a very simple method call. In combination, we get this. We get the same thing we had before, but it looks fundamentally better. The way the light interacts with it is much more realistic. It's just a fundamentally better experience. And that's literally all you need to do for each of these things: the way we loaded the texture, you load the map, the specular map and the normal map in the same way, and you create a Phong material in this context. It's the same way we created the material for the spinning cube. I could add one of these textures to that spinning cube in about 10 seconds. I won't, because it would look a bit weird, but theoretically it's a relatively simple thing to do. This is the power of three: it abstracts all of this away from you having to do it yourself. If you've ever done WebGL, loading textures and putting those textures into the GPU can be a bit of a pain, to be honest. I talked about colours before. Hex is the default colour standard for three.js, which is awesome.
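How the map, normal map and specular map combine can be sketched as one pixel's worth of lighting maths: ambient plus a Lambert diffuse term plus a shininess-controlled highlight, with the highlight scaled by the specular map value sampled at that pixel (high for water, low for land). This is a Blinn-Phong-style sketch under my own simplifications, not literally what three's Phong shader does:

```javascript
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One pixel of a Phong-style lighting model. `specularMapValue` is the
// 0..1 sample from the specular map at this pixel.
function shade(normal, lightDir, viewDir, shininess, specularMapValue) {
  const diffuse = Math.max(0, dot(normal, lightDir));
  // Blinn-style half vector between light and view directions.
  const h = {
    x: lightDir.x + viewDir.x,
    y: lightDir.y + viewDir.y,
    z: lightDir.z + viewDir.z,
  };
  const len = Math.hypot(h.x, h.y, h.z) || 1;
  const half = { x: h.x / len, y: h.y / len, z: h.z / len };
  const specular =
    Math.pow(Math.max(0, dot(normal, half)), shininess) * specularMapValue;
  return 0.1 + diffuse + specular; // 0.1 stands in for the ambient term
}
```

A specular map value of zero kills the highlight entirely, which is exactly the matte-land, shiny-water effect on the slide.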
It's what we know day to day, but there are also RGB getters and setters alongside hex, which is very important. What's really interesting is this line here, material.color.setStyle: you can actually use CSS-style colour strings for setting colours on materials. That's really useful, because you can use your style sheets for building up some of the aspects of a 3D scene. That's really cool, because it's something you can't really do with OpenGL, the classic graphics programming world this is inherited from. Next, let's talk about lights. We started to look at lights a little previously; here we've got a couple of lights that we're rotating around this object. What we're going to talk through now is how you go about creating those lights and how you set them up as just more objects in the scene. All these things, cameras, meshes, lights, they're all just objects in scenes. They're just things you have complete control over. There are no real restrictions on what you can do with them: you can move them really far away, you can change all kinds of properties on them. Here we have a directional light. We can see exactly where it is; this is another helper I'll talk about, a light helper that's rendered by three itself. Effectively it allows us to set the colour of the light and the intensity. Changing the position of a light works the same way you change any other property in a scene: you set a property on its position object. These things are very, very simple. A lot of these concepts are shared between all these different objects, very deliberately, to make them much easier to understand. For indirect lighting, obviously, you can position it facing away from your object, at 160 degrees say, so that you get indirect light. Again, moving all of these things around is as simple as setting properties on a particular object.
I can't emphasise it enough: this is the real power of the abstraction of something like three. So, we're talking about directional lights. Changing colour is relatively simple. We can make things much lighter; we can turn down the intensity of the light and make things a lot darker, or turn the brightness up by a significant amount. All of these things are just properties you can set. This is a point light. A directional light is like a spotlight, like one of these lights up here; a point light is like a lantern hanging from a string. It casts light in all directions. This helper you can see here, this sort of cube on its side, is quite a useful way to imagine light coming out in all directions. You can see it has different effects on the scene: the base of the scene here is affected by the light in a way it wasn't before. All of these lights are usable in the same way as the others. They give lots of different effects, lots of different ways of viewing objects and interacting with them. Spotlights, again, are a slightly different concept, as you can see, but work in a very similar way. You can change the angle with spotlights, so you can make them a lot wider or a lot narrower. This kind of thing, the combination of mutating objects, mutating lights, mutating cameras, and the state of all of those things, is right at the heart of this idea of animation and of making 3D scenes a much more immersive experience. You can set an exponent, make it less intense or more intense. Then ambient lights, which are scene lights, effectively global lighting. And multiple lights: a concept you can use to get different shadow effects and shadow maps.
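The lantern-versus-spotlight difference shows up in the falloff maths: a point light's contribution drops with the square of the distance, while a directional light's parallel rays arrive at constant intensity. A sketch of the physical idealisation (three's own lights expose their own distance and decay parameters; this helper is mine):

```javascript
// Inverse-square falloff for a point light: the "lantern on a string"
// dims with the square of the distance from it.
function pointLightIntensity(intensity, distance) {
  return intensity / (distance * distance);
}

// A directional light, by contrast, is the same everywhere:
function directionalLightIntensity(intensity) {
  return intensity;
}
```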
This affects specular maps, because you can imagine that with the globe we created before, we had two lights spinning around it, and it's casting different shadows in different areas. All of these things are, as I said, relatively simple concepts, but they allow you to create some very cool effects with some very easy to use code. These are like five lines of code each; it's almost ridiculous how easy this stuff is to do. Now let's talk about models. Up to now, we've talked about very simple geometries: circles and squares and cubes. Nothing that's going to particularly get you excited. What lies right at the heart of 3D is this idea of a model: actually a very large matrix that describes all of the properties of an object. In this context, I'm using the teapot. The teapot is the definitive example that everyone goes to when they do examples of 3D rendering. I don't know why. I tried to find out for this talk, but apparently it's lost to the annals of history. Somebody used a teapot once and everyone's used them ever since. Effectively, what you can see is that loading these models is a relatively simple thing to do; three gives you the abstraction to be able to do it. It's literally this: a callback that gives you geometry and materials. All those things we set manually before, setting the face materials and so on, you can set directly here. What's really awesome, and I'll show it to you now, is the code for this teapot model, because this model isn't something that has been written by hand. This is something that was created in something like Maya or 3ds Max, a 3D authoring program, and exported into this format. In a couple of slides, I'll show exactly how to do that. There are a couple of helper utilities.
There's a Python script you can run to convert from OBJ files to these JSON descriptions. Effectively, the JSON contains all of the information about this particular model and how to render it. You can see all of these vertices here. This is just that giant matrix I talked about. It's probably going to make my Sublime crawl, but if I word-wrap this, you'll be able to see just how many vertices there are for rendering an object of this kind. Loading these models is a big part of why you have spinners. It's why you have them in games: loading them from disk for a desktop game, a game from Steam, is massively expensive, especially when they're megabytes or hundreds of megabytes in size. That's why loading screens exist. It's loading textures and models and all of these kinds of things, which are probably a lot bigger than you would imagine. This is thousands of lines long. Or a single line, but thousands of values. Using this JSON loader, all of these things are built into three. They allow you to do this very simply; they give you high level abstractions for it. What you can do is download a really awesome 3D scan of someone's face that was created in Maya or 3ds Max, export it to an OBJ file, convert it to JS, load it into the browser, and do all of that in about five minutes. Yes? I'll actually come back to that. I've got a little bit about what the actual differences are a little later on, but remind me if I don't go into it in enough detail to answer your question. I'll come back to that in a minute, if you don't mind. This converter is part of three, something they created, for converting OBJ files to JS files. It's something I use a hell of a lot. It takes a little bit of time, because these models can be huge.
Fundamentally, on this idea of interacting with these objects: you can see this is a representation of those thousands and thousands of vertices we had. Each one of these points represents a point on the model, and the points are linked by edges. This is a helper you can add to any scene to add a wireframe to an object very easily. Effectively, it allows you to debug; it gives you a good understanding of exactly how complex your models are. The performance implication is that the more of these vertices you have, the more fragments there are, and the more intensive it is for the GPU to render. The principle of least complexity, that core software engineering principle, really applies to 3D graphics: in terms of performant 3D, you want the lowest possible resolution you can get away with while still looking good. Again, you've seen me use this one before: this is a grid helper, which helps you see where your object is. It's very difficult when you have multiple objects sitting together to see the relative positioning between them. This can really help, because it gives you a baseline, and it gives you your origin as well, so when you set that plus 100 in the y axis, you can see that the grid is at zero. You can set the grid elsewhere, wherever you like. You've also seen me use these light helpers before. I've got three lights in my scene here: a point light, a spotlight that's set to blue, and an ambient light. All of these helpers are very useful for debugging applications, for actually seeing how these components work together and how they interact with the objects in your scene. Axis helpers are very helpful too: they allow you to see exactly where those axes are.
You can actually see that this axis is static and that the camera is rotating around the teapot, rather than the other way around, because an axis is inherently a static thing. This is a bit of a contrived example I put together to show that I can spin the teapot or I can spin the camera, but those have different implications for performance. Imagine I've got 500 teapots on the screen at any one point. If I spin the camera, in the same way the previous speaker talked about repainting, the GPU is going to have to re-render all 500 of those teapots for every frame. That has huge performance implications. That's why you ask: do I move an object, or do I move the camera? Which one is going to give me the desired effect? In this context, I'm moving the camera. Generally, objects are the things that people move. It depends on the effect you're going for, but moving one teapot rather than 500 is generally a good way to think about it. Next, box helpers. A lot of these helpers are very simple, like bounding box helpers and box helpers. Effectively, they tell you: where are the outermost limits of the object I'm rendering? Where does it end? What's the box within which it sits? This is useful because of a big problem with models that I'm sure you'll run into when you start doing this. You find a model you like, you open it up in one of these programs, you convert it and load it into the view, and it just looks really weird. The vertices have gone a little bit funky. A box helper or a bounding box helper can really help you see: what is it that I'm rendering, and where is it? Sometimes things can end up a thousand points away from where you thought they'd be. These kinds of helpers can really help in debugging those problems. All these things in combination allow us to build things like this HD teapot, with all kinds of different materials.
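What a box helper reports can be sketched as a fold over the vertex list: track the minimum and maximum of each axis and you have the axis-aligned box, which instantly reveals a model sitting a thousand units from where you expected. (The helper name is mine; three computes this for you.)

```javascript
// Axis-aligned bounding box over a vertex list: the rectangle of space a
// BoxHelper-style debug aid would draw around a model.
function boundingBox(vertices) {
  const box = {
    min: { x: Infinity, y: Infinity, z: Infinity },
    max: { x: -Infinity, y: -Infinity, z: -Infinity },
  };
  for (const v of vertices) {
    for (const axis of ["x", "y", "z"]) {
      box.min[axis] = Math.min(box.min[axis], v[axis]);
      box.max[axis] = Math.max(box.max[axis], v[axis]);
    }
  }
  return box;
}
```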
We've got very shiny materials reflecting the scene around them, all built from the same three primitives. We've got this complex background, and that background is just a map texture. As you can see, if we add a box helper to it, we can see we're just inside a cube: we've just mapped a 2D texture onto a 3D scene. This idea of these helpers is incredibly helpful. When I used one in an example before, you were probably thinking, why am I ever going to use that? When you're doing dynamic, complex 3D scenes, it can really help to be able to take a step back and understand what's actually happening in your scene, exactly how things link together. Let me do a time check; it may be worth us taking a break at this point. Now we'll talk about interaction, and how you go about interacting with these objects in scenes. The way we're going to think about it is, again, pretty similar to the way you think about interacting with elements in the DOM. Each one of these cubes in this scene is a distinct object, as we talked about before: it has its own properties, it's a discrete thing you can interact with. What you use is this concept called rays and ray casting, in order to determine exactly what it is that you're interacting with. What we have here is an example of exactly how that ray casting works. When you move your cursor across this screen, you can see that there's this theoretical infinite line that comes out of your cursor, through the scene, directly forward along the viewing angle. What it does is interact with objects in that scene. Whenever that line touches an object, whenever it touches one of these cubes, it selects that cube, a bit like doing a $('.selector') in jQuery, and you can perform operations on it.
What this does is effectively give us a conceptual framework for interacting with these objects. And this isn't just the cursor; it's touch as well, the same thing. On touchstart, when you touch down, it will create that same line out into infinity to see what you're interacting with. That's how ray casting works. It's the same with onmousemove in a desktop browser, or pretty much any other interaction experience. It's a little more complex with things like Leap Motions, because you've got multiple interactors and so multiple rays, but even then it's the same concept: you have these things that go out into the scene, and they determine how you interact with it. You can see in this context, if we add a camera perspective helper, and someone just had this question about 2D and 3D, what this does very well is demonstrate the idea that we're conceptually representing this set of 3D space with these things in it that have an appearance, but the reality is that all we care about is a 2D representation. All we care about is what they look like on the flat screen. This makes the computations easier, because obviously you don't want the GPU to have to care about things you can't see; that's wasted cycles. You could be using those for adding more polygons or doing something else. Here's another example that gives us a little more interactivity, in that we can move this thing around and ask: what is it that this ray is interacting with? It's a very simple conceptual thing. We can change the rotation; we can move it back and forth. Effectively it shows us how you create this. This idea of ray casting is quite a complex thing to do in raw graphics programming: it's fairly simple maths, but there are a lot of trigonometric functions for casting rays out into the scene.
Again, all of that stuff becomes fundamentally much easier with Three.js. So in these first couple of lines, and I'll just talk through this little bit of code, we're just getting the mouse X and Y positions. Imagine we've got a mousemove event, or a touchstart event, whatever it may be, and we're normalising those into an X and Y coordinate: tell me where the mouse is on the area we're rendering to. Then we create a new raycaster, and we create a vector; a vector in this context, as we talked about a little before, describes a direction for interactions between objects. Line 9 is the really important one here: we take the camera position and effectively align the raycaster with the camera, so that it's pointing outwards from the viewpoint. Then, with the intersect method, we call raycaster.intersectObjects. I've unfortunately got an unnecessary equals sign there, please ignore that. That call gives us back an array, and we pass in scene.children, which is just an array of all the objects contained within that scene, which Three.js exposes to you. So effectively you say: take all the objects in my scene, take this raycaster we aligned on lines 7 and 9, put the two together, and tell me what the ray hits. On line 13 you can see the result. We get an array back, and we take intersects[0], just the first one, because the ray can intersect multiple objects; it doesn't terminate at the first one it hits.
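As a sketch of that whole picking flow, assuming a Three.js-style API (here `camera` and `scene` would come from the surrounding setup, and `setFromCamera` is the convenience method recent Three.js versions provide for the camera-alignment step described above). The coordinate normalisation at the top is plain maths and runs anywhere:

```javascript
// Convert a pixel position into the [-1, 1] normalised device coordinates
// the raycaster expects. Note the Y axis is flipped: screen Y grows
// downwards, NDC Y grows upwards.
function toNormalizedDeviceCoords(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  };
}

// With Three.js loaded, the rest of the flow looks roughly like this:
//
//   const mouse = toNormalizedDeviceCoords(event.clientX, event.clientY,
//                                          window.innerWidth, window.innerHeight);
//   const raycaster = new THREE.Raycaster();
//   raycaster.setFromCamera(mouse, camera);            // align the ray with the view
//   const intersects = raycaster.intersectObjects(scene.children);
//   if (intersects.length > 0) {
//     const intersected = intersects[0].object;        // nearest thing the ray hit
//     intersected.material.color.set(0xff0000);        // operate on it, DOM-selector style
//   }
```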
So we ask what's the first thing it hits, and we set that here, pardon me, as a variable called intersected. It's the object itself, so we can start to do things like translate it, set its colour, whatever we like. It's like a DOM selector, a quite similar concept, and it allows us to interact with these things in what, ten lines of code? Maybe fewer, maybe eight. If you've ever written a raycaster, or used one in something like Unity, you know how much more complex it is than this. And with this combination of the model loaders, the geometry, the shaders, the maps and the raycasters as primitives, you can do a hell of a lot without a huge amount of graphics programming knowledge, or knowledge of the complexities of writing complex 3D games. It means you can keep all your logic in JavaScript: you can do collision detection and all those kinds of things, like detecting that two objects collide. Now, let's talk a little bit about tweening. Tweening is a bit of an odd word; it's a mathematical concept for representing a curve of movement. If you've ever used the easing plugin in jQuery, you know exactly what I'm talking about. Effectively, in this context it produces a stream of numbers between two things, like the position and target we can see up there. You can have different easing methods, so we've got bounce-out in this context, and bounce-in too; this particular tween.js has, I think, dozens of different easing functions. It allows you to do things like the bouncing and the kind of interactive movements that would otherwise be quite difficult.
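At its core, a tween is just an easing function mapped over an interpolation between two values. A minimal sketch in plain JavaScript (a quadratic ease-out is used here for brevity; tween.js ships many more curves, including the bounce easings mentioned above):

```javascript
// Easing: maps normalised time t in [0, 1] to eased progress in [0, 1].
function easeOutQuad(t) {
  return t * (2 - t); // fast start, slow finish
}

// Tween: interpolate from `from` to `to` at normalised time t, shaped by `ease`.
function tween(from, to, t, ease) {
  return from + (to - from) * ease(t);
}
```

An update callback driven by requestAnimationFrame would evaluate something like `tween(start.y, target.y, t, easeOutQuad)` each frame, with t advancing from 0 to 1, which is essentially what the tween.js onUpdate example does.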
Again, this isn't anything to do with Three.js; tween.js is just a library you can use for pretty much any animation. What you can do really easily is use this onUpdate function here. You can see that we're just setting the position of something on update, and onUpdate gets called on requestAnimationFrame, so effectively you set your position on every frame based on your tween, and you get that same kind of complex eased movement relatively easily. All of these things are just examples of how easy this stuff can be in terms of building 3D. So we're coming to the end of what I wanted to talk about. We've had a couple of questions here, and I want to get into a little more detail on those, and then I'm going to take you through some of the code for the examples I talked through earlier in this presentation, which will be shared later. But before I do that, I want to talk a little bit about performance. We've touched on a couple of these things in some detail already. These five things I have here are not an exhaustive list by any means, but they represent the things that affect performance most in WebGL apps. Polygon count: we talked a little about this with the models we loaded. The number of polygons they have represents the level of detail that they have.
There are some really great demos of human faces, with hair and facial textures and colouring, that have super high polygon counts, millions and millions of polygons. The more polygons you have in a scene, the more complex it is to render, as you'd imagine: there are more operations that need to be performed to represent it, to rasterise it in 2D. So, as I said, the principle of least complexity comes in here: you want to use the fewest polygons you possibly can. There's a tension in graphics programming that has existed since its inception: GPU vendors let you use more and more polygons, and perform more and more of these operations, every year, and it's a constant battle of how much can I get away with? How many polygons can I have and still maintain 60 frames a second? How many at 20 or 30 frames a second, if that's acceptable, or on the desktop at 120 or 240 frames a second, whatever it is?
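That polygon budget is usually tuned empirically against a frame-rate readout. A tiny hypothetical probe, sketched here for illustration (in practice a tool like stats.js does this properly; the clock is injectable purely so the helper is easy to test):

```javascript
// Measures instantaneous frames per second from the gap between ticks.
// Call tick() once per requestAnimationFrame callback.
function createFpsCounter(now = () => performance.now()) {
  let last = now();
  return {
    tick() {
      const t = now();
      const dt = t - last; // milliseconds since the previous frame
      last = t;
      return dt > 0 ? 1000 / dt : Infinity;
    },
  };
}
```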
Texture resolution: again, I'll come back to that in a bit when I show some examples, but the textures I loaded were huge, something like 3,000 pixels wide as images. Resolution matters for the same reason: more pixels means more to render, more maths to do; more objects need more shaders, and the more complex those shaders have to be, the more work there is once they're on the GPU. Another thing, and I'm actually really guilty of it here, as I showed you earlier when I zoomed inside the teapot: at the moment I'm not rendering the teapot in the most efficient way, because I'm still rendering when the scene hasn't changed and when things can't be seen. If you've ever used Unity 3D or any of the large commercial desktop graphics programming environments, there are ways to freeze a scene and spin around it, and what that lets you see is that one of the biggest performance wins you can get is not rendering what you can't see. That sounds like a very obvious thing to say, why would you bother rendering something the user is never going to see, but it's a real optimisation lever in 3D graphics, tied to the idea of managing what's actually in memory. A great example: you've got a complex scene with lots of things in it, and the user's camera and viewpoint is looking at three out of ten objects. You still have all ten objects in memory, but you're only rendering three of them. So are there ways you can progressively remove the other objects? Can you unmount them from memory, and prevent them from having any impacts or side effects on the rest of your rendering operation? There's also the render function itself. Again, I'm doing it very naively here for the sake of succinctness, but that render function, the recursive function that gets called on requestAnimationFrame, is a global render: I'm saying render everything on every tick, every time requestAnimationFrame is called. That isn't a good way to do complex scenes. What you end up doing instead is having discrete renderers: you could have renderers that do light rendering, for instance, or texture rendering. If you look a little further, the definitive book on Three.js, the O'Reilly one, goes into, in its later chapters, how you start to break your scene down into sub-renderers. A great example: if a person has walked over in a game and clicked a light switch, then you know that the other objects in the scene have a much lower probability of needing to be re-rendered themselves. The light, and the effect that light has on them, needs to be rendered, but you don't have to care about the scaling of an object, or its destructive properties, or its collisions with something else. You can just render the light, and that's an important performance lever as well. And then buffer geometry optimisation: that's something I could probably talk about for quite a while. One of the important things here is understanding how the matrices you create, those complex arrays of floats, interact with the GPU on the back end, and how that maths actually happens. Remember I showed you the vertex shader with void main in it; that shader effectively did the positional calculations. Optimising that shader maths is very important: if you have to do an expensive operation, minimise the number of times you have to do it, those kinds of things. But again, it's a little bit more of an
advanced optimisation, but when you start doing really complex 3D it becomes a very important part. And finally, the idea of using GLSL shaders. Most of the things I've shown you here have no explicit user shaders; there's nothing I created, or some of it is but not all of it. The way I like to think of it is that you can probably do 30% of what you want to do with 3D graphics without ever writing a shader. For the other 70%, the most complex things, the kind of complex games you might want to write, you're going to need to work with shaders in some form. Now, that can be very simple. The simplest shaders are literally just taking a variable from JavaScript and putting it onto the GPU, like that gl_Position I talked about originally: just setting that from a JavaScript variable is about the simplest possible shader. A lot of the time that's good enough; the maths you can do in JavaScript, which you can reason about a little more easily, can be good enough, and in a lot of these contexts you can just use the GPU for drawing. But fundamentally, what I want to do now is spend ten or fifteen minutes looking through a couple of these examples in a little more detail, exactly how they work, and then move on to a Q&A. By the end of it we'll have taken up maybe two hours of the two and a half, so we'll probably finish about half an hour early, but that's OK because there should be tea and coffee outside. And again, as I mentioned to a couple of you before, the live coding part of this was going to be a lot larger, but unfortunately on Friday I spent a lot of time in hospital, a lot of time with the doctor, so I didn't get a chance to do it. By Monday, though, this GitHub repo should have everything that should have been here, so
there'll be a lot of stuff you can go back and look at, and feel free to reach out to me on Twitter or create some GitHub issues if you run into anything; I'm happy to help talk through any problems you have. So fundamentally I'm going to take a look at a couple of these different examples. First of all, I'll show you the implementation of those shaders we talked about originally. We can see here that we have the same shaders as before. This raf.js, by the way, is just a shim for requestAnimationFrame, so you don't have to worry particularly about differences in browser rendering; you can safely ignore it. And you can see here we have these two shaders, with various variables declared in them, things like time and size on the vertex and fragment shaders, and again these are things that are imported from the JavaScript scope. What we have here is, again, a very simple thing. We have a renderer that we've set up, a WebGL renderer, and we're setting antialiasing on it. Antialiasing, which is probably something you've come into contact with before, is a way of creating smoother edges, of creating things that are more photorealistic in many senses. Fundamentally we're adding it to the DOM, we're doing document.body.appendChild, and then we're setting a clear colour hex. What are we doing there? We've got our scene, our global scene, and with this line we've actively set the background colour. It's pretty much that simple: we have a hex code and an alpha level, the opacity of that particular colour. What that allows you to do is, if you're doing some sort of embedding, control the background colour of your scene programmatically in WebGL. And again, what we have here is just a couple of variables declared for setting up our
perspective camera. The aspect ratio is just taking the width and the height, and aspect ratios are obviously important when you're doing these things: if you've got a renderer that's 16:9, you don't want a 4:3 aspect ratio inside it, so it's very important that you maintain the right aspect ratio. So what we're literally doing here is setting up a camera: we're creating a perspective camera, setting its position on the x, y and z axes, giving it a couple of values, and rendering from there. That's the camera's initial position. Then we're creating the scene, and again, you don't have to create the scene before you create the objects you add into it; you can create the objects later, as long as you call scene.add after they've been created, after we've created this camera variable, for instance. Then we create this new THREE.Mesh, and the mesh is the encapsulation of the vertices and the fragments: the positions of all of these things, and the colours they're drawn with. Within it we have a cube geometry, and we're using a cube geometry in this context because we're going to mutate the state of that cube later on. But this is the thing I really want to draw your attention to: we've got this thing called a shader material. Whereas before you saw us use a Phong material, a basic material and a Lambert material, here we're actually using the shaders to create the material instead. So instead of having static colours, we're going to dynamically update the colours of this particular thing. Hold on, let me take us back to this particular slide so I can actually show you what I mean, and then we can update it and see. We'll get there eventually; I'm honestly not sure what colour it is at the moment. There we go. You can see that we're updating the colour dynamically via this shader; it's a moving, pulsing colour that we're using this shader to get. Literally all we're doing is grabbing a vertex shader and a fragment shader by document.getElementById, and effectively that allows Three.js to manage the lifecycle of putting those shaders onto the GPU and feeding those changes back into our renderer. As you can see, we add our cube, and then we have renderer.render(scene, camera), so we render everything. We just have some interactions on it here, so we can change the colour of these things and make it larger and smaller in real time, and you can see the number of colours being generated is in the millions. Again, the ability to do that on the GPU is vastly better than how you could do this in JavaScript, just because of the number of floating point operations it can do simultaneously. Then we have this idea of our animate function. If the app isn't paused, it does the actual render, and you can see these uniforms here; this is an important one I forgot to talk about. Uniforms allow you to access, from JavaScript, the uniform variables declared in your shader, so that you can share state between the two: your application logic or your game logic, whatever it is, can affect your shaders, seeding data from JavaScript and feeding it back and forth. They are statically typed, though, which is important. And then the camera is just being told to look at this thing, and effectively that's what allows you to do this sort of pulsing. As you can see, it's a square, or sometimes a square that mutates based on various sort of random inputs, and the colour obviously does the same thing. This is a very simple scene, a very naive example, but it gives you a good idea of how you start to interact between these shaders, Three.js, and the logic of your applications. There are a couple more examples I want to go through. The first one is this idea of UV mapping; this is the globe example we had before, mapping a 2D texture onto a globe. Now forgive me, some of the code in here is a little contrived to make it usable in a demo, so things are a little more verbose than they need to be. We're loading a couple of helper libraries, and then this init function is the part we really have to care about. We're setting up a perspective camera, setting up the renderer as we did previously, setting up an ambient light, setting up a directional light (that's the thing that was spinning around the globe), and setting up the grid, the ground helper, so that we can actually see the plane underneath it. Again, adding a helper is as little as these four lines; it's incredibly simple. In this context we're also creating our own element, again for reasons to do with the demo. And then we have this one here: a specular map and a colour map. Ignore this comment, it'll make your life a little easier. So we have this colour map, this material map, and what it effectively allows you to do is to get an idea of actually
how this works in reality. So we have a geometry here, a sphere geometry, that has this new material added onto it, and again it's very simple; it's pretty much the same as what I showed you before, but all it's doing is using a couple of these utilities to load different colour maps and different specular maps. All of this is a lot more verbose than it needs to be, for demo reasons, but to do the same thing in a more terse form would probably be about 50 lines of code, which is an incredibly small amount compared to what it would take you without Three.js. The next thing I wanted to talk about, and a couple of people asked me about this, is the object conversion file, and it's very important to understand what it is and what it isn't. .obj files are the de facto industry standard for representing 3D objects in 3D authoring programs, so Blender, 3ds Max and Maya can all export to this format, but it's not a format that sits very well on the web. So we need to convert it into a text-based JSON and JavaScript format, and that's exactly what this does: it's effectively a parser and a lexer for these .obj files. It generates, effectively, an AST (it's not a syntax tree exactly, but it's the equivalent of an AST for these large object graphs) and then parses that into JSON. It has all kinds of different options, and I definitely encourage you to check it out; it even gives you some examples of how you do these kinds of things, and it can go both ways. Because, as a question that came up before pointed out, and it was a really great point, converting from a binary format into a text format can have inflationary side effects: a binary format can be much smaller than the text format, because you can do all kinds of internal optimisations. Three.js can actually deal with binary formats as well; there's some computational overhead in having to parse them in the client, but it's perfectly possible. And what's great, and somebody mentioned this, which is another really great point, is that this kind of object conversion can form part of an automated pipeline. Before I worked at MDL, I did a contract at a gaming company, and part of their pipeline (they did desktop games and mobile games) was effectively a CI server that would take all of these Blender, 3ds Max and Maya assets and convert them into formats that could be accessed from within the tools they used to create their programs, as part of an automated process. The same thing works for the web, and it would make a lot of sense in that context. There are lots of other things in here, and again, all of this will be on GitHub, so you'll be able to check all of this out and read through it. What you'll also have, in this particular instance, where are we, here, is another example of this: a cubemap. This thing here is the implementation of the teapot example I gave. It's in a little bit nicer a format than the other ones, because this is the thing the interactive learning materials will eventually be part of in this repo. So you'll have a repo you can step through, step by step: how do I actually create a scene, how do I load textures, how do we interact? And eventually you'll be able to take a teapot and throw it at another teapot and see them bounce away from each other, which is the end state, because it means you have object interactions, you have bounds and geometries. Now, you're not rendering the outside at the same time, so effectively you're only rendering what is currently visible on the screen. The great example is that if you're rendering the teapot from the outside, you're not at the same time rendering the inside, if that makes sense; you're only rendering the bits that are actually visible. Now, that's the default, but the performance thing I was mentioning concerned the intermediate memory between your objects being represented and then being rendered; that's where you can load and unload things for performance reasons. And no, it's the same renderer; what's happening is that the camera is in a different position, so effectively the camera is saying, right, what's directly in my line of sight, and then the renderer is saying it's the inside of a teapot rather than the outside. So it's sort of procedural, but yes, it's exactly that. So all of these things should be very valuable to you. But I think what's really interesting is that Three.js, as much as any other technology, is enabling a new generation of applications to be built on the web. WebGL particularly, and Three.js as an extension of it, are really raising the bar of what's possible. You've seen it; I showed a couple of examples in the video before. It's amazing what's possible, and while the learning curve for the technology and the tools is big, the abstractions are becoming better and better. You can do more and more complex things with less and less attention to the complex implementation details; a lot of things can be hidden from you without your having to conceptualise them, and Three.js is an enormous part of that, so I definitely encourage you to check it out. As I said, if you have any questions afterwards, come up to me here, reach out to me on Twitter, find the GitHub repo, put an issue on it; I'm happy to help. But at this point, do we have any questions? Is there anything specific burning in your mind
at the moment? I've got someone at the back just there. Say that again, sorry? With DRM? Oh, with virtual reality. That's a great question; somebody else asked that a minute ago. WebVR is an emerging standard, a very, very experimental one. I think it's in Chrome, but still behind a flag when you start Chrome, and I believe it's in Firefox Nightly. Effectively it's a standard for doing binocular 3D, virtual reality, in this context. The idea, and we talked a little bit about this before, is that binocular 3D is a representation of two scenes from slightly different perspectives. The distance between the human eyes lets us perceive objects from subtly different angles, and because of that, things have the appearance of three dimensions. That concept can be implemented relatively easily in 3D graphics, and you do it with cameras: if you've ever seen someone use an Oculus Rift, it has two screens rendering things that look slightly different. In this context you would use a single renderer, or even two renderers, to render the same scene with two different cameras to two outputs, subtly offset from each other. There is a Three.js helper library for doing binocular 3D; it's essentially just creating another camera. So it's very possible, and the tools in Three.js for authoring this stuff are becoming very mature. Unfortunately, the thing that's lagging behind is the WebVR standard and the WebVR implementations. If you Google WebVR, there are a lot of Chrome experiments you can try, so do check that out. It's getting there, it's getting better, but it's not quite perfect just yet. Any other questions? One at the back there. Could you repeat the question? Yeah, that's a great question. I think it depends on the method that you use. To take a step back: you'll notice the globe texture I used was authored in a certain way, so that it would be conducive to being wrapped around a globe. It wasn't a flat projection; it was a projection created in a tool specifically for wrapping around a 3D object. As for loading that into a GPU, there are various scenarios and various algorithms you can use for when you zoom in and zoom out, because fundamentally it needs to load a bitmap version of your texture into memory, maintaining the colour and pixel coordinates, and it scales it in much the same way the browser does, whatever the name of the particular algorithm is. So everything will get blurry eventually, and you may have seen that when I zoomed in: it does start to get a little blurry the further in you go. But there are ways of dealing with that as well. There are things called procedural textures, which are this idea that you load a texture pack, a JSON file representing a set of textures for different zoom levels. In the same way that you serve 2x images for retina devices, it's a very similar concept, but with zoom levels: when I'm very, very zoomed in, load just this portion as a very high resolution texture, but when I'm zoomed out, load a much larger macro texture, as it were. Does that answer your question? Perfect. Any other questions? Perfect, that's a very good question. I think it's less about Three.js than it is about WebGL. WebGL is an emerging standard on mobile devices; for a long time Apple said they just wouldn't support it, but it is supported from iOS 8 onwards, I believe, maybe iOS 7. So it is working, and the benefit we have on mobile devices is that OpenGL, the base standard, is the API by which you
do complex graphics programming on those devices at a sort of an unnative level so the infrastructure and the optimizations to do high quality webgl are theoretically already in place on these devices um now the implementations of it um still aren't perfect they're emerging but it's very very usable um i would say that um targeting multiple devices can be a real challenge in webgl um especially going from things like desktop to doing like responsive graphics programming um is a bit of a challenge um simply because the performance characteristics and the performance optimizations you need to make are very almost very device specific um for instance um you're going to sacrifice the size of your textures and the number of polygons on the mobile device right just because it doesn't have the same amount of uh GPU power that a desktop computer has um and so therefore you're going to learn more textures and you can do that in one code base but you end up with a really unmaintainable set of code paths where you're having to say if mobile load this thing if desktop do this thing again that's not something we're not all completely used to um into doing responsive web stuff but it's like exponentially more um because um the performance implications are much broader and the performance is good but yes very possible broadly is the answer to the question um not by default um uh 3js um has um an api to plug in physics engines very easily um but by default um there are lots and lots of very good JavaScript physics engines now um there in fact there is a uh mr doob the guy that created 3js um recently went on like a tweet storm uh where he tweeted lots of very good uh um physics engines so i've definitely encouraged you to follow him on twitter anyway because like he does loads of awesome webgl stuff but um particularly um uh that particular instance no problem um that's that's a very good point i think that um the innovation happens in gaming i think that's the cool thing to 
understand: even the things that happen outside of gaming are reliant on the advances that have been made in gaming. Some interesting ones I've seen: I think Audi did a car configurator in WebGL (I think it was Audi, don't quote me on that), where you could choose the colours of the car, whether you wanted a hard top or a convertible, a cabriolet. And it was really immersive: you could zoom it back and forth, you could open the hood, you could open the boot, you could look inside and change the colour of the leather. Another thing that we did internally was this idea of building houses: the idea that you could build a 3D model of a house that someone can walk around. So if you're doing a big development, a big off-plan commercial real estate development, you could say: actually, here's a virtual reality device, maybe, or a web browser, and you can walk around the house that we're going to build for you. There are lots of different things you can do. I think it's still so early in terms of what those user experiences look like that not much has been done that isn't gaming, but the more that is done, the more it's blowing people's minds a little bit. Questions?

That's a fantastic question. It's very interesting, because fundamentally the skill set of being able to create these 3D objects, in 3ds Max, in Maya, wherever it is, probably isn't a skill set that most people in this room have. And so you're reliant on 3D modellers, and if you're going to do complex 3D, graphic artists. Again, we have a lot of relationships with creative people, especially on the client side; we're working with designers, user experience people, interaction designers. So it's the same sort of relationship, just with people with different skill sets: 3D modellers, 3D
artists, environmental artists, people who can produce these assets that you can integrate. In terms of skill level: as we talked about, the learning curve, especially with GLSL, can be quite steep. So I'd say start with GLSL; break the back of the hard thing first, and once you've done that, you're generally good to go. Again, you're probably not going to build an Unreal Engine-style game in the browser first off, mostly because, as we've talked about a little before, a lot of these complex WebGL games aren't actually built directly in JavaScript. People aren't sitting there writing JavaScript and writing WebGL and writing shaders; they're using a tool called Emscripten to transpile existing C++ code bases into JavaScript and into WebGL. The canonical example of this is that Mozilla worked with the people that make Unreal Engine to transpile Unreal Tournament into WebGL. It took them a weekend or something; it was a fantastic piece of engineering. Because the problem you run into, and this is obliquely the point you were making, is that the people with the real skills in building these 3D games have tool chains that work very well. They have tool sets, assets and tools that are orders of magnitude more mature than anything that currently exists in WebGL, so that kind of transpilation allows you to take advantage of a lot of that ecosystem. But fundamentally: a couple of different team skills, but the same sort of team composition, I would say.

Yeah, so Three.js has a number of renderers that you can use. WebGL, in this particular context, is effectively hardware-accelerated canvas: it's all rendered in a canvas, but what WebGL does is allow you to directly address the GPU to render what is contained within the canvas. I mean, it
doesn't use the browser runtime in order to render the contents of the canvas. SVG, in this context, is a markup language; if you've ever opened an SVG file, it looks a lot like HTML. And that has implications, right? Because if you're using SVG, you can't use shaders, and the kind of performance you're going to get is inherently tied to the DOM, simply because you're removing nodes, adding nodes, modifying nodes. It has some of the things we've talked about, the reflow and repaint implications. But effectively they're just output methodologies for the same API: different ways of rendering a particular scene, useful for different things. You wouldn't create a 3D game in SVG. Well, you might be able to, but you'd probably be a bit of a masochist, because it'd be very complex to do. Fundamentally, Three.js allows you to do complex animations, complex 3D and 2D rendering, with a consistent API across different outputs; and those outputs are SVG, 2D canvas and hardware-accelerated WebGL as well. Great question.

There are a couple of books to check out. The Three.js book, which is a classic O'Reilly-style example, is great if you're into those kinds of books. There's a website called WebGL Fundamentals, which is, I think, a sort of twenty-lesson introduction to WebGL: it covers shader languages, it covers the raw API that Three.js uses, and it's a very good introduction, so I'd definitely encourage you to check that one out. The Opera team did a developer series on WebGL, I think a series of four or five articles; that's also a really great resource, quite a high-level introduction. And there are also a number of HTML5 Rocks introductions as well, which will cover some of the same things I've talked about, but in a little bit more of a walkthrough, code-led way. But
generally, there are a lot of resources out there, and WebGL Fundamentals is one of the best; I'd definitely encourage you to check it out.

Yes, it will be. By this evening, all of this stuff will be there; by Monday, the rest of the stuff that was going to be here will also be there. I'm jo8bit everywhere, so if you follow JSChannel, I'll get them to retweet it and I'll tweet it as well, so I'll make sure everybody knows about it. There is a link, but of course it's broken right now. So effectively, yes: it'll be on GitHub, I'll tweet it, and it'll be on my GitHub at jo8bit/webgl-intro, or wherever it ends up once it's opened up. Awesome. Any other questions? Perfect.

So, I really appreciate you sitting through this for two and a half hours. Hopefully you've been bitten by the bug a little bit; hopefully you'll go away and hack on some stuff and realise how easy it is and how fun it is, because you can do some really cool stuff. And again, if you run into any issues or have any questions, talk to me afterwards, talk to me on Twitter, create issues on the GitHub repo, and I'm happy to help out. But thank you for listening and have a great conference.
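The zoom-level texture-pack idea discussed above can be sketched in plain JavaScript. The manifest format, file names and zoom thresholds here are invented for illustration; a real pack would be a JSON file alongside the textures it describes:

```javascript
// Hypothetical texture-pack manifest: maps zoom levels to textures of
// increasing resolution, much like 2x images for Retina devices.
const texturePack = [
  { maxZoom: 2,  url: 'earth-macro-1k.jpg' },   // zoomed far out
  { maxZoom: 6,  url: 'earth-mid-4k.jpg' },
  { maxZoom: 12, url: 'earth-detail-8k.jpg' },  // zoomed right in
];

// Pick the lowest-resolution texture that still looks sharp at this zoom.
function textureForZoom(pack, zoom) {
  for (const entry of pack) {
    if (zoom <= entry.maxZoom) return entry.url;
  }
  // Past the last threshold: fall back to the most detailed texture.
  return pack[pack.length - 1].url;
}
```

At zoom level 1 this picks the 1k macro texture; zoomed in past level 6, the 8k detail texture.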
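The point above about device-specific budgets (smaller textures and fewer polygons on mobile) stays maintainable if the "if mobile, do this; if desktop, do that" decisions live in one profile object instead of being scattered through the render code. A minimal sketch; the numbers are invented, not recommendations:

```javascript
// One place for per-device performance budgets, instead of branching
// everywhere in the code base.
function assetProfile(isMobile) {
  return isMobile
    ? { maxTextureSize: 1024, maxPolygons: 50000 }   // modest GPU and memory
    : { maxTextureSize: 4096, maxPolygons: 500000 }; // desktop-class GPU
}

// The rest of the code reads the profile rather than sniffing devices itself.
const profile = assetProfile(true);
```

Loading code can then clamp every texture to `profile.maxTextureSize` without knowing which device it is on.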
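"Start with GLSL" can begin as small as this: a vertex and fragment shader pair of the kind Three.js's ShaderMaterial accepts. Three.js injects `projectionMatrix`, `modelViewMatrix` and `position` for you; this sketch just fills every fragment with red:

```javascript
// A minimal GLSL pair. The vertex shader positions each vertex;
// the fragment shader colours each pixel it covers.
const vertexShader = `
  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
  }
`;

// With Three.js this would plug in roughly as:
//   new THREE.ShaderMaterial({ vertexShader, fragmentShader });
```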
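The "same API, different outputs" point can be sketched as a fallback chain: prefer hardware-accelerated WebGL, fall back to 2D canvas, then SVG. The `caps` object here is a stand-in for real feature detection, and the strings name the renderers Three.js shipped at the time:

```javascript
// Pick the best available Three.js output for this browser.
function pickRenderer(caps) {
  if (caps.webgl) return 'WebGLRenderer';     // direct GPU access, shaders
  if (caps.canvas2d) return 'CanvasRenderer'; // 2D canvas, no shaders
  return 'SVGRenderer';                       // DOM-bound: reflow/repaint costs
}
```

A 3D game would insist on WebGL; a simple chart could happily take any of the three.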