How many of you have already worked with WebGL before? No? Okay. So this talk is an introduction to WebGL, which is basically an API in the browser to draw graphics via the GPU. It's a very introductory session, so if you already have a lot of experience with WebGL, I don't think you'll get much out of it. But if you've not used the native API — if you're used to libraries like Three.js or other higher-level libraries — and you want an introduction to the basic WebGL API, then the talk will be helpful. Can somebody tell me what the GPU is? Graphics processing unit. How is it different from your CPU? What's the difference? Sorry? It's a dedicated processor. It's for images. One more? Yeah. Okay. So to the point about numbers: everything in a computer is numbers eventually, so the GPU also does number processing at the end of it all. As for the definition that separates out image processing for the GPU — it is dedicated for graphics, that's the original intent, but not necessarily only that. The key difference is that a GPU is made up of several small processors. They're not as advanced and as fast as the CPU — the one processor in the CPU is faster than any individual processor in the GPU. But because the GPU has several of them, it can do things in parallel, and that's why it ends up being faster. Especially for images it ends up being faster, because an image is a set of independent pixels. Yeah, so I don't have any slides today; I'm just going to walk through a little bit of code. This is the Wikipedia definition of a GPU: a graphics processing unit, or GPU, is a specialized circuit designed to rapidly manipulate and alter memory in such a way that it can accelerate the building of images in a frame buffer intended for output to a display. So essentially it's a means to quickly manipulate memory. If you have a bunch of values in memory, a GPU is a unit that can take that array of values and transform all of them in parallel.
So instead of doing it sequentially — let's say you want to multiply each number in an array by two. If you did it on a CPU, it would go from the first element to the second to the third: it would do it sequentially. But if you give it to a GPU, it will multiply all of them at once, because it has several processors. And just to make that a little bit more clear — no, the video isn't going to play. All right, never mind. The video I was going to show you was by the MythBusters — it's a TV show, if you haven't seen it. They try to explain concepts in nice interactive ways. They have a demo where they show a CPU drawing first: it just goes one by one, so if it's drawing a smiley face, it draws one pixel, a second pixel, a third pixel, a fourth pixel, a fifth pixel. This is how a CPU would do the process. Then for the GPU, they show the same example with a machine that isn't drawing one pixel at a time — it has several thousand paint nozzles, and all of them together paint the whole image on a wall in one go. So a GPU is essentially a collection of several small, simple processors. So, WebGL. Has everyone here used canvas? Canvas is the drawing surface that you have inside your browser. And WebGL is a context — one of the contexts you can get from canvas. You can get the regular 2D context from canvas, or you can get the WebGL context. It is based on a specification called OpenGL ES, which is used on a lot of mobile phones and a lot of desktop tools as well. The API looks almost the same — even though OpenGL ES is a C-based API, the JavaScript version looks almost exactly like it. In fact, there is very little separate documentation for the JavaScript version; the OpenGL ES documentation will mostly tell you what you need. And it is not necessarily only for 3D. The major misconception that a lot of people have about GPU processing is that you need it only when you're doing 3D graphics manipulation.
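To make the sequential-versus-parallel idea concrete, here is a plain-JavaScript sketch — this is not GPU code, and the function names are my own; the `map` version only simulates the idea that the same tiny kernel runs once per element:

```javascript
// CPU style: one processor walks the array one element at a time.
function doubleSequential(values) {
  const out = new Array(values.length);
  for (let i = 0; i < values.length; i++) {
    out[i] = values[i] * 2;        // one element per step
  }
  return out;
}

// GPU style (conceptually): the same tiny kernel is applied to every
// element "at once" on many small processors. map() only simulates
// that idea here — imagine each call running on its own core.
const doubleKernel = (x) => x * 2; // what a shader would compute per element
function doubleParallelSketch(values) {
  return values.map(doubleKernel);
}
```

Both produce the same result; the difference on real hardware is only in how many elements are processed per step.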
Heavy graphics manipulation, for example. But the GPU is not only for that. It can just as easily handle 2D, and it can just as easily handle non-graphics work, because essentially it is a number processor: it takes an array of values and transforms that array of values into something else. So you do not have to use it for graphics. In fact, the company I work for is into data analysis, and our main use of the GPU is on the server, to do data analysis of large data sets. We are also using it on the client to do data manipulation, but that is not our main use. So GPUs can be really, really fast for other purposes as well. And that is what I was trying to say earlier: WebGL is really low level. It does not care whether you are drawing 2D or 3D — that is up to you, how you figure out your 2D or 3D drawing. It has a three-dimensional coordinate system, but that is about it. So this is where I wanted to show demos, but I don't think I have internet access. All right, so we'll get to the demos later. We'll start by writing a first GPU program. Can everybody read this? Does it need to be bigger? Bigger? Yeah, good. Now, because this is new — the programming language is different from JavaScript, even though you access it from JavaScript; the core concepts are different, and how the runtime functions is also different — try to pay attention. If you get lost, stop me, and we'll take a quick look back again. It's the only way I could think of to explain this. All right, so to start with I have a simple skeleton page. It is going to be the basis of any WebGL application that you write. What am I doing here? I have created a canvas — to run WebGL, you need a canvas. In fact, you can't even run it in all browsers today; you can run it in pretty much everything except IE. The specification is essentially complete, although formally it is still a proposal rather than a finished standard.
As of IE10, Microsoft isn't supporting it, and their arguments haven't been very well founded — but it's just them; they are not supporting it. The good thing is that almost all recent mobiles have it — at least the smartphones — which is great, because if you are doing a mobile-specific thing, you can use it. If you are not doing a mobile-specific thing — if you are doing a general web application — then as long as you have a canvas, you can do things: you can fall back from WebGL to canvas. In fact, a lot of the common WebGL libraries that are available, like Three.js, do that. They try to draw with WebGL, but if they don't get a WebGL context, they draw into the 2D canvas itself. The problem with that is that the 2D canvas isn't GPU accelerated in most browsers — IE is actually an exception there — so your rendering will be slightly slower, but it still kind of works. So this is the basic shell: we have a canvas in place, and this is my initialization, my start-up function, once I have the canvas. Before I go too fast — should we see the demos first? Perfect. Yes. So we will actually see the demos first. That will be motivation, hopefully, to stay awake, because we're going to see a lot of code. So, here's my first demo. It's done with Three.js — that's just one of the popular libraries. This is really high-end 3D, and that's possible in the browser today. It's a really high-quality texture; in fact, if you weren't limited by bandwidth, you could have gone even higher quality. And the model itself isn't that complex, although the face part is fairly complex. That's interesting. Of course, if you're not building 3D car-racing games, this by itself isn't going to help you. Right. Sorry. So this next one shows an application that wasn't possible on the web before this. What this company does is collect data about human body movement.
They have measured how humans move in certain situations — when they are sad, or when they are in different moods. What this application does is visualize that data: for example, if you're really, really sad, then you move like this, and if you're super happy, you move like that. The thing about this is that once you learn WebGL, it opens up new avenues — applications like this. See, the reason this wasn't possible in old browsers is that you have to process each point: this model is built out of small little triangles. Every curve is built out of several little triangles, so this is probably a few hundred thousand points. It's already animating all those few hundred thousand points, and every time I change a setting, all of them have to be manipulated. That just wasn't possible in the browser before — and that's where WebGL starts to become interesting. Next one: you can rotate this globe, and it shows earthquakes around the world over the last five or six years, and does it really nicely. Has anybody seen this one? Google had this Body Browser that they released last year. Can you see it? Cool. So this is another application that just wasn't possible on the web earlier, and is only possible with WebGL as a part of the web standards. All right, so this is that earthquake application. I can't show all of it. Okay — a nice little model, and it's very simple to build, actually, theoretically. When I clicked on something, it showed me one earthquake and how far it stretched, and that's pretty nice. My reason for showing all this was that most people have this conception that you need the GPU only if you're building games. The reality is that GPU processing is everywhere. On your iPhone, the moment you do a swipe, that swipe is so smooth because it's using the GPU. It wasn't possible on other phones earlier because they didn't have good quality GPUs.
And the iPhone came out with a pretty decent quality GPU, and the reason you see all those nice effects is because they're being done on the GPU. So the GPU matters for plain UI work as well. Although, at least in the latest browsers, with CSS3 transitions you do get some amount of GPU acceleration, WebGL gives you more control. So we'll keep those examples as a goal for when we come back to them later, okay? This is my skeleton. I've created a canvas, and I've written a self-executing function that gets that canvas element by ID and passes it to the function. Everybody here understands what's happening here, right? Okay, if you don't understand this, then we have a problem — so just ask right now. All right, so we have a function, and the reason I've created it is that this is where my scope is going to be. I'm not going to modify anything outside; my scope is this function, and everything I do will be inside this function. A simple example. The first thing you need to do, just like with anything you do on canvas, is get a context. If you're doing regular canvas drawing, you get a 2D context — you might be familiar with this API call, which is canvas.getContext. In regular canvas drawing you say "2d", but in WebGL you say "experimental-webgl", and this "experimental" prefix is going to go away very soon; the standard is final, so most probably it will become just "webgl" soon. You must do this in a try-catch block, because browsers that don't have it will throw an exception — they don't just fail quietly, they throw an exception. So you might want to catch and react to that exception, but as long as my context is null after I've done this, I assume that WebGL is not available. This could serve as a quick WebGL test, if you want to check whether the browser you're running supports WebGL. Once I've gotten a WebGL context, I log the context object as well. So I will just quickly run this application.
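The context-acquisition step just described can be sketched like this — `getGLContext` is a name I've made up, but the context id strings and the try-catch pattern are the ones from the talk:

```javascript
// Try to get a WebGL context from a canvas; return null if WebGL is
// not available. Some browsers throw instead of returning null, hence
// the try-catch.
function getGLContext(canvas) {
  let gl = null;
  try {
    // Prefer the final name, fall back to the prefixed one.
    gl = canvas.getContext('webgl') ||
         canvas.getContext('experimental-webgl');
  } catch (e) {
    gl = null;
  }
  return gl; // null means WebGL is not available
}
```

A non-null return value is your window to the GPU; a null is a quick "no WebGL here" test.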
So if we look at the console, you see that my message is now printed, and this context object is basically your window to the GPU. You can only access the GPU by doing something on this context object. And what happens is that the moment I create a WebGL context, something called a drawing buffer gets created. This drawing buffer basically has information about all the pixels that you want to draw. Because my canvas was 500 pixels by 500 pixels, my drawing buffer is 500 by 500. But the actual memory it takes is more — it's not taking just 500 × 500 bytes or whatever; it's taking more memory. Why? Because for each pixel I need to store color information, which is R, G, B — red, green, blue — and the alpha at that pixel. It also stores two other values. One is for depth: it basically determines, if you're drawing two objects, which one is in front and which one is at the back. And there's a third value called the stencil, which you don't need to worry about right now. So it has these values — R, G, B, A, depth, and stencil — and the overall size of the buffer that gets created is 500 × 500 times the size all those values take to store. This drawing buffer is where you will do all your manipulation; the GPU will then eventually render information from this drawing buffer. So it's important to know that we created the drawing buffer. That was simple enough as a starting point. Now, you may notice that even though the buffer is there for drawing, you don't see anything on the page. The reason is that the default value of alpha is 0. The default value of each pixel is 0, 0, 0, 0 — R, G, and B are 0, and alpha is also 0 — which basically means that the pixel is invisible, and that's why you don't see it. But you can change that. And notice one thing: the moment I got the context, the canvas got drawn immediately. That word "immediate" is important: WebGL is an immediate-mode graphics API.
And it's different from a retained-mode graphics API. For example, in the browser you have the DOM. Say we create a div and append it — the div doesn't appear immediately. When you call the DOM to add the div element, what happens first is that the DOM, which is a data model representing what is on the screen, gets updated. Then, later, asynchronously, a rendering phase gets triggered in the browser, and this DOM — the data model of the screen — actually gets reflected on the screen. That's called retained-mode drawing, and it's common in pretty much all UI frameworks, not just browsers. In other UI frameworks, they call this a scene graph. Basically, retained-mode APIs keep a data model representing what's on the screen. Immediate-mode drawing APIs, on the other hand, simply draw. The moment you tell one to do something, it will go do it. If you tell it to draw a line, it will draw a line, and after that it will not remember that it drew a line — there is no data model that tells you that a line was drawn. That's the difference between immediate-mode and retained-mode APIs. You could, of course, build a library on top of WebGL that does the retaining for you: you could create your own data model and say, I have 4 cubes in my scene, this is the length, breadth, and height of each cube, and then later render all of those cubes by calling WebGL APIs. But WebGL itself is immediate: the moment I created the context, the canvas actually got rendered. The reason we didn't see anything was that its alpha was 0. Now we'll change that — we'll actually set a background. So remember I said that the drawing buffer has values for each pixel: the RGB and alpha values of each pixel. The default value of RGB and alpha is 0, 0, 0, 0, but we can change that, and we change it by calling this clearColor method.
clearColor basically says: when you clear, use these values as the default. The next thing we have to do is clear the buffer. When we clear the drawing buffer, it goes to each pixel value and changes the color to whatever we set here. By passing a flag, we say: only touch the color values; don't modify the depth and the stencil values, which are also in the buffer. So this is what I do. Now let's see what happens. The refresh actually takes a lot of time, because this page also loads a heavy model — a model of basically the entire body, all the organs, the brain, the muscles, and so on, navigable just like Google Earth. Let's move on to the next example. So far we have done very little: we have drawn a background. In the next five minutes I am going to try to draw a square, and that's about it. But in the process of drawing a square, I am hoping to cover all the concepts required to do the drawing. So here's how the drawing is done. The GPU is a separate processor from the CPU, so it does not have access to the same things that you have access to on the CPU. Whenever you want to run something on the GPU, you have to upload it to the GPU and then tell it to run. And this thing that you upload to the GPU can be of two kinds: it can be instructions — programs — or it can be data. The data is generally stored in buffers: you can create an array and push that array to the GPU. You push the program to the GPU too, and then tell the program to operate on the data. But this is happening in a separate processor — you have to understand that — so when the result has to come back, you may have to read it back from the hardware. These programs that you push to the GPU are called shaders. So: buffers are where you store the data, and shaders are what you use to operate on that data. Ten minutes? For Q&A? We'll have Q&A afterwards — or do you want to let me finish? I am sorry.
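The set-a-background step can be sketched in two lines — `paintBackground` is a wrapper name of my own; `clearColor` and `clear` are the standard WebGL calls discussed above:

```javascript
// Record the default pixel color, then clear only the color portion
// of the drawing buffer (the flag leaves depth and stencil untouched).
function paintBackground(gl, r, g, b, a) {
  gl.clearColor(r, g, b, a);       // default values for each pixel
  gl.clear(gl.COLOR_BUFFER_BIT);   // rewrite the color values only
}
```

Calling `paintBackground(gl, 0, 0, 0, 1)`, for example, would give an opaque black background instead of the invisible default.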
So a shader is a set of instructions you send to the GPU to run on the data. The goal is to process the data: the GPU modifies the data in some form and pushes it to the frame buffer to render it on the screen. So we push the data and we push the instructions; it runs the instructions, gets the output, and displays it on the screen. These instructions — the shaders — are written in a different language, called GLSL, the OpenGL Shading Language. Since it's a different language, I am going to show you a simple example. Question? Yes — but I am going to just draw a rectangle, a two-dimensional rectangle; that is the goal here. The question is: how does it benefit performance even in 2D? Well, essentially, at the end of it all, everything is two-dimensional drawing — the frame buffer of your hardware is two-dimensional, so everything you draw on the screen ends up two-dimensional. The benefit is that you process each pixel faster. For example, if you are building an Instagram-style filter, you take each pixel and run it through a bunch of arithmetic operations to transform that pixel. This arithmetic that happens on the image is very costly, so if you do it sequentially on each pixel, it will take you forever. But if you do it in parallel, which is what the GPU does, you can do it much quicker. And that is why you use shaders even for 2D image processing. Shaders look like this. There are two kinds of shaders — this program, the set of instructions that we send to the GPU, is made up of two types. One is called the vertex shader: the vertex shader basically decides the position at which to draw each point. The other is the fragment shader: it figures out the color of each pixel. This one here is the vertex shader, this one is the fragment shader, and here I am saying: whatever position is input to me, set that as the position.
Basically, what I am saying is: if you give me a position to draw, I will draw it there. So our vertex shader is doing essentially nothing. Our fragment shader is saying: whichever pixel I draw, set its color to (1, 1, 1, 1), which is essentially white with full opacity — the RGB values range from 0 to 1. So we have two very simple shaders: the vertex shader says, whatever vertex position you give me, I will use it; the fragment shader says, whichever pixel you give me to color, I will color it white. The thing about shaders is that they operate in parallel. These are the instructions that run on the GPU. So if you have, let's say, 10,000 vertices in your drawing — say you are drawing that human body and it has 10,000 vertices — all the vertex shader is doing is setting the same position for each vertex, but it will do it in parallel on hundreds of processors inside the GPU. Similarly the fragment shader: when it's coloring our Instagram-style image — if the image is 250 pixels by 250 pixels and it has to change the color of every pixel — it will do all of that in parallel, and in this case this fragment shader will set all of them to white. So if you wanted to convert an image entirely to white, this is what you would do. All right, so how do you run these programs?
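Pulling together the steps that follow, here is one sketch of the whole pipeline. All function and variable names here (`buildProgram`, `drawSquare`, `aVertexPosition`, and so on) are my own choices; the `gl.*` calls are the standard WebGL 1 API, and the exact shader sources and coordinates in the live demo may differ:

```javascript
// The two shaders as GLSL source strings (roughly what would sit in
// the page's <script> tags).
const vertexShaderSource =
  'attribute vec2 aVertexPosition;\n' +
  'void main() {\n' +
  '  gl_Position = vec4(aVertexPosition, 0.0, 1.0);\n' + // pass through
  '}\n';

const fragmentShaderSource =
  'void main() {\n' +
  '  gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);\n' + // opaque white
  '}\n';

// create → source → compile for one shader, with error checking.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);  // GPU allocates a shader object
  gl.shaderSource(shader, source);       // push the source string across
  gl.compileShader(shader);              // GPU compiles the GLSL
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}

// Attach both shaders, link them into one program, start using it.
function buildProgram(gl, vsSource, fsSource) {
  const program = gl.createProgram();
  gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, vsSource));
  gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, fsSource));
  gl.linkProgram(program);  // combine into one GPU program
  gl.useProgram(program);   // start using it
  return program;
}

// The square as two triangles in clip space: the centre is (0, 0) and
// the corners run from (-1, -1) to (+1, +1).
// 6 vertices × 2 coordinates each = 12 values.
const squareVertices = new Float32Array([
  -0.5, -0.5,   0.5, -0.5,   0.5,  0.5,   // triangle 1
  -0.5, -0.5,   0.5,  0.5,  -0.5,  0.5,   // triangle 2
]);

// Copy the vertex data from CPU memory into a GPU buffer, wire it to
// the shader's attribute, and draw.
function drawSquare(gl, program, vertices) {
  const buffer = gl.createBuffer();                     // space on the GPU
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW); // CPU → GPU copy
  const loc = gl.getAttribLocation(program, 'aVertexPosition');
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0); // 2 floats/vertex
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.drawArrays(gl.TRIANGLES, 0, vertices.length / 2);   // 6 vertices
}
```

Swapping `gl.TRIANGLES` for a line mode such as `gl.LINE_STRIP` in the final call is what reveals the underlying wireframe mentioned later in the talk.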
These are two very simple GPU programs: one decides where to draw a vertex, the second decides how to color each pixel. We have already got our context — everybody has seen that; if you've lost track, just tell me. Basically, I get that source string — I have created my shader source as a script tag in the DOM, so I read that string out. Then you tell the GPU to create a shader. This side is your CPU: here you pass a message to the GPU to create a shader, so the GPU allocates memory for the shader and creates it. Then you say: the source of the shader is this string that I have on the CPU. When you do that, the string that we had stored in our script tag gets pushed to the GPU. Once that happens — this is what I did here — I finally compile the shader. The GPU knows how to compile and understand that language, so when I say compile the shader, the GPU compiles it, and if something is wrong it reports an error through the context. I am basically telling the GPU to do something, and almost 90% of the time while developing you will hit some errors there. So this is how I send my vertex shader to the GPU. Similarly, I will send my fragment shader to the GPU. Now I have two shaders, and I've sent both of them to the GPU. Then I am going to combine them, and this combined unit is called a GPU program, or shader program. Here I basically say: create a program object, attach the vertex shader, attach the fragment shader, link the program, and start using this program. Simply put, I am saying: everything I gave you, start using it. So we gave it two shaders: one says, for every vertex, take the position as it is given to you — don't modify it, just draw it where it is; and the second one sets the color of each pixel to white. Now I have to give it something to draw. In our case, we are going to draw a rectangle — a very, very simple rectangle. So here are the vertices of my rectangle. A rectangle has four vertices, but I have given it more values than that — 12 of them — and 12 values
doesn't make 4 vertices; it makes 6 vertices. Right — so what is happening here? Yes, 6 vertices — check my math. Basically, the GPU draws triangles. If you want a complex shape — if you have, let's say, a sphere — you assemble the sphere out of really, really small triangles. The earth that we just saw in our example is essentially a set of thousands of triangles. (Three minutes? I will finish in three minutes.) Basically, every drawing in the GPU is a bunch of triangles, and that is why the square I am trying to draw is also a set of two triangles: this triangle, and this other triangle. And this is what I have input: I have input the vertices of these two triangles. Now, what might be confusing you is the coordinate system. The coordinate system is very simple: the centre is (0, 0), this corner is (-1, -1), and this corner is (+1, +1). So what I do is: I have created these values in a local Float32Array — a special array that stores float values. Then, see, I said context.createBuffer: now I am telling the GPU to create some space for these vertices. Then I bind the buffer and push the data, which basically says: take these vertex values, which are in my CPU memory, and put them into the GPU. (Oops, sorry — there's also a STATIC_DRAW flag; that's just a usage hint I have to set.) Now, this is important: this line basically says, take each of these vertex positions and feed it into this variable — a variable that exists in my vertex shader. My vertex shader receives a variable of two values, called the vertex position — these are the two coordinates — and I need to pass them from the CPU to the GPU, and that is what I am doing in this instruction. What I am saying here is: everything that is in my vertices array, push it to this variable. And then — basically the same lines as before — I say clearColor,
which sets the default value that will be used when the context is cleared, and this clears the color. And this is where the drawing happens: this line basically says, do all of that stuff that you are supposed to do — and we get a square. Cool. So we did that. What I told it to draw was triangles, but I can do something else as well: if I switch to a line mode, I get to see the wireframe. Remember I told you that it is actually drawing triangles? What you saw as a square is essentially drawn as this triangle and this other triangle. And this is how it essentially works. I will share all of this on my GitHub, where you can take a look. But this is the simple introduction to the fact that you can run programs on a separate processor, and these programs run in parallel. The point being: when I gave it the array with 6 vertices, it didn't draw the first vertex, then draw a line to the second vertex, then a line to the third vertex — it did all of that in parallel. The vertex shader, in parallel, figured out the position each vertex should have; the fragment shader, in parallel, figured out that each pixel has to be white. And this happened all at once rather than one after another. But you don't actually have to call that draw at all — you could do the computation and never draw, which is the main use case I am putting it to: data analysis. The GPU has dedicated hardware for trigonometric operations and dedicated hardware for matrix math, and because of that, if, for example, I wanted to multiply two matrices of size 1024 × 1024 — which is a really big matrix multiplication — doing it on the CPU takes me, I don't know, half a minute or so, but on the GPU it takes a second, if it's a fast enough GPU. So if you have parallelizable data, the GPU is a great place to do the work. But it works only if you have a collection of data. If you only want to operate
on one data point, then the GPU isn't really of value to you. It is valuable when you have parallel data to process, and then it's great. If you are more interested, we can talk outside. Thank you so much for coming. Have a good day.