I'm KP, and this is Tamil. Our talk is about how to get started with OpenGL programming on Android. It's more about OpenGL than anything specific to Android. The structure of our talk is going to be like this: initially we'll cover some basics about OpenGL, and then we'll jump directly into code. What we'll try to explain today, as we go through the code, is a simple OpenGL application that draws some basic shapes and then does some basic animation with them.

So what is OpenGL? Most computing systems available today have dedicated hardware specifically for processing graphics. OpenGL is a library that gives you access to this dedicated hardware. What that means is that most of the processing in programs that use OpenGL happens on the GPU rather than on the CPU, and the GPU is a dedicated piece of hardware that is highly optimized for executing graphics instructions. You have multiple choices if you want to do this, such as DirectX on Windows and similar libraries. One advantage of OpenGL is that it is available across different platforms, such as Windows, Mac, and Linux. There is an OpenGL implementation for almost all the platforms that I know of.

OpenGL ES. OpenGL ES is a stripped-down version of the OpenGL that is available for the desktop. It is highly optimized for embedded devices such as mobiles and consoles. OpenGL ES can be described as a strict subset of the desktop version of OpenGL. There are different versions of both OpenGL and OpenGL ES, such as OpenGL ES 1.1 and 2.0, and there are some major differences between them; we'll get to that later. Like OpenGL, OpenGL ES is available for a wide variety of platforms. That means that even if you create an OpenGL ES based program for Android, without much difficulty you can use it for, say, iOS too.

Now, some very basic concepts related to computer graphics, for people who are new to 3D graphics. Before I started with OpenGL, what graphics meant to me was Photoshop. But basically, computer graphics can be classified into two different types: vector graphics and raster graphics. In vector graphics, you build the objects in your scene from a set of very simple primitives, such as lines, polygons, surfaces, et cetera. So if you want to show a sphere, you build it out of much simpler surfaces, and what you get is an approximation of a sphere; the more elements you use, the better the approximation. Raster graphics is what most of us are used to: what you deal with is a grid of pixels, like when you do something in Photoshop. One simple example: in vector graphics, you can describe a circle using just the center point and the radius. That is all the data you need to represent a graphic that contains a circle. Whereas in raster graphics, you need the entire bitmap with different colors for the circle. So if you want a bitmap of higher quality or larger size, you need more storage, whereas in vector graphics all you need to store is the center point and the radius. It doesn't matter how big the radius is; even for a huge circle, you still store just those two values.
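To make that storage point concrete, here is a tiny Java sketch (ours, not from the talk) contrasting the two representations; the class name VectorCircle is hypothetical:

```java
// Vector representation: a circle is just three values, regardless of its size.
class VectorCircle {
    float centerX, centerY; // center point
    float radius;           // radius -- total storage stays constant
}

public class StorageDemo {
    public static void main(String[] args) {
        // Raster representation: storage grows with the image size.
        // A 1000x1000 RGBA bitmap needs width * height * 4 bytes:
        int width = 1000, height = 1000;
        long rasterBytes = (long) width * height * 4; // 4,000,000 bytes

        // The vector circle needs only three floats (12 bytes),
        // whether its radius is 5 pixels or 5000.
        long vectorBytes = 3L * Float.BYTES;

        System.out.println("Raster: " + rasterBytes + " bytes, vector: " + vectorBytes + " bytes");
    }
}
```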
So that is one major difference between vector graphics and raster graphics. In OpenGL, you use both of these concepts. You use vector graphics to define the objects in your scene: initially you define the objects in a 3D space, and then, as part of the OpenGL rendering, that is finally rendered to a screen. That's where raster graphics comes into the picture. It is also used when you apply textures to a surface; we'll look at that later. OK, Tamil, you can continue.

As KP mentioned, OpenGL is a combination of both vector graphics and raster graphics. So how do vector graphics and raster graphics come into play in OpenGL? That's what we're going to see now. OpenGL actually performs many steps that happen on the GPU, so we don't have to worry about all of them now; we'll just look at a simplified flow of what happens between OpenGL and the GPU. Initially, to draw anything on screen, you define what objects you are going to draw and how you are going to draw them. That is the data we have to provide to OpenGL; that is the first step, and what you see there is the data. After you provide the data, OpenGL converts it into shapes. For example, take a square: we define it with four vertices. So what you give to OpenGL is those four vertices, and you say that those four points form a square. OpenGL then converts them into its internal representation. That's what happens in the vector operations stage.

Then comes the next step, rasterization. The vector objects alone cannot be displayed directly on the screen. OpenGL does 3D graphics, so we have to map the 3D objects onto the 2D screen. That is where raster graphics comes into play: we convert the 3D objects into a 2D pixel representation. Farther objects will appear smaller when you look at the 2D screen; those kinds of things happen in rasterization. Then come the fragment operations. There, OpenGL decides what color to render at each pixel. So far we have only defined the objects — for example, we defined the square, and it is rasterized to the 2D screen — but we haven't said what color it should display. That happens in the fragment operations. Finally, the result is rendered to the framebuffer, which can be targeted either at the screen or at a texture; those are other topics which we will not cover now.

There's one more thing about OpenGL. OpenGL is not like the traditional object-oriented programming we are used to; it is a state machine. I guess everyone knows what a state machine is: a state machine maintains its state until some transition occurs via some command. OpenGL does the same. For example, if you want to draw a line with some color, you first set the state — the line color — to red or whatever, and then you call the draw function, and it will use the previously set state. Until you change that state, whatever you draw will use the same color. So this is one important concept that everyone has to remember while programming in OpenGL, because you cannot visualize things in an object-oriented way in OpenGL; that is one important thing to keep in mind.

Before we start actually drawing the basic shape, there is some setup that we need to do that is specific to Android. There are different versions of OpenGL ES: there's OpenGL ES 2 and the previous versions.
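To illustrate the state-machine idea in code, here is a minimal sketch, assuming the standard Android GLES20 bindings (the snippet itself is ours, not from the slides):

```java
import android.opengl.GLES20;

// OpenGL as a state machine: you set state first, then issue commands
// that implicitly use whatever state is current.
void stateMachineExample() {
    // 1. Set a piece of state: the color the framebuffer will be cleared to.
    GLES20.glClearColor(1.0f, 0.0f, 0.0f, 1.0f); // red

    // 2. Issue a command. Note that glClear takes no color parameter --
    //    it uses the clear color that was set above.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Every subsequent glClear keeps using red until the state is changed:
    GLES20.glClearColor(0.0f, 0.0f, 1.0f, 1.0f); // now blue
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
}
```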
What we are going to use today is OpenGL ES 2. If you want to use OpenGL, you have to declare it with a uses-feature element in the AndroidManifest; that's similar to asking for a permission in normal Android apps. For normal views, what we use in Android is a View or a SurfaceView, but for rendering with OpenGL, what Android provides is GLSurfaceView. You are trying to render something onto the screen, so you need some class to handle that surface; that is the purpose of GLSurfaceView. GLSurfaceView doesn't do a lot by itself — it basically manages the surface you are going to draw onto. This code snippet shows the scene being set up with a GLSurfaceView: we set the EGL context client version — since we are using OpenGL ES 2, we set that to 2 — and then we set a renderer. The GLSurfaceView itself doesn't take care of rendering the actual graphics; that is something we need to do. So what we do is create a subclass of GLSurfaceView.Renderer, add our logic which uses OpenGL to render the graphics, and finally set that renderer on the GLSurfaceView. The GLSurfaceView will then use this renderer to actually render the content.

This is what I just said: most of the work in OpenGL happens inside the renderer. There are three major callbacks in the renderer: onSurfaceCreated, onDrawFrame, and onSurfaceChanged. onSurfaceCreated is called each time the surface is created, onDrawFrame is called once for each frame rendered, and onSurfaceChanged is called whenever the dimensions of the window containing the surface change. What we are doing here is nothing much: we are just setting a clear color. As Tamil already mentioned, OpenGL is a state machine. Normally, if you wanted to paint a background, you would call something like setBackgroundColor(color) with the color as a parameter. Instead, here you first set the color and then you call clear. You can see that the glClear method doesn't take any parameter for the color: it uses whatever color was set previously. Each time you call glClear, it clears with the color you previously set; if you want to change it, you have to do that explicitly. Then we are setting the viewport, which describes the region you render to: even if you have an entire window, you might not want to render to all of it; you might want to use only a part of it. In this case we are passing the entire width and height, which means we render to the complete window. And that's it — it's very basic.

So after you run this, this is what you actually get. We do have some sample apps; we'll show you a demo. This is the sample app which does the same things we've shown in the slides. I'll just run this app and see what it does. That is the output. It's just an empty cleared screen, but it's the starting point: without a surface in place and the ability to clear it, you can't do anything.

So we've got the surface in place. Now we want to draw some stuff on the screen. As I mentioned earlier, we have to provide some data to OpenGL for it to understand and create the objects. For example, take the square: a square can be represented with four vertices. A vertex is a primitive using which you can define the square. In the same way, there is something called an edge: a line that joins two vertices is called an edge.
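Putting the setup just described together, a minimal sketch might look like this — assuming the standard android.opengl APIs; the class names MainActivity and MyRenderer are our own. The manifest declaration mentioned above is the usual `<uses-feature android:glEsVersion="0x00020000" android:required="true" />` element.

```java
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        view.setEGLContextClientVersion(2);   // we are using OpenGL ES 2
        view.setRenderer(new MyRenderer());   // our subclass does the real work
        setContentView(view);
    }
}

class MyRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // Set state: the color glClear will use (dark gray here).
        GLES20.glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // Clear with the previously set color -- no color parameter here.
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        // Render to the whole window.
        GLES20.glViewport(0, 0, width, height);
    }
}
```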
And there is something called a face, which you get with three or more vertices: with three or more vertices you can form a closed shape — for example, a triangle is a closed shape — and those are called faces. These are the primitives with which you define the objects to draw on the screen. Here we have some sample data for a square which we will pass to OpenGL. OpenGL uses a coordinate system that starts at minus one and ends at one: both the x and y axes run from minus one to one. Everything we define within OpenGL is in floating-point values. Here we have defined a square whose coordinates run from minus 0.5 to 0.5, so you can understand the primitive.

The coordinates we have just shown are stored in a float array. But a float array is not very efficient in terms of graphics processing, so what OpenGL accepts is a byte buffer, which is optimized for this kind of storage. We won't go deep into that now: we simply store the coordinates into a buffer and use that buffer to pass them to OpenGL.

Now we have the data, and we have to tell OpenGL to draw it. We need commands to make OpenGL draw; those commands are glDrawArrays and glDrawElements. glDrawArrays works on the array we have already created: to define the square you defined its vertices, and that data is stored in an array; glDrawArrays picks up all the values in that array and draws them onto the screen. That is the purpose of glDrawArrays. Then there is one more function, glDrawElements. That comes into play when you have complex shapes to draw. Our square is simple — you can easily list the points — but when you have complex shapes with many shared vertices, you use glDrawElements. We won't cover that now; let's move on to the important stuff.

So we now have the vertices and the buffer that holds them. But how do you tell OpenGL how those points are related to each other? We know it's a square, but how does OpenGL know those points form a square? For that, we describe the data using one of a set of primitive types, which are listed on this slide. These are examples of the primitive types. With points, each vertex is a single point on the screen, so OpenGL just draws the points. If you say lines and you have ten vertices, it will draw a line for every pair of vertices — for every two points, a discrete line segment. Line strips are similar, but a line strip doesn't draw discrete segments; it draws one continuous line through the vertices. The same idea applies to triangles and triangle strips; I think you can see how that works. Now I'll hand over to KP and he will explain the rest of the process.

So what Tamil was explaining is how you pass the data about the objects in the scene to OpenGL. When you have complex scenes, you also pass the geometry of the different objects along with their orientations, and the camera position. Along with all these vertices, you have the camera position and the projection you are using, whether orthographic or perspective. All these things are described using matrices, and all of them are part of the data.
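As a sketch of the data step just described (our code, following the talk's description): the square's coordinates go into a plain float array, which is then copied into a direct byte buffer in native byte order so the GPU driver can read it efficiently.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Four vertices of a square in OpenGL's [-1, 1] coordinate system,
// from -0.5 to 0.5 on both axes. The ordering suits GL_TRIANGLE_STRIP:
// consecutive triples of vertices form the two triangles of the square.
static final float[] SQUARE_COORDS = {
    -0.5f,  0.5f, 0.0f,   // top left
    -0.5f, -0.5f, 0.0f,   // bottom left
     0.5f,  0.5f, 0.0f,   // top right
     0.5f, -0.5f, 0.0f,   // bottom right
};

static FloatBuffer makeVertexBuffer(float[] coords) {
    // A plain float[] lives on the Java heap; OpenGL needs a direct
    // buffer in native byte order that the GPU driver can access.
    ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * 4); // 4 bytes per float
    bb.order(ByteOrder.nativeOrder());
    FloatBuffer fb = bb.asFloatBuffer();
    fb.put(coords);
    fb.position(0); // rewind so OpenGL reads from the start
    return fb;
}
```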
But then you have to apply all these transformations and finally figure out the final orientation and position of each object. That is done in the second step, the vector operations. These vector operations are carried out in the hardware. Whatever we have been talking about until now — setting up all these buffers and arrays — happens on the CPU, but actually calculating the final position and orientation of each vertex happens on the GPU. That is what happens in the vector operations stage. We have already discussed rasterization, which finally converts everything to the 2D screen, and then in the fragment operations you calculate the pixel value for each point on the screen.

Before OpenGL ES 2, the stages from the vector operations onward were fixed: you set the values and then OpenGL took care of it; you didn't have any control over it. But since OpenGL ES 2, you do have control: you can write programs which affect how these operations are carried out — mostly the vector operations and the fragment operations, those two stages. That is where shader programs come into the picture. There are two types of shaders that we use: a vertex shader and a fragment shader. A vertex shader is a program which executes once for each vertex of the object; in this case we are trying to draw a square, so it executes four times. The purpose of the vertex shader is basically to figure out the final position of each vertex of the object. What the fragment shader does is produce the actual color of each pixel inside the object, so it executes for each pixel that is inside the object. That basically means the fragment shader executes much more often than the vertex shader. Since all this executes on the GPU, these operations are mostly done in parallel, and that is taken care of by the GPU.

This is the flow we have to follow. First you create the shader objects — you have one program for the vertex shader and one for the fragment shader. They are written in a shading language; the vertex shader and the fragment shader use the same language, and it is very similar to C. You write these programs and attach them to the program object, and then you finally link it so that it can be executed on the GPU. This diagram shows the same steps: you have the vertex shader and the fragment shader, you compile them, you attach them and get a program object, you link it and get the final executable, and that program is finally executed on the GPU.

OK, the vertex shader. What this vertex shader program does is something similar to what used to happen automatically in the fixed pipeline before OpenGL ES 2. We get the vertex-position attribute, which gives the current position of the vertex — as I said earlier, the vertex shader executes for each vertex in the object, and for each vertex it gets the current position of that vertex as input. The output is the final position, which is what we assign to gl_Position. In this case we are not doing anything other than directly outputting the same value. The fragment shader executes for each pixel; what we are doing there is setting the same color for all the pixels inside the square.

Before you can actually use the shader program, there are some steps you have to go through. This is the actual code which does what I talked about previously.
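Here is a minimal sketch of the two shaders as they might appear as strings in the Java source, matching the description above; the names vPosition and vColor are our assumptions (though they are conventional in Android samples):

```java
// Vertex shader: runs once per vertex. It receives the vertex position in
// the attribute vPosition and writes the final position to gl_Position.
// Here it just passes the position through unchanged.
static final String VERTEX_SHADER =
        "attribute vec4 vPosition;\n" +
        "void main() {\n" +
        "    gl_Position = vPosition;\n" +
        "}\n";

// Fragment shader: runs once per covered pixel. It writes the pixel color
// to gl_FragColor; here every pixel of the square gets the same color,
// supplied through the uniform vColor.
static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform vec4 vColor;\n" +
        "void main() {\n" +
        "    gl_FragColor = vColor;\n" +
        "}\n";
```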
So initially you create a shader. Here you use the constant GL_VERTEX_SHADER so that OpenGL knows you are trying to create a vertex shader, and you get back a handle to it. Then you attach the source code. In the client program, the source code is just a string, but once you compile and link it, it can actually execute on the GPU. The next two statements attach the source code and then compile it. Then you do all of this again for the fragment shader: the steps are the same, but you pass a different constant when creating the shader, which tells OpenGL that you are creating a fragment shader. Once you've created the shaders, you create the main program object with glCreateProgram, you attach both the vertex and fragment shaders to it, and then you finally link it. Now you have something that can be executed on the GPU.

As KP explained, we have just created a shader program that can be executed on the GPU. Those steps are similar to what we do for a normal program. But the shader program runs on a different processing unit than our Android app: the GPU and the CPU that the app runs on are different. So how do you communicate between the two? That's what we are going to see now. You have an executable program, and it has variables, like the vertex position for example. When you want to access such a variable from your Android app, you have to get a handle to that attribute in the executable shader program. That is done with the glGetAttribLocation function: it gives you the location of that particular attribute in the shader program. Using this you can get the location. It's similar to a pointer, but a pointer between two different programs. After getting the handle, you can use OpenGL methods to assign or read values at those locations.

Here, to draw the square, we have to pass these values to the OpenGL shader program. As KP already explained, the vertex shader will execute four times. How does it know it should execute four times? We are passing an array of vertices — the square vertex buffer which we already created — and based on that, it executes four times. We are enabling the vertex attribute array and then drawing to the screen: glDrawArrays draws a triangle strip from the four vertices. As we mentioned, OpenGL is a state machine, so how does glDrawArrays know which values to draw? That is what enabling the vertex attribute array does: whatever you enable, OpenGL knows those are the arrays the draw call should use.

This is the output — but it doesn't look like a square, does it? That is basically because of the projection; we'll show the output for this particular app as I mentioned. The square comes out as a rectangle, and we need to correct that. As I mentioned, OpenGL uses a square coordinate system: as we've already shown, it runs from minus one to one. So when you map those square coordinates onto a device screen, which is rectangular, you get this kind of output. And the screen dimensions vary from device to device, so we have to adjust for them. That is done using projections and camera views. We'll get to them quickly, since we are running out of time. What these projections do is adjust for the screen size.
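Here is a condensed sketch of the compile-link-draw sequence just described, assuming the shader strings from earlier and the attribute name vPosition; error checking (glGetShaderiv, glGetProgramiv) is omitted for brevity:

```java
import android.opengl.GLES20;

// Compile one shader of the given type (GLES20.GL_VERTEX_SHADER or
// GLES20.GL_FRAGMENT_SHADER) from its source string.
static int loadShader(int type, String source) {
    int shader = GLES20.glCreateShader(type); // handle to a new shader object
    GLES20.glShaderSource(shader, source);    // attach the source code
    GLES20.glCompileShader(shader);           // compile it for the GPU
    return shader;
}

// Build the program: attach both shaders and link into a GPU executable.
static int buildProgram(String vertexSrc, String fragmentSrc) {
    int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexSrc);
    int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentSrc);
    int program = GLES20.glCreateProgram();
    GLES20.glAttachShader(program, vertexShader);
    GLES20.glAttachShader(program, fragmentShader);
    GLES20.glLinkProgram(program);
    return program;
}

// Drawing: look up the attribute by name, point it at our vertex buffer,
// enable it, and issue the draw call.
void drawSquare(int program, java.nio.FloatBuffer vertexBuffer) {
    GLES20.glUseProgram(program);
    int positionHandle = GLES20.glGetAttribLocation(program, "vPosition");
    GLES20.glEnableVertexAttribArray(positionHandle);
    // 3 floats per vertex, tightly packed (stride = 3 * 4 bytes).
    GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT,
            false, 3 * 4, vertexBuffer);
    // 4 vertices as a triangle strip -> two triangles forming the square.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    GLES20.glDisableVertexAttribArray(positionHandle);
}
```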
Whatever proportions you specify — for example for the square or a rectangle — the projection will keep those proportions on any screen. These are the methods that define the projection. frustumM defines a perspective projection: perspective means that near and far objects look different sizes when viewed on a 2D screen, and that is what we define with frustumM. And setLookAtM defines where you are looking at the object from — this is like a camera, placing a camera in front of the 3D scene. After we build the transformation from all these projections, we pass the resulting projection matrix to the vertex shader, which uses it to normalize the square. The square was displayed as a rectangle; now, since we pass the projection, all the calculations needed to make the square look like a square are taken care of here. When you multiply the MVP matrix with the vertex position, it transforms the vertices. The code is otherwise the same; the one additional thing we have to do is pass the matrix we just calculated, to correct the output. And that's it — now it looks much more like a square. This is the output: we have just added a matrix that corrects the output depending on the screen.

One last thing we wanted to show you is the animation part. For animation, we have to apply some additional transformation to the object. We defined the square, but to rotate it, its vertices have to move to other locations. That can be done easily with matrix calculations, and that's what we do using Matrix.setRotateM. For now we are setting the angle from the time, so it rotates the square we defined based on the time. We don't have to worry about calculating the new points ourselves: for every rotation, new points need to be drawn, but using a matrix we just define what rotation we need, and when we multiply it with the object's vertices, it rotates. The frame rate here is quite low, but this is the actual output.

That's pretty much it. This was really just getting started; we wanted to make you feel comfortable that OpenGL is not a very tough thing to get into. Once you get started, it becomes very easy.

There is time for one question. [Audience question, partly inaudible: with different shapes — say one triangle in front of another, so that only part of the rear triangle shows — how do you do that?] That clipping happens based on the position in the view. There is one more axis, called z: x and y define only a two-dimensional square, but the z-axis defines where the object is positioned in depth. So you can do that easily in OpenGL. I'm sorry to cut the questions short — the speakers will be available at the venue, you can just contact them. Thank you for the presentation.
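For reference, here is a sketch of the projection, camera, and rotation steps the speakers described, assuming a renderer like the one shown earlier with `program` and `vertexBuffer` fields, and a uniform named uMVPMatrix in the vertex shader (so its last line becomes `gl_Position = uMVPMatrix * vPosition;`). The names and constants are illustrative, not from the talk's slides:

```java
import android.opengl.GLES20;
import android.opengl.Matrix;
import android.os.SystemClock;

// Matrices for projection, camera, and animation.
private final float[] projectionMatrix = new float[16];
private final float[] viewMatrix = new float[16];
private final float[] mvpMatrix = new float[16];
private final float[] rotationMatrix = new float[16];
private final float[] scratch = new float[16];

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // frustumM: a perspective projection that accounts for the screen's
    // aspect ratio, so the square keeps its proportions on any device.
    float ratio = (float) width / height;
    Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}

@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // setLookAtM: the camera, placed in front of the scene, looking at the origin.
    Matrix.setLookAtM(viewMatrix, 0, 0f, 0f, -3f, 0f, 0f, 0f, 0f, 1f, 0f);
    // Combine projection and camera into one matrix.
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, viewMatrix, 0);

    // Animation: derive the angle from the time, so the square rotates
    // without us recomputing any vertex positions ourselves.
    long time = SystemClock.uptimeMillis() % 4000L;
    float angle = 0.090f * (int) time; // one full turn every 4 seconds
    Matrix.setRotateM(rotationMatrix, 0, angle, 0f, 0f, -1f);
    Matrix.multiplyMM(scratch, 0, mvpMatrix, 0, rotationMatrix, 0);

    // Pass the combined matrix to the vertex shader, then draw as before.
    int mvpHandle = GLES20.glGetUniformLocation(program, "uMVPMatrix");
    GLES20.glUniformMatrix4fv(mvpHandle, 1, false, scratch, 0);
    drawSquare(program, vertexBuffer);
}
```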