So, how does this all work? Let us go and look at a graphics primer. What I will do is go back and look at graphics in general, talk about data and data sets and how they are represented, then go on to VTK, a little bit of TVTK, and wind up with Mayavi, just the architecture. You have seen lots of pictures and demos, so now I will go into some of the details. In computer graphics you basically try to represent things visually, and what you do is ultimately render everything into what are called graphics primitives. There are two ways in which these graphics primitives are usually rendered back to the user. One is called raster graphics, where you actually draw each point of the object as a pixel. The other is vector graphics, where you are not rasterizing everything into pixels but representing things as the objects themselves; if you have a square, you talk of a square. So if you look at things like PostScript or PDF, when you draw a line and zoom into that line in a PDF viewer, no matter how much you zoom in, the line is still good; but if you take an image and keep zooming into it, you will start seeing pixelization. That is one of the differences between a raster image and a vector graphic. Typically when we are talking about 3D graphics, you are usually only talking about raster graphics. Now, often when you are producing scientific graphics, producing images, you really want to try to get vector graphics, so your lines look good. There is a way to do this with VTK for certain types of visualizations: VTK has a particular exporter, the GL2PS exporter, which takes the OpenGL commands, not the raster image, and generates a vector representation from them.
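To make the raster/vector distinction concrete, here is a toy sketch in plain NumPy (this is not VTK code, and the grid size and sampling scheme are arbitrary): the vector form of a line is just its endpoints and stays exact at any zoom, while rasterizing commits it to a fixed pixel grid.

```python
import numpy as np

def rasterize_line(x0, y0, x1, y1, width, height):
    """Sample a line segment onto a pixel grid. Once rasterized,
    the pixels are all that remain of the original line."""
    img = np.zeros((height, width), dtype=np.uint8)
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1
    for t in np.linspace(0.0, 1.0, n):
        x = int(round(x0 + t * (x1 - x0)))
        y = int(round(y0 + t * (y1 - y0)))
        img[y, x] = 1
    return img

# A vector description is just the endpoints -- exact at any zoom.
vector_line = ((0, 0), (7, 3))
# The raster version is a fixed 8x4 block of pixels.
raster_line = rasterize_line(0, 0, 7, 3, 8, 4)
```

Zooming into `vector_line` just means re-rasterizing at a finer grid; zooming into `raster_line` only magnifies the pixels.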
So you can do some of these things, but typically what you see on screen in a visualization is a raster image, and raster images are basically collections of dots. The process of rendering is the conversion of your data into visual form: your data is not in the form of dots; you have points, scalars, a field, say a line, and ultimately it is represented in some visual form, and that process is called rendering. Now, this rasterization requires you to generate graphics primitives, and the way this is done is to reduce things into one of the following: a point, a line, a polyline, a polygon (which is closed) or a triangle strip. All surface data can be completely represented in terms of these kinds of graphics primitives. Now, how does vision itself work? In physical vision you have objects, say this watch, and each of these has properties. They have some amount of transparency: the glass on this is completely transparent; the inverse of transparency is called opacity, how opaque is it? You have color, you have texture. What happens is that light from a light source strikes the object, and depending on the properties of that object the light reflects and falls back into the eye. Often rendering systems mimic this process; this is called ray tracing, where you track every ray that strikes an object and trace what happens as it comes back. Ultimately when you see things, the brain is of course projecting back an entire universe; what you actually receive is an inverted image on your retina. Now, when you are doing rendering there are two ways to do it: one is ray tracing, as I said, the other is object-order rendering, and what is normally done with OpenGL and VTK is object-order rendering.
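As a small illustration of why these primitives are economical, a triangle strip reuses the previous two vertices for each new triangle. This sketch expands a strip into explicit triangles (real renderers also alternate the winding order of successive triangles, which is ignored here):

```python
def strip_to_triangles(strip):
    """Expand a triangle strip into individual triangles.
    Each new vertex forms a triangle with the previous two."""
    return [(strip[i], strip[i + 1], strip[i + 2])
            for i in range(len(strip) - 2)]

# Four strip vertices encode two triangles instead of six vertices.
tris = strip_to_triangles([0, 1, 2, 3])
```

For a strip of n vertices you get n - 2 triangles, which is why strips are a favorite primitive for large surfaces.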
All the CG effects and graphics that you see in modern movies use ray tracing; the reason is that you get much more realistic images with ray tracing, but it is a lot slower, and that is why they need huge render farms on which they render each scene, get it back, composite it and make a movie. We cannot wait that long, so you generally go for object-order rendering. Again, when you are rendering you have two types of things. One is surface rendering, where you are looking at surfaces, things like this. The other is volume rendering: say there is fog and you want to see the fog; you are not just seeing a surface, you are looking at an entire volume, at light that is going through the entire fog. Because you have to track every ray going through the volume, it is computationally more intense, so volume rendering in general is a lot slower than surface rendering. Okay, so typically in graphics you represent colors as either RGB (red, green, blue) or HSV (hue, saturation, value); these are two ways to represent colors. Then you specify an opacity, which is between 0 and 1, at least in VTK. You have lights: ambient lighting, which is lighting from the surroundings; diffuse lighting, which is light from the source scattered evenly off the surface; and specular lighting, which is, for example, if I take this, it is very specular in the sense that light from the source directly reflects off it, so it is the shininess of the object.
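The RGB/HSV relationship can be checked with the Python standard library's `colorsys` module; for example, pure red has hue 0 with full saturation and value, and converting back recovers the RGB triple:

```python
import colorsys

# Pure red: components in [0, 1], which is also the range VTK uses.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
# Hue 0.0, saturation 1.0, value 1.0.

# Going back the other way recovers the RGB triple.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
```

Opacity is specified separately from either color model, as an extra number between 0 and 1.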
Now you have the object and its properties, and you need a viewer; in graphics parlance this is called a camera. You have a camera viewing the object, and depending on the position of the camera you end up projecting that scene onto the plane of your film or your eye; that is a projection. Depending on how you do the projection, whether you use a parallel (orthographic) projection or a perspective projection, you have different ways of generating the final 2D image from the 3D object. Now, we are obviously not going to implement any of this ourselves; most of this functionality is provided by libraries, so all people do is program against those libraries and use them. Typically in 3D you use OpenGL, which takes care of all of these things: you give it primitives in terms of triangles and the like, it will do the transformations and the projection, and it will show your 2D image. OpenGL today is widely used in the gaming industry as well as in science, and the gaming industry really drives the hardware: you can get very cheap cards which end up having more transistors than your CPU. These are called GPUs, graphics processing units, and they are hardware accelerated: part of OpenGL is implemented in hardware, so many of the operations involve no software step at all, and all of that processing is offloaded from the CPU to the GPU. You can really do high-performance rendering on desktop machines today.
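A minimal pinhole-camera sketch of perspective projection (real OpenGL uses 4x4 homogeneous matrices, clipping, and a view frustum, all of which this ignores): points farther from the camera project smaller.

```python
import numpy as np

def perspective_project(points, d=1.0):
    """Project 3D points onto the z = d image plane through a
    pinhole camera at the origin: (x, y, z) -> (d*x/z, d*y/z)."""
    pts = np.asarray(points, dtype=float)
    return pts[:, :2] * (d / pts[:, 2:3])

# The same (x, y) offset at twice the depth projects half as large.
near = perspective_project([[1.0, 1.0, 2.0]])
far = perspective_project([[1.0, 1.0, 4.0]])
```

A parallel (orthographic) projection would simply drop the z coordinate instead of dividing by it, which is why distant objects do not shrink there.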
So given this background, we have data in the form of numbers that we are interested in, and basically we want to be able to look at functions or data that depend on space. This data can be of different forms: you could have surfaces, you could have volumes of data, you could have 1-dimensional data which can be plotted with Chaco and the like, you could have 2-dimensional, 3-dimensional or multi-dimensional data. On top of this, say you are trying to look at the air circulation inside this room: think of the temperature inside this room. You have a volume, the entire room, and at each point of it you can think of a temperature, so you have a temperature distribution inside the room. These kinds of quantities are called fields: at each point in space you associate with that point a scalar, a vector or a tensor, and these are called scalar, vector and tensor fields. Given these, you want to be able to view them, to reduce them to some visual representation that the person who wants to study them can understand. Now, when you are talking of data, your data exists in some space, in the sense that if you have a single point, what is the dimensionality of that data? If you think about it you will see that a single point is a 0-dimensional thing, a line is a 1-dimensional creature, a surface is 2-dimensional, and a volume is a 3-dimensional object. The number of components in the vector specifying a point does not determine the dimensionality of your data: I can have a single point specified by an (x, y, z) triplet, but it is not a 3-dimensional piece of data. On the other hand, I could have a 2-dimensional data set specified on an entire plane: I have a whole bunch of points forming a surface, and it is a 2-dimensional data set.
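In NumPy terms (the grid size and the temperature formula here are just an illustration), a scalar field over the room is one number per grid point, and a vector field is three:

```python
import numpy as np

# A "temperature in the room" scalar field: one value per grid point,
# hottest at the center of a unit cube sampled 5 times per axis.
x, y, z = np.mgrid[0:1:5j, 0:1:5j, 0:1:5j]
temperature = 20.0 + 10.0 * np.exp(
    -((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2))

# A vector field attaches three components to each of the same points:
# here, a simple swirl around the vertical axis.
velocity = np.stack([y - 0.5, 0.5 - x, np.zeros_like(z)], axis=-1)
```

Note that the dimensionality of the data set is carried by the grid (a 5x5x5 volume), not by the number of components stored at each point.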
So here is an example. I have taken the same sphere and rendered it as points; points are completely different from a mesh, which is like the wireframe you see, which is completely different from a surface. I have changed the dimensionality from 0 to 1 to 2 and the results are completely different. You have to understand this when you are talking about data visualization, and it is very important, because if you just give somebody a bunch of points and you do not specify how those points are related, there is no way to visualize it. This brings us to the notion of topology, which is how these points are organized. I will not get into the details, but basically it is the branch of mathematics which studies the properties of geometric forms that retain their identity under certain transformations such as stretching or twisting, i.e. forms which are homeomorphic. Essentially what it is saying is: how are the points of the space connected up to form a line, surface or volume? So if you go back to the slides, I had points; I could represent them as just a bunch of dots, and as we saw that is completely different from representing them as a bunch of lines, which is completely different from representing a surface. The topology specifies how these points are connected, and it is important to realize that when you want to give data to somebody to visualize, you need to specify that topology. This concept is used in computational fluid dynamics, for example, where you use what are called grids, and a grid is nothing but points plus a topology. If you say here are three points and they are just three points, I do not know how they are connected, and it becomes a zero-dimensional data set, just a collection of points; but if you say here are three points and here is a line connecting each point to the next and so on, it becomes one-dimensional.
On the other hand, if you say these three points actually form a triangle, then you are saying something more. So the points and the values at those points are not enough to specify your data; in addition you have to say how these points are connected, and that is basically what I mean when I talk of topology here. Essentially it is the space itself you are talking about. When you are talking about some volume, say the temperature distribution I mentioned, I have to know whether the space you are talking about is discrete or continuous. Is it that at a point here I have a temperature, there is nothing in between, and then there is another temperature there? Or is there a connection between this point and that point? Only when you specify that can somebody visualize it. So for data, typically in CFD (computational fluid dynamics is what CFD is) and also in computer graphics, you use two classes of grids: one is called a structured grid, the other an unstructured grid. In a structured grid, if you have a bunch of points, the topology is implicit, which means that, supposing you have eight points that form a cube, you have a notion of an index moving from one point to the next. If you look at the data set on the left, there is an ordering: if I associate the x direction with this axis, y with this and z with this, then given a point I know which point is next; I have a notion of who my neighbor is. Therefore I can implicitly specify the connectivity between a point and its nearby points, and so I can specify, without actually saying anything extra, that this is a volume, that these are not just discrete points but that there are cells associated with them. Think of it like this: if I have two points at which there is a temperature, is there a temperature in between these two points?
If you just specify two points with no connectivity, I cannot say that; but if you say that there is data in between those two, it is like having a rod with temperatures at two points, and there exists a temperature in between the two. The same way here: if you are specifying a volume of data, you need to specify whether there is something in between. When you say they are connected, these points form a volume or a surface or a line. On the other hand, if you use an unstructured grid, the topology is not implicit; there is no index from which I can say this is the next point and that is the next. For example, in the structured case, if I started at the origin, say (0, 0, 0), the next point along x would be (delta x, 0, 0), the next along y would be (0, delta y, 0), and so on. Over here I cannot say that: I do not know where the next point is, and I do not know who the next point is connected to. So with an unstructured grid you have to explicitly specify a topology: you have to say this point is connected to this point, which is connected to this point, and these three actually form a triangle, because when you have a bunch of points you can connect them in many different ways. You could connect them like this, or like that, and they are different, and your graphics tool has to know exactly which one you mean. Typically graphics primitives and libraries reduce everything to certain fundamental types of cells, but the idea is this: we have a bunch of points, and associated with the points we have a topology which specifies how the points are connected. The specification of the topology can be implicit in the case of structured grids or explicit in the case of unstructured grids. Now, in addition, the useful thing you are really looking at is: how do you associate data with this?
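The implicit topology of a structured grid can be sketched in a few lines: the neighbors of a point are pure index arithmetic, with no connectivity list stored anywhere (the function name here is mine, for illustration, not VTK's):

```python
def neighbors(i, j, k, dims):
    """Neighbors of point (i, j, k) in a structured grid of shape
    dims: just index offsets, clipped at the grid boundary."""
    nx, ny, nz = dims
    offsets = [(-1, 0, 0), (1, 0, 0), (0, -1, 0),
               (0, 1, 0), (0, 0, -1), (0, 0, 1)]
    return [(i + di, j + dj, k + dk)
            for di, dj, dk in offsets
            if 0 <= i + di < nx and 0 <= j + dj < ny and 0 <= k + dk < nz]

# An interior point of a 3x3x3 grid has all six neighbors...
inner = neighbors(1, 1, 1, (3, 3, 3))
# ...while a corner point has only three.
corner = neighbors(0, 0, 0, (3, 3, 3))
```

An unstructured grid has no such rule: the neighbor relation must be stored explicitly as a list of cells.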
So associated with each point you can have data like temperature, pressure, or the velocity of the flow, things like that. Each of these is called a field, which means you have a function that, given a point in space, returns either a scalar value or a vector value. So, a quick recap: we have graphics, which reduces data to primitives which are rendered; there are various ways in which you can render; and when you want to represent your data you have to be careful about how you specify it. You have points, you have a topology, and you have attributes: at each of these points you can specify, say, a temperature. In addition, you can also specify attributes at the cell level: instead of saying there is a temperature associated with each of the points, I can say this whole cell is at this temperature; that is called cell data. So that is the theory, a very high-level overview of data representation and graphics; now let us look at VTK. All of what I have talked about is low level: rendering things on a screen would require something like OpenGL, and OpenGL is again a primitive library. It is a very powerful library, but as far as the user is concerned it just lets you draw primitives: given a triangle it will let you draw it, it will let you do transformations and lighting, things like that, but that is not quite enough. VTK stands for the Visualization Toolkit; it is an open source, BSD-style licensed, very high-level library implemented in C++ that uses OpenGL underneath in order to do visualization. Visualization, as against drawing, is very different: in visualization you take data and generate the primitives for the graphics. So there are two things: one is graphics, the other is visualization. The graphics part in VTK is handled by OpenGL; graphics, as I said, is just the rendering of primitives on screen or on some other output.
Visualization is taking your data and generating those primitives that need to be generated. VTK is a visualization library; it supports 3D graphics, imaging (which means you can actually give it images and run image-based algorithms) and visualization. As an example of visualization: say I have a temperature field in this room and I want to find all the hot spots; I want to improve the air conditioning, I want to move the air conditioners around, so I want to find which areas are actually hot. Say I want to find all the regions where the temperature is 30 degrees centigrade. I would take this data and try to draw something which says that region over there is hot. Then visually I know that region is hot and I need to move my AC there, something like that. That is visualization: you have data and you want to be able to view it in some form that you choose, as against merely having primitives that you are trying to render. The nice thing about VTK, as far as we are concerned, is that it has very nice Python wrappers. It is written in C++ and wrapped for Python; it is also wrapped for Tcl and Java, but we are interested in the Python part. It is very cross-platform; it runs on all the major platforms you can think of. The other very good thing about VTK is that it has a very large developer community. If you look at other visualization tools, say OpenDX, the developer communities are nowhere near as large. VTK has a very large mind share, lots of people use it, and there are about 40 people worldwide who actually check things into VTK; these are pretty serious people, and some of them have PhDs in visualization and graphics. It is a pretty serious group, and this really makes it a strong library; as libraries go, it is probably one of the best-tested open source libraries that I have seen.
If you go to the VTK.org site you will see test suites that are run every night on various platforms, and they make sure that the software is always in a good state by keeping what are called dashboards, their test suites, in a decent state. They really take pride in their work and they test it very well. It is extremely powerful, and, it almost goes without saying, the problem with VTK is that it is huge: it has over 900 classes, not kidding. It uses a pipeline architecture, which I will explain in a little while, and it is not that easy to learn, because it is just daunting: you have 900 classes, and you have to learn visualization as well. So the idea with Mayavi2 was to not expose all of the details of VTK and to make it easy for people to use at multiple levels. One level is of course the mlab interface, where you do not even have to know anything about VTK; but at a lower level you really need to know what VTK is doing if you want to use these libraries and do graphics. The architecture of VTK is, as I said, a pipeline architecture, and the way it works is actually very simple. You have various types of objects that are connected together, and there is data flowing through them. It is like a pipeline: imagine you have oil flowing from one end to the other, you put pipes in between, and in between you process it. In the same way, you have data that is generated by some kind of source object; imagine something like a cone source. A cone source will generate data, a bunch of points and a topology, such that you get a conical surface. This will then be taken by a filter. For example, take the example of this room: I have data of the room stored in a particular way; a reader, which is a source object, will read it and generate its output.
A filter will then process this data. If I want to do a contour, the contour will be a filter object which takes this data as input, finds out what is where, and returns more data further down the line, which can be processed by a subsequent filter. So a source generates data and a filter processes it. This data can then be rendered on screen using what is called a mapper: a mapper takes data and generates graphics primitives. The graphics primitives are then shown on the display. Now remember, we said graphics primitives have properties associated with them, things like color, texture, opacity and so on; all of those are managed by an object called, in VTK parlance, an actor. An actor is connected to a mapper, which generates the primitives, and is then rendered on a display. At the same time, if you had a filter and you wanted to write its output to disk, you could use a writer. The display and the writers are called sink objects. So that is the overall working of VTK: you have sources, filters and sinks, with mappers in between which convert data into primitives that can be rendered. Here is an example VTK script. I say import vtk in Python, I create a cone source, which is a source object, and set properties of this cone source like so. Then I connect up the pipeline; in this case I am not doing any filtering, so I connect the output of the cone source directly to a mapper, in this case what is called a poly data mapper. The output from the cone source is set as the input of the mapper over here. This mapper in turn is connected to the actor, and you can set the actor's properties like so. The actor has what is called a GetProperty method which returns a property object on which you can set the color, the representation and so on (texture actually goes on the actor itself, but color and representation are on the property).
Whether you want to see it as a wireframe, as a surface, or as points, all of that is controlled by the property. So here is the example: a cone with a red color, and the code that produces it, which basically creates a cone, sends its output into a poly data mapper, and puts that into an actor. The bottom code just creates the window and the objects that let you render into it. Now let us say you have some data, typically a file of data, and you want to be able to take this file and render it. The source would then be a reader: it reads from the file and generates data in a form VTK can use. VTK itself supports a bunch of different data types, data objects: structured points, rectilinear grids, structured grids, unstructured grids and polygonal data; I will describe each of these. For file formats it supports both ASCII and binary. What I am going to do here is talk about what are called VTK legacy files. They are an old format that VTK uses, and I am going to use it to show you the types of data that VTK can handle. There is a header on each of these files; just forget about that. You can specify whether the file is ASCII or binary; forget about that too. Then you say the data set is of one of the different types: as I said, you can have structured points, a rectilinear grid, a structured grid or an unstructured grid. So you specify what it is, and then you spell out how the data is organized. Then, as I said, if you take the temperature in this room, you can say what the temperature is at each point or what it is in each cell. So with the points you are going to specify, you can associate either point data or cell data.
So going back again to what we said before: you have points, you have topology, you have attributes, three things, and that is what you specify in these files. Structured points are kind of like images: it is like a cube of data, always axis-aligned, always a rectangular block with equal spacing between each point and the next. Because you know exactly how one point is connected to the next, the topology here is implicit: if I have indices i, j, k, I know my left and right, top and bottom, in front and behind; along all three directions I can tell you which point is next to me. How these points are connected is a given; I do not have to explicitly say that i + 1 is next to i, it is obvious, it is implicit. So in a structured points file you say the data set is structured points, but that alone does not tell you how big the data is, how many points along x, how many along y and how many along z; that is specified by the dimensions. You need to know where the origin is, so you specify an origin, and you need to know the spacing, and now I know every point in that space. Is this clear? If I start at the origin and I know the spacing, and I know it is a rectangular block, I can step to every next point and I have the entire 3D volume. So I have specified the entire 3D volume; that is it, that is structured points. A rectilinear grid is slightly different, in that the spacing along an axis need not be uniform: I could have x starting at 0, then 1, then 4, then 7, and similarly along y and z. So instead of x, y, z always being spaced the same way, say delta x being 1, delta y being 2, delta z being 3, a fixed cell that keeps repeating, here the coordinate values along each axis can vary.
Whereas here you could have a varying x, a varying y and a varying z, they still have the same ordering, still i, j, k indices. So if you give me an (i, j, k) index, I can always tell you the x, y and z coordinates: if you give me, say, (10, 10, 10), the coordinates will be the 10th x value, the 10th y value and the 10th z value. So now I can specify a slightly more complicated data set, and the topology is still implicit. In all of these there is a specific ordering that VTK follows: the x coordinate varies fastest. So x1, x2, x3, x4 at a given y and z, then you change y, and then you change z. All of the structured data sets in VTK follow this ordering. Now you can go one step more complex: a structured grid, where you can have something like a circle. Imagine I have something like an annulus; I will show you an example. Here we have a simple data set like this. Do you see the structure? I have points organized radially in theta, with r changing this way and z changing this way. Again, if I give you an (r, theta, z) triple, I can tell you what the next theta is, the previous theta, the next r, the previous r. The topology is again implicit, but by specifying the points in this circular fashion I can actually specify a non-trivial geometry. In this case you spell out all the points, and the ordering of the points is again implicit, with x changing first, then y, then z; you need to specify all the points, and you need to specify the dimensions of the data. So, a quick recap: with structured points, origin and spacing give you the next point, the topology is implicit, and the points are trivially identifiable; you do not have to specify anything but origin, spacing and dimensions.
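The "x varies fastest" ordering and the origin-plus-spacing rule can both be written down in a couple of lines (the function names are mine, for illustration):

```python
def flat_index(i, j, k, dims):
    """VTK's ordering for structured data: x varies fastest,
    then y, then z, so the flat offset is i + nx*(j + ny*k)."""
    nx, ny, nz = dims
    return i + nx * (j + ny * k)

def point_coords(i, j, k, origin, spacing):
    """Structured points: origin, spacing and the indices
    determine every coordinate; nothing else is stored."""
    return tuple(o + idx * s
                 for o, idx, s in zip(origin, (i, j, k), spacing))
```

For a rectilinear grid, `point_coords` would instead look up the i-th entry of an x-coordinate array (and similarly for y and z), but `flat_index` stays exactly the same.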
If you have rectilinear data, the x values can differ, the y values can differ, the z values can differ, but again the topology is the same: from one x you know the next x and the previous x, you know everything. With a structured grid the coordinates can all be different, but again you know who is next and who is previous, so you can actually set up a circular arrangement, set up a sphere. The next type is polygonal data, which can be really arbitrary, and here you have to explicitly say what is connected to what. The way you do that is you say the data set is a poly data, then you say the points are given by a number of points and a data type, typically float, and then you list out the points 1, 2, 3, 4, 5 and so on. Then you say here are the polygons; a polygon is specified by listing the points that are connected, and you specify them by index. We will do a concrete example in a little while, but basically think of it like this: you have four points that form a tetrahedron. You say point 0 is here, 1 is here, 2 is here, 3 is there, and you specify them as points. Then you say this tetrahedron consists of points 0, 1, 2 and 3, and it forms that tetrahedron. The same thing holds for unstructured grids, except that polygonal data is typically for surfaces while unstructured grids are for entire volumes. So you can create those; remember I showed you various cell types, pyramids, hexahedra; you can specify them here. Again you specify a list of points, then the cells, which means the connectivity for each cell, how you define each cell, and the types of these cells. As I said before, if you have four or five points there are many ways you can connect them, so there are a bunch of predefined cell types and you pick among those by specifying the cell type. Once you have done this, you have specified the space: you have specified the points and the topology. Now you need to associate data with it, and you can do that with point data or cell data.
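Here is the points-plus-topology idea for the tetrahedron in plain Python lists (the coordinates are arbitrary). As a sanity check, a closed surface satisfies Euler's formula V - E + F = 2:

```python
# Geometry: four points, given as (x, y, z) coordinates.
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
          (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# Topology: each face is a triple of *indices* into the points list.
triangles = [(0, 1, 3), (0, 3, 2), (1, 2, 3), (0, 2, 1)]

# Euler check for a closed surface: V - E + F == 2.
edges = {tuple(sorted((t[a], t[b])))
         for t in triangles for a, b in ((0, 1), (1, 2), (2, 0))}
euler = len(points) - len(edges) + len(triangles)
```

The same four points with a different `triangles` list would describe a different object, which is exactly why the topology has to be stated explicitly here.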
Point data associates data with each point, cell data with each cell, and here is a complete example: structured points, I specify the dimensions, so this is a two-dimensional data set, 2 by 2 by 1, an origin and spacing, then point data: four points, for each point a temperature, and four vectors for the velocities, and you can actually view this data. Okay, so now we get to TVTK. All this is great: VTK does a lot of work, it is a fantastic library, you can specify your data, it supports all these kinds of data sets, structured points, rectilinear grids, structured grids, unstructured grids, and it uses OpenGL underneath. So what is the problem? The problem is that the API is not very Pythonic; I will show you an example in a little while. NumPy arrays are not straightforward to use with it, and we are used to using NumPy arrays, they are very convenient. It also has its own native iterator interface, which means you cannot say for x in points: do something with the point; some of these things cannot be done. You cannot pickle its objects, and you cannot easily create UI editors for them. So what TVTK does, it is called "traitified" VTK, is provide a Pythonic wrapper that sits on top of VTK. It gives you elementary pickle support, which means you can, more or less, save a TVTK object. It replaces all of those setter calls: if you recall the earlier script, I had calls like SetHeight, SetRadius, SetResolution; each of these, resolution, height and radius, is replaced by a trait. So now you have the power of Traits, which means if somebody changes that cone, I can listen for the change and do something. The big feature is that it handles NumPy arrays and Python lists transparently, which means that if you are trying to set the points of a structured grid, you just give it a NumPy array, and it will automatically take care of converting that, in a very efficient manner, to a suitable format for VTK.
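The complete example just described can be typed out as a legacy file. The field names and numbers below are made up, but the layout (header, dataset type, dimensions, then point data) follows the legacy format described above:

```python
# A legacy VTK file for a 2x2x1 structured points data set with a
# temperature scalar and a velocity vector at each of the 4 points.
# The values themselves are illustrative, not from the lecture slide.
legacy = """\
# vtk DataFile Version 2.0
Temperature and velocity on a 2x2x1 grid
ASCII
DATASET STRUCTURED_POINTS
DIMENSIONS 2 2 1
ORIGIN 0.0 0.0 0.0
SPACING 1.0 1.0 1.0
POINT_DATA 4
SCALARS temperature float 1
LOOKUP_TABLE default
20.0 22.0 24.0 26.0
VECTORS velocity float
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
"""
with open("example.vtk", "w") as f:
    f.write(legacy)
```

Note the three pieces from the recap: the points (implicit, from dimensions, origin and spacing), the topology (implicit, structured points), and the attributes (the `POINT_DATA` section).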
So the user does not worry about any of the VTK details underneath. Here, if you go back and compare with the VTK script, is the equivalent TVTK script. It is a lot shorter, a lot simpler, and it supports traits; it uses a slightly different representation but it is the same thing, basically wrapping the VTK object underneath. The key differences: instead of "import vtk", you now say "from enthought.tvtk.api import tvtk". Instead of saying vtk.vtkConeSource, you simply say tvtk.ConeSource. VTK does not support constructor arguments, which means I cannot set the radius at creation, I have to explicitly say set radius, set height, set resolution, whereas TVTK lets you do all of that in one line as you see here. Many of the attributes become traits, like so. The other thing is the naming conventions: we use the Enthought naming style, which is lower case with underscores, whereas VTK uses camel case, like GetHeight. So all of the names are consistent with the Enthought tool suite packages. TVTK, as I said, supports traits, so attributes may be set on object creation, we have seen that, and you can set multiple properties in one go: if you want to set the radius, height and resolution you can say obj.set(height=..., radius=..., resolution=...). The usual trait features work, I am not going to demonstrate that here. Remember when we had an actor we had properties like color, wireframe representation, things like that; TVTK lets you edit those, generating a UI for them automatically, on the fly, like so, and this is an automatically generated UI. You obviously cannot do this with a VTK object. And you can change things, change the color; this is a stupid color editor but it is a lot better on Linux, trust me. So basically it lets you do all of these nice things with VTK, it is like a wrapper sitting on top of it.
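The naming convention is mechanical: the Height in a VTK GetHeight/SetHeight pair becomes a height trait in TVTK. Here is a rough sketch of the camel-case-to-underscore rule as plain Python; this is my own illustration of the convention, not TVTK's actual conversion code.

```python
import re

def vtk_name_to_tvtk(name):
    """Convert a VTK CamelCase name, e.g. the 'Height' of
    GetHeight/SetHeight or the class name 'ConeSource', to the
    Enthought lower_case_with_underscores style used by TVTK traits."""
    # Insert an underscore before each interior capital, then lower-case.
    return re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', name).lower()

print(vtk_name_to_tvtk("Height"))      # -> height
print(vtk_name_to_tvtk("ConeSource"))  # -> cone_source
print(vtk_name_to_tvtk("SetRadius"))   # -> set_radius
```
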
In addition, collections behave like what you would expect in Python. VTK has this notion of an actor collection, a renderer collection; all of these behave like a sequence in Python, so rather than having to say object dot number of actors you simply say len of the collection and it gives you the length. You can also say "for actor in actor_collection: print actor" and that will work; it will not work with raw VTK. You can insert elements, delete elements, all of that should also work. The key feature, the killer feature of TVTK, is that it deals with arrays completely transparently. Any VTK method that accepts a VTK data array or VTK collection can transparently handle NumPy arrays or Python lists. You can also initialize a VTK array using a NumPy array and generate a NumPy array from a VTK array, so you can go both ways, in and out. So here are some more details of this. The key here is that most of the time these are views of the NumPy array: if you set up the points of some object, then go in and change the points and update the VTK pipeline, VTK uses the same memory. It is basically a view of the same NumPy data, and this is extremely handy. Here is a concrete example. From enthought.tvtk.api import tvtk, import array from numpy; here are a bunch of points, the origin and three others, four points from which I am going to create a tetrahedron. Here are the triangles that specify the surfaces of this tetrahedron: 0, 1, 3 means index 0, index 1 and index 3, so points 0, 1 and 3 form one face, and similarly for the other three faces of the tetrahedron. Now I create the TVTK object: mesh is a tvtk.PolyData, mesh.points = points. I do not have to worry about data arrays or any of that.
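The point and connectivity arrays for that tetrahedron example can be set up with plain NumPy. The TVTK calls that consume them are shown as comments, since they need the TVTK package installed; the coordinates and temperatures are illustrative, not the exact ones from the talk.

```python
from numpy import array

# Four points: the origin plus three others form a tetrahedron.
points = array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
# Each row lists the point indices of one triangular face:
# points 0, 1 and 3 form one face, and so on for the other three.
triangles = array([[0, 1, 3],
                   [0, 3, 2],
                   [1, 2, 3],
                   [0, 2, 1]])
temperature = array([10.0, 20.0, 30.0, 40.0])  # one scalar per point

# With TVTK installed, creating the mesh is then just:
#   from enthought.tvtk.api import tvtk
#   mesh = tvtk.PolyData(points=points, polys=triangles)
#   mesh.point_data.scalars = temperature

# Sanity check: every face index must refer to a real point.
assert triangles.max() < len(points)
```
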
To set the connectivity I simply say polys = triangles, and if I want to set the temperature I simply say mesh.point_data.scalars = temperature, and that is it, the object is created. So there is a simple example called tiny mesh which does exactly what I showed you on screen here. The data is the points and the temperatures; this sets up the window. Notice the code here: mesh = tvtk.PolyData(points=points, polys=triangles), then mesh.point_data.scalars = temperature, and that creates the mesh. The subsequent lines are similar to your cone source example: I am just setting the mapper, setting the scalar range so you can actually see the colors nicely, hooking it up to an actor, and there is some fancy stuff which I am going to ignore for now. But ultimately, creating that data set is a couple of lines of code. If you run it, here it is, that is the tetrahedron. The other fancy stuff is all this, the scalar bar and things like that, but the key is that the poly data has been generated using just those three lines of code. If you had to do the same thing with raw VTK, you would have to do so much more: you would have to say this is the poly data, create a float array, set the number of tuples, set the number of components and then iterate slowly in Python. Imagine if you had a million elements, this is going to be extremely slow, and that is just for the scalars; you have to do the same thing for the points. Whereas in TVTK it is just one line, mesh.point_data.scalars = temperature, that is the end of it. So it is a lot easier, it is very efficient and it really works well. There are some issues with respect to whether the data is always a view or not. If you pass a list instead of a NumPy array, it will make a copy, the reason being that a list is not a NumPy array. It will also make a copy if you have a non-contiguous NumPy array.
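Whether TVTK can share memory hinges on NumPy contiguity, which you can check yourself. This is a plain-NumPy illustration of the conditions just described, independent of TVTK itself.

```python
from numpy import arange

a = arange(12.0).reshape(3, 4)
print(a.flags['C_CONTIGUOUS'])   # True: an array like this can be
                                 # handed to VTK as a view, no copy

b = a.T                          # a transpose is a non-contiguous view,
print(b.flags['C_CONTIGUOUS'])   # False: this would force a copy

c = a.tolist()                   # a Python list is not a NumPy array,
print(type(c))                   # so it would also be copied
```
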
We have seen that, basically, VTK has its own way of storing data, as a contiguous block. So what TVTK does is take a contiguous NumPy array and pass it in so that VTK can deal with it as a contiguous block; if your array is not contiguous, it is forced to make a copy. There are also some exceptional circumstances where it makes a copy, and in addition, when you are specifying cells, an assignment to a cell array usually makes a copy. The other warning you should remember concerns resizing: VTK arrays let you resize them, and whenever you do a resize it has to reallocate a chunk of memory, which means the memory pointer is going to change. So your NumPy memory is not going to be used anymore once a TVTK array is resized, but it is not a memory leak, because all of that is taken care of carefully. So now we have a basic feel for TVTK and what the differences are. I am going to just zip through a bunch of simple examples that show you can actually create complicated data sets and view them with TVTK. First poly data: I have already shown you this, create points, triangles, the mesh, set the temperature, set the vectors, and I already showed you the demo for this. You can also do this in 2D, meaning I now want to create a structured points data set and view that. What I have done, if you notice, is that each of these is an editor window in Mayavi; I have just saved them out and typed in the code that you see here, along with a visualization piece which lets me view the data. What this does, and you should pay attention because we are going to do an exercise on this, is generate this Bessel function of order 0 and generate a surface. You did this, I think; the sinc function demo on day 2 was exactly this.
So you basically create an array with arange, broadcast one of the arrays to generate the radii, and then compute sin(r)/r. How do you now pass this on to VTK so it can view it? You create a structured points data set, set the origin, the spacing and the dimensions, we just discussed this for data sets, it is the same thing, and then you say the scalars are this z. Now you have to be careful: as I said, VTK orders points such that x varies fastest, not y. To get this from your NumPy arrays you need to make a transpose, so you transpose z and flatten it, giving VTK the one dimensional array it expects for the data. The thing is, scalars can have multiple components: a scalar can have one, two or three components; a three component scalar you can think of as a vector, but it is not the same thing. If the shape of your array is n by 3, it would assume the number of components is three, and you do not want that, so when you flatten it you get a flat representation in the order that VTK expects. Once you do this you have your data and your scalars, and all we are doing here is setting the name of the scalars to "scalar". So this is just the same code, and there is a bit of code here which scripts Mayavi to generate this plot; when I run it, Ctrl-R, it generates the surface, and that is a structured points data set. Now in 3D you can do something similar: I just did a 2D surface, now I want a whole 3D block of structured points. Here is the example I showed in the demo this morning, the sin(x y z)/(x y z) function from the 3D explorer. I am doing the same thing here: this sets up the data using what is called NumPy's ogrid, which again generates a mesh of points, and then again it is just three lines.
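The x-varies-fastest ordering is worth checking with plain NumPy. This is a toy illustration: a tiny field z[i, j] where the first index walks x and the second walks y, showing why the transpose is needed before flattening.

```python
from numpy import array

# z[i, j] = value at (x[i], y[j]); digits encode (x+1)(y+1) for clarity.
z = array([[11, 12, 13],    # x = 0 row
           [21, 22, 23]])   # x = 1 row

# C-order flattening varies the LAST index (y) fastest -- not what VTK wants.
print(z.flatten())          # [11 12 13 21 22 23]

# Transposing first makes x vary fastest, which is VTK's point ordering.
print(z.T.flatten())        # [11 21 12 22 13 23]
```

The flattening also matters for the component count: an n-by-3 array handed over unflattened would be interpreted as n three-component scalars, whereas the flattened array is unambiguously one component per point.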
I take the values, generate the structured points with the origin, spacing and dimensions, and then set the scalars; then you get the same thing, I just typed it out along with the six or seven lines of code that do the visualization, and there we have it: a 3D block of data that I can interact with. The visualization part is taken care of underneath by VTK, and at the higher level by Mayavi. What I am trying to illustrate here is that generating the data just ran, as you saw, even though it is a 128 by 128 by 128 cube of data; it works pretty much instantaneously. You can also do a structured grid, and I think I just showed you that; this is the structured grid example, so I can go in here and do an isosurface. I have associated the scalars such that the value of the scalar at a point is equal to the radius, so you get spheres as the isocontours. Here is an example that is a little more complicated; I am not going to go into the details, I have already well overshot the time. Finally, here is an unstructured grid, where you basically have a bunch of points and then the specification of the tetrahedra. In the poly data case we specified the faces, four of them, each of which is a triangle; with an unstructured grid I specify the whole tetrahedron as one volume, so now it is an entire cell, it is not a surface, it is a volume. The way you do that is again pretty straightforward: the unstructured grid is ug, the points are the points you have given here, and then you use this method called set_cells, which says here are the cell types and here is the array that specifies the tetrahedron, and that is it. Then you can again set the scalars and the vectors, and that gives you an unstructured grid. So here is the unstructured grid example; it is exactly the same kind of code, about eight lines.
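The set_cells specification can again be prepared with plain NumPy; the TVTK calls are left as comments since they need the package installed. This is a sketch, with illustrative coordinates, of one tetrahedral cell.

```python
from numpy import array

points = array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
# One tetrahedral cell: a leading point count (4) followed by the
# indices of the four points that make up the cell.
tets = array([4, 0, 1, 2, 3])
# VTK_TETRA is cell type number 10 in VTK's predefined cell-type list.
tet_type = array([10])

# With TVTK installed this becomes:
#   from enthought.tvtk.api import tvtk
#   ug = tvtk.UnstructuredGrid(points=points)
#   ug.set_cells(tet_type, tets)
#   ug.point_data.scalars = ...  # one value per point, as before
```
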
In addition to this, TVTK provides a scene widget, the window that you render into, which mlab and Mayavi use, plus pipeline browsers, viewers, things like that; all of these are provided by TVTK. We have already shown you the mlab interface, with its quick one liners with which you can render data; all of that is built on top of TVTK underneath. So you are seeing the top of the cake, the icing; the bottom is all of the other stuff which deals with NumPy arrays, traits and things like that. Now, as I said, there are also Envisage plugins that TVTK contributes, which any application writer can reuse.