Brock: Welcome to another edition of RCE. Again, I'm your host, Brock Palen, and I have with me Jeff Squyres from Cisco Systems and Open MPI. Jeff, thanks again for helping out with the show.
Jeff: Hey, Brock. Looks like we've been on a visualization stint recently.
Brock: Yes, actually, the topic today involves two former guests from two different packages that are part of the package we're going to talk about today. Before that: you can follow me on Twitter at BrockPalen, all one word, and you can also find that on the RCE website, rce-cast.com. You can find all the old back shows there, because I think iTunes only shows the last five or so, and there's also an RSS feed you can subscribe to and read in an RSS reader. Jeff, I also think you have a blog.
Jeff: I do; there's a blog pointed to off the RCE-Cast site. The big news in that area is that we finally have a new blogging platform at Cisco, and all of us are very excited about it, because I think the package we use now is the best out there. So we're excited for the new launch.
Brock: Okay, well, let's move into our topic. As I mentioned, two of our guests have been on the show before, so you probably remember Berk Geveci, who was on to talk about VTK; he's going to be speaking with us today about ParaView, along with Kenneth Moreland from Sandia, who had been on talking about IceT, which I believe is used inside ParaView. And we have a new guest today, Utkarsh Ayachit, who can correct me on my mangling of his name when he introduces himself. All three of these guys work on the ParaView visualization package, the details of which we will let them explain. So Berk, why don't you go ahead and start off; say a little bit about yourself for those who haven't listened to the previous show.
Berk: Sure.
My name is Berk Geveci. I lead the scientific visualization and informatics teams at Kitware. Kitware, for those that don't know, is a small company of about 80 people that focuses on visualization, informatics, image processing, and computer vision. I've been here for about 10 years, and the whole time I've been a developer of VTK and ParaView. I was the lead programmer for ParaView for a while, but now Utkarsh does that.
Utkarsh: I'm Utkarsh Ayachit, and I'm a technical lead here at Kitware. I've been with Kitware for over six years now. I've been working mostly on ParaView as a developer, and I've been involved with other projects related to ParaView and VTK. That's pretty much it.
Ken: This is Kenneth Moreland from Sandia National Laboratories. I've been the ParaView lead at Sandia for several years now. My main interest in the past decade has been large-scale parallel visualization algorithms and systems. Our research at Sandia has been driven by advanced scientific simulations and world-class supercomputers, and during this time we've been using ParaView as a research and deployment platform.
Brock: Okay. So why don't one of you give us the 10,000-foot view: what is ParaView, and what does it mean to the average research computing user?
Berk: I can do that. ParaView is an application for visualizing scientific data sets. It was designed originally for visualizing large data sets, but I don't want to restrict my definition to that at this point. The ParaView project started about 10 or 11 years ago for the purpose of building an end-user application around the Visualization Toolkit. VTK at the time was already old, I believe about 10 years old, but it is really a developer tool, and there was no way of delivering functionality to our end users. As we started developing large-data visualization functionality in VTK, we wanted to be able to deliver that, so we started building ParaView as essentially an end-user extension to VTK. Since then our main focus has obviously been large-data analysis and visualization, but ParaView has also branched into a lot of other things. It is used commonly for scientific data analysis, both small and large, and it is also a development platform for those who want to build upon it as a toolkit for end-user applications and extend it to deliver functionality in the area of scientific visualization, as well as, as we've been doing more and more lately, pre-processing of scientific simulations and things like that. So that's a very short summary; I'm sure we'll get more into what ParaView is and what ParaView does.
Brock: Okay, yeah, that was a good summary of ParaView and a little bit of VTK. Round this out by explaining: what's the relationship to IceT?
Ken: Well, I guess I'll take that. IceT is a parallel rendering library. When you're running ParaView in the large-scale parallel mode, you need some sort of mechanism to be able to render the images that provide the visual representations, and IceT basically provides that capability.
Jeff: That ties it all nicely into why you're all working together and the relationships between those pieces of software. So let me jump back to something you said earlier: Utkarsh said he's the technical lead on ParaView at Kitware, and Ken, you said you're the ParaView lead at Sandia. How do you split it up? Do you have different roles, or are you leads of different parts of ParaView? How does that work?
Berk: ParaView is a much larger project than that; it has many contributors, both from a funding point of view and also from a developer-contribution point of view. Kitware is really the hub where all of it comes together. We did the original development of ParaView, and we continue to be essentially the gatekeepers for ParaView, while folks like Ken and others are developers of ParaView. There are leads at different institutions like Sandia and Los Alamos, but Kitware manages the whole project, and we're responsible for the releases and things like that.
Ken: Yes, just to elaborate on that: Sandia has been a major contributor to ParaView for most of its lifespan. We have our own particular interests; we'll have developers at Sandia working on ParaView itself, and we'll also have contracts with Kitware to do work specific to Sandia's interests. The reason we're so keen on ParaView is that Sandia and other national laboratories regularly encounter these really large meshes through the Advanced Simulation and Computing program, which typically are much larger than those encountered in academia and industry.
So a lot of the other solutions that other people are interested in don't necessarily work for Sandia.
Brock: Okay. In the introduction of what ParaView is, you mentioned that it's kind of a front end to VTK. Does ParaView actually implement all of VTK's functionality, or is that just an impossible task because of flexibility? And does all of ParaView's functionality come from VTK, or do you use other third-party libraries?
Utkarsh: VTK is basically a toolkit. It provides the data model, the pipeline model, the execution model, and all that, and ParaView uses almost all of the infrastructure provided by VTK for the data processing. But since VTK is a toolkit, it doesn't really have the application-level logic: say, what happens when the user creates a filter, or what happens when the user creates a reader. All that application logic is typically not handled by VTK at all, because it's a toolkit; that is left for ParaView to manage, and that's where ParaView deals with it. ParaView also adds its own extras to VTK to do things like parallel rendering, data distribution, and making sure the data ends up at the right nodes for rendering, and so on. So ParaView does indeed have the entire VTK directly brought in, but not all the filters or readers are exposed in the ParaView GUI. Sometimes that's because some of them don't really work well in parallel, and sometimes it's just because we didn't feel the need for it. But it's very easy to add any of VTK's filters into ParaView using plugins or just by writing some simple XML.
Brock: So ParaView is a parallel application. Is it single-machine parallel, like you need one of these big SMP machines to do a large data set, or does it use MPI or some other type of distributed-memory architecture?
Utkarsh: ParaView supports both configurations. You can run it on a single desktop, without SMP or anything, just a single process, in which case it's not doing anything parallel; all the work happens in that single process. In the other case, it supports running a parallel server using MPI; you can then connect your ParaView client to this parallel server, and you can do distributed data processing as well as rendering.
Brock: Most clusters don't have a display; they don't have X, they don't have anything like that. How does ParaView actually handle using a cluster? Do you have to move data to something like a viz cluster, or can the data-processing part be independent of the render part or the I/O part? Or you tell me.
Berk: All the things that you said are possible. First, you can run ParaView in a mode where it delivers the geometry, the final geometry, not the actual data that you're processing, to the client, and then it renders on the client. This is typically feasible because you're doing things like isosurfaces or extracting outer shells; those geometries tend to be much smaller than your data sets, so you can easily deliver them to the client. If that's not possible, then you can run ParaView in a mode where you have a separate data-processing unit and a separate rendering unit. You can set it up such that you have a smaller cluster which has access to X and display cards, and then you deliver the geometry to those nodes for rendering. Another possibility is that you can compile ParaView with OSMesa, which is an open-source OpenGL implementation.
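The geometry-versus-image decision just described can be sketched in a few lines. This is a hypothetical illustration of the idea, not ParaView's actual API; the function name and the 20 MB threshold are assumptions made up for this example.

```python
# Hypothetical sketch of the geometry-delivery decision described above:
# small extracted geometry (isosurfaces, outer shells) is shipped to the
# client and rendered locally; large geometry stays on the parallel
# server, which renders images and ships pixels instead.

def choose_render_mode(geometry_bytes, threshold_bytes=20 * 1024 * 1024):
    """Return 'client' if the extracted geometry is small enough to ship."""
    return "client" if geometry_bytes <= threshold_bytes else "server"

# An isosurface that reduced a huge volume to 5 MB of triangles:
print(choose_render_mode(5 * 1024 ** 2))   # rendered on the client
# A 2 GB extracted surface stays on the server:
print(choose_render_mode(2 * 1024 ** 3))   # images composited remotely
```

ParaView exposes a similar trade-off as a user-tunable setting, so the threshold is a knob, not a fixed constant.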
You can use OSMesa on your servers without X, and it's very well possible to do rendering on the data server; of course, then you're doing software rendering.
Berk: I think one newer thing that you skipped is that more recently we added support for another library called Manta, which was developed by the University of Utah, to be able to do actual ray tracing on the data server. So Mesa and Manta are really the two alternatives we have for the case where we don't have access to hardware graphics acceleration.
Brock: And we're back, after a crash that none of you heard thanks to the magic of recording and post-editing. We had some technical difficulties, but now we're back.
Jeff: Brock sounds much better now.
Brock: Yeah, well, since I did the recording, I always sounded absolutely wonderful; the rest of you guys just sounded bad.
Jeff: All right, well, now we all sound better. So let's pick up where we left off. We were just hearing about the various ways in which rendering occurs and the support libraries involved. Let me follow up with a question: I think it was Berk who mentioned hardware acceleration for rendering. This leads to the natural question, and the buzzword du jour in the HPC world today, about GPUs. Are you using GPUs the way they were meant to be used, to actually render graphics?
Utkarsh: Yes. Our polygonal rendering does indeed use OpenGL, which uses GPUs for the rendering part. But at the same time we have special implementations for things like volume rendering, as well as a new surface LIC algorithm that we added recently to ParaView, which again uses the GPU for some GPGPU-style work. So we have explored using the GPU for special solutions like these in the past.
Going forward, though, we are going to investigate more and more using it for general processing and filtering and things of that sort.
Jeff: A lot of systems that have done this still need X running to do GL rendering. For those of us who have clusters that maybe have Tesla cards on them, where we only have the CUDA library but not full X running, can you take advantage of that? Can we do any rendering that way?
Ken: I'm going to give a non-authoritative answer: there are some libraries out there that try to solve that problem. Whether or not they work with a particular card depends a little bit on how much of OpenGL is implemented by the driver, and we actually got various answers from NVIDIA depending on the card. But there are things like VirtualGL and such that try to let you essentially bypass X to do that, so it's sometimes possible and sometimes not; a lot of the time it really depends on the drivers.
Brock: In other cases I've also seen setups where you can only use a single GPU per MPI rank, or, well, a single GPU for the entire MPI job.
Can we do multiple GPUs, one for each MPI process? And can we even mix that: can some of the MPI processes have GPUs and some of them not?
Berk: Yes, that's possible. You would do that much like the configuration we were talking about earlier, where you have a data-processing cluster and a visualization cluster; in this case the data-processing and visualization clusters physically overlap, but you still have a smaller set of nodes that are actually doing the rendering and a larger set of nodes doing the data processing. So yes, that's possible. You can also, in theory (there are some complications to this), mix hardware-accelerated OpenGL and Mesa in some cases. But in our experience it is better to have multiple cores share a GPU. For example, say you have a collection of nodes with eight cores per node but one or two GPUs per node. It is easier and more efficient to have those cores, in this case four cores, share a GPU, rather than saying one core is going to use the GPU and the other ones are going to do software rendering.
Jeff: You touched on an interesting topic there, and this is something we argue about in the parallel computing community a lot: how much heterogeneity do you run into? Do you run into customers who actually have a couple of different types of nodes, you know, this kind of card over here and that kind of card over there, or this node has X and this one doesn't, and who want to use them all together in one rendering job?
Berk: We don't see that. Honestly, in the last 10 years I have almost never run into people that are mixing and matching cards in their cluster.
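The core-to-GPU arrangement just recommended can be illustrated with a tiny sketch. This is not ParaView code; the helper name and round-robin policy are assumptions for illustration.

```python
# Illustrative sketch: on a node with 8 MPI ranks and 2 GPUs, have four
# ranks share each GPU (round-robin) rather than giving one rank the GPU
# and leaving the other seven to software rendering.

def gpu_for_rank(local_rank, gpus_per_node):
    """Map a rank's index within its node to a GPU index on that node."""
    if gpus_per_node == 0:
        return None  # no GPU on this node: fall back to software rendering
    return local_rank % gpus_per_node

# 8 ranks per node, 2 GPUs: even ranks share GPU 0, odd ranks share GPU 1,
# so four cores share each GPU.
print([gpu_for_rank(r, 2) for r in range(8)])
```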
I guess for the most part that's because people that do use clusters for distributed processing usually have enough funding to set the whole thing up at once, whereas people using a more general-purpose cluster are probably not using graphics cards in the first place.
Brock: Let me take a step back to the beginning: what platforms does ParaView support? As an end-user application, I'm kind of assuming you support all the popular environments. Is that a good assumption?
Berk: That's correct. We support Linux, 64-bit and 32-bit, Windows, as well as Mac, and, you know, any Unix, not that we're seeing a lot of those anymore. We used to release binaries for Suns and HPs and SGIs and so on, but we don't really do that much anymore; we were not seeing a lot of demand. On the server side, we also support supercomputers such as Crays and various IBMs and things like that.
Jeff: Okay, good, that was exactly going to be my next question: on the front end it's desktop kinds of things, and on the back end supercomputers. So let me ask a derivative question, something that's always annoying but still sometimes interesting: what is the license for ParaView? I went to paraview.org and I see a bunch of different licenses there. Let's say I'm a commercial customer and I want to use this: which license am I abiding by?
Berk: ParaView and VTK are both BSD, and I believe all the components that come with the ParaView source, once you check out the repository, are indeed under BSD. One exception is the graphical user interface, where we use the Qt library, which is available under various licenses; we use the LGPL version of Qt. So it does bring some additional complication if you are a developer who wants to build on top of ParaView: you have something that's BSD and something else that's LGPL, and you have to make sure you adhere to both of those licenses.
Jeff: That actually leads me into my next question: making ParaView extensible, making it support file formats it doesn't support out of the box. Of course it supports the VTK formats out of the box, XDMF, and a few others. Say I want to expand it to support my proprietary application and ship it; the BSD license makes it nice that I can do that. How hard is it to write a new I/O plugin so ParaView understands my file format, for both serial I/O and parallel I/O?
Utkarsh: The first thing you need to do to bring your reader into ParaView is write a VTK-based reader for it. So you need to understand some of the basics of how to create a reader in VTK: the different things about the pipeline, how to satisfy requests, and how to provide information about the data. But once you have that figured out, once you have a VTK-based reader that works, it's very easy to bring it into ParaView. You can do that through a plugin: you just write a bunch of XML that allows you to import that reader into ParaView. The same holds for parallel as well. In VTK, when you're running in parallel, different requests come down the pipeline asking for different parts of the data, and as long as your reader respects that, you're good.
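To give a feel for the "bunch of XML" mentioned here, a server-manager plugin fragment for a hypothetical VTK-based reader might look roughly like the following. Every name in it (the class, the property, the file extension) is made up for illustration, and the exact schema varies between ParaView versions, so treat the ParaView plugin documentation as authoritative:

```xml
<ServerManagerConfiguration>
  <ProxyGroup name="sources">
    <!-- Hypothetical reader wrapping a made-up VTK class, vtkMyFormatReader -->
    <SourceProxy name="MyFormatReader"
                 class="vtkMyFormatReader"
                 label="My Format Reader">
      <StringVectorProperty name="FileName"
                            command="SetFileName"
                            number_of_elements="1">
        <FileListDomain name="files"/>
      </StringVectorProperty>
      <Hints>
        <ReaderFactory extensions="myf"
                       file_description="My proprietary format files"/>
      </Hints>
    </SourceProxy>
  </ProxyGroup>
</ServerManagerConfiguration>
```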
Your reader is then parallel-aware, and that just automatically works with ParaView. Alternatively, if you have a Python library, or some Python-wrapped library, that you can use, we already support something called the Python Programmable Source, which is almost like a data source that allows the user to put in an arbitrary Python script to generate the data. That can be a very easy entry point for people who already have a Python interface for the reader library they want to use.
Jeff: So when I make this reader, do I have to compile it into ParaView, or can I load it up as a dynamic object or something like that?
Utkarsh: Right, you compile it as a separate plugin, not into ParaView. You compile it as a plugin, which creates a shared library that can then be loaded into ParaView.
Jeff: Okay, so no need to rebuild everything every time I want to do something. That's nice.
Utkarsh: Right. Plus, starting with the most recent release, we're also distributing the development binaries of ParaView, so you don't even have to build ParaView to build the plugins. By development binaries I mean a package with the header files and libs and all that.
Ken: One exception to this, of course, is if you have to deploy on a platform that does not support shared objects, such as the IBM Blue Gene. Then you have to recompile the whole thing. Well, not recompile, actually: you have to relink the whole thing with the additional code that you compiled.
Jeff: Talking about plugins and whatnot, this leads to a natural question about contributors. You mentioned Sandia and Los Alamos; who else is involved in the ParaView community and contributes code?
Berk: The main development comes mainly from Los Alamos, Sandia, and Kitware, and then we have a somewhat large number of developers from different organizations.
It's, you know, too many to list here, but some examples: we had a fair amount of contributions from EDF, the French electric utility, Électricité de France. We have had collaborations with the University of Utah, with the VisTrails group at the SCI Institute; we've been working with them for a while, and they have developed modules for ParaView, especially for VisTrails integration. And then there are probably on the order of 10 to 15 developers that contribute on and off, coming from a variety of organizations.
Jeff: I couldn't help but notice that you said one of the contributors is the SCI Institute in Utah. Are these a bunch of ski bums? What is that?
Ken: SCI stands for the Scientific Computing and Imaging Institute at the University of Utah, so it's with a C, not a K. Being in Utah, they probably like skiing too, but it's not really directly related.
Brock: Okay, okay, I was actually thinking the same thing there, Jeff. So, ParaView and tiled displays: some viz applications work with the tiled display directly, and some rely on a third-party tool to actually tile the image across. How does ParaView handle tiled displays?
Utkarsh: ParaView actually uses IceT to do its parallel rendering, which includes tiled displays as well. Ken, do you want to elaborate?
Ken: Right. As Utkarsh said, ParaView uses IceT, which provides scalable sort-last rendering on tiled displays.
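To give a flavor of what "sort-last" means, here is a toy illustration (not IceT's actual algorithm or API): each rank renders its own piece of the data into a full-size image with a depth value per pixel, and the images are then merged by keeping, at every pixel, the fragment nearest the viewer.

```python
# Toy sort-last depth compositing. Each rank renders all pixels of its
# own chunk of geometry; compositing keeps the nearest fragment at each
# pixel. Pixels are (depth, color) pairs; infinite depth = background.
INF = float("inf")

def composite(images, background="bg"):
    """Merge per-rank images by taking the nearest fragment per pixel."""
    out = []
    for pixel in zip(*images):                    # same pixel, all ranks
        depth, color = min(pixel, key=lambda f: f[0])  # smallest depth wins
        out.append(color if depth != INF else background)
    return out

rank0 = [(0.5, "red"), (INF, None), (0.2, "red")]    # rank 0's render
rank1 = [(0.3, "blue"), (0.7, "blue"), (INF, None)]  # rank 1's render
print(composite([rank0, rank1]))  # ['blue', 'blue', 'red']
```

Real compositors like IceT do this merge hierarchically across ranks and with many optimizations, but the per-pixel nearest-fragment rule is the core idea.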
Ken: I'm not going to go into details about IceT, as we just recently did an entire RCE podcast on the subject, but suffice it to say that tiled-display support works out of the box with ParaView. To use a tiled display, you simply run one of the render-capable ParaView servers on the computers driving the tiled display, and then tell the ParaView server the number of tiles wide and the number of tiles high the display is. ParaView takes care of the rest.
Berk: I want to add something: IceT rocks.
Brock: In the discussion with Ken about IceT, I was pretty impressed by just how much better it performed than some of the other options out there. It definitely seems really interesting. And actually, we're in the process of maybe building a tiled display here at Michigan, finally, after a long time; the School of Information is doing something, and IceT is actually one of the first things I want to get going on it, probably via ParaView. So we'll see how that goes. To have this whole thing work, do we need X running on there, with ParaView and IceT just firing up full screen on each of the tiles? And is there an XML file that says where each tile is, or is it more complicated than that to have ParaView display on each one of the tiles correctly?
Ken: That's basically it. If you have X running so that you're actually driving the displays for the tiled display, then ParaView will automatically do a full-screen view on each one of those instances of the tiled display. We'll make some assumptions about the layout, so you should set up your MPI ranks accordingly, but otherwise it's fairly automatic. One technical note you should be aware of: in order for IceT to perform really efficiently on large data sets, you should have more compute nodes than you have tiles. ParaView will actually take advantage of the extra compute power you provide so that it can render even faster.
Brock: Okay, let me follow up on that.
So you say to have more nodes than tiles. Do you want a fixed ratio, like two to one or three to one? Or can ParaView or IceT handle a non-even ratio, say one and a half to one, or two and a third to one?
Ken: There's no fixed ratio. ParaView will just use as many processes as you've specified in the MPI job. You've told it how big the display is, so it knows which ranks to display on, but regardless it will use all of your nodes to do the compositing and rendering of the images. How high you actually want that ratio really depends on how much geometry, how big an initial mesh, you're trying to render. If it's something fairly small, obviously you don't need very many, but the larger your input mesh is, the more compute power you're going to need to render it in an efficient manner.
Brock: A little bit of clarification here, not having used ParaView on anything besides my laptop so far. There are these I/O servers, render servers, and then the servers running on the display wall. Is this all one MPI job? Or should the MPI processes running on the render nodes be a different MPI run, with me telling them where a socket is for the MPI run on the display wall? Or do you just describe all of this to ParaView and it does the right thing?
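Ken's description (tell the server how many tiles wide and high the display is, and any extra ranks simply add compute) can be sketched as follows. The row-major mapping here is an assumption made for illustration; as Ken noted, ParaView makes its own assumptions about the layout.

```python
# Illustrative rank-to-tile mapping for a tiled display. The first
# tiles_wide * tiles_high ranks each drive one tile (row-major order is
# an assumption); any remaining ranks drive no display but still take
# part in processing and compositing, which is why a non-integer
# node-to-tile ratio works fine.

def tile_for_rank(rank, tiles_wide, tiles_high):
    """Return (column, row) of the tile this rank drives, or None."""
    if rank < tiles_wide * tiles_high:
        return (rank % tiles_wide, rank // tiles_wide)
    return None  # contributes compute and compositing only

# A 3x2 wall driven by a 14-rank job: 6 ranks drive tiles, 8 add compute.
layout = [tile_for_rank(r, 3, 2) for r in range(14)]
print(layout[:6])              # the six tile-driving ranks
print(layout[6:].count(None))  # ranks contributing compute only
```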
Berk: Okay, he's pointing at me to answer the question. The different modes that Utkarsh described can do a combination of what you described. If you are running ParaView in just simple client-server mode, the server is actually doing the I/O and processing as well as the rendering, and in that case the server is just one MPI job. Having said that, what Ken was referring to, having an MPI job with more ranks than there are displays, is also valid for this case: you're essentially saying that the first six nodes, let's say, are driving the tiles, whereas the other nodes are contributing to processing, I/O, and rendering, but are not driving the tiles. The sort-last compositing takes care of where the final images go and how they get displayed. In the data server / render server / client mode, you actually have one MPI job, the data server, which is responsible for I/O and processing. What it does is generate polygonal geometry, and then there's another MPI job, running on the display cluster (or the viz cluster, so to speak), that's responsible for getting the geometry and rendering it. The two talk to each other over TCP/IP sockets, and of course the client talks to both of these server types over sockets as well.
Brock: Okay. Another thing I noticed is that there's a config file where you can describe a CAVE environment. For those who don't know, a CAVE is an immersive environment where you've got displays all around you. What's ParaView's support for CAVEs like?
Utkarsh: ParaView's support for CAVEs is still, I would say, in its infancy; we are currently working on making it much better.
So our current solution is We don't use iced tea for cave rendering What we simply do is the geometry is distributed to all the rendering nodes Everyone gets the full geometry and then every tile just renders the renders its own thing based on this its orientation which is different for every tile in a cave and We just and the client simply synchronizes all these together And client access the driver where you're interacting the user interacts with the scene using the client So we have also been working currently to add support for tracking to this framework. So people have we've been playing with using VR PN, which is what you're a reality Peripheral network virtual reality peripheral network Which is a library which allows you to interface with different devices which are typically used in such VR environments Yeah, I want to clarify that a little bit. So When you're within a cave a lot of times you don't want to necessarily use a mouse To interact with your visualization to rotate it, etc. So they they support a variety of devices That and we you know, we want to support those devices and one of these things that's kind of special is essentially head tracking where There will be some cameras or some system designed to essentially track your head position and orientation and adjust the scene accordingly so We haven't had support for Head tracking and any of these other devices and we're working now in introducing that support to to pair of you Okay, so intact Whoa Where would the con scene pair of you used I think we've all gotten used to seeing these dynamic fly-throughs of the weather on the 6 o'clock Who's and things like that has pair of you been used in any, you know widely seen outside the sign community kind of simulations like a movie commercial or Any commercial kinds of application widespread? 
Berk: ParaView is used by a large number of organizations, from government to academia to industry, and we do have partners and customers from industry that couple ParaView with some of the major simulation codes for the purpose of post-processing. Another interesting thing is that there is a popular CFD code called OpenFOAM that you may have heard of; it's an open-source, GPL, parallel CFD code, and ParaView is its officially adopted post-processing code. So if you get OpenFOAM, you get ParaView with it.
Brock: So what's coming in future versions of ParaView? What are you guys working on right now?
Utkarsh: One of the main things that we are really excited about is web visualization. We've been working on adding support for exposing ParaView's large-data visualization capabilities through a browser. You can consider it in alpha now; we have that out, people are excited about it, and we are now working on integrating it with different web platforms. Another thing is adding support for collaboration, which entails having multiple ParaView sessions running on different desktops at different locations that all share and look at the same data set and communicate with each other, so that you can visualize interactively as a group. Then there's in-situ processing and co-processing. Ken, do you want to talk about that?
Ken: Sure, I'll talk a little bit about that. The ParaView co-processing library is basically a fairly small library, attached to the ParaView services, that allows you to run ParaView analysis subservient to something like a simulation. The idea is that you can run a large-scale simulation, and while it's running, while the actual data is still in core memory, you can fire off this analysis and visualization to produce images, or some other extracted information, that is generally a lot smaller than the original data you started with. So you start with a very large three-dimensional mesh, and you may end up with a fairly small image, or a contour, which is typically much smaller than the original mesh, or just some sort of extracted surface or feature. And you can run it in a much more temporally fine-grained manner, since you don't have to dump everything out to disk and then load it up again.
Brock: Cool. I'll ask a question I like to ask a lot of the other software developers and guests we have here on RCE: which source code repository do you use, and why? I just like to hear people's reasons for what they use.
Utkarsh: We use Git.
Jeff: Are you going to talk about the reasons why?
Utkarsh: We use Git because it's much easier to create forks and work with. We have parallel developments going on: we have these different projects doing different things on ParaView at the same time, and that has become insanely simplified since we switched to Git. Everyone can have their own fork of ParaView, doing their own feature enhancements, and once they're done, they can push back to the master repository. So that's really working well.
Yeah, from a VTK perspective, because VTK is a large project developed by a large community, we wanted to leverage the distributed version control capabilities of Git. It also allowed us to grow our community even further, because we don't necessarily have to give everybody write access to our central repository. Rather, people can push to things like a mirror that we may have on GitHub or Gitorious or something like that, and just make a pull request to us, as it's called. Essentially, they send an email saying, hey, I have this branch on GitHub for ParaView, this is something that may be of interest to you, can you please pull this change into your central repository? Of course, we had our challenges working with Git too; it's not necessarily the easiest tool to use. So ParaView was made for doing very large data. Like Ken said, the national labs tend to work with things that are bigger than what the average academic or public researcher has. What's the largest thing you've ever viewed with ParaView, in terms of the number of cores required to even fit it in memory, or in terms of geometry? Yeah, I'll go ahead and start. The scientific work at Sandia produces some of the largest models that we've visualized with ParaView. The really big data sets, the stuff that comes from the hero-sized simulations using the entirety of one of the world's largest supercomputers, tend to come along sporadically, but they are important for us.
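The fork-and-pull workflow described above can be sketched with plain Git commands. This is illustrative only: it builds a throwaway "central" repository and a local "fork", whereas in the real project the central repository is ParaView's and the fork lives on a public mirror.

```shell
# Illustrative fork-style workflow in a throwaway repository.
# The paths and names below are made up for the demo.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# The "central" repository with one initial commit.
git init -q central && cd central
git config user.email dev@example.com && git config user.name Dev
echo hello > file.txt && git add file.txt && git commit -qm "initial"
cd ..

# A contributor clones (forks) it and works on a feature branch.
git clone -q central fork && cd fork
git config user.email dev@example.com && git config user.name Dev
git checkout -qb my-feature
echo feature >> file.txt && git commit -qam "add feature"

# The maintainer pulls the branch from the fork into the central repo,
# which is the step a "pull request" email asks for.
cd ../central
git pull -q ../fork my-feature
git log --oneline
```

The contributor never needs write access to the central repository; the maintainer fetches the change on request, which is exactly the access model described above.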
They're driven by scientific need and supercomputing availability. Our last such thrust was in 2008, when we analyzed the results of a shock physics code comprising several billion cells and hundreds of thousands of blocks. Now, that said, Sandia and Los Alamos National Laboratory are in the process of jointly building a new petascale computer called Cielo. As the system comes online, we expect a new wave of simulation thrusts to occur as scientists start to leverage the new resource. So in preparation, we're running several scaling studies. For example, we've loaded a supersampled astrophysics data set containing over 40 billion cells, and we've also stress-tested the system with an internally generated data set containing over a trillion cells. I'll talk a little bit from my perspective, and this is mostly coming from our academic partners, from the NSF community, on the in situ side. Right now, we're actually working on some scalability studies, and we have scaled our code up to 32,000 cores. So this is essentially the simulation running on 32,000 cores, but coupled with ParaView, with ParaView doing the processing as part of the same MPI job, essentially. When we talk about large data, there are also some differences between doing this sort of in situ processing and batch processing. For example, I asked around before the podcast, and there's a group at UCSD that has actually run ParaView on up to 1 trillion particles. They have particle data, and we're talking about just one field within this particle data being around 100 gigabytes.
So when they load something on the order of, you know, 10 to 20 of these fields, they are going beyond terabyte scale. That's the more batch-oriented processing. On the interactive processing side, the numbers Ken gave are for interactive analysis, so we're really running on a vis cluster, and we expect to get something on the order of several frames a second to do that interactively. One number I have is from a scalability study we did together with scientists at the Texas Advanced Computing Center. We did essentially a 4K-cubed volume, so that's about 64 billion voxels, and we interactively volume rendered that using our hardware-accelerated GPU volume renderer. To give a sense of the scale, just one of the fields there is about 256 gigabytes, and when you bring in multiple fields and multiple time steps, you're really getting into larger terabyte sizes. So those are some of the numbers. Obviously, this is the stuff that we have done or that we know of; there are larger communities out there using our tools to do large data, and I'm sure there are some hero runs that we haven't heard of yet. So for the batch runs, does ParaView support some sort of scripting, like a Python interface or one of its own? Yes, ParaView supports a Python interface. We have a Python-based API that allows you to create visualizations, configure parameters, do renderings, and so on and so forth. Okay, guys, thank you very much for your time. I'm excited to get ParaView going on a display wall, if one gets built around here. Where can people find ParaView, and is there something like a mailing list, or how do people get involved? We have mailing lists, bug trackers, and all that stuff. The easiest thing to do is to just go to paraview.org and then follow the links.
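ParaView's Python scripting is exposed through the `paraview.simple` module. A minimal batch-style sketch looks roughly like the following; the import is guarded so the script degrades gracefully where ParaView is not installed, and the particular source, filter, and output filename are just illustrative choices.

```python
# Minimal sketch of ParaView's Python API (paraview.simple): build a
# pipeline, render it, and save an image, as one would in a batch script.
# The import is guarded because this only works where ParaView is
# installed; the source, filter, and filename here are illustrative.
try:
    from paraview.simple import (Sphere, Shrink, Show, Render,
                                 SaveScreenshot)
    sphere = Sphere(ThetaResolution=32)   # create a simple source
    shrunk = Shrink(Input=sphere)         # apply a filter to it
    Show(shrunk)                          # add the result to the view
    Render()                              # draw the scene
    SaveScreenshot("sphere.png")          # write an image, batch-style
    ran = True
except ImportError:
    ran = False                           # ParaView not available here

print("ParaView pipeline executed:", ran)
```

In a real batch run the same pattern is typically driven by `pvbatch` across many processes, with the script looping over time steps instead of rendering a single scene.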
I believe there's a tab there that says Help, and there are a bunch of entries there pointing to our wikis, web pages, documentation, and the mailing lists. Okay, well, thank you very much for your time, guys. All right, thanks for your time. Take care.