Brock: Welcome to another edition of RCE. I'm your host, Brock Palen. A couple of comments before we get started: SC is coming up in November, and I will be there. I won't be at a booth; I'll be walking around and tweeting while I'm there. My Twitter name is brockpalen, and you can find that on the RCE website at rce-cast.com if you want. Also, Jeff is helping me out again, and I believe Jeff will be there too. Jeff, what will you be doing at SC?

Jeff: Yes, I'll be there. I'll have my usual Open MPI BOF, where we're going to talk about the state of the union: where we've been, where we are, and where we're going, with my co-host George Bosilca from the University of Tennessee, Knoxville. And I'll probably be spending quite a bit of time in the Cisco booth too; I think we're number 1847, somewhere in the middle of the floor. So please, people, drop by and say hello.

Brock: Okay, yeah, I know that BOF. I'm at it pretty much every year; that's actually where we originally met, and every year I learn something new that you guys did, and I come back and use it pretty heavily.

Jeff: Well, I don't know if I'll be tweeting from the show or not. I do have a Twitter account; it's jsquyres. Random little fact: I actually got it for API testing purposes, not really for tweeting. A bunch of people started following me, so I kind of figured, okay, I guess I really should do something professional with this account. So every once in a great while I tweet on there, and maybe I'll do something during SC, but it's more likely I'll put some stuff on my blog while I'm there.

Brock: Okay. Our show for today is Silo; it's a file data storage format. I've never used it; I learned about it from the VisIt guys when we had them on the show. We have with us Mark Miller from Lawrence Livermore National Lab, and we can get Mark to introduce himself and give us a little background on what Silo is and his own background. That'd be great, Mark.
Welcome to the show.

Mark: Well, thanks, Brock, and thanks, Jeff. Yes, Silo is basically an I/O library and scientific database. It's primarily an interface for reading and writing scientific data in terms of meshes, the typical finite element and finite difference types of meshes that we deal with, and then variables defined on those meshes: piecewise linear and piecewise constant, or in other terminology, zone-centered and node-centered variables. Development of Silo started at Livermore probably in the early '90s; I think the earliest C code in the Silo source is from around '92. It's been developed by a number of different developers over time, many of them on the VisIt team as well, and it's been enhanced over the years. More recently, I've become responsible for maintenance and enhancements of Silo in the past few years.

Brock: Okay, so what is your own personal background? Have you been doing file formats, working in HPC for a long time, sysadmin, programmer, user? How did you get started in this?

Mark: Right. Well, when I first started at Livermore about 18 years ago, one of my first responsibilities was to redesign one of the underlying I/O drivers in Silo; well, not so much redesign it as reconfigure it. At the time we were using a library called PDB. That's not to be confused with the protein database.
It's a portable database written by Stuart Brown at Lawrence Livermore. At the time we were trying to find a way to restructure the way we were using PDB in Silo, and so one of my first jobs at Livermore was to do that. I did a very large analysis of the I/O performance of Silo at the time, including, for example, at the Silo interface: if you issue a call to write a mesh, the question is how many real I/O requests that results in to an actual file on disk. After reconfiguring the way we were using the underlying libraries a bit, we were able to reduce what I called I/O fragmentation at the time, where one call from the client could end up looking like hundreds of I/O calls on disk. We were trying to reduce the amount of fragmentation that was occurring. From then on, I've worked with things like MPI-IO; I was really heavily into scalable I/O in the late '90s. Then I worked on some other scientific database software called SAF as part of the Accelerated Strategic Computing Initiative, and from there I've been working primarily on VisIt. About 75% of my time is on VisIt and about 25% of my time right now is on Silo.

Brock: So Silo, was it a replacement for PDB? And what was the target for PDB and Silo? Is it for a specific project at Lawrence Livermore, or was it supposed to encompass all your data, so you had one format for everybody working there?
Mark: Right. When I started at Livermore, I worked, and still work, in B Division, and there are two divisions at the lab that have traditionally tried to share software and run into a number of difficulties doing that. One of the ways to achieve sharing of software is to make sure everybody writes their data to basically the same data format, and for many years at Livermore that didn't even happen within one division: different code groups would choose different I/O formats, for a variety of reasons. Ultimately, Silo was designed so that, for example, within B Division all the codes could read and write their data to a common format and then share tools that use that format. That was the reason Silo was designed. And you had mentioned PDB; Silo wasn't designed as a replacement for PDB. It was designed as an additional level of abstraction on top of PDB. PDB is primarily a library designed to read and write data structures that you would see in a C or C++ application; it reads and writes linked lists or arrays or structs. That's what PDB is designed to do, and Silo adds an additional level of abstraction on top of that that's specific to scientific computing: meshes and variables defined on those meshes.

Brock: So when you say meshes, are you talking structured data or unstructured data, or both, all kinds?

Mark: Okay, so Silo supports structured, unstructured, and gridless meshes. It supports something we call constructive solid geometry, and it was recently enhanced to support AMR. As far as I know, it supports the widest variety of scientific computing meshes of any of the I/O libraries
I'm familiar with.

Brock: Okay. And what kind of applications? You said scientific computing, but could you give us some specific examples of applications that are using it, and how they're using Silo?

Mark: Sure. A well-known application at Livermore is ALE3D; that's an arbitrary Lagrangian-Eulerian simulation code. They use it for all sorts of types of simulations. For example, if you wanted to drop some structure, some small object that you're using to transport things around, and understand the impact if it was dropped off of its carrying apparatus or whatever, you could use ALE3D to simulate and analyze that. There are a number of different simulations, and not just within B Division now. It turns out that the engineering department at Livermore uses Silo, and since Silo was relatively successful within B Division, it has branched out and is used by a number of different simulation codes, whether structural dynamics or others. It would be hard to list all the different applications it's used for, but there are quite a number just within Livermore, and external to the lab it's used in a number of different places. Often VisIt is the trigger for using Silo for someone that's new to this whole world: they find VisIt, they want to use it, they hear about Silo, and so, well, let's use Silo to store our data. I don't know if that's a complete answer.

Jeff: Oh no, that's great. Let me dive down a little bit and ask a deeper technical question here. With the API that you export, presumably you have some kind of handle that represents a data structure, and you can read and write to that, and please correct me if I'm wrong on that one. The follow-on to that is: how do you actually get the data in the program? What language bindings do you export, and what is the flavor of reading and writing the data
while it's still in RAM in your managed data structures?

Mark: Right. So to answer the binding question: Silo supports basically a C interface and a Fortran interface. A number of people who use Silo and have used it for many years would like to see a natural C++ interface added to Silo, and we may do that eventually, but right now it's just C and Fortran. The first and most important handle object that you need to have in a Silo client is the file handle. You do a DBOpen or a DBCreate, and the result of those calls is a file handle; with that file handle you then basically do put and get calls on it. So you would do a DBPutUcdmesh, for an unstructured cell data mesh, and the arguments to that call would be the three floating-point arrays, double or single precision or whatever you have, representing the coordinates at the nodes of the mesh, plus some additional information; there's just a set of arguments that you're going to pass to that call. Then another call, a sort of sister call to that, is DBPutZonelist, which writes out the connectivity of that mesh. For standard unstructured zoo data like hexes, tets, pyramids, and wedges, you write out the connectivity with that call. So with those two calls, from the client you will pass a set of arguments representing either the coordinates or the connectivity, and those get shoved out to the Silo file. Then later on you can close that file, hand it to another application, and they can open it up and read that data back out. On the read end of things, what you get back are not the individual arguments that you wrote on the write half; every read call results in some uber-struct representing all the little tidbits of information associated with that object.
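To make the calls Mark describes concrete, here is a small sketch in plain Python (not the Silo C API itself) of the kind of data a caller assembles before handing it to calls like DBPutUcdmesh and DBPutZonelist; the array layout is an assumption illustrating the idea, not Silo's exact argument list.

```python
# A single hexahedral zone: 8 nodes, one zone.

# Three coordinate arrays, one per dimension, indexed by node number;
# these are the floating-point arrays passed for the mesh coordinates.
x = [0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0]
z = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]

# Connectivity ("zone list"): for each zone, the node indices that form it.
# A hex has 8 nodes, so one hex zone contributes 8 entries.
nodelist = [0, 1, 2, 3, 4, 5, 6, 7]
nodes_per_zone = 8

nnodes = len(x)
nzones = len(nodelist) // nodes_per_zone

# A node-centered variable has one value per node; a zone-centered
# variable has one value per zone.
temperature_nodal = [300.0] * nnodes
pressure_zonal = [1.0e5] * nzones

print(nnodes, nzones)  # 8 1
```

The point is that the mesh is described purely by flat arrays plus a little metadata; the library, not the caller, is responsible for turning that into a self-describing object in the file.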
Brock: So this file format really is more of a format, then. It's much more specific; it doesn't seem like HDF5 or these other things where you can pretty much write blocks of values, right?

Mark: It is specific to the job of reading and writing mesh and field data. It actually does have a lower-level interface, which is just a raw data interface, and this sort of gets at the issue of data shareability and interoperability. You're right that HDF5 supports basically reading and writing arrays and structs, or arrays of structs; it's a very freeform sort of format and interface. But if I were to store an unstructured mesh to an HDF5 file and not give you any of the details on how I did that, it might be rather difficult for you to use the HDF5 interface alone and identify, for example, which arrays in there represent coordinate arrays and which arrays might represent the connectivities. Or if you had arbitrary polyhedra, for example (Silo supports another mesh type that is a completely arbitrary polyhedral mesh), it might be rather difficult for you to use the HDF5 interface alone to understand that data. The value of the Silo interface (excuse me, my throat's a little dry; I'm probably going to get a glass of water here in a minute) is that it adds the meaning and semantics necessary to understand that data in terms of meshes and fields. And in fact, Silo not only writes PDB files; it also writes HDF5 files. But if you look at an HDF5 file that's written by Silo, I guarantee it will not be that easy to understand what's there. It's the Silo semantics on top of it that give you an understanding of that meaning.

Brock: Okay, so in comparison then, Silo and HDF5: would you consider HDF5 a competitor, or is it just a different software package with a different emphasis? How would you characterize the differences between these two?
Mark: I would not characterize HDF5 as a competitor to Silo. Silo uses HDF5, and in fact we're very, very happy to have HDF5 underneath Silo. There are several reasons for that; I'll give one example. A couple of years ago at Livermore there was a file system we were installing on the Purple machine, and there were a number of problems getting that file system to work correctly. We were having issues where, as users were using VisIt for example, data was coming back, reads were failing, but there was no indication that we were getting read failures. You would find that you'd get data all the way to somebody's screen and it would look a little different; it wouldn't necessarily be obvious to a user that it was bad, and this is a very bad situation. So they wanted to add application-level checksumming, and since a number of applications actually use Silo, within a couple of days I took advantage of HDF5's checksum capability within the Silo library itself, so that applications using Silo could turn on this feature and all their data would be checksummed. So if a read did fail, they'd know about it immediately. That turned out to be very useful and very important, and HDF5 is very useful to us for that reason.

Jeff: Yeah, middleware.

Mark: Yeah, exactly. And it's not a competitor because it really doesn't operate at the same level of abstraction. Now, I think the HDF5 group does support a high-level interface, HL I think is the name, on top of HDF5, to do some of the kinds of things that Silo does, and so you might consider that it sort of looks like a competitor. I don't know.
I actually haven't used the HL interface to HDF5, so I really couldn't say what it can do, but I know that it exists.

Brock: So Mark, how many developers are there for Silo? You said you're in charge of features and maintenance and things like that. Are there others? Is there a community? How does that work?

Mark: Well, for a long time Silo was in a ClearCase repository and not really accessible to any other developers. We've moved it recently to an SVN repo, and because of that it's been easier to get other developers working on it. But for the most part, certainly in the last year and a half to two years, I and Kathleen Bonnell, a VisIt developer, have been the only ones doing real earnest development on Silo. I do often get feature requests and occasionally patches from users at large, either within Livermore or outside, and for the most part I try to respond to those when I can. Over the years since development first started on it, I think close to 18 different developers, many of them also VisIt developers, have contributed to Silo in one way or another. It's not available, for example, for anonymous SVN checkout for people to work on yet. I don't know if that's going to be necessary for Silo's continued life, but we certainly do respond even to the general user community when there are issues with it.

Brock: Okay, great. So you mentioned that you have C and Fortran bindings for Silo, but what language is it actually written in?

Mark: Well, it's written in C, but recently Peter Lindstrom
joined in. He's a developer and researcher at Livermore. He's done a lot of compression work, and he developed some really cool compression capabilities that we added to Silo recently, and those are actually in C++. So within the last year and a half we actually added C++ code to Silo for this reason. It can be compiled or configured without it; since we use GNU Autoconf for Silo, you can configure without the C++ in it. But if you do include it, then ultimately you need to use a C++ linker to link it all together. But it's predominantly written in C.

Jeff: Cool. Let me actually ask you about the compression thing, because this is a topic that comes up in the MPI world quite a bit too. People say, oh, use compression on the network and you can reduce your latency and increase your effective bandwidth and things like that. Can you tell me what your experiences are with compression? When is it good? When is it not good? Or is it always good?

Mark: Yeah, well, I can give you anecdotal evidence and experience with it.
I actually have not used a lot of compression features in "real" applications. We have test data to ensure that Silo's compression features are in fact working correctly and doing what we'd expect them to do, but my experience with real-world applications, at least for me, is pretty limited. I do know that for a lot of these applications, their initial time-zero restart files, when you're setting up the problem and getting it ready to run, compress very well, and there's a good reason for that: there are a lot of zeros in them. There may be a hundred different field variables that they're going to model on a mesh, but initially many of those are very smoothly varying. As time goes on, if there's mixing and material advection and all sorts of other interesting stuff going on, the data can get very noisy, and then the compression algorithms don't work nearly as well. So compression tends to be better at early time than at late time. Obviously, we can compress integer data much, much better than we can compress floating-point data; this is true of compression algorithms in general, not just Silo's. But we take advantage of compression at two levels. We can use whatever is available in HDF5, and then, as I mentioned, Peter Lindstrom, a researcher at Livermore, gave us some additional compression features which we added to Silo, and those operate at a different level of abstraction. Generally, the higher the abstraction they operate at, the better they can compress. For example, an algorithm that is actually aware of a mesh is going to do a better job compressing data on that mesh than an algorithm that's basically just thinking of it as a 1-D array of floats or integers. So Peter's algorithms do better than what we could do at the HDF5 level, with our test data.
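The early-time versus late-time effect Mark describes is easy to demonstrate with any general-purpose compressor; here is a small illustration using hypothetical data and Python's zlib as a stand-in (Silo's actual compressors differ, but the redundancy argument is the same).

```python
import random
import struct
import zlib

n = 10_000

# Early-time field: all zeros, typical of freshly initialized variables.
smooth = struct.pack(f"{n}d", *([0.0] * n))

# Late-time field: noisy values after lots of mixing and advection.
random.seed(42)
noisy = struct.pack(f"{n}d", *(random.random() for _ in range(n)))

smooth_ratio = len(zlib.compress(smooth)) / len(smooth)
noisy_ratio = len(zlib.compress(noisy)) / len(noisy)

# The zero-filled array shrinks to a tiny fraction of its size;
# the noisy doubles barely compress at all.
print(f"smooth: {smooth_ratio:.3f}  noisy: {noisy_ratio:.3f}")
```

A mesh-aware compressor can do better than this byte-level view because it knows which values are spatially adjacent, which is the point Mark makes about the level of abstraction.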
I can't actually say how well his algorithms perform on our applications in general. He's got excellent results on stuff he's done with the same algorithms, and he's published on them widely, so while I don't know them in detail, they're available in the literature.

Brock: So you say you use HDF5 internally. In the end, is everything on disk actually an HDF5 file with the Silo extra sauce on top, or do you use it for some parts? Is it optional?

Mark: Well, when you create a Silo file, you have a choice of which "driver" you're going to use, and there are two underlying drivers: it can write to a PDB file (again, that's not protein database, that's portable database) or an HDF5 file. In addition, if you're reading data, you can actually read some other formats through some different drivers, but that's not as relevant. So when you create a file, you identify which driver you're going to use, and the file that you get is ultimately either an HDF5 file, if you use that driver, or a PDB file if you use the other driver. So that means that HDF5 tools will operate on that data just fine.

Brock: But you mentioned extra sauce.

Mark: Yeah, there's extra sauce in there that allows Silo-level applications to really, like I say, understand what's there. The more I hear myself talking about it, the more I realize it's rather difficult to explain what a Silo file is, because it can actually operate at multiple levels of abstraction. For example, in a Silo file you can actually create what you would think of as directories in a typical Unix file system.
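The directory idea can be sketched in a few lines of plain Python; the helpers below are hypothetical stand-ins for the flavor of Silo's real directory calls (DBMkDir, DBSetDir), mimicking one file whose contents are organized by path.

```python
silo_file = {}  # path -> object, mimicking a single file's contents

def db_mkdir(f, path):
    """Mimic creating a directory inside the file (stand-in for DBMkDir)."""
    f.setdefault(path, None)

def db_put(f, cwd, name, obj):
    """Mimic writing an object into the current 'directory'."""
    f[f"{cwd}/{name}"] = obj

db_mkdir(silo_file, "/meshes")
db_mkdir(silo_file, "/fields")

# Meshes in one directory, the variables defined on them in another,
# all inside what is still "one file" on disk.
db_put(silo_file, "/meshes", "mesh1", {"type": "ucd", "nnodes": 8})
db_put(silo_file, "/fields", "pressure",
       {"on": "/meshes/mesh1", "centering": "zone"})

print(sorted(k for k in silo_file if silo_file[k] is not None))
# ['/fields/pressure', '/meshes/mesh1']
```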
So you can create directories, cd into them, and write data in those directories. The reason for that is it allows you to organize your data in a file in a very natural way: you can keep meshes, for example, in one directory and all the variables on those meshes in another directory, if that's what you want to do, but it's all within "one file."

Brock: Of the two drivers, PDB and HDF5, is one recommended over the other?

Mark: I generally recommend HDF5, because there are additional things that Silo supports on the HDF5 driver that are just not supportable on the PDB driver. And there are other advantages: I/O performance on HDF5 can be better in certain circumstances, and you just have more control, even with HDF5 directly. For example, when you open a Silo file, I believe you can say which of the various I/O drivers HDF5 uses, because HDF5 can in turn use its sec2 driver, and I think it can even use MPI-IO if you're doing parallel. We could talk about parallel; that makes things more complicated, since Silo is not really a parallel interface, which makes it even more interesting. But you generally have more control on the HDF5 driver, and that's why I recommend using it.

Jeff: How hard would it be to add another driver, like, say, if you wanted Silo on top of SQLite?

Mark: Yeah, that's problematic, and that has to do with the way
Silo is designed. I've had on the drawing board for probably three or four years now a complete overhaul of the Silo upper level, but if you were to write another driver for Silo, the short answer is that you'd have to redo all the work that has already been done on all the other drivers. I'll give you a really simple example. Suppose in initial development Silo had support for, say, structured meshes and structured variables, and you had the PDB driver doing that. Then you added a number of things, such as unstructured meshes and AMR meshes, to the PDB driver. When you came back to implement this new hypothetical driver, call it foobar, you'd then have to go back and implement the structured meshes and variables, the unstructured meshes and variables, and then AMR. You have to implement all those things to get that new driver functioning, and that has to do with the way Silo's underpinnings are architected. I really, really would like to see that change, but that's the state of affairs right now.

Jeff: So you mentioned parallel and its complexities, and I'm a parallel guy, so of course that piqued my interest. What are some of the complexities? What does or does not work well in parallel with Silo?

Mark: Well, Silo is a serial I/O library, and for that reason we never compile HDF5 underneath it with HDF5's parallel features. If you use Silo in a parallel application, you're going to use it as a serial library. Now, that might sound really bad, and you can of course use it inappropriately, and then things would be really bad. An inappropriate use would be to just do serial I/O from a parallel application; that obviously doesn't scale. Another inappropriate use would be to just write a file per processor in a parallel application.
That's also, in my opinion, inappropriate. So Silo is typically used in parallel in what I have referred to as "poor man's parallel I/O." In that scenario, you write some number of files; you decide ahead of time what that number is going to be, call it 32, 64, 128, and this is independent of the number of processors you're running on. Call that number N. The application only has N Silo files open at any one time, and it divides the processors it's using into N groups. Within a group, only one processor actually has a Silo file open at a time and is reading or writing data into it, and then across groups you're getting concurrent parallel I/O. So this is a way of not doing a file per processor, and it turns out it scales very well. We've scaled this up to, I want to say, 65,000 processors so far, and it seems to behave okay on something like 256 files. That, in a nutshell, is what poor man's parallel I/O is. There are other aspects to it, if you wanted to get into what really makes it work well, but that's what it's doing.

Jeff: So if you did want to use, say, HDF5's parallel I/O directly, with multiple processors writing to a single file, would that require a major rewrite of Silo? Is that something that's coming, or something that's not planned, or that you don't want to do? What's the story?

Mark: Let's see. It's not planned, and given HDF5's current parallel interface, it's not necessarily something I'd want to do either. And in fact, you couldn't really do that with Silo right now; the only use of HDF5 that you can get with Silo is serial HDF5. I have used HDF5 directly from applications to use its MPI-IO interface and do collective parallel reads and writes. The difficulty that I ran into in the cases where I've tried to apply that is that we're talking about multi-physics simulations that do just a wide variety of things.
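The grouping arithmetic behind poor man's parallel I/O can be sketched as follows; this is a hypothetical helper in Python, not Silo's actual interface for this (Silo ships a helper for the real thing), but it shows the bookkeeping: nprocs ranks share nfiles files, and within each group ranks take turns holding the file open.

```python
def pmpio_assignment(rank, nprocs, nfiles):
    """Return (file_index, turn) for a rank: which file it writes to,
    and its position in the baton-passing order within its group."""
    group_size = (nprocs + nfiles - 1) // nfiles  # ranks per file, rounded up
    file_index = rank // group_size
    turn = rank % group_size  # the rank with turn == 0 writes first
    return file_index, turn

# 8 ranks sharing 2 files: ranks 0-3 share file 0, ranks 4-7 share file 1.
for r in range(8):
    print(r, pmpio_assignment(r, nprocs=8, nfiles=2))
# Ranks 0 and 4 have turn 0, so they open their files first; each
# remaining rank waits for the "baton" from its predecessor in the group.
```

Because nfiles is fixed ahead of time, the scheme gets concurrency equal to the file count regardless of how many processors the job runs on, which is why it avoids both the serial bottleneck and the file-per-processor explosion.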
So the size and shape, and even the existence, of data from processor to processor in these applications is highly variable, and the I/O patterns that result are very difficult to stuff through a collective I/O interface. Poor man's parallel basically allows us to sidestep that issue and still provide a lot of flexibility and fluidity in the I/O patterns from processor to processor.

Brock: So say I had a UCD mesh and I wanted to use HDF5's parallel I/O. Is it documented such that I could write an HDF5 file that Silo could read? Can you describe the format for me?

Mark: Right. There is really no documentation on that. And let me make sure I understand you. The paraphrase would be: I'm an application, I want to write direct HDF5, but I want the resulting file to "look like" a Silo file, look like an HDF5 file that Silo would have produced. Is that what you're saying?

Brock: Yeah, so then I could put it into VisIt or something and not have to make my own custom driver.

Mark: Right. I would not ever really recommend that. The biggest reason is that the HDF5 driver, when it was developed for Silo, was developed with one very specific intention in mind, and I'm not sure really why this route was taken at the time. Strangely enough, I was somewhat involved in it but didn't really understand what was going on in detail at the time. The HDF5 driver is in fact designed to hide a lot of what's going on in HDF5 from the Silo client, and as a result, when you look at an HDF5 file that's produced by Silo, it looks much more foreign and unnative to HDF5 than it really ought to. So trying to do the reverse and sort of create an HDF5 file that looks like Silo is very difficult. I would not even attempt doing that.

Brock: So moving toward something like performance: what kind of performance are we seeing for Silo?
Is there a significant overhead for going through all these levels of abstraction, through HDF5, through Silo, or are we getting pretty close to native disk speed?

Mark: Well, that's a good question that I have not looked at in many years. That was my first task when I worked at Livermore, to look at this issue in detail. And at the time, with the other drivers we were using, we were seeing, depending on the kinds of calls we were making, whether it's a DBPutUcdmesh, which is a call to write an unstructured cell data mesh, or a DBPutQuadvar, which is a call to write a variable on a structured mesh, depending on the kinds of operations going on in Silo, we could see not such great performance at the underlying I/O system, in the actual bandwidth to disk. In addition to that, Livermore was heavily into Crays at the time, but we wouldn't do our viz on the Crays; we'd do our viz on other systems. So you had this problem where you had Cray floating-point data but you're trying to read it on a different CPU architecture, and so there were numeric conversion operations going on to actually support that, and those numeric conversions were a rate-limiting factor in doing the I/O. You'd read the data off disk, do the numeric conversion, and finally hand it back to the client. HDF5, in fact, performs that job very rapidly.
It's it's generally hd of five is generally better than Uh disk ios so the numeric conversions that it does if it does any are are Faster and therefore hidden to the disk disk bandwidth And so recently I haven't looked at this issue I I I certainly should but none of my customers have been complaining about it And so I haven't actually looked at the issue of performance in detail But I've I've got to believe that for a lot of the large data that we read and write the overhead of silo is relatively small Now and now there's there's this flip side of that If if the overhead to silo is not small if that argument's not true then you push silo out of the way And uh try to read and write to a lower level interface. Maybe direct to hd f5 Well, the effect of that is now you've taken all the data that you were making Accessible to silo and all the tools that support silo and said i'm not going to do that anymore So you pay a price If your data is less shareable and there may be less tools available to you or the tools that you want Simply won't be available to you So so the real win here is abstraction then and the performance is probably Generally assumed is kind of what i'm hearing you say right certainly I'm ashamed to say this but certainly it is assumed In in the latter years of silo development I I am certain that there are issues in there that we could improve substantially But but performance is assumed and the reason it's assumed is people are generally happy with what they can do with silo data Well, they're happy enough with what they can do with silo data once they've produced it That the cost of using silo is not generally a concern I don't know if that answer makes sense. No, that makes perfect sense actually Let me go off in a slightly other direction here How big can a silo file be is there anything restrictive in your meta format or are you really tied more by By the underlying drivers I am totally tied by the underlying drivers. 
So if you've configured hdf5 correctly You get linux large file support wherever you are Well, then that file can be you know as big as you can store, you know 64 bit, you know 2 to the 64 bytes Which of course the the biggest silo files I've heard of have been on the order of maybe 10 to 15 gigabytes on the hdf5 driver The pdb driver Well, let's see Hank had some experience with pdb driver on the ranger machine where he went up above 2 gigabytes I don't know what that was he might have gone up to 4 gigabytes per file on some Studies he did on ranger scalability studies back in june of this year But but generally, you know, there there isn't going to be a limit as long as the underlying file system supports it So let me ask this then you mentioned earlier that you know, you can kind of have a file containing files So to speak that you can have directories and cd and read and write into them and things like that When you talk about these big files that you're talking about is that you know one mesh or is it generally You know a couple of meshes and the you know the application dumped all of their output there or How does that go? 
Well, that would typically be, conceptually, a single mesh. If you're running on 65,000 CPUs, you may have one application that's got a mesh decomposed over 65,000 CPUs, and you write the bits and pieces of that mesh to different Silo files. So within one, say, 10-gigabyte Silo file, you may decide to write a thousand mesh pieces. Typically the way you do that is you create a subdirectory within the Silo file for each mesh piece, and all the data associated with that piece goes into that subdirectory. This is part of using Silo in what I call this poor man's parallel I/O sort of way.

Now, you can store completely unrelated and different meshes in a Silo file; we do that pretty routinely for test data all the time. But typically, when you're looking at these really, really big data sets, you're looking at one monolithic mesh that has been decomposed into pieces, and each Silo file is managing some of the pieces of that large mesh. The entire thing on 65,000 CPUs might be broken up into 256 different Silo files.

So we've talked about these files and these stored meshes and scientific data. What's the strangest thing you've ever seen somebody put inside a Silo file?

You know, I was wondering that over the weekend, and I really don't have a good answer. I can't think of the quote-unquote strangest thing that's been done with it. I do know that people can look at the Silo interface and make the following mistake: they can read and write objects like meshes and variables using DBPutUcdvar and DBGetUcdvar, or they can use the lower-level functions, DBRead and DBWrite, which basically read and write raw data arrays. And it's not necessarily clear to the novice Silo user that if you use those functions, your data is not shareable.
There's nothing that VisIt can do with raw arrays written that way; there's nothing that any other high-level tool can really do with them, because all it sees is arbitrary bytes. So maybe the strangest thing is this issue of understanding the level of abstraction at which your data is characterized, and its impact on your ability to use other tools on it. I think a lot of people stumble over that issue when they're first trying to approach file formats like Silo or Exodus or NetCDF or whatever.

So you've mentioned throughout this interview a couple of things that you'd like to work on, like redoing the top layer. But what are some things that are coming out where you have had the time and the cycles: new features that people asked for? What's coming out in the new versions of Silo?

Well, the biggest, newest thing is probably support for adaptive mesh refinement data in a Silo file, that is, structured AMR. What's really necessary there is to understand that different pieces of mesh are nested within other pieces of mesh, and to understand roughly the logical extents of those nestings and the refinement ratios. For example, in a typical 2D case you might take a parent mesh and decompose it into four quads of child mesh pieces; that would be a refinement ratio of 2 in x and 2 in y. So what's been added to Silo is support for that knowledge about AMR data.

And here's a twist to this. Once that capability gets added to the Silo library itself, and it was recently added, the 4.7 release about six months ago included it, people start exploiting that capability, and then they turn around and use VisIt, and some of them get frustrated because, well, it's not working. And the problem is, okay, well, now we have a separate set of issues that we need to go to
VisIt with, and work on the VisIt Silo plugin to support that new feature in Silo. So I often get this double whammy of making the enhancement to Silo and then going into VisIt and having to make the corresponding enhancement to the Silo plugin so that it will read the new feature.

Well, there you go. Or people get frustrated that it's not there yet.

But anyhow, the newest thing is adaptive mesh refinement. I've also recently added some real simple things, bug fixes, for what we call species data in Silo. Beyond that, I would say there's nothing profound planned: bug-fix releases, and enhancements, even major ones, will fall into my lap and I will do them as requests are made by the various funding agencies, typically my programmatic users who need that work done. This large rearchitecture effort I mentioned, I've pestered my group leader about it on several occasions, and we both agreed that, yes, it would be great to do, but no one's asking for it right now and we have way more other work to do.

Okay, so what's the license that Silo is under, and who can use it?

Let's see: anybody can use it. The actual license, gosh, I should know this. I've looked at it at one point, but I don't have my notes handy, and I'd have to go look through an email history to really say.

Is it terrible that software engineers basically have to have a little bit of lawyer in them?

It is. Jeez, I'm just ashamed that I don't actually know the answer to that question. In fact, I think you probably let me know that you were going to ask that, and I neglected to look that information up. It's either a BSD license or a GNU one, and if it's not GNU,
I think my group leader wants us to re-license it under GNU sometime soon. I know that there are a couple of users of Silo, for example Tech-X, that would like to see a GNU license for it. But anyhow, I don't know the specific one, and I think we are moving to a GNU license, a GPL.

So you can just download it free from the Silo website?

You can, yes. And that website is, oh, I'm sorry, it's silo.llnl.gov.

Okay, and I notice there's a nice PDF manual there, too, on how to use it, right?

Right. And here's one other thing: often I learn to use software by seeing examples of its use. There's a page there on examples, and it shows a number of different mesh types, pictures of them from VisIt, then a little bit of verbiage on what's being illustrated by the example. Then you can download the source code for the example and the data files, which are often useful in trying to understand really what's there and why it's structured the way it is.

Mark, thanks a lot for your time. We're going to go ahead and wrap up here. This show will be up on our website at rce-cast.com, where there's an iTunes subscribe link and an RSS feed, and you can always download the MP3 files there. Also, there's a nomination form: if anybody wants to hear any topics that we're not aware of, go ahead and fill out the form and let us know about it, and we will check it out soon.

Well, thanks so much for having me. I enjoyed it.

Thank you, Mark. Thanks a lot.

Thank you.

And again, Jeff and I will both be at SC this year. I believe it's in Portland, right?

Oh man, sunny Portland, Oregon, again. Yes, I need plane tickets. Okay, yeah, we'll both be in Portland. Hopefully we'll see all of you there.