Welcome to another edition of RCE. This is Brock Palen. We've been away for a while; between the holidays and being gone for SC we were all really busy, but we're back. Took us a while, but we're here.

Yeah, we're back. It's the end of January, but it's all good. We were looking for an extra special set of guests.

Yes, right. And that's Jeff Squyres, again, from Cisco Systems and Open MPI. Jeff's a very famous dude. Also, as always, you can find us online at rce-cast.com. You can find old shows there, because iTunes only shows the last few; there's also an RSS feed and an iTunes podcast feed over on the side, so you can subscribe with your tool of choice.

Yep, and the mandatory mention of my own blog, which I think is linked from rce-cast.com. I felt good because I got linked from insideHPC this week, so that was pretty cool. It always makes the corporate overlords happy. But let's talk about what we've got today, because this is an RCE-cast first.

Yes, we have three guests today, and they're all from different places, but they're all working on the same collaborative project; that's why they're in so many different places. It's ITAPS, and they'll tell us what that means. We have Mark Shephard, we have Tim Tautges, and we have Carl Ollivier-Gooch. So guys, welcome to the show. Why don't you take a moment and introduce yourselves?

Okay, this is Mark Shephard. I am the Johnson Professor of Engineering and director of the Scientific Computation Research Center at Rensselaer Polytechnic Institute. I'm involved with the ITAPS project primarily in terms of providing tools for supporting adaptive simulation technologies.

Hi guys, I'm Tim Tautges. I'm a computational scientist in the Mathematics and Computer Science Division at Argonne National Lab, but with a twist: I'm also an adjunct professor of engineering physics at the University of Wisconsin-Madison, and I telecommute from here.

I'm Carl Ollivier-Gooch. I'm a professor in mechanical engineering at the University of British Columbia.

Okay, so ITAPS is actually an acronym. Could one of you explain what that is?

So the acronym, for this incarnation of ITAPS, stands for Interoperable Tools for Advanced Petascale Simulations. I say "this incarnation" because we had a previous incarnation in our first five years, which was TSTT, for Terascale Simulation Tools and Technologies.

So presumably the next one will have to be an acronym with an X in it.

I think the only question about the next one is: is it exascale, or do we say, well, we've got to stop just using these numbers, and we'll just say "extreme"? That's going to be the coming discussion.

That's a really good idea. That'll get us the X, and we can use it for exascale and then keep using it after that without having to keep changing the acronym.

Right. You just go to the Cyrillic alphabet and then nobody will know what you mean anyway. Perfect.

That was actually a question I'd thought of: we have petascale machines, so what's next? But okay, we'll get to that. So what is the goal of ITAPS? It's a collaboration, but what is the actual end goal of the project?

Well, the vision is that people who write application software in scientific computing generally know a lot about the physics, and generally would prefer not to have to know about how you deal with
unstructured mesh databases, or how you deal with making sure that the metadata about your problem gets stored, saved, and reloaded properly, and so on. So we're trying to provide that infrastructure, and provide it in a way that doesn't tie you down to a particular implementation, which makes it so that you can switch to something else that happens to do better for your application, for example, and also do it in a way that ensures none of the infrastructure pieces become the bottleneck for your massively parallel simulation.

Yeah. Whenever you're dealing with mesh data, meshes and algorithms that work with meshes, usually it's just a matter of time before something breaks, either because the algorithm you were trying was implemented with a specific application in mind, or because your particular data is different from what the algorithm was originally designed for. So being able to trade out different algorithms, maybe ones that do similar things but in different ways, improves the chances that you'll actually be able to string together the algorithms you need to get the job done. That interoperability is really key to getting these things done.

Right, just to follow up on Tim's comment: also, because we're doing implementations of these things within groups that have been doing them for many, many years, we have implementations that don't make some of the same mistakes that others would make if they were doing it themselves, because we made that mistake twenty years ago, or whatever, learned from it, and have capabilities that account for those things.

That sounds pretty much like the same rationale that we have for MPI, actually, since that is kind of my purview. So let me clarify: are we talking about middleware here? Are these functions that applications will call from C or Fortran or whatever?

So ITAPS deals at multiple levels. Primarily, we advertise ourselves first as a common API for accessing a mesh and the data associated with the mesh. But there are multiple uses of that API, so I'll go over one use and the other guys can talk about the others. Each of us may have an implementation of that API, so that an application wanting to talk to mesh data can talk through that API, that functional interface, and use either my implementation of it or somebody else's implementation of it.

Another attractive feature here is that if you've got some piece of code that works perfectly well, and you've got your own mesh database that you're perfectly happy with, you don't want to have to rewrite all of that to use somebody's algorithm for, say, mesh adaptation. That's probably the last thing you want to do. Having the standard API makes it so that you can look up in the documentation for, say, the adaptation service you want to use:
oh, it requires the following twenty functions from this API. You implement those functions on top of your mesh database and away you go, without having to change your data structures or change the way the algorithm interacts with them.

Okay, so if ITAPS is "you implement these functions so that everybody can access data and share things interoperably," how is this different from using a common data storage library like HDF or netCDF or XDMF, whose developers we've had on this show before?

I think it's largely a matter of the difference between live data in memory and data stored on disk. HDF5 will give you a lovely hierarchical storage structure for whatever data you want to stuff into it, and that's great. But when you want to use that data live in your simulation, the fact that it came out of an HDF5 file is really not relevant at that stage.

A simplistic way of looking at the difference is that the ITAPS APIs answer questions you might ask about the data. If you want to know something, say a relationship, it answers the question about the relationship, as opposed to just dumping a bunch of data on you.

Yeah, and like Carl said, we're definitely geared towards providing access to data that's in memory. So in the context of a parallel simulation, if an application needs to get information about a mesh, especially on petascale or exascale systems, there's no option to go down to disk, get it, and pull it back; you just don't have time for that. So it's really an in-memory API.

Before, you mentioned that somebody wanting to implement a particular new piece of functionality only had to look up, say, twenty API functions and implement those. Is the ITAPS API logically divided into groups? How does that work?

Yeah, we actually have four different APIs for different classifications of data. One of the APIs is called iMesh; it's for accessing finite element or unstructured mesh data. Another one is iGeom, for accessing geometry; think CAD data. Another one is for accessing field data, which is data that has semantics and operations attached to it. And finally the fourth one is relations, which allows you to relate things between any of those other three interfaces. We separate them into those chunks to allow an application to pick up any one of the chunks without having to take the other three.

So those are the interfaces, and then ITAPS also operates at higher levels. We describe services that interact with those interfaces, and above that we think about integrated, higher-level applications that may use multiple services along with those interfaces.

And all of these services are in-memory kinds of services? You're not necessarily talking about stable storage or things like that?

Yeah, typically they're in memory, although many of the implementations of, say, the mesh interface will have the ability to save out to a file, stored in HDF5 or some other format like that.

Okay, so they're all in memory.
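[Editor's note: a minimal sketch of the "implement the required functions on top of your own mesh database" idea described above. The struct and function below are hypothetical; they only imitate the general calling style of the ITAPS C interfaces, where results come back through pointer arguments along with an integer error code, and they are not actual iMesh signatures.]

    #include <stdlib.h>

    /* Your existing, application-specific mesh database (hypothetical). */
    typedef struct {
        int     num_vertices;
        double *coords;      /* x,y,z triples, length 3*num_vertices */
    } MyMeshDB;

    /* One of the handful of query functions an ITAPS-style service might
     * require: copy out vertex coordinates.  Output goes through pointer
     * arguments plus an error code, mimicking the interface style. */
    int mymesh_getVtxCoords(const MyMeshDB *db, double **coords_out, int *n_out)
    {
        if (!db || !coords_out || !n_out)
            return 1;                                /* failure */
        *coords_out = malloc(3 * db->num_vertices * sizeof(double));
        if (!*coords_out)
            return 1;
        for (int i = 0; i < 3 * db->num_vertices; ++i)
            (*coords_out)[i] = db->coords[i];
        *n_out = db->num_vertices;
        return 0;                                    /* success */
    }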
I have a question here, then. Is the distinction "in memory" in the sense that these are libraries you all link together into one application, or is this actually for handing off between multiple applications so that the data stays in memory, because it takes too long to go to disk on these huge systems? Like, I'd run my simulation, it would stop and wait for a connection from a post-processing or visualization program; or would it leave the data in a shared memory region? How is this hand-off actually going back and forth, and at what level does it happen?

What you've just described is an interesting application that no one has tried to write yet. In fact, at this point we're thinking in terms of a simulation code that links in a number of ITAPS libraries, does its thing, writes the data that needs to be written, and is done. In principle there's nothing that prevents one, I suppose, from having that sort of shared-memory handoff from one piece of code to another, but I don't think anyone's tried to write anything like that.

Yeah, I'll give an example of that type of thing. As was explained earlier, we provide the ability to support a lot of operations carried out on the mesh that the people who write the physics codes don't want to deal with. One of those is making the mesh better during the analysis process, so you get better solutions. In that interaction there's a mesh to start off with; the analysis program does some analysis of it; we look at those results, change the mesh, and give it back to the analysis. Now, initially they don't want to touch their analysis program, so we use the ITAPS APIs basically to construct their files and do it that way. But then they discover that when we're doing that on parallel computers, all this file I/O is a real killer. So then they say, well, we're interested in doing more, and we say, well, this was all done through an API.
So instead of having this API just be a file writer and reader, tell us a little bit about your data structures and we'll use the API to extract information from your data structures and put information back into them, so that the different applications, their analysis application and our meshing services, can run one after the other and hand off the data without having to go through files. We have some experience with that; it's limited, but that is one of the goals of doing these things.

Yeah, in fact we're even going in that direction in terms of multiphysics coupled analysis. If you look at those types of applications, you quickly realize that the data backplane connecting the various types of physics being coupled together is usually the mesh and the data associated with the mesh, at least for simulations that solve systems of PDEs. Once you realize that, you realize that your in-memory mesh representation, and the field data that lives on the mesh, is actually the data backplane that connects the various physics together. So we're packaging some services that involve transferring solutions between meshes, which is some of the stuff I do with my implementation, and we're implementing coupled nuclear reactor simulation codes that talk through that and pass the data through the mesh like that.

I'm kind of fascinated with the idea of the composability of what you're talking about here, having multiple providers supply, say, different algorithms for similar or the same operations. How do you actually implement that? Are we talking plugins, or components, or are they just different libraries that use the same API, so I just use the standard -L kind of linking stuff in my makefile? How does that go? Carl?

Okay, I'll take that. As far as hooking yourself up to one of the existing implementations, it's pretty much a plug-and-play thing at the makefile level, and we've got some makefile include stuff defined to make that even easier from the application point of view. That's pretty much true at the implementation level. For services, at this point our focus is more on providing services that use the API, so that people can take advantage of those services. Those tend not to have multiple implementations, because we'd rather spend our effort on creating more services rather than more copies of the same service. Eventually, would it be nice to have, say, multiple mesh adaptation schemes available? Sure, because all of those have their own strengths and weaknesses. But we haven't gotten to that point yet in terms of breadth of support.
So I have a question about these common interfaces. Well, first off, since you're talking about makefiles: do you actually have libraries and code that I can download from your website and use? You've already defined a lot of this?

Okay, yep.

So has anybody, say a vendor whose system doesn't support certain functionality, that runs a stripped-down OS, made an ITAPS-matching API for their system, so that someone can move their code over there and use ITAPS even though you don't actually support that system?

No. These APIs and implementations are still pretty high level in terms of simulation codes, so compared to MPI, for example, there isn't as close a link with vendors that are selling systems. Although, very recently, there is a commercial software vendor that's been involved, who themselves market and develop libraries that provide mesh and mesh-based services, and Mark's connected with that outfit, so there's some connection there.

Right. Backing up a little: as Tim said, the things we're doing are at a higher level and also targeted to a much newer community, if you will. It's higher level from the standpoint that, for example, our parallel mesh uses MPI; it's on top of MPI. And on the point Tim was making about the commercial vendor: they're in the CAE world, the computer-aided engineering world. Historically, software there was packaged in a specific way which was not very amenable to expansion or plug-and-play types of interactions. But in recent years some smaller, newer companies have started to develop more component-based types of tools in that area. This particular company we're interacting with, Simmetrix by name, actually has very similar sets of tools and services to some of the ones we're developing, plus some additional ones. So they're creating an ITAPS interface to their tools and technologies, so that the set of services and tools that are available, and the applications that can be supported, at this point primarily in the DOE, can be supported not only with ITAPS-developed services but also with the services those guys have been developing.

I think if you're looking for analogies, the one that probably matches us best, and that your listeners will be familiar with, is in the area of solvers for systems of equations. A number of open-source solver projects have been developed, particularly in DOE, over the last decade, and they really transformed how people do large-scale parallel computing; those are PETSc and Trilinos and a few others. We're shooting for the same thing with mesh and mesh-related data. We're about ten years behind the solver people, but we think we're going in a fairly similar direction.

Okay, so then a question about the model that applications write to. In a PETSc-like kind of way, do these rocket scientists and whatnot still think in a serial model,
where they call a magic ITAPS function that is parallel behind the scenes and scales out really nicely, and they just get their answer much faster and can use much bigger data sets than before? Or are they still programming in a parallel model, and these are just better implementations of single-machine kinds of algorithms?

I think the answer may be multiple choice, in the sense that it depends on what level you're writing things at. If you've got somebody who's writing a service that uses our parallel mesh API, they're going to have to be aware that they're programming in a parallel environment, because the API, one way to characterize it, is written so that the things you want to do are all possible, but not all of the things you want to do already exist in the API. So you're going to have to keep track of, "oh yeah, here's some data that's near the boundary between two processors in my parallel simulation and I have to do something special." Now, at a higher level than that, if you say, well, I want to combine three different mesh improvement services to make my mesh even better, then at that level, if those services are written properly, you don't have to be parallel-aware at all. You can just say "improve my mesh," and that will work regardless of whether you're serial or parallel.

So how much external software do you rely on? Do you use any third-party libraries to provide some of your functionality, or is 99% of what you've written yours?

I think it depends on the particular implementation or service you're talking about. One thing that's common to all the implementations of these APIs, at least, is that we all use an Autotools-based build process. Beyond that, the individual implementations do various things. For example, the MOAB library that I develop saves its data to an HDF5-based file and also has readers for netCDF-based data, and it's connected to tools like VTK and some of the higher-level visualization packages that are based on VTK. As for our parallel structures, I think all of our parallel implementations are using MPI as well.

Yeah, there's a certain level beyond which you can't get away from using outside libraries, of course, because if you don't, then you're reinventing wheels that other people were better qualified to build anyway.

Some of the more interesting things that are really just now coming to the fore are how you connect the data that you get on a mesh with some of the other peer libraries that these large simulation codes tend to use. For example, how do we take the geometric view of the data that we get from a mesh and translate that to the vector and matrix view of the data that a solver is going to communicate in terms of? And when you've answered that question, usually the next one that comes up is how you're going to do preconditioning of your system, which often ties back to the geometry and the mesh part of the problem. So it's starting to get more interesting, seeing some of these higher-level connections get made.

All right, let me go in a slightly different direction here. The P in ITAPS is about petascale, but didn't we already smash the petascale barrier? Why does that matter anymore? Isn't all the current cool work going on in exascale?
I kind of think of this in terms of blazing a path through the wilderness. The codes that have been there, done that with petascale and are going towards exascale, they're the ones blazing that path. But we're about interoperability, and when you're talking about interoperability, you're implying that there are multiple pieces, multiple tools, that you have a choice of. So maybe we're not the first ones to blaze the path, but we're the next ones up, trying to make it more of an everyday occurrence to be using these kinds of systems. And when you're talking about interoperability, the problem has not been solved at the petascale yet; some would argue not even at the terascale. So for codes that want to get quickly up to speed on terascale- or petascale-class machines, that's really where our sweet spot is, or one of them. And then we also have, among us, people who are in the part of the community that's blazing a path towards the exascale as well.

Right, just to follow up a little bit more on that. First, the funny part of it: it's petascale because at the time this proposal was written, for this five-year project of which we're in the fifth year, the target was petascale. So that was the title. Our next title will most likely be "extreme scale"; that way we don't have to worry about whether it's exa or not. It'll cover whatever comes, and whatever comes after that.

More realistically, as Tim was indicating, there are the trailblazers with respect to raw performance. There's the fixation on the Top 500, which is LINPACK. Well, relative to the types of operations we have to carry out and the type of data we're working with, doing LINPACK stuff is a triviality; we're dealing with far more complex, unstructured data. By the same token, we are involved with pushing these things. The tools we're developing have been run on machines such as the 288,000-core JUGENE machine, in conjunction with CFD codes that have scaled on that machine. So we are involved with running on the very largest machines. At the same time, though, as Tim emphasized, what we will insist on is doing things that are interoperable and that deal with the challenges of interoperability, and we'll get to those sizes as we proceed, as opposed to saying, well, let's just do something big, however limited it is, and then come back and add more capability. We insist on the capability, and we'll go to the sizes as we proceed.

So if I could paraphrase your answers there: it's more about bringing the petascale to those who need it, not just the seven machines that happen to be over the one-petaflop mark, or however many there are. It's more about making that available; the commoditization of petascale, I guess, is one way of putting it.

Well, yes, but it's not so much about the machines;
it's the applications. It's doing the work that people really need, as opposed to, for example, solving a problem on a completely uniform grid on a cube, which is going to be much easier to get all the floating-point performance on because you have beautiful structure in all directions.

So, Jeff, one way to push Tim's analogy perhaps a little too far: you've got the people who are blazing the trail through the wilderness, and we're working with those people as appropriate. But after somebody blazes a trail through the wilderness, hey, this is North America; somebody's got to come through and build the superhighway with the nice rest stops. That's us.

Another thing to keep in mind is that the codes that are running at scale are the physics and analysis codes, but oftentimes the data they're working on has been generated or processed or interacted with on smaller systems. So these large calculations are often going to break down at the stage where you're trying to come up with a complex geometric model to simulate, and that's not done at the petascale; that's done interactively at a high-powered workstation. So the tools we're developing span the spectrum, from the workstation-class codes you're using to build your model all the way up to the petascale and exascale codes that are actually consuming that mesh. In a sense, that's why we justify being ten years behind the solver people: we have a much wider scope.

So if we're actually using ITAPS, what is it written in?

The API is a C API that's been designed so that it will be easily compatible with Fortran, and that has some implications for how you pass arguments, of course. The existing implementations underneath those wrappers are all written in C++.
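[Editor's note: a rough illustration of the argument-passing style being described, where results and error codes come back through pointer arguments and string lengths are passed as trailing integers so the same C entry points can also be called from Fortran. The function name and exact parameter list below are only an approximation of the flavor of the ITAPS C interfaces, not a signature taken from the iMesh specification.]

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical function in the iMesh calling style: nothing is returned
     * by value; the error code comes back through a pointer argument, and
     * every string argument has a matching length parameter at the end of
     * the argument list.  That trailing-length convention is what lets the
     * same C entry point be called directly from Fortran. */
    static void example_Mesh_load(const char *filename, const char *options,
                                  int *err, int filename_len, int options_len)
    {
        (void)options;
        (void)options_len;
        /* A real implementation would open and parse the mesh file here. */
        *err = (filename_len > 0 && filename[0] != '\0') ? 0 : 1;
    }

    int main(void)
    {
        const char *file = "mesh.h5m";
        int err;
        /* Note the string lengths passed explicitly as the final arguments. */
        example_Mesh_load(file, "", &err, (int)strlen(file), 0);
        printf("load %s\n", err == 0 ? "succeeded" : "failed");
        return 0;
    }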
So, before we started recording we were chatting a little bit, getting everybody's names down and so on, and you mentioned that all three of you were on a conference call earlier today. What is the ongoing work you do in ITAPS? Are you still designing new APIs, or are you collaborating on semantics, or working together on software implementations?

We're in the process of working some wrinkles out of one of the higher-level APIs, the iRel relations interface. We're also moving towards, I think at the end of February, a 1.2 release, which is basically a clear specification of what the interface is, as well as ready-to-go implementations of the interface and some services that work with it. Another thing we're working on at this point is what we think is a reasonably complete design for a field interface, so that you can interact with your solution field data through a common API. That's at the point where those of us working mostly on it have decided that, well, we could talk about this more if we wanted to, but we're fundamentally unlikely to learn much more about which parts of it are good and which parts are bad until we start writing some code. So we're putting together a sample implementation or two, and we'll learn from that and figure out how to improve that API before we release it on the wider world. In other words, the answer to your question is yes, we're doing all of the above, but the different parts are at different levels of maturity.

So I've got a user question. How hard is it to use ITAPS if you have an existing application? Can you take just bits, or do you have to take the whole thing? Is it really hard to drop it in on top of your code, so that you really need to build something from scratch when you use it?

That's one of the things we really care about, and the answer is that you can dip a toe in the water and interact with iMesh in particular at a very crude level, or with just a simple interaction with the library. For example, one of the things we've done several times: you have an analysis code, and you want to pick up a mesh through the iMesh API, because that might let you grab mesh data that was generated somewhere else, in a format you don't currently interface with. In the code, that looks like instantiating the API, telling it to read a file with a given file name, and then calling an API function to get the mesh vertices and the mesh elements of the particular type you're interested in. That's a really easy way to get into using it, basically as a file importer. Then there are more complicated ways of interacting with the API; after doing that little toe-in-the-water exercise and verifying that, yes, you can work with it, you might want to use it to do some more interesting things.
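[Editor's note: a sketch of the "toe in the water" file-importer usage just described: create an iMesh instance, load a mesh file, and query what it contains. The function names (iMesh_newMesh, iMesh_getRootSet, iMesh_load, iMesh_getNumOfType, iMesh_dtor) are from the iMesh C interface, but the argument lists shown here are reconstructed from memory and should be checked against the iMesh.h header shipped with whichever implementation you link against; the file name is just a placeholder.]

    #include <stdio.h>
    #include <string.h>
    #include "iMesh.h"   /* header provided by the iMesh implementation */

    int main(int argc, char **argv)
    {
        const char *filename = (argc > 1) ? argv[1] : "mesh.h5m";
        iMesh_Instance mesh;
        iBase_EntitySetHandle root;
        int err, n_verts, n_regions;

        /* Instantiate the interface and load a mesh file through it. */
        iMesh_newMesh("", &mesh, &err, 0);
        iMesh_getRootSet(mesh, &root, &err);
        iMesh_load(mesh, root, filename, "", &err, (int)strlen(filename), 0);
        if (err != iBase_SUCCESS) {
            fprintf(stderr, "failed to load %s\n", filename);
            return 1;
        }

        /* Query the mesh: how many vertices and how many 3D elements? */
        iMesh_getNumOfType(mesh, root, iBase_VERTEX, &n_verts, &err);
        iMesh_getNumOfType(mesh, root, iBase_REGION, &n_regions, &err);
        printf("%d vertices, %d regions\n", n_verts, n_regions);

        iMesh_dtor(mesh, &err);   /* destroy the instance */
        return 0;
    }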
So a natural follow-on question to that: who is using ITAPS? What users are there that you're aware of, and, perhaps even more interesting, are there people out there who aren't using ITAPS who should be?

There are a couple of different interesting applications. At Argonne we're putting some of our reactor analysis codes on top of the iMesh API, and on top of the MOAB component which implements it. There's a lot of interest in modeling and simulation of nuclear reactors, because frankly it's very, very difficult to build experiments on anything these days, and on reactors it's even more difficult. So if we can simulate them, it's not only cheaper but actually tractable to model their behavior. And then a wildly different application that has come up is modeling the dynamics of glaciers, of land-based ice as it flows over the bed. That's actually quite an important problem in the climate modeling community, because the behavior of land-based glaciers over the next hundred years has the potential to cause meters of sea-level rise. They're accessing the mesh through the iMesh interface and using a mesh-based representation of the underlying bed, whose topography can be quite varied. Those are a couple of applications I'm involved with.

Applications that our ITAPS tools are being used with include fusion plasma modeling, modeling of electromagnetic linear accelerators, active flow control applications, and a two-phase flow application. And in answer to your question, are there people out there that should be using it but aren't? Of course there are; there are tons of them. But it's going to take a little bit more maturity for us to be able to easily get more of them, a large number of them, hooked on using these tools, and that's what we're working towards.

One thing I forgot to mention, too; I'd like to reinforce what Mark said. When you first start thinking about interacting with mesh and mesh data, it's fairly intuitive, and a lot of codes decide, well, I'll just write that data structure myself, and then they go and do it. But as you get into some of the trickier aspects, like how you deal with a mesh that's tens of millions or hundreds of millions of elements, how you adapt the mesh, and how you deal with a data structure that can be changing like that, those kinds of problems can become a black hole of implementation. There are lots of groups that have dealt with that in very challenging environments, and many of those groups are involved in the ITAPS collaboration. So basically anybody who's thinking they need to implement their own mesh data structure should really think about exploring some of the ITAPS technologies to provide that level of service.

So who all is participating in this? Each of you is from a different facility, with different backgrounds and different use cases. How many different people are involved who have given their input on what sort of functionality should be available?

I'm trying to think how many individuals there are. In terms of organizations, we've got people from Lawrence Livermore, from Sandia, from Argonne, from Brookhaven, from Oak Ridge; so five of the DOE labs. From universities, we've got RPI, we've got Stony Brook, we've got Wisconsin, we've got UBC, and the University of Colorado. And now Mark and Tim can tell me who I've forgotten.

We also have a software house that is providing us some input as well; right, Simmetrix. And PNNL, Pacific Northwest.

Yeah, so six national labs and four or five universities.
It's a pretty large collaboration. So, just out of curiosity, do you guys share the code, or do you each have your own software repositories, with the collaboration mostly on the APIs and not necessarily on the implementations?

Well, all of us on this call came in with our own mesh database implementations that were pretty mature, and so we've implemented the APIs on top of the mesh databases that we already had. It's fair to say that the service software that's in existence at this point, anyway, is also largely of the "oh, we've got this service, we'll make it work with the API" kind. So I think it's more a collaboration in the sense of sharing ideas and making sure we end up with an approach that works for a variety of different ways of doing things, rather than a collaboration in the extreme-programming sense of sitting down with somebody else and writing code; that would be at the far end of the scale.

That said, we do have a common repository for the header files that define the interface, as well as for unit tests. There's a single implementation of a unit test, or compliance test, that's run against all the implementations. By and large, except for the commercial software vendor, most of the other efforts are open source at some level, and you can get access to the source code, primarily from the individual institutions' software repositories; but that's all linked from the ITAPS website, and the unit test material is accessible from there as well.

So let me ask you a pure curiosity question that I ask a lot of our guests: what do you use for software version control, and why?

We use Subversion for version control. As to why, I think there are probably those among us who feel fairly strongly about it having to be Subversion and not Git or Mercurial, while others probably don't care all that much.

Yeah, for myself, my research group switched from CVS to SVN a number of years ago because we needed the ability to access the repository over the wire from other machines. Since then we've basically stuck with Subversion because of inertia; there hasn't been, to my mind, a really compelling reason to switch to something else, so we've never done it. It does the things we need.

I imagine you've encountered the equivalent of the holy wars over what kind of repository a given thing should live in.

Yes, we do.

I'm asking purely out of curiosity, because I enjoy the diversity of answers we get to that question. I of course have my own opinions, but that's not what I'm asking here.

Yeah, but I will say, just yesterday in fact, during a team exercise, one of the guys who works for me told me that in the time it took to check out one component he needed, which is in a Git-based repository, he was able to build the three different implementations of the three different interfaces that we work on. So I'll leave it at that; that will surely inspire some fire from some of our listeners.

Yeah, the good news is you guys will get the hate mail on that, not us. That's right, because we don't know who it was that was speaking; just forward it to this guy over here. Yes.

So, that said, I won't tell you who it was that told me that. How's that?
So, that said, what's the website and contact information for people who are interested in ITAPS, so they can send you flame messages?

Uh, Brock, what's your email?

All right, let's rephrase the question: how does somebody get involved in ITAPS? How do they find out about your APIs and the documentation and the software and all that kind of stuff?

Yeah, so you can get to us by basically browsing over to the website, www.itaps.org; that's itaps.org. And that'll have the coordinates for how to send us email as well.

Mark, Tim, Carl, thanks again for your time. This has been great. The show will be up soon, and I will send it to you so you can send it wherever you need. So thank you very much for your time.

Appreciate your time, gentlemen. Good work. Bye now.