Welcome to another edition of RCE. I am your host, Brock Palen. You can find our website at www.rce-cast.com, where you can subscribe, find the RSS feed, and submit requests for other shows. I have with me my co-host, Jeff Squyres, from Cisco and the Open MPI project. Good morning, Brock. Good morning, Jeff. Today we have with us two people who work with HDF5. I'm not exactly sure what it does, or whether you'd call it a file or a file format, but I have with me Mike Folk. Hi. And Quincy. Quincy, I think Brock was being a little too polite there; he wasn't entirely sure how to pronounce your last name. Could you say that for us? Sure: Quincy Koziol. Okay, thanks a lot, guys, for taking some time with us. So quickly, where are you guys located? We're located in Champaign, Illinois, at the Research Park that's part of the University of Illinois. So are you guys affiliated with NCSA down there also, or not? Not officially anymore. We still do work with them; we have a contract with them. We started out as a group at NCSA, and we were there for 18 years, and then we spun off as a non-profit company whose mission is to sustain and support HDF technologies. Okay, before we go further, though: can you guys give us a quick rundown on what HDF is?
Yeah, so HDF is actually several things. We talk about the HDF suite of technologies, and by that we mean a file format, then a software library that allows one to access data in the format, and then various tools, some of which we developed and some by others. There are actually two HDFs. The original HDF was developed in 1987 and implemented at NCSA with a variety of visualization tools. It was always open source, and it kind of caught on and went through several generations until the fourth generation, which we now call HDF4. By that time it was widely used all over the world, but in particular the NASA Earth Observing System was using it, and that was sort of our bread-and-butter project. Then, by that time, we'd been in existence almost ten years, and we realized there were a lot of things we could do better. The Accelerated Strategic Computing Initiative project came along, which was out of the DOE weapons labs, and they came to us looking for a standard format that was scalable and could handle the kinds of data they were dealing with. We said, well, we're ready to start over, so we invented HDF5, which took part of the ideas we had from the original HDF, but also a number of ideas that came out of the labs and other users, lessons learned, and so forth. HDF5 was developed over a couple of years and originally released in 1998, and I think we really think of that as the flagship HDF at this point. HDF4 is still very widely used by those projects that adopted it in the 90s, in particular the NASA Earth Observing System, so it's still there, but the real growth is with HDF5, and we really think it's the product everybody ought to be using. For those who haven't used HDF before, what exactly does HDF stand for? Oh, Hierarchical Data Format. So is it actually a freeform format, or is it "this is the way the file is laid out"?
It's a very freeform format. The original idea was that we didn't know what kinds of data people would want to exchange and share. The real motivation at NCSA, which is the National Center for Supercomputing Applications, was that we had a variety of different computing platforms that scientists were using, and they wanted to be able to move their data seamlessly from one platform to another in an architecture-independent way. So they might have some data on a big-endian machine, some other data on a Cray, some other data on a PC or a Mac, and so forth. So we developed this format that describes how the data is laid out, and then we developed software that can convert from one architecture to another. It's really designed to be an architecture-independent format. At the same time, we were developing it for scientists, so scalability, the ability to deal with complexity, and the ability to run efficiently on a lot of different platforms were all very important principles we were trying to address. That's where that came from. The hierarchical aspect of it came from the idea that we wanted to be able to store a variety of objects and to be able to organize them in ways that were meaningful to an application. The original idea was that it would be a hierarchy; in fact, an HDF file is not necessarily a strict hierarchy, it can be any graph structure. But we did have this idea that you'll have what we call a group, which would be like a node in a graph; links, which would be like the branches in the graph; and then objects, which could be nodes, or what we call datasets, and that's where the actual data go. So you can think of an HDF file as a container that has a user-defined internal structure for organizing objects. The datasets themselves support a rich variety of data types.
I'm sorry, a dataset itself is essentially an array: one-dimensional, two-dimensional, n-dimensional. And you can store pretty much anything inside a dataset you want to. For example, if you wanted to encapsulate a PDF file in HDF, you could store it as a byte stream and associate attributes with it that explain that it's a PDF file, and so forth. If you wanted to store a finite element mesh, you could store that, say, as a big regular grid or an array. So the idea was that you wanted a really rich set of data types, and then the ability to describe those data types and describe the aggregate, or array, that stored them. Okay, so let me ask some clarifying questions here. It sounds like, as part of the suite of tools and products you've got there, at least part of it is an API that you would compile and link into your application, and you call functions that store and retrieve data out of HDF files. Is that an accurate characterization? Yeah, sure, definitely APIs. The core API is written in C; we've got language bindings built on top of that, and then tools that layer on top of that as well. Okay. And at least part of the target audience here is the scientist or engineer who doesn't really want to screw around with the bits and bytes and how this stuff is stored, but who wants to be able to write data from a SPARC, for example, and then read it on an x86 machine, where the endianness is different. So the files are self-describing, in that when I write an integer value seven, there's enough data stored behind the scenes that if I write it on a SPARC and read it on an x86 machine, I'll actually get the value seven on that x86 machine, even though the internal data representation between those two machines is different. Is that also accurate?
Yes, that's definitely true. We designed the format and the interfaces so that people could exchange their data, have the binary representations translated easily, and move between the different platforms. Okay, and I would imagine that since these files are self-describing, that kind of leads into all the other tools in your suite: you can say, oh, I can tell that what's coming up here is 27 integers, and I can perhaps visualize that in an interesting way. Is that kind of what your tools do, too? Yeah, I think so. We have at least a good tool for browsing, and some visualization aspects of HDF5, although we tend to leave that to third-party tools like MATLAB and IDL. But the general goal is that at least the low-level structure is self-describing enough that people can browse into the files and go: oh look, it's an array of 27 integers. But what does that mean to them? That's maybe one of the more difficult things; occasionally they have to come up with a data model that says, oh, that dataset is supposed to be interpreted like this. Right, right, okay. And you're also hitting on another point I would assume is key: the actual file formats are well documented, so that third parties can write tools. You mentioned MATLAB right there, so they could just read the file and act on it within their MATLAB scripts. That's the goal. It's not always the case that the files themselves are well described, so one of the things we work a lot on with our users is to make sure that, for example, within a group of users, they all agree on how they're going to organize their data in HDF5. So yeah, that is a challenge: the syntax is self-describing, but the semantics are challenging and need a good community data model to really make them meaningful to that community. I see what you're saying. So even if you have a file that's nicely self-described, if I'm expecting to read in 27 integers but there are actually three doubles there, then we've got a problem, right? Yeah, definitely. Okay. Another random question that occurred to me while I was listening to your descriptions: we heard similar things from the Hadoop guys. They say, we're very good at unstructured data; we put stuff in files, and it can be retrieved any old way later. How are you comparable to Hadoop? Is that an entirely different thing? Am I talking apples and oranges, or are there similarities here? There are probably some similarities, but I haven't done an enormous amount of background on Hadoop. It seems to me like Hadoop comes a little more from the database model of things, these tables with records in them, or maybe it's a little more unstructured than that, kind of bags with key-value pairs. But we have a lot more array-ness to the data in our datasets. It's a particular structure that is more science- and engineering-oriented than Hadoop's kind of "we can put anything in here" thing. Does that make any sense? Yeah, yeah. Before the show I did some screwing around with HDF5, and it was quite neat. There were little commands that came with the core product called h5ls and h5dump, and it really is just like there's the root group, slash, and then you can have groups. Mike was describing them as nodes on a graph, but for me, being a sysadmin, it looked very much like the Unix file system. I had a group that I could put datasets in; the datasets looked like files and the group looked like a folder. But it was all internal, and it included extra information, like whether they were doubles and how many dimensions they had. So if I wrote a three-dimensional array, it remained a three-dimensional array with very little work. And it was very easy to write.
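Brock's h5ls exploration can be reproduced in a few lines. Here is a minimal sketch using the third-party h5py package (not from the episode; the file path, group, and dataset names are made up for illustration, and the example assumes h5py and NumPy are installed):

```python
# Sketch of the "filesystem-like" layout Brock describes: a root group "/",
# a group acting like a folder, and a dataset acting like a file that
# remembers its own shape and element type.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "demo.h5")

with h5py.File(path, "w") as f:                       # "/" is the root group
    grp = f.create_group("simulation")                # a group, like a folder
    data = np.arange(24, dtype=">i4").reshape(2, 3, 4)  # big-endian 32-bit ints
    grp.create_dataset("temperature", data=data)      # a dataset, like a file

with h5py.File(path, "r") as f:
    dset = f["/simulation/temperature"]
    # Shape and type were stored with the data: it is still a 3-D integer
    # array, and the declared byte order travels with the file, so a reader
    # on a different architecture still gets the right values.
    print(dset.shape, dset.dtype, int(dset[0, 0, 0]))  # (2, 3, 4) >i4 0
```

This is roughly what `h5ls -r demo.h5` would then show as a group containing a dataset with its dimensions and type.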
I was quite happy with how easy it was. Yep, you've discovered the secret handshake. We tell people we don't lead off with "this is a file system," but functionally, inside, it's a file system. It even had that from Unix there. Yeah. Interesting. What was the motivation for that kind of design? Well, it was well known, and it was a nice hierarchical design. It mapped well to some of the products and projects we were working on, and there was a lot of experience and knowledge to draw from in the file system design community. We could specialize it enough for the science and engineering zone that it seemed to work pretty well. Okay. In terms of what you're supporting: you said the core library is written in C, but you have bindings for others. I noticed the core package had Fortran and C++ set up. Is there anything else out there? There was something I ran across called PyTables. Do you support some of these other languages? Do you support COBOL? That's dangerous. No, but we have an Ada wrapper that we know about. Well, there you go. We have Java bindings that we produce in-house, and some prototype .NET wrappers that are very experimental right now. We do support the Java stuff, but not the .NET. For Python, one of the wrappers out there is the PyTables one, and another one is called h5py. There are Ruby bindings and Perl bindings; lots of people have come along and said, gosh, that's cool, let's go write a binding for it, because the HDF Group guys haven't done it yet. But we support basically our C, C++, Fortran, and Java for now. So, a question about your bindings: are they kind of a one-to-one mapping to your C bindings, or do you try to have them take advantage of language features? How do you manage that? We try to make them as native as possible. The ones that are third-party, we don't have any control over; you get what you get. For our cases, the C++ is reasonably object-oriented and class-oriented, the Fortran is very Fortran-y, and Java has a good object model behind it as well. So we try to be very native to the particular language and not just write wrappers. Sometimes we do end up with that; our .NET wrappers are just that, plain old wrappers, and they don't have any object model to them. But the best thing to do is generally to write the object model so that native programmers feel at home in the interface. Yeah, and I should add that there's been a lot of interest from the community in high-level language access to HDF. The h5py project is an example of that, and we actually have a little wiki that we set up to discuss how we might create certain APIs that would bind better with the sort of high-level view of HDF. That's a direction we're trying to move in, because we feel that if we can make it really look natural to a Perl or Ruby or Python programmer, it'll make HDF accessible to a much wider audience of potential users. Okay, and you keep referring to community here. Is HDF open source? Yes, it is. What license do you guys use? A BSD type of license. Okay. So is this truly an open source, collaborative kind of development, or are you guys doing 90% of the development and accepting patches? How do you work the project and work with the community? This has been a challenging place for us. It's a funny, specialized product, right? We'd like to have more community involvement, but at the same time, I think over the course of our 12 years or so on HDF5, we've accepted a total of three patches from users. So we have lots of people using it, but not many people really want to walk in and get involved in 250,000 lines of library source code. We're definitely out there, but people have to kind of pick up the ball.
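The "native feel" described above is easy to see in the Python bindings: an HDF5 group behaves like a dictionary and a dataset slices like a NumPy array. A small sketch using the third-party h5py package (the file and dataset names are hypothetical, not from the episode):

```python
# Sketch: dict-style group access and NumPy-style partial reads in h5py.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "native.h5")

with h5py.File(path, "w") as f:
    # Intermediate groups ("grid") are created automatically.
    f.create_dataset("grid/pressure", data=np.arange(100.0).reshape(10, 10))

with h5py.File(path, "r") as f:
    print("grid" in f)                  # groups support membership tests: True
    print(list(f["grid"].keys()))       # ['pressure']
    # Slicing reads only the requested subset from disk, not the whole array.
    sub = f["grid/pressure"][2:4, 0:3]
    print(sub.tolist())                 # [[20.0, 21.0, 22.0], [30.0, 31.0, 32.0]]
```

The design choice is the one Quincy describes: expose the library through the idioms each language's programmers already know, rather than a one-to-one copy of the C calls.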
So right now we do 99% of the development. We seek funding through contracts, grants, and other mechanisms, and then basically take those in the directions that the customers from the grants and contracts want, or we have some internal funding that we try to apply in directions we think are best for the total HDF5 community. You guys do support contracts as well? Yes, we do. In fact, that's a lot of where our bread and butter comes from. So we had the VisIt visualization folks on the show a while ago, and they support many different file formats. In my exploration, in just one week of using HDF5, the freeform-ness was something that was actually really, really nice. In VisIt they'll say: okay, we support FLASH, which we've had on this show, and we support all these other different formats. And it turns out those files are all directly HDF5 files, but like you said, internally the actual data model for each project could be completely different. Yeah, that's one of the directions our support contracts sometimes work in. We say: oh, this community really wants to use HDF5 for storing their data, but they need to come up with standards for how they're going to structure their groups and datasets, or what attributes they're going to apply to them. Then it comes out as, say, a "FLASH HDF5 file," and whatever visualization tool reads it has to be cognizant of that data model in order to do something meaningful with the data it supplies. Yeah. One other thing from my quick week of screwing around with it: the I/O I could do from HDF5 was actually higher-performing than the stuff I could write low-level, so I was quite happy with that. Yeah, that's the magic of middleware. Lots of wonders of middleware, right?
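The convention problem above, where a "FLASH HDF5 file" is really just an HDF5 file plus an agreed layout, is usually handled with attributes. A sketch of the idea using the third-party h5py package; the attribute names here (`format`, `format_version`, `units`) are hypothetical, invented for illustration, and not the actual FLASH convention:

```python
# Sketch: a community data model layered on plain HDF5. The syntax (arrays,
# groups) is self-describing; the semantics come from agreed-upon attributes
# that a tool checks before interpreting the data.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "convention.h5")

with h5py.File(path, "w") as f:
    f.attrs["format"] = "example-sim"       # file-level tag a reader can test
    f.attrs["format_version"] = 1
    d = f.create_dataset("density", data=np.ones((4, 4)))
    d.attrs["units"] = "kg/m^3"             # meaning the raw bytes can't carry

with h5py.File(path, "r") as f:
    # A convention-aware tool verifies the tag, then knows how to read the rest.
    print(f.attrs["format"], f["density"].attrs["units"])
```

Without the agreed attribute names, a browser like h5dump would still show "a 4x4 array of doubles" but could not say what it represents, which is exactly the syntax-versus-semantics gap discussed earlier.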
I mean, that's exactly the goal: being able to encapsulate specific types of expertise into stuff that you, the developer, don't have to worry about. I'm an MPI guy, and we do that kind of stuff for networking, and it sounds like these guys do that kind of stuff for file storage. You've hit on what I think is really one of our strongest selling points, and I think your point is right: it's because it's middleware. We try to do things that other folks would otherwise have to do on their own; we do it for them. An example of this: right now we're just starting a project in the bioinformatics area, and this is a community that in the last two or three years has seen an increase in the amount and complexity of data, particularly the amount, of several orders of magnitude, and their existing file formats and technologies for managing that data are just breaking down under the weight. HDF already has the ability to deal with a lot of the problems that they would otherwise have to create solutions for themselves. So in this project we're looking for ways to adapt HDF to handle that specific kind of data, and it's actually going very quickly and very well. So it's a good example of that. Well, cool. Let me ask something along my natural bias; we've already mentioned I'm an MPI guy. How does HDF5 interact with MPI, if at all, to use parallel I/O streams? How does that stuff work?
Right, we do, and we've been using MPI almost since before it was certain that MPI would survive. That was one of the baseline questions: should we try this crazy MPI thing? I don't know if it's going to stick around. But yeah, we do have support for collective and independent I/O within HDF5 for doing large data writes to the datasets, and we do a lot of coordination with some of the Department of Energy labs and other large national labs to try to get our performance, and all the other aspects like ease of use, really optimized for their users. Okay, what does that mean? Are you plugged into the back of MPI file read and write, or do you actually use MPI technology yourself so that you can parallelize what HDF does? Are you on the front end or the back end, I guess, is what I'm asking. We're definitely on the front end. We want to leverage MPI to the maximum extent we can. We don't actually write any networking code, and we intentionally don't do any parallel file system I/O directly. Occasionally, when there's a badly performing MPI implementation on some particular HPC platform, we have to get in there and start tweaking with MPI info objects and other little hints to the file system to get things working better. But by and large we try to say: use MPI, make MPI faster and better, and we'll take advantage of that. Well, you have tutorials on your website; that got me going really, really quickly, and it goes everywhere from the very basic "open a file" on up. Do most people learn that way, or is there a full manual? Is that all available, or do you give training? We do give training. I would say most people start out by working with the tutorials, and we welcome any feedback we can get on them. Usually that's the way: a user tries a few things out, they're looking for a solution of some sort, and then, when they really get into the nitty-gritty of the problem, it may be a good time for them to contact us, and we can come out and do specific training, or consulting, as Quincy was talking about earlier. We really like to work with people who haven't yet decided how, specifically, they're going to use HDF, so we help them come up with a data model. For example, last week Quincy and a couple of others were out at a Navy project where they have four different groups, and they're all doing different kinds of things, but their data needs to be integrated. Some are doing simulations, some are actually doing measurements, and others are doing other things, like connecting HDF data with some database somewhere. By working with us, as people who know HDF, and trying to pull out of them what their use cases are, what their needs are, and what their performance issues are, we're able, as a team, to come up with the best kind of HDF solution to whatever their problem is. So that's kind of the way it works. I would guess, and we have no way of really measuring this, that 90% of the HDF users we never hear from; we have no idea what they're doing. But that's open source. We have exactly the same problem in the Open MPI project. I get asked a lot: how many people are using Open MPI? It's really hard to say. I can tell you how many downloads there have been; that's about it. What's the largest store of data you're aware of? What's the most number of terabytes somebody has sitting around in HDF format? Probably the Earth Observing System; they've got about four petabytes online. They've collected more than that, but they don't keep it all online. There may be larger ones. Well, the earthquake people do terabyte simulations at a time, don't they?
Yeah, so the largest single object we know about is at the Southern California Earthquake Center; they have terabyte-sized simulations, so they'll have one image that's a terabyte in size. And that's actually a very good example, I think, of the flexibility and power of HDF in terms of bringing solutions to an existing problem. They came to us maybe three or four years ago, and they had these images where one image consisted of 900 separate files. Conceptually, it was just one great big three-dimensional image, and if a scientist ever wanted to access some part of that image, they'd have to figure out: okay, which file contains the data I'm interested in? What is the data type of that data? How can I go in and pull out exactly the data I want from the various files, put it back together, convert it to the endianness of my machine, and then maybe do something with it? With HDF, we were able to overlay HDF on top of that. One of the things we didn't mention earlier is that an HDF file is not necessarily a single file; a dataset within an HDF file can actually have pieces that live in a lot of other files. So we would create a dataset that sits on top of these 900 files and looks to the application, through HDF, just like one big array, and they could say: okay, I want this subset from this array, and I want it in 32-bit float, and HDF would do all the work. So it's that kind of thing where HDF's scalability, and its ability to provide a view that's meaningful to an application, is really valuable. I also want to mention, since we're talking about big things, that the electron microscopy community is working with us now, and it's probably broadening to biomedical imaging generally, but they're looking toward images that are going to be 1.6 terabytes in the next year or two. So that's where size really matters. Cool. Let me ask a little bit of a technical question here. You were talking about how the different back ends are transparent to the user; they're just doing HDF reads and writes and so on. How do you actually write on the back end? Do you just use the normal POSIX read and write system calls, or is there more magic underneath? I mean, do you have special kernel drivers, or are you basically just using whatever file system is beneath you? Well, we try to reside over a portable layer underneath us, so most of the time it's actually a POSIX file system under us, and occasionally it's MPI or something else. We really try to abstract out the hardware; to the extent we can, we leverage it, but we try not to depend on it for everything. There are some new POSIX I/O extensions I'd really like to take advantage of, and if more people would standardize those and use them, then we could actually start using them in HDF5. But a lot of our speed comes either through MPI or through some aggressive caching and other special technology inside the library. So when you were saying the file sizes were like 1.6 terabytes, that's actually far from the limit that HDF will support, right? Oh yeah. The file format is currently written to use 64-bit offsets and everything, but the format actually has knobs in there; we can tweak it out to, I don't remember the exact upper limit, I think it was 8 to the 256th power, whatever that is, file size, without changing the file format significantly. So exabytes now, and exabytes of exabytes some day. Okay, so size should not be an issue. So what is the relationship between HDF5 and netCDF? I know they were two separate things, but I see now that netCDF supports HDF5. Can you clarify those two projects?
Yeah, the two projects started at approximately the same time, in the late 80s. NetCDF actually came originally from NASA, from something called CDF, the Common Data Format, and netCDF was an attempt to make it platform-independent; that's what the "net" meant, a network common data format. Their approach was a little bit different from ours, and we were really working in parallel universes, in a sense. They came directly from the atmospheric sciences community, and they developed a format with the same idea: that the format would be something the user wouldn't think about or worry about, but rather they would work through a library and an API, and then there would be tools. Their users, and it was really sponsored by the University Corporation for Atmospheric Research, which is an NSF-sponsored consortium, wanted to think about their data in the way they generally thought about it, which was: you have some sort of coordinate space, and you define the coordinate variables, or the dimensions, within that coordinate space. For example, you might have latitude, longitude, and altitude, and then all of your data, which they would call variables and which we would just call datasets or arrays, would be defined within that context. So netCDF had a fairly well-defined view of the data, the variables, and the aggregates. At the same time, with HDF, what we were doing was listening to scientists from all sorts of different places, and they didn't want to be locked into one particular view. Some said, I just want to throw a thousand images into a file and make an animation; others, visualization people, might say, I want to make a mesh of some sort, or something like that.
So we didn't define coordinate spaces or anything like that within HDF. But not too surprisingly, the two formats had a lot in common: they both tried to deal with things that were fairly large, with the ability to go in and subset, to take regions out of things, to store lots of images, and so forth. Over the years we talked, and at one point, actually in the early 90s, we talked about maybe merging our two formats, but they were just so different that we really couldn't make that happen; we had different sets of users. Then, toward the end of the 90s and the early 2000s, there was our emphasis, particularly with HDF5, on scalability, our ability to handle very large numbers of things and very large objects, the built-in ability to compress objects, the filters and the kinds of things Quincy has talked about, and the ability to interface with MPI-IO, which was actually a big thing. We got back together and talked about the next generation of netCDF sticking with the very similar data model they had always used, but having HDF5 under the hood. It's the same thing we were talking about earlier, that I think Brock mentioned: the idea of middleware. It does things for you so you don't have to do them yourself. That would mean the new version of netCDF, if it sat on top of HDF5, would still look to the users like netCDF, but underneath they could say, gee, I'd like to compress my data, or I'd like to use collective I/O on this particular parallel system. So we got together and wrote a proposal to NASA, and they funded what we called a merge of netCDF and HDF5.
It's not really a merge, but in the process, the netCDF folks took advantage of some things that were outside of their data model, for example the group structure, the idea that you can create hierarchies and other kinds of graph structures, though the way they use groups is somewhat limited. And that's actually, in my opinion, one of the real beauties of netCDF: they have a very clean, simple model. We were talking earlier about how a tool might open an HDF file and have no idea what was in there, because things were just a jumble of stuff; well, from the netCDF point of view, things have to adhere to that model fairly strictly. So if your data matches that model, makes sense with that model, then netCDF is really the kind of interface one should be using, and from our perspective, it's just another data model that sits on top of HDF5. Does that explain the differences and the similarities? Yeah, that definitely covers it: they're trying to do something very specific to their group, and you were very generic, so they are now using HDF5 to provide all that lower level. Mm-hmm. Let me ask a question in a slightly different direction. What's your favorite, or the most unexpected, or kind of the most fun use you've seen of HDF5? Maybe not even necessarily for scientific code, something you would not have anticipated, perhaps. Yeah, we were talking about this earlier, and, you know, that's a great question, and I'm sure I'm going to come up with better answers tomorrow and the next day, because it's just a really neat thing to think about. The first thing that came to mind for me was that we were hearing from this group in New Zealand, with the funny name, Weta, and we finally asked them what they were doing with HDF, and they said, well, we're making this movie based on Lord of the Rings, and we're using HDF to simulate fog in the movie. We thought that was, wow. Was that the Hunt for Gollum movie? No, it was the very first one, I think; at least they were sending us bug reports as they were working on the first parts of things. Oh, it was the actual movie, not the fan-created movie. No, no. Very good. Yeah. So that's one of our favorites. Quincy, do you have some? Well, I mentioned this earlier when we were talking, but some guy in Sweden, I think it was, was sending us questions from the other end of the spectrum, like: how can I make my files smaller and make the software just a little bit lighter weight? We were trying to help him out, and eventually he says, well, see, I've got this cell phone application, and I'm really trying to store some pictures and images that I'm recording off my cell phone, using your software, on the phone. And, well, that was a bit weird, too. Did he end up doing it? You answer questions and fix bugs, and eventually he's like, okay. Wow. Well, open source, there you go. Yeah. All right, as a developer myself, I have a standard question that I have to ask all other software projects: what version control software do you guys use? We use Subversion, which we transitioned to off CVS three years ago or so, because CVS was really very limiting. And actually, I wanted to pass along a thanks to you and the other parts of the Open MPI team, because we just blatantly stole your release methodology document, and I'm reworking how we're going to do our release methodology based on some of the ideas you have there. So there's some sharing in the open source community as well. Oh, that's great.
I'm glad that you found it useful. Again, using other people's projects for things that were completely unexpected, that's great; that's what this stuff is for. Yeah. Okay, guys, so we're going to go ahead and wrap it up now. Where is the best place for people to get information on HDF, and where can they download it? Do you have a website or a mailing list? How can they get hold of you? Yeah, hdfgroup.org, that's the website. And then we have a help desk with an email address; that's just help at hdfgroup.org. And then there's a forum, the HDF forum. Quincy, how do we sign up? Well, I would go to the website and find the user support link there, and there are links to get onto the mailing list, as well as to download stuff and get over to the wiki and that kind of thing. But that's the easiest entry point. Okay, and is the mailing list the best way to get involved with the project? Like, if you did have a quick patch for a bug you found, just go through the mailing list, if you want to be patch number four for HDF5? Yes. I want to say a little bit about that.
I mean, when we spun off from the university, we had no idea what it meant to start a company, especially a non-profit company, and so forth. But one of our real strong desires, and it's really a part of our mission, is to make our company strong enough and stable enough that we have the extra resources to engage the wider community more effectively than we've been able to do. We've kind of had our heads down, doing our job, just trying to survive, but as we become more fiscally sound, we do want to try to encourage greater community involvement in, well, everything, but patches, for example, and those sorts of things. Those take a fair amount of resources just to manage, particularly since many of our users really need this product to be very robust and very heavily tested. So if we do take a patch, it has to be from somebody who's willing to do a lot of testing and adhere to a lot of requirements. I just wanted to throw that in. Yeah, there's a reason why we've only accepted three patches. Understood. Yeah, and I'm sorry, I wasn't trying to be snide there. No, no. The difficulty of taking a patch from the wild is that "works for me" is not always the best justification, but sometimes you do get unexpected little gems in the rough. Oh, absolutely. That's great stuff. Okay, well, thanks a lot, guys, for taking some time out with us. If you have any questions, just please let us know. This will be ready soon. Okay, great. Thank you very much. Yeah, thanks for your time, guys. Thank you. No problem. Bye.