Welcome back. Our next talk is by Robert McLay from the Texas Advanced Computing Center. He'll be talking to us about Lmod. Okay, so I've been giving these talks for several years now, and I know that many of you already know about Lmod. And it's the end of the day for most of you, although it's the start of my day, so I will try to stand up and shout and say, "Hey, here's something new you might not know," and I'll try not to put you to sleep, even though I know it's hard after a long day of Zooming. Oh, I put that in the wrong window. Yes. So, I'm really happy that one of my colleagues designed this nice logo for Lmod. I'm going to talk a little bit about the features and history, some of the newer things that are available in Lmod, and maybe some future work.

As you may well know, Lmod reads TCL and Lua module files directly. It supports what I call the one-name rule, which means you can only load one GCC or one MPI or whatever; you can't have two copies of GCC loaded at the same time, no matter what you do. It supports a software hierarchy: you have to load a compiler before you see MPI, and then load an MPI before you can see the MPI-dependent applications and libraries. But the hierarchy is not required, and I'm sure that about half the sites that use EasyBuild run it without a hierarchy; I know Kenneth's site doesn't use one, and I know lots of other sites that do. There is also a spider cache to make `module avail` very quick. There's support for properties, so things like whether a module is built for GPUs or not, or whether it's beta or deprecated. There's semantic versioning, which means Lmod knows that 5.6 is older than 5.10. There's the notion of family support, which means you can say, "Hey, this is a compiler," so you only get one compiler or one MPI stack. And that gets a little funky with the MPI,
I mean, with the Intel compiler and GCC, but it all works. You can track what modules are being loaded, and you can do it in various ways. There are many other cool new features. There's `ml`, for people like me who can't type. There are collections, which means you can have a group of modules that form a collection and load them as a group. For sites, there are things known as hooks, which allow you to change, modify, or extend the behavior of Lmod: you can go in and say, "Hey, here's what happens during a load," or what happens when you're doing `module avail`. For sites using the software hierarchy, the extended default is really useful; I'll explain that later. And there's the nag message. I keep trying to remind people about the nag message: in one file you can easily say "this module is going away at such and such a time," and only the people who load that module get the nag message. If you ever have to deprecate software, nag messages work really well.

This is not new, but it's relatively new: depends_on is a nice feature. Suppose modules X and Y both depend on module A. If you do a purge and then load X, it will load A, and when you unload X, it will unload A. But if you load both X and Y and unload only X, it keeps A; and if you load both X and Y and unload both X and Y, then it also unloads A. But if, on the command line, you do `module load A X Y` and then unload X and Y, it keeps A, because you asked for A explicitly. This way you don't get those surprises, and it's a way to minimize the number of modules that are being kept around. Okay, so that's a nice feature.

Then there are the large dynamic cache files for large module trees. At Texas we have a whole bunch of bio containers, things like TopHat and all those other tools, and while there's a large group of bio users on our system, they are not the majority. So we have those modules behind a group module which says, "Hey, I'm going to want to know about bio modules."
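Stepping back to depends_on for a moment: the behavior described above is essentially reference counting. Here is a minimal sketch of those semantics in Python; this is a toy model of the behavior, not Lmod's actual implementation.

```python
class ModuleTable:
    """Toy reference-count model of Lmod's depends_on() semantics.

    An explicit `module load` counts as one reference, and every
    dependent load adds one more; a module stays loaded while its
    count is positive.
    """

    def __init__(self, deps):
        self.deps = deps      # module name -> tuple of depends_on() modules
        self.count = {}       # module name -> reference count

    def load(self, *names):
        for name in names:
            self.count[name] = self.count.get(name, 0) + 1
            self.load(*self.deps.get(name, ()))

    def unload(self, *names):
        for name in names:
            self.count[name] -= 1
            self.unload(*self.deps.get(name, ()))

    def loaded(self):
        return {m for m, n in self.count.items() if n > 0}


# X and Y both depend on A:
t = ModuleTable({"X": ("A",), "Y": ("A",)})
t.load("X", "Y")
t.unload("X")
assert "A" in t.loaded()       # Y still needs A
t.unload("Y")
assert "A" not in t.loaded()   # nobody needs A anymore
```

The "load A explicitly and it survives" case falls out of the same counting: an explicit load of A contributes a reference that no unload of X or Y removes.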
So when I do `module avail`, I'll see the giant list; but everybody else, who doesn't know or doesn't want to know about bio modules, only sees the bio-container module. You have to say "I want to see these modules": you load the bio-container module, and then all these modules become available to you. And there's support for that. It just means you check the mode right here (I don't know if my mouse shows up): if `mode()` is not equal to "spider", then it prepends the path to the bio packages, so when Lmod does a `module spider`, it doesn't see them. And then you have this script here, which describes where the bio packages are, and you have a cache directory there, and this prepend of the lmodrc file adds to the script path, so Lmod can know about this cache, but only when that module is loaded.

Lmod 8 has been around for a while. It has a few new features that are useful. It wasn't the big change that went from Lmod 6 to Lmod 7. It adds the extended default, which I'll explain. The TCL interpreter is now optionally embedded with Lmod; you can turn that off. There is a new way of telling a module that it has dependent packages: for things like Python and R you can have a long list of extensions, you might have hundreds of them. So you say this particular version of Python provides NumPy and SciPy, as well as maybe a whole list of others. And this allows Lmod to show those modules. Especially since, I know there are sites where NumPy is a separate module, but it's also provided by many Python versions. So this way, when you do a `module avail` or `module spider`, you can find those things.

So, extended default. I blame Intel for this. Intel has these long version numbers. In fact, I have put my foot down and said that our Intel version numbers cannot have four places in them.
So instead of being, you know, Intel 18.4.0.4.123... yeah. Anyway, even three digits is way too long, three separate groups. With extended default turned on, doing a `module load intel/18` will load the highest version, or the marked default, among the 18.x versions. It's useful when you want to load, say, intel/17 and don't remember which is the latest version of 17, while some intel/19-something-or-other is the default.

As I said, Lmod can now embed the TCL interpreter. This speeds up `avail` and `load` when there are many `.version` or `.modulerc` files in the directory tree. As you may know, the `.version` file and the `.modulerc` file are written in TCL, but Lmod needs to convert them to Lua. What it used to do was fire up the TCL interpreter to convert those `.version` or `.modulerc` files into Lua, so you were forking and exec'ing the TCL interpreter many times while walking the tree. This is now faster because the interpreter is built in: it's a library call rather than a system call. But it's still faster to use the Lua versions of those files over the TCL ones.

And here are extensions. As I said, you tell users which modules have extensions. So Python has NumPy and SciPy and maybe a bunch of others. In the modulefile you say that this one provides this particular version of NumPy and this particular version of SciPy. So you can use `module spider` to find extensions, and you can use `module avail`, if you've got it turned on, to see the list of available extensions. I'm going to skip over these examples.

This is new. I had a user complain, or say, "Hey, look, I don't understand why this module is the default. Why did the default change? Does it make sense?" And what they had was multiple ways, there are multiple ways, to have a marked default.
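Before getting into marked defaults: the extended-default matching described above can be sketched like this. This is a simplified model assuming plain version sorting with no marked defaults; real Lmod also honors the default-marking machinery discussed below.

```python
def extended_default(requested, available):
    """Resolve a partial version like 'intel/18' to the best full match.

    Sketch only: pick every available version whose leading fields match
    the requested prefix, then take the numerically highest one.
    """
    name, _, part = requested.partition("/")

    def key(v):
        # Compare numeric fields as integers so 5.6 < 5.10 sorts correctly.
        return [int(x) for x in v.split(".") if x.isdigit()]

    candidates = [v for n, v in (a.split("/", 1) for a in available)
                  if n == name and (v == part or v.startswith(part + "."))]
    if not candidates:
        raise KeyError(requested)
    return name + "/" + max(candidates, key=key)


avail = ["intel/17.0.4", "intel/18.0.2", "intel/18.0.4.123", "intel/19.1.0"]
assert extended_default("intel/18", avail) == "intel/18.0.4.123"
```

So `intel/18` picks the highest 18.x even though intel/19.1.0 exists, which is exactly the "I don't remember which 17 is latest" use case.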
So what I mean by a marked default is: in a module tree you create either a `.modulerc` file, a `.version` file, or a `default` symlink. So there are multiple ways to set up the default, and it was really hard to find the places where you could have multiple defaults. So I created a new command, which can be found in the Lmod directory; and by the way, that `$LMOD_DIR` environment variable is defined for all users. You can run the `check_module_tree_syntax` command. What it does is walk the entire tree in `MODULEPATH`: it loads every module and reports any syntax errors, so there's a syntax error in that module to look at, and you fix the errors. It also reports which modules have multiple marked defaults set. And I'd like to remind people of the precedence order: a `default` symlink has the highest priority, then the Lua version of the modulerc file, then the TCL version of the modulerc file, and last is a `.version` file. These all exist because, hey, I come from the world where I used a thing called Cmod a long time ago, and it uses default symlinks; then the TCL version of modules supported the `.version` file and the `.modulerc` file; and then I added the `.modulerc.lua` file. Note that this command does not check whether a system modulerc file sets defaults; those are handled independently.

So this is something that Maxim wanted to know about. Suppose a user does a `module load foo`, and there are two versions of it, say a version 2 and a version 3.2. Lmod must pick something, and suppose that the version 2 one is marked default. By the way, this stuff is all new as of 8.2, sorry, 8.4.20. It uses the loose-version idea from Python.
So it converts the version string into a padded number string. These are not exactly what Lmod produces, but essentially what they are: it uses the loose-version scheme, and mine pads up to nine places, but to fit it on the screen I've shortened it here. Anyway, it converts that version string into the padded string, and foo's version gets converted to that string the same way. Then, if it's a marked default, i.e., there's a `.version` file, a `.modulerc` file, or a `default` symlink in the directory tree, it replaces the first character with a caret (an up-arrow, or whatever you want to call it). So that marks it as a marked default. If the system RC file marks it as a default, it puts an "s" there. And if the user RC file marks it, because users can have their own, it changes that to a "u". And I've looked at this very carefully, and I have to look at it up every time, but in ASCII sorting order the asterisk appears before the numbers, and then comes the caret, followed by "s", followed by "u". So that's how Lmod sorts things, which means the user's choice for the default is highest, then the system modulerc file is next, and then the marked defaults. So then, when you say `module load foo`, Lmod has to pick something, and Lmod sorts the loose-version strings to find the highest. Well, that's actually not quite right: it looks at all of them and picks the one that's the highest, so it's not doing a full sort, but anyway, it finds the highest alphabetically. And then, to support this new feature, Lmod now copies the loose-version string into the module table in the user environment, and I'm going to explain what that is.
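The marker trick just described can be sketched in a few lines. The exact padding and encoding inside Lmod differ; the point is the ASCII ordering, where `*` (42) sorts before the digits (48 to 57), which sort before `^` (94), then `s` (115), then `u` (117).

```python
def weighted(version, kind="none"):
    """Sketch of a loose-version string with a precedence marker.

    Each numeric field is zero-padded to nine digits, so the first
    character is always a leading zero and can safely be replaced by
    a marker whose ASCII value encodes how the default was set.
    """
    padded = ".".join("%09d" % int(p) if p.isdigit() else p
                      for p in version.split("."))
    marker = {"none": "*", "marked": "^", "system": "s", "user": "u"}[kind]
    return marker + padded[1:]


# Padding makes plain versions compare correctly: 5.6 < 5.10.
assert weighted("5.6") < weighted("5.10")

# A marked default beats a plain higher version, because '^' > any digit.
best = max([weighted("3.2"), weighted("2.7", "marked")])
assert best.startswith("^")
```

Picking the winner is then just a single pass for the maximum string, which matches the "not a full sort" remark above.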
What all this does is provide a new function, default kind, which is available in your SitePackage.lua file. It returns the string "none" if there was no marked default, "marked" if it was done in the file system, "system" if it was done by the system modulerc file, and "user" if it was done by the user RC file; and it can do that by knowing where that stuff came from.

Okay, so let me show two things. Echo... sorry. So, the module table, with that highlight, yes. What happens is that the user environment gets filled with these weird strings, and you say, "What the hell's going on?" Well, that's the module table. It's a base64-encoded string, and I break it into blocks of 256 characters so I don't overrun any environment limits, and I have this size variable which says there are six of those blocks. But you can find out what it is by decoding it. And what it has is all the information that tells Lmod what's loaded. Yeah, there it is. So, for example, here's version 3.3 of MPICH, and that's converted into this long string; that's what it actually looks like. I think one of the strengths of Lmod is that I can store this really complicated data structure in the user environment just by converting it to a string, breaking it up into blocks, and base64-encoding it. That's one of the real strengths of Lmod: having a very rich environment to describe what's going on.

Okay. All right, I'm just about done with this. I want to say that for Lmod, I'd like to figure out a way to make module tracking forget after some amount of time, maybe a year. And Tmod 4 has this interesting feature of doing modules where you can set up version ranges.
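As an aside, you can peek at that encoded module table yourself. A sketch, assuming the `_ModuleTable_Sz_` and `_ModuleTable001_`-style environment variable names used by recent Lmod versions; check your own environment for the exact names.

```python
import base64
import os


def read_module_table(env=os.environ):
    """Reassemble Lmod's base64-encoded module table from the
    _ModuleTable001_ ... blocks, using _ModuleTable_Sz_ as the count."""
    n = int(env.get("_ModuleTable_Sz_", 0))
    blocks = (env["_ModuleTable%03d_" % i] for i in range(1, n + 1))
    return base64.b64decode("".join(blocks)).decode()


# Demo with a fake environment (the Lua payload here is illustrative):
lua = '_ModuleTable_={mT={mpich={fn="/apps/mpich/3.3",wV="^00000003.000000003"}}}'
enc = base64.b64encode(lua.encode()).decode()
env = {"_ModuleTable_Sz_": str((len(enc) + 255) // 256)}
for i in range(0, len(enc), 256):
    env["_ModuleTable%03d_" % (i // 256 + 1)] = enc[i:i + 256]
assert read_module_table(env) == lua
```

On a real system you would run this in a shell where modules are loaded and simply call `read_module_table()` with no arguments.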
I'm thinking about that kind of thing. And one of the things I've been thinking about is taking a page out of Kenneth's book and setting up a monthly discussion group; if anybody would be interested in joining that, send me some email. Okay. I'm happy to say that I have tracking on the Lmod documentation, and I can say it's read pretty much everywhere: somebody has read the documentation on every continent, except nobody in Greenland is using it; I've really got to work on that. But by cities, mainly the places where people read it the most are the US and Europe, and a little bit in the Far East. If I've zipped through this too much, please look at the Lmod documentation at Read the Docs, and if there's something that's not clear, send me a note saying, "I don't understand what you're telling me." All right, I'm going to quit here.

I'll just interrupt you there; it was suggested we have a break here, so we'll have a chance for people to ask questions about Lmod now, and then move on to the second half. So if anybody's got a question, can they raise their hand in Zoom and we'll allow you to ask it. I think Kenneth's about to ask a question.

You briefly talked about things you have in mind that you're still working on. Do you envision an Lmod 9 in the future, and, by extension of that, will you be able to stay below Lmod 10 before you retire?

I have no idea. I tend to have lots and lots of versions. In the past year there have been 19 versions of 8.3 and 20 versions of 8.4. I don't know, that's an interesting question. I'm about five years away from retirement, and I've been thinking about how I'm going to pass it on, so I'm also interested in talking about that at some point.

But there are no big changes that you think are still needed that would warrant a major version bump?
But people like you and Maxim always come up with stuff and say, "Hey, that would be great," or, "God, I really don't want to do this." I do feel sometimes that I am free labor to the world, supporting their Lmod or XALT questions.

Okay, good. I don't see any other questions popping up, Simon. No, I'm not seeing any more, so we'll hand it back to Robert for the XALT part of the talk.

Thank you. Okay. Okay, so XALT. I'm going to fly, and I'm going to talk about some things that I think are interesting. First of all, let me tell you what XALT is. The outline of this talk is: what XALT is, how it works, and then some of the interesting things that have come up recently. And this is sort of a precursor, which is kind of funny, because I am giving a talk at FOSDEM on XALT, but I've already recorded it, so I know what the talk is. So there are some things here, a few of the issues, that are going to come up in the FOSDEM talk. I want to talk about some of the memory allocation issues I've run into, some of the container issues, and then conclude.

XALT is there to help you understand what your users are doing: what programs and libraries your users are using, what the imports are from R, MATLAB, or Python and where they come from. One of the things we're interested in is the top programs, by core-hours, by counts, and by users. By core-hours I mean, obviously, MPI, where you've got a number of cores times time; you know, obviously you're using more of the system. By counts, I mean how many times the program is run. And by users, I mean this particular application is used by lots of users. I want to know all of those things. And one of the interesting things about XALT is it can tell you whether the program being run was built by the system, either by, you know, the stuff that we install,
Or installed by you with EasyBuild; or built by a user; or, what's interesting to know, built by one user while a different user is running it. It tracks things like whether the executable was implemented in C, C++, Fortran, or something else. You can track the number of MPI tasks or threading. There's some support for function tracking. But one of the things that's important to remember is that XALT is not a performance tool; it's a census taker. It says this got run and it took so much time, but it doesn't know whether it was efficient or not. XALT was, a decade ago, a US National Science Foundation project between Mark Fahey and myself. It was too useful a tool to drop, so the work has continued. It originally only tracked MPI executions, but the design goals are to be extremely lightweight and to know what libraries and applications are being run, and to track what functions are being used. These can be collected into a database for analysis, either the one I provide or one that you're willing to provide.

How does XALT work? It works in these five ways. There's an `ld` wrapper, and there's an ELF trick to track executions; both of those generate JSON records, which are then transported to a database, assuming you use what I provide, and then you can analyze the data using SQL calls or the routines provided to do that. The `ld` wrapper wraps the linker: it produces key-value pairs from the command line and captures the output from `ld`, and this data is collected into JSON format. It also allows you to instrument executables which are built statically, although on very many systems that's hard: a static build on a Cray is easy, but anywhere else I think it's difficult. Having the `ld` wrapper is helpful, but it's not required to use XALT; it just gives you a little less information.
ELF is the binary format for Linux and others, but mainly, you know, I only care about Linux. ELF has many, many hooks, and XALT uses two of those hooks to run code before and after `main`, even if the program is written in Fortran and there isn't really a `main` in the C sense. So here's an eye chart, but the point is that I have these two files here: `hello.c`, which is just a standard hello-world program, and this tiny version of my library, which has these functions, `myinit` and `myfini`, which just print "hey, I've run"; they print a string out. And then the last two effective lines set these attributes. There's nothing special about `myinit` or `myfini`; it's just that I am registering those functions in the init array and the fini array. The init array is there to collect functions that get called before `main`, and the fini array is there to collect functions that are called after `main` completes. Now, the fini array will not be called if the program aborts.

Okay, so I can build and compile the hello-world program; it just prints "hello world." Then I compile my little XALT library and build it into a shared library. And finally, I set the `LD_PRELOAD` variable to point to the library I just built and run the hello-world program. And without changing a line of the hello-world program, or its build, or anything, I've got these two functions, these two things, added to it: one that runs before `main` and one after. Note that instead of hello-world this could have been a commercial program, not built at your site, and it still instruments it; it still puts this stuff around it. So this allows me to get the runtime, because I can record the start time in `myinit` and then take the end time in `myfini` to get the total runtime and other information about the program.
And I'm also running in user space; I am, in effect, part of your development team. This stuff gets collected hourly or nightly; you can use the file interface, or you can use syslog and do syslog filtering, or send it to Elasticsearch, etc. Or you can use curl to send it directly to ELK or to other things that act as the collector on the other end of the curl. I don't have a lot of time, so I'm going to skip over this, but Lmod allows me to know the locations by building a reverse map: to know that this path for this particular library comes from this particular version of HDF5. Even if you run Tmod at your site, you can use Lmod to build the reverse map.

Installing it, I'm going to skip over that; I just want to talk about the site config file, the config.py file, which each site really needs to configure to match their site: what are your compute nodes called, which users do you want to track or ignore, what packages do you want to track or ignore, and what sampling rules to use. So, XALT uses this config.py file to create some .h files, lex files, and .py files during the build. And XALT also provides a program to report how it was configured. The config is only used during the building of XALT, and if you want to make any changes, you're required to rebuild it. This is for speed: I don't want to have to fire up Python on every single executable, it just takes too long. So there's path filtering, to control what programs you want to track or not track or ignore, and these flex routines provide fast regular-expression parsing. And so you have a way to set what the names of your nodes are: I only want to track on our compute nodes, so ours start with a "c", or an "nid" if we're on the Cray. And then path patterns: I want to do something special with Python and R.
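The path filtering just described is a first-match-wins list of rules. A sketch in the spirit of XALT's config.py; the rule names and patterns here are illustrative, not the exact keys XALT uses.

```python
import re

# Illustrative first-match-wins filtering rules:
#   PKGS = track, and also collect package imports (Python, R)
#   KEEP = track the execution
#   SKIP = ignore it entirely
path_patterns = [
    ("PKGS", r".*/python[0-9.]*$"),
    ("PKGS", r".*/R$"),
    ("KEEP", r".*/ddt$"),
    ("SKIP", r"^/usr/bin/.*"),       # skip everything else in /usr/bin
    ("KEEP", r".*"),                 # default: track it
]


def classify(path, rules=path_patterns):
    """Return the action for the first rule whose regex matches."""
    for action, pattern in rules:
        if re.match(pattern, path):
            return action
    return "SKIP"


assert classify("/usr/bin/ls") == "SKIP"
assert classify("/opt/apps/python3.7/bin/python3.7") == "PKGS"
```

In XALT itself these patterns are compiled into flex scanners at build time, which is why changing them requires a rebuild: the matching has to be fast enough to run on every single execution.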
I want to keep DDT, and I want to skip everything else in /usr/bin. You can have a long list, and you can also control which environment variables you want to track.

Okay, sampling. Sampling turns out to be a real problem, because it turns out that short programs just can't all be tracked: it's just too much data. And it turns out that MPI is a problem too, because there are sites, there are groups, that are using lots of short, small MPI programs to train their neural networks, and that just causes problems. So XALT is designed (although this is also configurable) to sample heavily: I have a one-in-10,000 chance of recording anything that takes less than 30 minutes, a 1% chance for half an hour to two hours, and anything over two hours I want to record. So I can now sample programs like Perl and awk and sed, but I don't have to know about every single one of them. For MPI programs, there's a similar scheme. One of the things that's interesting about MPI programs is that many people run simulation programs just forever: they don't have an end time, they just run until their 48 hours or 24 hours are up, or the program is completed, and they just let the program be terminated. So, to handle that, I say that anything that's less than 128 tasks will be sampled, and anything with more than 128 tasks will be recorded, independent of runtime.

Okay, so XALT is now linking into everything. This is part of the stuff that I'm going to talk about at FOSDEM, and I do feel like there are times when I'm a developer on every team out there. XALT shares the namespace with user code, so I have to do some careful investigation. XALT shares memory allocation. And then there are containers. God, I hate containers. Containers cause a problem because I can't know that system libraries are going to be there, so that requires some dancing around.
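To recap the sampling rules above in code, here is a minimal sketch using the thresholds quoted in the talk; in XALT itself these are site-configurable, and the function names here are illustrative.

```python
import random

# Thresholds quoted in the talk (runtime limit in seconds, record probability).
SCALAR_RULES = [
    (30 * 60,      1.0 / 10000),   # under 30 minutes: 1 in 10,000
    (2 * 3600,     0.01),          # 30 minutes to 2 hours: 1%
    (float("inf"), 1.0),           # over 2 hours: always record
]


def record_probability(runtime_s, rules=SCALAR_RULES):
    """Probability of recording a scalar execution of the given runtime."""
    for limit, prob in rules:
        if runtime_s < limit:
            return prob
    return 1.0


def should_record(runtime_s, rng=random.random):
    return rng() < record_probability(runtime_s)


def record_mpi(ntasks, runtime_s):
    # MPI jobs: sample the small ones, always record big task counts,
    # independent of runtime (long simulations may never finish cleanly).
    return ntasks > 128 or should_record(runtime_s)
```

So a five-second awk run almost never generates a record, while every two-hour-plus job and every large MPI job does.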
One of the first problems I ran into: a user's code created a linked list, but the program failed to put a NULL at the end of the list. Because the allocation was the first thing the program did, and all memory is zeroed before program start, the program worked without XALT there, but failed with it. To understand what happened, I got a chance to look at the user code; this was at our site, so I could look at it and fix it. But to get around that class of problem, any memory that's freed by XALT is zeroed first, to minimize the issue. You know, not all user programs allocate and free memory correctly. Isn't that a surprise? And XALT would sometimes fail when freeing memory after `main`, in what I call `myfini`. The result is that XALT allocates, but does not free, memory after `main`, and this tends to work better than freeing. I wish people would just learn how to build programs, but anyway, that's not what's happening.

Containers. So, XALT requires libuuid on the host, and you can't build XALT without it. By the way, libuuid provides a universally unique identifier; I need that to make XALT work. But I can't guarantee that libuuid is going to be there in a container, so XALT has to use dlopen and dlsym to use libuuid. What it does is, during the install process, XALT copies libuuid into the XALT install directory, and then it does a dlopen directly on that location, and then it uses dlsym to connect to the library routines. And this is how XALT gets around the fact that I can't require certain libraries to be there: I put them in a special directory, and I know that if XALT is active, I can make that directory be part of the container, and this all works.

Okay, I'm coming up on time; we're going to be fine. Well, people ask, you know, why do you want to use XALT? This is a fair amount of work; what's the point?
Well, it's useful for us to do lots of things. We use this to decide what programs we're going to use for benchmarking, to see where queues are being overused, to see who's running NWChem. We wanted to track R packages, and we can do this by intercepting the imports. It's a similar notion for Python and MATLAB, but it has to be done specially; each one of these things has its own special way of doing it. I'm going to thank Kenneth here, who put me in touch with Riccardo, who helped me figure out how to use sitecustomize.py, which is read by every single Python, and it runs on both Python 2 and Python 3. You just add it to PYTHONPATH, and it knows how to deal with whether it's Python 2 or Python 3, and it can collect the imports, and then you can control what patterns you want to keep or throw away.

One of the recent things was reading the watermark: if a program is built with XALT, you can find out information about it, where it was built, what modules were used to build it, and what machine it was built on. A small example tells you what directory it was built in, what the time was, what modules were there, what operating system, what build host, who built it, what the compiler was, when it was built, and what version of XALT was used. We can track GPU usage; this is also with help from Scott McMillan. And we can track Singularity containers as well.

And then finally, I wanted to mention that XALT has ways to show you what's going on. You can turn on these environment variables, XALT tracing equals yes, and sampling equals no, which means "I don't want to sample, I want to collect every program." And then it tells you here's the stuff that got run in `myinit`, then it runs the program, and then it tells you all the things it did on the way out.
So, you know, XALT does not have the same coverage or the same usage as Lmod, and I don't really know who uses it; I just know where people are reading the documentation. And so it's used a lot, or people are at least reading the documentation; whether they use it or not, I don't know. Anyway, I finished just on time. And I'm very happy with the XALT logo, the eye with the magnifying glass: is Big Brother still watching? And if anything piqued your interest, I would definitely recommend reading the XALT documentation at Read the Docs. And I'll now take questions.

I think Victor has a question. Yeah, I'll just ask Victor to unmute so you can ask the first question.

So, thanks, Robert, for the presentation. I just have one question. You talked about using XALT mainly to identify applications, like NWChem as an example. And the way I see it, you do pattern matching on the name of the application, right?

Yes, I only know the name of the application. You know, somebody could rename their hello-world program to be NWChem and I wouldn't know the difference.

Yes, and we are having exactly this issue: many users are using a.out as the application name, even for things like GROMACS, because they are modifying GROMACS and just naming it a.out, or just using very strange naming patterns. So in that case, what we have done is patch XALT to extract, with objdump, some symbols from the symbol table, and then we do some matching on those to identify applications. That additional objdump has an impact, right? It's a bit slower, because we're getting the full symbol table at execution time.

Sorry, are you doing it through the `ld` wrapper, or are you doing it when the program runs?

When the program runs. Okay. And then we prune it down to the first 5,000 characters, or some amount we can actually select with a variable
that controls how many characters you get from the symbol table. And we are not sure if we should push this into XALT, or whether there's anybody interested, or whether you have any other way we should do it, so that we can push this back into XALT. Because my follow-up question is about XALT 3: what are the plans for XALT 3?

Well, I mean, essentially, all of this stuff is pretty much what I think XALT 3 is going to be. The things that are interesting to me are how to deal with the database, rather than questions about improving the way XALT works; I mean, I'm pretty happy with the way XALT works. But I certainly think this would also be a good question for a user group meeting about XALT, to hash out what might come with XALT 3, because most of the changes that have happened have come through my user interactions or what happens at my site. So I don't have any giant roadmap for XALT 3.

Okay, but would you have interest in extracting binary data from applications or not?

We might. I mean, certainly, it's dead easy to have this be a configure option.

Yeah, it's a configure option right now, yes.

So yeah, I mean, this would be useful. And, you know, it depends: certain sites really get bummed by how expensive the build times are, but we don't see it at our site at all. We actually compile twice, once to get those missing symbols to be able to do function tracking, and I haven't seen anybody complain about that at our site, but there was some concern from some of the national labs that were looking at it. And so it's optional.

Just some comment about that: in our case, XALT runs in milliseconds. Microseconds, sorry.
And then if you add the additional objdump to take the symbol table, it becomes milliseconds. So this additional objdump that we are doing has, you know, slowed XALT a bit, but I have to say that for us, XALT is freaking fast.

So, I've spent a lot of time minimizing how long XALT takes. One of the things I've been able to do is get run times, when you're recording, to less than a hundredth of a second. It used to be closer to a second, and I didn't care when it was MPI-only, but now that I'm tracking scalar executables that just pop in and out, I'm really concerned about how fast it is. And since XALT is connecting to everything, when you're not tracking, it's wicked fast.

Yeah. Since there's no other raised hand, can I ask a question: would you be interested in having a group for XALT, some sort of monthly meeting for XALT, like you did for Lmod?

Yeah, I'm thinking about that. You know, my only concern is, if I'm going to have these meetings, is nobody going to show up? But yes, if there's any interest, I'd love to do that.

Okay. Thanks. For these monthly meetings, Robert, I think a good suggestion is to do it every month at the exact same time, so people know there's a meeting coming up and at which time, and if they come up with a problem or a question, they know there's an opportunity coming up to talk to you and discuss it. I think that would help in terms of getting attendance, and also sending reminders on the mailing list.

Yeah, and I'd probably have to use the same kind of time you use, which would at least allow, you know, the US and Europe to attend, and maybe the Far East.

What we've been doing is switching between 5pm European time and 10am European time, so every two weeks we switch between those two times, to allow both East and West to join.
Okay, but that depends on where most of the user community is, and if you never see people joining from the East, maybe that's not needed. I'll put out a note on the mailing list and say, hey, I'm planning on having this at this time; will this work for the people who would be interested in attending?

Make sure it works for Maxim as well.

Yeah, well, he's in the same... no, he's only an hour ahead of me. I really enjoy working with Maxim, but I tell you, sometimes he's the most annoying user I have. You want to speak up, Maxim? You can unmute yourself. I see he's laughing.

It's okay, I'm just laughing.

But yeah, you know, I think some of the stuff that we've come up with has worked really well, so I'm really happy with that. But sometimes it's a fair amount of work.

Okay, well, it's about time. Thank you very much for listening to me babble. Thanks a lot, Robert.