Great, so welcome everyone to this community call on HTJ2K. I think we've got a few too many people for introductions, but if you want to introduce yourself in the chat, that would be great, and there's also a list of people in the agenda. So, High Throughput JPEG 2000 is a new standard, an extension to JPEG 2000, which will hopefully make decoding and encoding JPEG 2000 a lot faster, and we've been working on a project to run some tests to see if it is indeed faster and to look at the different ways it could potentially benefit IIIF. That's what we're going to be focusing on in this call. We're going to start with Mike Smith, who's a consultant for Kakadu, who's going to give us an introduction to the HTJ2K standard. So Mike, are you okay to share your screen and talk us through it? I think he's still on mute. Okay. Great. Here we go.

All right, so yes, I'm a consultant working for Kakadu and other companies in image processing, compression and colour science, and I was the co-editor of a new standard that's been published through the JPEG committee jointly with the ISO and ITU standards organisations, published almost three years ago now. The standard numbers are ITU-T T.814 and, since the general JPEG 2000 compression suite sits under the ISO/IEC 15444 numbering, this is Part 15, so it's 15444-15, and Part 1 is 15444-1. On the ITU side, T.800 is JPEG 2000 Part 1 and T.814 is Part 15, so that's like zero-based versus one-based numbering, which is funny.

Anyway, this new standard enhances JPEG 2000 Part 1. It basically replaces the slow block coder, which is the slowest part of JPEG 2000, with something that's much faster. The drawback of this new, faster block coder is that it's not quite as efficient as the old, slow one — it's about 5% less efficient. It depends on the image statistics, but it just means that you get approximately the same image quality for a tiny bit more bit rate. Everything else aside from the block coder is the same, so it keeps everything that we're already familiar with from JPEG 2000 Part 1. So it can have really good support for quality layers — which aren't used in your application — with that granular, scalable quality aspect. And the other really important thing is that it maintains the royalty-free intent of JPEG 2000.

Because it's not a brand-new codec like H.264 or H.265 — we just replaced the block coder — it allows you to do lossless transcoding to and from JPEG 2000 Part 1. So if you have an old, giant content library of JPEG 2000 Part 1, you can convert that content to JPEG 2000 Part 15 without changing the decoded image quality. And you can go backwards: if you make Part 15, high throughput JPEG 2000, today, you can convert it back to Part 1 for legacy infrastructure and workflows. That's quite a novel idea for codecs that get updated.

This is a really high-level block diagram of the JPEG 2000 encoding algorithm. The yellow block is what we've replaced in Part 15; everything else is the same. So we still have the colour transform, wavelet transform, quantization and bit-stream organisation, but we've changed the guts of the entropy coding piece to something much faster.

How much faster? These are some results I ran. I do a lot of work for the entertainment industry, which uses JPEG 2000 for video frames.
They deal in frames-per-second units, so I did some tests a couple of years ago with the codec on just a regular MacBook Pro laptop. At low bit rates — 400 megabits per second for 4K video — JPEG 2000 Part 1 decodes at 22 frames a second on the laptop, and HTJ2K decodes at 112, so that's a speed-up factor of five. If you go all the way to lossless, which is a much higher data rate, roughly 10 times higher, the decoding speed of JPEG 2000 Part 1 is about 10 times slower — in general, JPEG 2000 Part 1 computation time is linear in bit rate, so the bit rate is 10 times higher and it's 10 times slower — whereas HTJ2K still maintains its very fast speed, and that's a factor of 30 speed-up. These are video numbers; Ruben will have a lot of results from testing in your application later. This just gives you the general idea.

HTJ2K is very flexible, just like JPEG 2000 Part 1, and all that flexibility still applies: any resolution, sampling structure and number of components, integer or floating-point pixels, signed or unsigned integers, and both lossy and lossless compression. HTJ2K maintains the resolution scalability and spatial random access features which are used in your IIIF application, so it's important that those are maintained. And because it's so much faster, it's possible to use HTJ2K for low-latency video transmission, though that's probably less important in your application.

And it's available now — there is source code available. Kakadu of course has a commercial library, but there are open-source versions that we've supported as well. OpenJPH is probably the oldest open-source version of HTJ2K. OpenHTJ2K is from Japan; another member of the JPEG committee made that. Then there's a JavaScript version that's on npm. And OpenJPEG, which as you know is widely used in lots of tools, now includes HTJ2K decoding — it doesn't include encoding, but it includes decoding. That means that if you get a new build of other open-source or commercial tools that use the latest OpenJPEG library — 2.5 is the release that included HTJ2K decoding — then the tools you're using now, like ImageMagick for example, will support HTJ2K decoding.

These are some products used in the entertainment industry that have incorporated HTJ2K. It's been incorporated into some entertainment video mastering standards — that's the SMPTE (Society of Motion Picture and Television Engineers) standard number there. And then there's this JavaScript implementation, so I can demonstrate it working on the web through JavaScript: if you go to this URL here, demo dot noproxy dot cloud, it will play back a 4K lossless video in your browser using JavaScript decoding. Because it's lossless, it's not going to be able to fetch everything in real time at 24 frames a second — the lossless data rate is approximately four gigabits per second — so it uses resolution scalability to fetch just a little bit of information, the low-frequency parts of the image, for every frame. And then when you click on the screen, it pauses the video and fetches the whole, perfect-quality frame. You can try that now in your browser.
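(As an aside, since the stock OpenJPEG 2.5 library already decodes HT code-streams, trying a file locally needs nothing special. A minimal sketch — the file name is a placeholder, opj_decompress is OpenJPEG's standard decoder utility, and this assumes a build that recognises the .jph extension:)

    import subprocess

    # Decode an HTJ2K image with a stock OpenJPEG >= 2.5 install.
    # "example.jph" is a placeholder; PPM output works with any OpenJPEG build.
    subprocess.run(
        ["opj_decompress", "-i", "example.jph", "-o", "example.ppm"],
        check=True,
    )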
And this is roughly how that resolution scalability works. In the wavelet domain there are all these different sub-bands. The sub-bands on the right side and along the bottom of the picture here hold the high-frequency information, these in the middle correspond to mid spatial frequencies, and as you go down here, this is the low-frequency information. If you decode just the tiny low-frequency piece — 11 kilobytes in this example — you get a low-quality, low-resolution image. If you decode a little more, 38 kilobytes, you get something that starts to look like the scene, and with a little more again you get something that looks pretty good. And of course if you decode the whole frame, four megabytes, you get the perfect quality. That's how the demo I pointed you to works.

These are some other video-oriented things: half-float imagery, low latency, and multi-generation — that is, encoding and decoding multiple times for some reason, which is more of a video broadcasting infrastructure concern. If you have video here and video there and you want to send it between them, and at the next place you keep using JPEG 2000 and re-encode, you can be concerned about degradation of image quality over multiple generations of encode and decode cycles. HTJ2K is basically very stable there.

So that's my quick overview of the standard. We have lots of results — Ruben has lots of results from working with your application — so I'll close here and we can take questions later, I guess.

We can take a few questions now if you want to put them in the chat or raise your hands, if you've got any questions about the HTJ2K standard itself, before we go on to talk about how we've done the testing. Or maybe it's so simple there are no questions.

One thing I wanted to mention that's really cool — and Ruben probably won't mention it — is that because it's part of the JPEG 2000 framework already, and the libraries support decoding, if you use a new version of a library that supports decoding in your software, there's actually no programming work to do to get it to decode: it just comes out. The testing we did on the decoding side benefited from that, and you also see it with the integration into some of the open-source tools like ImageMagick — they're just using the latest library and it just works. So that's great.

Okay, so on to the project then. After we were contacted by Kakadu about the new HTJ2K standard, we wanted to have a look to see how it would apply to IIIF. What we've done is take Stefano's work from a couple of years ago at the Getty, where he speed-tested different image formats and also image servers, and compare that against HTJ2K. As Mike mentioned, there are a number of different tools which support the new HTJ2K standard, and we've been comparing it with the original JPEG 2000 and also with pyramid TIFF. So I'll pass over to Ruben now, and he's going to talk about some of the results we found. Thanks, Mike.

Let me share my screen. Okay, can you see that? So I'm going to present some of the results. Sorry, it's still — the idea of the tests was to — can others see it? We might have lost Ruben as well. No sign of Ruben. Okay.
So maybe we can come back to Ruben; I'll contact him on Slack to see if he's still available — he might have just dropped off. As well as Ruben, Stefano has also done some testing. Stefano, are you ready to jump in while we try and find Ruben?

Yeah, I can introduce my work, although it kind of tacks on to Ruben's tests, so hopefully he's going to be back soon. But as a summary: Ruben and I did run some tests, and I was testing the full, real-world scenario of an image server with different codecs loaded on it. We used IIPImage, which supports different codecs, so we had a Docker container with an IIPImage instance running that we could run with Kakadu or with OpenJPEG, and we subjected that to some load testing done with Locust. Locust is a Python tool where you can program load tests — you can programmatically send a number of requests to a server. What we did was request different images from a pool of images that we first encoded with the different codecs. So we had a set of HTJ2Ks made with Kakadu, a set of JPEG 2000s made with Kakadu, JPEG 2000s made with OpenJPEG, PTIFs, and so on. For each of the sets we ran a battery of tests, which consisted of picking a random image from the set and requesting 4,096-pixel-wide derivatives of the full frame, a 1,024-pixel image, a small thumbnail — I think it was 128 pixels — then a random region of 512 pixels, and also some regions that were aligned with the tiles the image was decomposed into, which made a significant difference. But I see Ruben's back online, so let's see if you can take over.

Yeah, sorry, sorry — my web connection. Go ahead, please. Thank you, Stefano. Just while Ruben's getting set up, Julian's asked a good question in the chat — I don't know if you can address that, Mike: is there a new file format for HTJ2K?

Yes, there is, and it's called JPH. It's a better file format than JP2 because it supports a newer, more modern signalling format using parameterised properties, and it also has better ICC support that's not as constrained. So it's a better format. It does work to put HTJ2K in a JP2 file, but it's better to use JPH for this. It's not a huge deal one way or the other, and it's something our little HT test group in IIIF might study more in the future — maybe make a recommended practice for how to do this and how to deal with the colour information. Thank you.

And we can see your screen, Ruben, so that's good. Can't hear you. Okay, there you go. Sorry — sorry about my Wi-Fi. I don't know where it cut off — around here somewhere? It hadn't started; we didn't see it. I hadn't even started. Okay.

Yeah, so I'm going to present some of the test results we produced to evaluate HTJ2K. The objective was to evaluate performance using typical IIIF usage scenarios and the kind of images that are typically used with IIIF. Essentially we're talking about pan and zoom as the most typical IIIF use case, using a viewer such as Mirador or anything else, and typically we're talking about images from the cultural heritage sector. This typically also requires a IIIF-compatible image server, and pan and zoom requires the ability for that server to rapidly extract tiles or regions from the image source at a specific size and at a specific resolution.
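(To make that concrete: a IIIF Image API tile request is just a URL naming a region and an output size. A tiny sketch of how such a request can be built — the host, prefix and identifier below are hypothetical placeholders, not the actual test setup:)

    # Build a IIIF Image API URL for one tile: region {x,y,w,h} scaled to out_width.
    # Host, path prefix and identifier are placeholders for illustration only.
    def iiif_tile_url(identifier, x, y, w, h, out_width):
        return (f"https://iiif.example.org/iiif/{identifier}"
                f"/{x},{y},{w},{h}/{out_width},/0/default.jpg")

    # e.g. a 512x512 region at (1024, 2048), delivered at 512 pixels wide
    print(iiif_tile_url("image_0001.jph", 1024, 2048, 512, 512, 512))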
So we have a rather particular use case, which is different from video streaming and the things Mike was showing earlier. How can we measure the performance? The idea is to measure real-world performance and to compare it against JPEG 2000 Part 1 and against TIFF, for both lossy and lossless encoding. We're going to measure things like compressed size, encoding speed and, especially, decoding speed, which for IIIF use is probably the most important factor.

We're also going to look at different software implementations. Mike showed us that there are different codecs available, some of them open source. There's Kakadu, which is the most mature JPEG 2000 codec around, but it's proprietary; there's OpenJPEG; and there's also one called Grok, which was originally a fork of OpenJPEG but has branched off on its own. And we're going to compare all of this against a TIFF baseline, for which we're using VIPS and, of course, libtiff.

What's the test environment? We've taken — as I think Stefano mentioned while I was cut off — images from the Getty collection. There are about 1,000 images, ranging across different sizes: some are quite small, but some go up to around 20,000 by 20,000 pixels, so some of them are quite big, and they're all colour images. The tests I'm going to show you are all based on Linux, in this case Ubuntu 20.04, and I'm going to show results from running on bare metal — not using Docker or any other kind of virtualisation, but running natively on Linux. And we're running this on a high-end production server: in this particular case a 16-core Intel Xeon with RAID 10 disks.

First of all, let's look at compressed file size. All of the 1,000 images — I think there are actually 1,014 in total — were compressed using both lossy and lossless compression with all of the available codecs: TIFF, using VIPS to encode; JPEG 2000 Part 1 through OpenJPEG, Kakadu and Grok; and HTJ2K using Kakadu and Grok. However, Grok can only do lossless HTJ2K encoding here, and OpenJPEG is unable to encode HTJ2K at all. That gave us 11 different output files for each of the 1,000 Getty images.

This is the kind of command line we used — you can find all of this in the GitHub repository. We've tried to standardise the command lines and the compression parameters, so for TIFF, OpenJPEG and Kakadu these are pretty much equivalent requests. The top one is for JPEG 2000 Part 1 and the bottom one for HTJ2K; in fact, to get HTJ2K with Kakadu we just need to add this extra option, Cmodes=HT, essentially, and similarly for Grok we just need to add the extra parameter -M 64.

So these are the results for compressed sizes. First of all, we see that TIFF is much larger. On the left are the lossy encodings — in this case standard JPEG tiles encoded into the TIFF — and for lossless we're using Deflate. The JPEG 2000 formats — JPEG 2000 Part 1 through OpenJPEG, Grok and Kakadu on the left, and HTJ2K through Kakadu — are all very similar in size; if you look very closely, HTJ2K is slightly larger.
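(For anyone wanting to reproduce the command lines mentioned a moment ago, here is a rough sketch of those invocations driven from Python. The file names are placeholders, and the exact tiling, level and rate parameters used in the project are in the GitHub repository rather than here:)

    import subprocess

    SRC, OUT = "input.tif", "output"  # placeholder file names

    # JPEG 2000 Part 1, lossless, with Kakadu's kdu_compress
    subprocess.run(["kdu_compress", "-i", SRC, "-o", f"{OUT}_p1.jp2",
                    "Creversible=yes"], check=True)

    # HTJ2K with Kakadu: the same command plus Cmodes=HT, as described in the talk
    subprocess.run(["kdu_compress", "-i", SRC, "-o", f"{OUT}_ht.jph",
                    "Creversible=yes", "Cmodes=HT"], check=True)

    # HTJ2K with Grok: grk_compress with the extra -M 64 switch mentioned above
    # (no rate option given, so the output is lossless)
    subprocess.run(["grk_compress", "-i", SRC, "-o", f"{OUT}_grok_ht.jp2",
                    "-M", "64"], check=True)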
For lossless encoding, we see that the TIFF file is actually almost twice as large as the lossless JPEG 2000, and here we can see a little more clearly that HTJ2K is slightly bigger than JPEG 2000 Part 1, but not by much. So JPEG 2000, as we already knew, achieves significantly better compression than TIFF for both lossy and lossless; HTJ2K compressed files are slightly larger than JPEG 2000 Part 1, but not significantly so; and, reassuringly, the different JPEG 2000 codecs produce very similar output — not quite identical, but almost.

What about encoding time? For these tests we took each combination of format and codec, compressed everything three times, and averaged the timings of those runs — so that's a lot of test runs. Here are the results for encoding time: on the left is lossy, on the right is lossless encoding. We can see that Kakadu is by far the fastest codec for encoding, that HTJ2K has a small advantage over JPEG 2000 Part 1 with Kakadu for lossy, and that there's quite a big improvement for lossless encoding — for lossless especially, HTJ2K is a big win in terms of timing. These are on the same scale, normalised so that 1 corresponds to the fastest, which is Kakadu HTJ2K, so we can see that lossless TIFF encoding is 25 times slower than Kakadu HTJ2K. We can also see from the lossy chart that OpenJPEG for JPEG 2000 Part 1 is about 15 times slower than Kakadu, so it's pretty slow here; Grok is an improvement on OpenJPEG. On the right, for lossless encoding, OpenJPEG and Grok are pretty similar, and they're much closer in range to Kakadu JPEG 2000 Part 1, for example.

If we look at encoding time versus raw image size — image size on the X axis, encoding time on the Y axis, so a scatter plot of one against the other — we're looking to see whether there's any relationship to image size, and in fact the relationship is very linear: the bigger the image, the longer it takes to encode. There's no non-linearity here; the number of pixels is what determines encoding time for all of the different codecs. The lines towards the bottom are Kakadu, which is much faster than, for example, VIPS for lossless encoding.

What conclusions can we draw from this? There are actually large differences between the codecs, especially for lossy encoding. Kakadu is the most mature codec, so of course it's significantly faster across the board. HTJ2K is faster to encode than JPEG 2000 Part 1, especially for lossless, less so for lossy — sorry, that's the wrong way around: especially for lossy, but less so for lossless. And lossless JPEG 2000 encoding is faster than lossy encoding for all codecs and JPEG 2000 types.

Let's now switch to decoding time. As we said, decoding is probably the most important criterion for IIIF. For pan-and-zoom type uses, encoding is only performed once and often offline, whereas decoding is something you have to do continually for each user that connects, if you have a public website. So what are the essential processes that contribute to latency during pan and zoom? Let's look at the processes.
A user zooms in to a region of an image, and the browser — a IIIF client — requests the tiles corresponding to the region being viewed. The IIIF server needs to open the image file, seek to the data corresponding to that image region, and decode the data for that region. It then needs to transcode the data into a format usable by the web browser — typically classic JPEG or PNG — and finally the transcoded image is sent back to the browser. HTJ2K really impacts the third step here: the decoding, within the IIIF server, of the data in the file corresponding to the region. So for my tests I'm going to concentrate on just this aspect; Stefano's tests, which he's going to show later, take in the whole of these steps.

The methodology: first of all, we're using the most widely used and fastest IIIF server, which is IIPImage — I'm the IIPImage maintainer, so I'm a bit biased. Another advantage is that it gives us precise, microsecond timing information for each of these internal steps within the server. How did we do this? We generated a set of random IIIF tile requests based on a subset of the Getty images: in this case the 50 largest images out of the 1,000 — we could have used more, but for time constraints and practicality we limited it to the 50 largest. For each of these 50 images we generated 100 random tile requests at different locations on the image and at different resolutions, which made 5,000 requests in total, and the same 5,000 IIIF requests were used for all the tests, for all the different scenarios — for the tests I'm going to show and for some of Stefano's. Glen was also going to do some tests, but I don't think he managed to finish them.

Here are the results for decoding time — just the decoding step within the IIIF server. On the Y axis we have decoding time, again normalised so that 1 is the fastest; in this case lossy tiled pyramid TIFF is by far the fastest. As we can see, OpenJPEG is significantly slower — over 100 times slower per tile than tiled pyramid TIFF. OpenJPEG, as Mike mentioned, can read HTJ2K — it cannot generate it for the moment, but it can read it — and we can see that HTJ2K is much faster: twice as fast for lossy decoding with OpenJPEG, if you look on the left here. If we look at Kakadu, JPEG 2000 Part 1 already has a very respectable speed, and HTJ2K is again over twice as fast — from this scale, maybe almost three times as fast as JPEG 2000 Part 1 — so it's almost competing with tiled pyramid TIFF for lossy decoding. On the right we're looking at lossless decoding, and we see a similar scenario: OpenJPEG is significantly slower, though HTJ2K makes a big difference here, and Kakadu is way out in front for JPEG 2000, both Part 1 and HTJ2K. For lossless decoding the HTJ2K gain is not quite as large as for lossy decoding. So we see that tiled pyramid TIFF is by far the fastest format for random tile access on large images, and for the JPEG 2000 files, OpenJPEG remains significantly slower than Kakadu: 10 times slower for lossy and up to 15 times slower for lossless.
HTJ2K is about twice as fast for lossy and about 20% faster for lossless, for both OpenJPEG and Kakadu. So here are some conclusions for my part of the presentation. Tiled pyramid TIFF has a very optimised structure for random tile access, and it's hard to compete with in terms of decode times. Stefano is going to show you some different kinds of requests — here I've focused on tile requests, which are the most typical requests you get, but with IIIF you can also request larger or arbitrary regions, and there this advantage is less pronounced. As we saw, JPEG 2000 is of course capable of much better compression, so there's a trade-off between speed and size. In all scenarios, HTJ2K was both faster to encode and faster to decode than JPEG 2000 Part 1, even though HTJ2K files are slightly — but not significantly — larger. And, as Mike mentioned, something important is that HTJ2K is compatible with the latest versions of both OpenJPEG and Kakadu, which means that in the next iterations of Ubuntu, Debian and Fedora you will get HTJ2K compatibility in the default environment — you won't need to install anything special. OpenJPEG, despite a lot of recent improvements, unfortunately remains very slow with respect to Kakadu. And the fact that OpenJPEG, and also Kakadu, are now compatible with HTJ2K means that IIIF servers such as IIPImage and others that use OpenJPEG can decode HTJ2K without modification — the tests here using IIPImage did not require any modification to the server, so you get your HTJ2K support almost for free. Here's the GitHub repository where you can find the results of these tests, and the scripts if you want to try the tests yourself. And I'm going to hand over to Stefano, who is using Docker on Amazon Web Services, and he's going to explain this graph.

Thanks, Ruben. That's my slide pinned onto Ruben's presentation, still. So what I did, as I was mentioning before, was take the whole environment into account. I ran these tests on an EC2 machine on AWS. The disk was local, so there was no network I/O between the server and the storage, because the set of images was not very large — I used the same set that Ruben used. While Ruben focused on the raw performance of the inner processes, I basically ran the full HTTP request, so there was already some expectation of overhead from HTTP handshakes, inter-process communication, virtualisation and so on. And, by the way, I should mention that this server was running in Docker, so there is also that layer of abstraction in the way — which is pretty much how things are commonly deployed nowadays.

So what I did was use Locust to run a battery of tests on each of the sets. For each of the sets — the legend is a little cryptic — different derivative sizes were requested. One was a random region, meaning a 512-by-512 region at random coordinates, which were very likely not aligned with the tiles; one was a random region that was aligned with the tiles, meaning on 512-pixel boundaries; and then full-frame requests at 1,024, 128 and 4,096 pixels wide, with the height proportional — so basically the full frame, resized. Those are different operations that are done in different scenarios, but they're pretty common in image server use, at least the way we use it at the Getty.
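(For reference, a minimal Locust sketch of the kind of test described above. The host, image identifiers, URL layout and task mix are placeholders for illustration, not the actual scripts from the repository:)

    import random
    from locust import HttpUser, task, between

    # Placeholder pool of identifiers for one of the encoded sets.
    IMAGES = ["htj2k_kakadu/image_0001.jph", "htj2k_kakadu/image_0002.jph"]

    class IIIFLoadTest(HttpUser):
        wait_time = between(0.1, 0.5)  # pause between requests per simulated user

        @task
        def full_frame_1024(self):
            img = random.choice(IMAGES)
            self.client.get(f"/iiif/{img}/full/1024,/0/default.jpg")

        @task
        def thumbnail_128(self):
            img = random.choice(IMAGES)
            self.client.get(f"/iiif/{img}/full/128,/0/default.jpg")

        @task
        def random_region_512(self):
            img = random.choice(IMAGES)
            x, y = random.randrange(0, 8192), random.randrange(0, 8192)
            self.client.get(f"/iiif/{img}/{x},{y},512,512/512,/0/default.jpg")

    # Run with e.g.:  locust -f this_file.py --host http://localhost:8080 -u 10
    # (-u 10 simulates ten concurrent users)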
In a way the results are somewhat surprising, because I definitely expected the gaps between the codecs to be compressed by the overall, up-front cost of the server pipeline. But what was surprising is that, as you can see on the left, HTJ2K is faster than PTIF in both average and median response times. I can chalk that up to several factors. One could be that the PTIFs are larger, so they taxed the storage more and that became a bottleneck. The tests were run with 10 concurrent users — the load tester was sending 10 requests at a time, each one waiting for its request to return before sending another, so it was simulating 10 users. I didn't include the results for one user here because they were not as comprehensive, but for one of the sets the one-user test ran at about half the speed, meaning that IIPImage was probably operating at around half of its capacity with one user, and as you increase the users it gets closer to capacity until it starts queueing requests, which slows things down. So 10 users didn't produce 10 times as many requests per second, only about twice the rate. The other reason might be that the choice of images was random, so maybe a better distribution could give us more accurate results. But the overall difference between JPEG 2000 and HTJ2K is very visible: as you can see, the JP2 lossless and lossy sets are definitely much slower than the HTJ2Ks and the PTIFs.

Anything else to note? Well, I will run more tests on this, so you should probably take these graphs with a grain of salt while more tests are being done, and you can also dig into the raw results in the GitHub repo that Glen pasted in the chat — there are CSV files for each of the tests, which are much more detailed. I had some problems with the JPEG 2000s created with OpenJPEG: at least 60% of the requests failed, so I did not include those in the tests until I find out the reason. It might be resource exhaustion — as we saw, the JPEG 2000s created with OpenJPEG were by far the slowest, so there could be a saturation of resources such that IIPImage eventually gives up, or Nginx, the web server in front of IIPImage, gives up and returns a 403, 503 or 502 Bad Gateway. There are still some areas to explore, but these are pretty much the results we have so far.

That's great — thank you, Stefano, and thank you, Ruben, for the presentations. We have a question in the chat about sharing the presentation; are you okay if we add a link to it? What format is it in? I can put it online — it's using reveal.js. I think a lot of the questions have been answered in the chat while I've been speaking. Are there any other questions people want to ask about any of the testing? Glen, do you want to talk about what we're going to do in the next few months?

Sure. So now that we've got these initial results, what we'd like to do is write them up and publish them as an article — I think for the Code4Lib Journal. We've kept all of the code and the scripts used to generate the JPEG 2000 files in the GitHub repository I shared, and one of the strongest benefits of this project has been finding a definitive way to create JPEG 2000s. I think we can keep that in the GitHub repository.
Dan asks: do we know if OpenJPEG has any plans to implement the encoding? I think they're certainly accepting pull requests, so if you want to add it, go for it — I don't know of any specific plans. If you want to use open-source encoding, OpenJPH can produce the raw code-stream and Grok can do the file format, so that's what I would recommend trying right now.

And a couple of questions on whether anyone is using HTJ2K in production. I don't know of anyone yet — we only became aware of this recently. I don't know if anyone else has heard of other people using it in production yet? It's not related to IIIF, but I know some of the motion picture studios in Hollywood are using HTJ2K. I should add that the OpenJPEG support only landed part-way through the project, so it wasn't available when we started last year — it is relatively new.

I'm not a real expert in IIIF myself, so would IIIF have to make a recommendation for people to want to switch over, or would it just happen organically? What does that look like? We wouldn't necessarily make a recommendation, because IIIF can support any type of image format, so it's up to the institutions themselves whether they see the use case and the need to do a migration. I think these kinds of presentations, and potentially the article we're going to write, are ways of sharing information with people so they can decide whether or not to migrate. And presumably these lower timings and smaller file sizes translate into saving money — anyway, that's up to each institution to figure out.

Okay, no more questions. I want to say a big thank you to Stefano, Mike and Ruben for all the work on doing this testing. As I mentioned, there's a repository with all the scripts that we used, if you want to try them out yourselves. We've also got an HTJ2K channel on the IIIF Slack — you're welcome to join that, and if you've got any other questions about HTJ2K, feel free to ask there. Other than that, I'll say a big thank you to everyone, and look out for the article. Thanks, all. Okay. Thanks, everyone. Thanks, Glen. Thank you. Thank you.