OK, so my name is Colin Leavett-Brown. I'm from the University of Victoria, which is right across the water from where the next OpenStack summit is. But we have much better weather than Vancouver does, so we'll feel sorry for you when you arrive. Anyway, I'm going to be talking about Glint, an image distribution service for multi-cloud environments.

Here's the overview of my talk. I'm going to talk about who we are and what we do, then about why we built Glint in the first place, then tell you all about Glint, and finish with a one-slide summary.

First of all, we are a high-energy physics research group at the University of Victoria. We particularly support networking and compute solutions for high-energy physics at the university, but we also support other disciplines that use high-throughput computing, specifically the astronomers who are also there. We've been actively working on clouds and virtualization since 2006, so we've had quite a few years of doing this now. The picture you see is actually a particle trace from the discovery of the Higgs boson, I'm reliably told.

Our primary interests are the ATLAS experiment in Geneva and the Belle II experiment based in Japan. There's a definition there of what high-energy physics is: basically the study of the fundamental particles that make up our universe and the interactions they have. These are pictures of the Large Hadron Collider in Geneva, Switzerland. At the top of the circle you can see the airport and Lake Geneva; the ring is 27 kilometres in circumference. And this is the ATLAS detector under construction. In the well of the detector you can see a man standing, which gives you some idea of the size of the detector. The next picture is of the Belle II detector, which is under construction and is going to come online in about 2016. In the meantime there's quite a bit of preparatory work in order to do the analysis of the data that will come off it.

So these experiments take many decades of preparation. There's a lot of work and cost in producing them, and they produce huge amounts of data; I think the ATLAS data set is currently around 200 petabytes, so it's very large, and there's a lot of processing. Here we have the processing that's been done this year on the distributed cloud model. It shows that we've run 1.2 million jobs. Each one of these jobs is about 12 hours long, and they run at about 95% efficiency, so the wall time and the CPU time are pretty much the same.

In the case of Belle II, we've just started the processing. This is 30 days of processing, the worldwide processing done in the 30 days from the first week of September through October, and it shows all the sites in the world that are doing it. The first one is in Germany; it's a hardware site, not a cloud site. The second is the home of the experiment, KEK in Japan, also a hardware site. But the third is the UVic site, and it is a cloud site, so you can see we're currently the third-largest producer in the world. We are currently running about 12,000 jobs a day for Belle II processing. OK, so this map shows you roughly where our clouds are.
And the main thing I want to point out here is that we are actually processing on three continents, we are using about 25 clouds, and we have to synchronize the images between them.

So what is high-throughput computing? Very simply, high-throughput computing is a batch processing scenario where a user submits a job to a job queue and the job scheduler runs it on the available resources. It's that simple. So we have this high-throughput computing model. The problem is we only have a very small amount of resources at the university, and we need to use other people's resources to make up the difference. We actually use anybody's resources who wants to donate them to us; we'll make use of them as long as they're some sort of cloud. And because these resources don't belong to us, and because they're dynamic in nature, especially when we create VMs on them, we need a job scheduler that can handle dynamic resources like that.

The next thing we need is a way of automatically starting virtual machines on the clouds, because we don't want to be doing that manually. So we've written a piece of software called Cloud Scheduler, and it manages the instantiation and the destruction of virtual machines as and when they're required.

So how does this work? Basically, the user submits the job just like they did before; there's no change in the workflow. But in this case Cloud Scheduler is checking the queue periodically, once every minute or so. It sees the queue and that there are no virtual machines to run the jobs on, so it talks to a cloud that it selects, picking one by some algorithm such as round robin (there are several algorithms you can use), and it starts a virtual machine there. The virtual machine registers with the Condor queue, Condor sees a resource that it can run the jobs on, and it dispatches the jobs to that cloud. This continues on all the time: Cloud Scheduler continues to watch the queue, and the queue continues to drain. Eventually there are more resources on the clouds than there are jobs to run, so you reach the situation where you're wasting resources by keeping the virtual machines running. Cloud Scheduler is continually checking the queue; it sees that there are no jobs to run, it talks to the cloud and says, OK, shut everything down, and eventually we end up with the resources available for another workload.

So that's how we run the batch work through high-throughput computing. But we also need some other facilities to help us do that. For example, we need to distribute software to these machines that are scattered all around the world. For that we use the CernVM File System (CVMFS), which is a read-only HTTP file system, together with Squid caches and a little piece of software called Shoal, which we again wrote ourselves, to do dynamic Squid discovery, because these things could be running anywhere. Those three components allow us to efficiently retrieve the software from the nearest source.

The next thing we have is data distribution. We're using the standard distribution of ATLAS data, which is distributed around the world anyway, and then a piece of software called UGR, the Unified Generic Redirector, to find the closest source of the data via GeoIP. Again it's an HTTP protocol, and again it's read-only. Also, we don't submit jobs manually; obviously we don't submit 1.2 million jobs with one person pressing the submit key.
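Going back to the Cloud Scheduler cycle described above, here is a minimal, illustrative Python sketch of that kind of polling loop. The helper names (get_idle_jobs, get_registered_workers, boot_vm, shutdown) are hypothetical placeholders, not the real Cloud Scheduler or Condor APIs; it just shows the boot-when-queued, retire-when-drained logic.

```python
import time

POLL_INTERVAL = 60  # the queue is checked roughly once a minute


def scheduling_cycle(condor, clouds):
    """One pass of the boot/retire decision (illustrative only)."""
    idle_jobs = condor.get_idle_jobs()          # jobs waiting in the batch queue
    workers = condor.get_registered_workers()   # VMs already registered with Condor

    if idle_jobs and not workers:
        # Pick a cloud (round robin or another algorithm) and start a VM there.
        # Once booted, the VM registers with Condor and jobs are dispatched to it.
        cloud = clouds.select()
        cloud.boot_vm(image="worker-image", flavor="m1.medium")
    elif workers and not idle_jobs:
        # Queue is drained: keeping VMs up just wastes the donated resources,
        # so shut everything down and free the clouds for other workloads.
        for vm in workers:
            vm.shutdown()


def run(condor, clouds):
    while True:
        scheduling_cycle(condor, clouds)
        time.sleep(POLL_INTERVAL)
```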
Instead, we tie job submission into the high-energy physics job management systems: in the case of ATLAS it's called PanDA, and in the case of Belle II it's called DIRAC.

But what about image distribution, which is what we're supposed to be talking about here? So let's get back to image distribution. Just to give you a little bit of history: when we started in 2006, we started off with Globus workspaces and Nimbus, as it was at the time. Nimbus had the advantage that it could retrieve images from remote sources and instantiate them. So we wrote an image repository manager called Repoman. Repoman had the advantage of being able to make public images available through HTTP and private images available to authenticated users through HTTPS, and we used X.509 authentication to do that. But now all the clouds have basically become OpenStack clouds, or some variant of that, and these clouds require the images to be saved locally within the cloud, within the Glance repositories. At that point Repoman basically didn't work for us anymore, and we needed something else to do that for us.

This situation, with automatic instantiation and the distribution of jobs and images and so on, was manageable when we had a few clouds, basically in North America. But with the map we showed you earlier, it starts to get very error prone and very time consuming; it really isn't manageable manually anymore. So we came up with another solution. Basically, we needed something to solve that problem, to distribute the images, and we needed something running so that we could do that. We looked around and found a blueprint, which I think was inspired by John Bresnahan; he wrote the blueprint. Unfortunately, it was abandoned at the beginning of 2014. So we looked at Staccato, which was also an incubator project of John Bresnahan's, and it kind of stalled as well. We didn't have a solution, but we knew what we wanted.

These are the design objectives that we had, and there are a few I want to pick up on. First of all, we don't like to rewrite code that already works well. We think Keystone works great; we didn't want to have our own identity solution. We think Glance works great; we didn't want to rewrite how Glance works. We basically wanted to use the services they provide and just write the glue that did the bits we wanted to do.

The second thing is that we have a problem with instantiating by IDs, AMIs, and things like that, because we have different AMIs and UUIDs for the same image in 25 different places, and then you have to be able to say, I want this image and I want that image, and it starts to get very confusing. So we wanted to instantiate and distribute by name. We thought, well, users have complete control over the names. They can decide to upload an image called name A, and then upload a second image also called name A; we don't like you to do that, but that's what you can do. You have complete control over the names: you can rename them, you can delete them, and so on. That was an important design consideration for us.

The next thing we wanted, of course, was a pluggable architecture, because originally we were dealing with multiple cloud types, as we still are today. We're still dealing with Amazon EC2, we're still doing GCE, we're still doing OpenStack, and we still have a couple of Nimbus clouds out there.
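As an illustration of the kind of pluggable, name-based design just described, here is a small Python sketch. The class and method names are hypothetical, not the actual Glint interfaces; the point is simply one abstract connector per cloud type, with images addressed by user-controlled name rather than by per-cloud UUID or AMI.

```python
from abc import ABC, abstractmethod


class CloudConnector(ABC):
    """Hypothetical per-cloud-type plugin: OpenStack, EC2, GCE, Nimbus, ..."""

    @abstractmethod
    def list_images(self):
        """Return {image_name: local_id} for this cloud/tenant."""

    @abstractmethod
    def upload_image(self, name, data):
        """Store the image bytes under the given name on this cloud."""

    @abstractmethod
    def download_image(self, name):
        """Return the image bytes for the given name from this cloud."""


class OpenStackConnector(CloudConnector):
    def __init__(self, auth_url, tenant, username, password):
        self.auth_url, self.tenant = auth_url, tenant
        self.username, self.password = username, password

    def list_images(self):
        ...  # talk to Glance; the name, not the UUID, is the user-facing key

    def upload_image(self, name, data):
        ...

    def download_image(self, name):
        ...


def replicate(name, source: CloudConnector, targets):
    """Copy one image, identified by name, from a source cloud to target clouds."""
    data = source.download_image(name)
    for target in targets:
        if name not in target.list_images():
            target.upload_image(name, data)
```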
That's why we wanted a pluggable architecture: so we could support those clouds and write the support in for each of them.

Going on from there, we ended up with four components. We have the Glint service itself, which is the piece that actually does the distribution. We have the Glint Horizon modifications to the Horizon dashboard, which are our current user interface to the service. We have the installation scripts, basically a collection of scripts to get Glint installed, and there are a variety of ways in which you can install it. And we have a backup utility; I have one slide on the backup utility later on, so we'll talk about that then.

OK, just a quick review of Glance, because Glint uses Glance. The important thing to say is that Glance is really metadata, images, and services; those are the three components, and we use all of them. The important thing about this slide is that within the Glance metadata there is one table called images, and it has an owner property. So if you look at the back-end storage, it has a bunch of images, and you can tell who the owners are, and that's quite important for us. The way this works is that a user logs into the Horizon dashboard with a username and password, they talk to the identity server, and they get back a token from the identity server that identifies who they are when they go and ask for other services. So they can talk to Glance and get back the list of images that they actually have authority to access.

Likewise, Glint has its own metadata, and it has a cache, but we'll come to that. Glint has three main tables. First, it has the repositories table, which basically just points to the identity servers of the remote repositories, the remote clouds that you want to distribute images to. The second table is a credentials table: basically your credentials, how you would log into that remote service. It identifies a tenant on the remote cloud; one credential, one tenant, one remote cloud. It effectively links the current tenant on the local cloud to a remote tenant on a remote cloud. That's what the credentials table does. The third table is a state table, a dynamic table, which tracks the user's session and keeps track of the tokens needed to access the other clouds.

So how does this work? In this case, the user asks for images to be copied or removed. The first thing that happens is that the Glint service uses your credentials and goes out and gets tokens from all the clouds you've specified credentials for. It then calls each cloud's Glance service and says, tell me which images you have for that user, so it can compile a matrix of where all the images are and, in fact, where the source of the images it's going to copy will come from. The next thing it does is pick a source and stage the image into its cache, and after that it's fairly simple to replicate the image out to the other destinations. So it can copy from a remote site to the local one, from local to remote, from remote to remote, and all those combinations.
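Under the hood, the flow just described is essentially standard Keystone and Glance v2 REST calls: get a token for each set of stored credentials, ask each remote Glance which images it holds, download from the chosen source, and upload to the destinations. Here is a rough, illustrative Python sketch of those calls; it is not the actual Glint code, and it assumes Keystone v2.0 password authentication and the Glance v2 image API.

```python
import requests


def get_token(auth_url, tenant, username, password):
    """Keystone v2.0 password auth: returns (token, glance_endpoint)."""
    body = {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": username,
                                             "password": password}}}
    r = requests.post(auth_url.rstrip("/") + "/tokens", json=body)
    r.raise_for_status()
    access = r.json()["access"]
    token = access["token"]["id"]
    image_svc = next(s for s in access["serviceCatalog"] if s["type"] == "image")
    return token, image_svc["endpoints"][0]["publicURL"]


def list_images(glance, token):
    r = requests.get(glance + "/v2/images", headers={"X-Auth-Token": token})
    r.raise_for_status()
    return {img["name"]: img for img in r.json()["images"]}


def download_image(glance, token, image_id):
    r = requests.get(glance + "/v2/images/%s/file" % image_id,
                     headers={"X-Auth-Token": token})
    r.raise_for_status()
    return r.content  # staged into a local cache before replication


def upload_image(glance, token, name, data, disk_format="qcow2"):
    meta = {"name": name, "disk_format": disk_format, "container_format": "bare"}
    r = requests.post(glance + "/v2/images", json=meta,
                      headers={"X-Auth-Token": token})
    r.raise_for_status()
    image_id = r.json()["id"]
    r = requests.put(glance + "/v2/images/%s/file" % image_id, data=data,
                     headers={"X-Auth-Token": token,
                              "Content-Type": "application/octet-stream"})
    r.raise_for_status()
```

In practice the uploads to multiple destinations can be threaded so they proceed in parallel, which is how the delay question in the Q&A later is addressed.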
So this is what it looks like from the user's point of view. These are screenshots; it was really hard to get them clear, but I hope you can read them. The logon looks pretty much the same; we branded it a little bit to show you it's Glint-enabled, if you like. You log into that, and the first thing you're confronted with is a pretty standard overview page; there aren't many changes there. The only changes we've made are to the images tab on the dashboard.

If we go to the images tab, it looks pretty much like a standard images tab. The important thing is that we've created three sub-tabs on that page. The local images tab is basically the same as the standard images page you would see; all the real work goes into the other two tabs. So we're going to spend most of our time on those, and we're going to follow this workflow: we're going to distribute some images. The first thing is to select the local tenant, because basically we're going to connect the local tenant to remote tenants, as we described earlier. The next thing is to add repositories; we need to say where we want to push these things. Then we need to give it our credentials, and then we can go through the process of distributing images.

The selection of the local tenant is done with the standard drop-down box. We just pick it; you can see we're currently on the Belle II project, and we'll select the ATLAS project. Then we select the remote repositories tab, and when we do that we can see we have no defined repositories, but we do have an action button that lets us add repositories. So we go ahead and do that, and we get the add repositories dialog.

At this point, we can provide three mandatory fields and one optional one. The first is the name: a short name by which the cloud will be referred to from here on. The next is the identity service URL of the cloud we want to talk to, and the easiest way to get that is to log into that cloud, go into the access and security tab, grab it from the identity entry right there, and paste it into the dialog. The format is a mandatory field, but right now we only have one value, because we've only written the support for OpenStack clouds, so that's the default. The description is optional. So we can just hit the Enter key, and we come back to the repositories tab, and you can see we now have a repository, with a couple of actions associated with it: we can delete the repository, or we can add credentials to it, which is probably what we want to do.

So we'll go ahead and add the credentials. Again, we have just three mandatory fields: the tenant, the username, and the password, and you need to provide all three of those in order to access the cloud. OK, so I've taken the liberty here of adding two other repositories just to show you. If you look at this screen, it indicates one other thing: of the three repositories, I've only added credentials to two of them; there's one repository without credentials. We'll come back to why I've done that later on.

OK, so I can go to the image distribution page now, and what I have is a matrix. The first column of the matrix shows you the list of images that are available for distribution, and the other three columns are the repositories where these images are residing right now.
The first repository column is always the local repository. In this case, our local repository is rato1, and the tenant is Atlas. The other two are the remote repositories we added, the alto and mouse clouds. You'll notice that the tenant names on the remote clouds do not have to match the tenants on the local cloud; you can move images between tenants, because they're linked by the credentials. And as you can see, all the images are currently located on mouse.

Now, the way you distribute things is you just toggle the checkboxes. If we go ahead and do that, you can see that we've got some pending images now; basically, we're saying we want to copy those two images from mouse to both rato1 and alto. We hit the Save button to do that, and it shows us a progress bar while it's happening. Obviously it takes time, because you're actually moving data across the network, but when it's finished it gives you the matrix again, updated with where the images actually are.

So we've done that. Now what I'm going to do is switch back to the project we came in on, the tenant we came in on, which is the Belle II one, and I'm going to show you the image distribution page again. You can see we don't have much on it, because first of all we have no local images, so there are no images listed, and there are no remote repositories listed, because we haven't added any credentials linking this local tenant to remote repositories. So we'll go back to that page, and as you can see, all three repositories available to this tenant need credentials added. Basically, when you add a repository in the system, it's publicly available to all tenants on the local cloud; the idea is that it's just a URL pointer. It's your credentials that are private to one tenant. So in this case, if you look at the actions, you'll see that the first and last repositories only have the add-credentials option. The middle one, which doesn't have any credentials, would also let you delete it, because it's currently unused. But we'll go ahead and add all the credentials, and then we'll take a look at the distribution page.

As you can see, we now have four clouds listed: the three remotes and the local. The local is always the second column here. The only other thing I want to point out is that you can make multiple selections at a time. Last time we just did two copies, but in this case we're doing copies down here and, up here (sorry, I was just moving the mouse), an actual delete: we've unchecked the box, and it will do that when we hit the Save key. Take that out of the way.

So I said I had one slide about the backup facility. We do a lot of moving of images around, and we didn't have a good way of backing up the repository; I don't know whether anybody else has had that problem. One of the things we wanted was to back up both the images and the metadata, and we just wanted a local directory, either NFS mounted or a network file system of some kind, that we could back the images up to. It takes incremental backups: it creates a versioned backup each time there are changes, and if there's nothing to do, it tells you there's nothing to do and doesn't take another backup.
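As a rough illustration of that incremental behaviour, here is a small Python sketch. It is not the actual Glint backup utility; it assumes an `images` listing and a `download_image` helper like the Glance calls sketched earlier, and it uses the image checksums to decide whether anything has changed since the last backup.

```python
import json
import os
import time


def backup_repository(images, download_image, backup_dir):
    """Take a versioned backup only if something changed since the last one.

    `images` is a dict {name: {"id": ..., "checksum": ...}} from Glance;
    `download_image(image_id)` returns the image bytes.
    """
    state_file = os.path.join(backup_dir, "last_backup.json")
    previous = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            previous = json.load(f)

    current = {name: meta["checksum"] for name, meta in images.items()}
    if current == previous:
        print("Nothing to do: no image changes since the last backup.")
        return

    # Something changed: write a new versioned backup directory.
    version_dir = os.path.join(backup_dir, time.strftime("%Y%m%d-%H%M%S"))
    os.makedirs(version_dir)
    for name, meta in images.items():
        with open(os.path.join(version_dir, name), "wb") as f:
            f.write(download_image(meta["id"]))
    with open(os.path.join(version_dir, "metadata.json"), "w") as f:
        json.dump(images, f, indent=2)       # back up the metadata as well

    with open(state_file, "w") as f:
        json.dump(current, f)
```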
All right, so that's the backup facility. We have four links that I want to give you. The first three are the sources of all the Glint code: the first one is the Glint service itself, which contains the backup utility as well; the second one is the Horizon dashboard updates; and the third is all the installation code. The last one is our team website, where you can see our documents, presentations, and various other reports, and who the personnel on the team are; you may find that interesting.

OK, we are not finished; there are things we still want to do. One of the things we'd like is a command-line interface. All our code is on GitHub; it's all open source and available. You can download it, play with it, modify it. Basically, when we have a problem, we open an issue on GitHub. In this case, we actually documented the kind of syntax that we wanted to see in the command-line interface, and this issue is one of our high priorities to address.

OK, you can try it: in 20 minutes you can have it running. There's a five-step process that tells you how to get it going. You follow this, it uses those URLs I gave you, the first three, and it will get it running for you.

OK, so in summary: we continue to develop Glint. We believe it has applicability to other users, and we know other users are very interested in using it. Compute Canada, I know, is interested; I believe CERN is interested. Anybody who's using lots of clouds and has images to move is probably going to be interested. And we would like to see Glint incorporated into OpenStack as a project. Now, to be honest, we didn't go through the standard process, which we've learned about today, for how we should have developed an OpenStack project, because originally, when we started, we were just creating a standalone utility. It eventually evolved into, hey, we should really incorporate this into the dashboard and make it very easy. So we would like to see it adopted, and we're going to try to talk to the Glance PTL and the Glance development team and see if we can get it in there.

The last bullet is to acknowledge the people who fund us. There's the University of Victoria, obviously. IPP is the Institute of Particle Physics of Canada; our PI is a member of that organization. CANARIE is the national research network provider. And NSERC is the major science and engineering funding agency of Canada. With that, I'll just give you my email; if you'd like to email me, that's available for you too. Are there any questions?

Sir, I'm sorry, it's very hard to hear with the background noise. Yes. OK. To be honest, we haven't played with Swift a lot, because we have very small systems that we're developing on. I would think that you could use Swift as a back end to Glance and this would all still work. Basically, all our testing has been with the file system back end, but you can use a variety of back ends with Glance, and we're just using Glance download and upload and Keystone authentication.

What about working on the image metadata and on Glance on remote clouds that are outside your administrative domain, outside your administration? OK, yes, if these are outside the administrative domain, of course not; that is a problem there, right?
I mean, basically, we're using other people's clouds. Yeah, maybe. So I was thinking, how can you use this kind of thing in a multi-region environment? I'm sorry, I didn't get that again. Yeah, I was thinking that this could also be very useful in a multi-region environment, without, let's say, the need for a cross-domain solution. OK, I think you're right. Sir?

I have a simple question. How hard is it to integrate that with Glance, so that when, for example, someone asks for a virtual machine and an image, Glance is able to look at foreign URLs and other repositories? And why don't you try to do that? OK, how hard is it to integrate it with Glance? Well, first of all, it uses Glance; I don't think we make a single Glance modification here. Basically, it's a service that runs beside Glance, with its own endpoints. In the back end, what it's doing is getting a Keystone token, either your local token, which it already has by the mere fact that you got through to our endpoint, or the remote credential tokens that we retrieve, and then it's doing image download and image upload. So it's not introducing anything into Glance. The hard thing at this stage is the modification of Horizon; there we are actually taking the Horizon install and modifying the framework in order to produce our screens.

Sorry, can I ask one more time? Certainly. I mean that right now, if you want an image, you have to go and select it manually: please download this image I want onto this specific cloud. How hard would it be to keep everything in Glint, so that when someone on one of the clouds asks for an image, Glance is integrated with Glint and can say, OK, I don't have this image in Glance, but maybe it's in Glint, so go fetch it for me? OK, so as I understand it, what you're asking is how easy it would be to script the distribution of images automatically. I think the answer is that the way it is right now, with everything through the dashboard, it wouldn't be easy. But that's one of the reasons why we want to write the command-line interface. If you look at the structure of the commands on those slides, and I think you'll have access to the slides, you'll see that effectively you can say "list the images" and you will get the matrix, and then you can just say "these images on these clouds" and they will go. So you could cron that, or run some procedure automatically to do it. So I think it would be easy with the command-line interface. Sorry, you had a question.

Firstly, thanks for the very interesting talk; certainly as an ex-physicist, a very interesting talk. As an ex-physicist and a Glance core, I feel it was tailor-made for me. In some of the newer code bases there are things in Glance such as image cloning. I'm sorry, there's a lot of background noise, I'm very sorry. Yes. OK, so the question is about image cloning. Basically, it does clone images from one place to another, either from remote to local, local to remote, or remote to remote; it can do all of those. Within the same region, that's functionality that's now being added to Glance. Yes. So basically, by using the services of Glance, we think we get the benefit of any improvements in Glance in this area; we'll get efficiencies from that. Right now there is some delay in uploading images. We do thread it all.
So if you actually copy one image to multiple places, it goes through in parallel; it doesn't happen serially. So it is actually quite performant at the moment. Yes, sir. I should give you the microphone and you can try.

What size images have you been working with? Size of images: well, in actual fact we have been trying to reduce image sizes, because up until a few years ago our image sizes were typically 10 gigabytes, and we even had them as big as 20 gigabytes. Nowadays we're actually using something called CernVM 3, which is a micro-kernel VM, and it's basically a few megabytes, so this distribution happens very quickly. But it works with big images too; it just takes a long time. That's one reason why we didn't do a live demo, because you don't want to be sitting there tapping your foot while it happens. With the micro VMs, though, images transfer very quickly. Most of the images are going out across the world, or right across Canada, which is the best part of 3,000 or 4,000 miles, and those images go across within about 20 seconds, typically. It's pretty fast, but those are micro-kernel VMs. Sir, can I give this to this gentleman?

So it looked like you had to do the transfer tenant by tenant. Is that true, or could you do it at a higher level? OK, so let me just clarify the tenants. Basically, you have credentials for your local cloud, and you have credentials for every cloud that you borrow from somebody else. We borrow probably 20 to 25 clouds, and we've got credentials for every one of those clouds. So you know the credentials for your local cloud. When you log in, you add repositories; every tenant on your local cloud gets to see those repositories, but not every tenant can use them until they add their own credentials for those clouds, and then they become available on your image distribution page. Does that make sense? For example, on some of those clouds we were showing, we actually had only one tenant on the remote cloud, but we were propagating it to two tenants on the local cloud. So it's not a one-for-one match; it's whatever you need it to be, if that makes sense. Sorry, there's a question here.

First, did you try image compression? Image compression: we haven't, to be honest; we haven't tried it. Yeah? Because I'm giving a talk on Wednesday, and we have pretty much similar needs, because we also have issues with distributing images; I'm from CERN. And yeah, we have a very similar case, but we don't care about multiple clouds; we only care about one specific cloud and how fast we can distribute the image. And I'm going to give some examples: just by compressing the image you get something like 60% to 70% more speed-up. OK. So are you planning to contribute that? So the discussion here is about raw images versus compressed images. Anyway, if you're contributing that to Glance, then I think we're going to benefit, because that's the way we want it to be: we don't want to write any code that's already being done. In fact, for deletion, we don't even allow you to delete from the local repository through our page, because, hey, you get that through the default page, so we don't even replicate that functionality; you have to go to the default page to do that. Anybody else? Yes, sir, over here. Hello. So you have a couple of sites, right? A couple of regions.
So how many images do you have now? Five, seven? How many images? I know that we have at least 15 images on the cloud in Alberta, and I think we have somewhere around 20-something images on the cloud in Victoria, and we have other clouds around, so there are probably somewhere in the region of 35 images. And they go through quite a bit of churn, especially the older images. We've been trying to standardize the images as much as we can, get them smaller, and do a lot more of the software distribution through CVMFS, so we are actually narrowing down the number of images we need to manage, and they're much easier to move around than they used to be. But we still have anywhere up to about 40 images floating around right now.

Yeah, because we're considering building daily images, let's say, because there are a couple of environments, kernel updates, stuff like that; everything should be up to date. And there will be a problem, because when we build daily images we will have at least 30 of them at the end of the month, and then the naming will be problematic. I just wonder if you deal with that kind of problem, because since you are replicating, it seems that the image names are important to you guys. OK, so when we replicate, we replicate to the destination repositories with the same name the image came with; whatever name it had at the source, it will have on the new repository. If there are name conflicts, that's an issue we want to deal with; we want to flag it as an error, actually, highlight it and say, correct this. We currently don't do that, because we're careful in naming our images, but we'd like it to flag that as an error and give you the ability to change it easily.

As far as building images dynamically, as opposed to having fixed copies: in our field, typically the level of software that we need is very specific, and we don't want compiler updates happening or application changes coming in, because if you do the same calculation using different versions of the compiler, you can get different results, and then the science goes wrong. So typically we don't want dynamic images there. Do you agree with that? Yeah, OK. OK, thank you. OK, you're welcome.

Oh, yes, sir. In your presentation you talked about what is kind of net-new image distribution to remote sites, right? So what if you're updating an image and you want to redistribute it to remote sites? Is that handled at all? OK, so if the image is being updated, and then you want to distribute it after it's updated? After it's updated, well, OK. So basically, we change this image and you want to update it everywhere. Actually, let me tell you about the backup facility. The reason we included the backup facility was that there was a thought that when the local image repository changes, we should take a backup, and we know when that happens because we're doing the change. In the end we decided just to make it a standalone utility, because sometimes image changes happen other than through Glint, so we didn't do it that way. But the thought that comes to mind is that here is an example where, hey, I changed this image and I want it propagated, and I think that would be a possibility. Basically, our key to what an image really is, is not the name and it's not the UUID; the key is the checksum. If the checksums are the same, it's the same image, right?
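To illustrate the checksum idea, and the kind of automation it could enable as described next, here is a small hypothetical Python sketch. This is not a feature Glint currently has; it just shows how a periodic process could compare the checksums a local Glance reports against the remote copies and flag images whose content has changed.

```python
def find_stale_copies(local_images, remote_images):
    """Return image names whose remote copy no longer matches the local checksum.

    Both arguments are dicts of {name: checksum}, e.g. built from the
    Glance listings sketched earlier.  The checksum, not the name or the
    UUID, is treated as the identity of the image content.
    """
    stale = []
    for name, local_checksum in local_images.items():
        remote_checksum = remote_images.get(name)
        if remote_checksum is not None and remote_checksum != local_checksum:
            stale.append(name)  # same name, different content: re-propagate
    return stale


# Hypothetical usage: re-replicate anything that changed since the last sync.
# for name in find_stale_copies(local, remote):
#     replicate(name, source=local_cloud, targets=[remote_cloud])
```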
And so basically, if we saw that an image used to have this checksum but now has a different checksum, we could automate a change or a propagation on that basis. So I think it's possible; we don't do anything like that yet, but I think it's very possible to do. That's the only way you could then cron it, I guess. Well, you could cron it, but the service is running all the time, so theoretically we could put in a process to go around and check the images for you. We could do that; I think it's possible. We'd have to see whether that's something people really want. Oh, there's a question over here. Thank you.

So I want to clear up one thing. Is the image synchronized per tenant, so that each tenant needs to do that, or is the image distributed once with the admin credential and then shared with all the tenants? OK, I'm sorry, I'm having trouble hearing because of the noise. What I understand you're asking is whether you need to go in and distribute it manually, or whether that can happen automatically. No, I mean, do you distribute it with the admin credential or the user credential? OK, the images are always moved to any destination using your own credentials. OK, so that means if you have multiple tenants on the other cloud, then you need to synchronize or distribute it to each tenant. Basically, if you want to move an image from cloud A to cloud B, you have to have credentials on both of them, and normally you would enter the credentials once and distribute it once. OK, good. So the image shows up shared with my tenant, but it's not on the other two unless I do that manually. Yeah, you're right, OK.

So, gentlemen, I've just been told we're out of time, and we have to stop. Thank you very much for coming and listening to us. It's very much appreciated. Certainly.