In imaging, we take images at the MRI scale. We take them at the microscopy scale, where we're now nearly at the same scale as the Allen Brain Atlas, and we also do some work at the electron microscopy scale. So we take images across all the scales.

This is an in vivo system taking images of a mouse brain. We then take that same mouse, extract the brain from the braincase, and do ex vivo imaging as well. That produces a slightly bigger data set, with a resolution of about 30 microns; once we produce our averages, we get a final resolution of about 15 microns. We then take that same mouse and do histology on it too. As part of that we have what I like to call our ghetto-tech block-face imaging setup, which is a $3,000 digital camera pointed at the top of a vibratome. It works for our purposes, but if the Allen Institute would like to send one of their systems to us, I won't say no.

It results in this sort of image. So this is the in vivo imaging (which is very dark, so we'll skip past that), this is the ex vivo imaging of the same mouse brain, then the block-face imaging of that same mouse, and then the histology on an Aperio slide scanner. So then we get to play CSI, except we do it for real: we can zoom in on single neurons. That's one entire mouse through the whole processing pipeline.

What this means is that we have various data types and various data volumes. At each of those resolutions and modalities there's a certain size associated with the data, and that gave us a problem: we can't view this on a normal computer. It was a big-data problem, like everyone else's, and it's only going to get worse. One of the things we're working on now is calcium imaging, which a lot of you have seen; those data sets we acquire at five hertz for ten minutes.
So we're into the tens of terabytes of data here. How are we going to deal with it? Well, you look around at the tools in some restricted fields, like NIfTI in MRI, where they use lots of desktop software, and suffice to say it just isn't going to work for data sets this size. You simply cannot load two terabytes into RAM yet. If you'd like to give me a computer that can, I'll take it, but the point is we can't.

A lot of these projects also involve informatics-naive or informatics-averse collaborators, and this is the flowchart of how you deal with them. They'll come to your lab and acquire this data. They'll say, "give me it on a USB disk," and run away with it. Great. Then they'll come back fourteen times asking: which software do I need? How do I load it? What do I do with it? Where's the processing for it? We've all been through this.

So what do we do? We have to use existing paradigms and develop new tools around them. Look at the Allen Brain Atlas: they're using web-based portals to view all their data. Google Maps has been doing this for years. They expose multi-petabyte data systems, which we all use every single day on our smartphones. Any of us can access any point in those petabytes of data for the exact thing we want to know about, but we don't download the whole data set. Why don't we do this in neuroscience?

That's the question we asked ourselves about five years ago, so we set about developing a system to do it, and this is what we came up with. This talk from now on has nothing to do with neuroscience, sorry, and everything to do with informatics and nerdliness, but we'll continue on even though it's Saturday afternoon. This is a screenshot from the manual showing the various user interface elements, and over the next couple of minutes I'll go through some of the things I find interesting and useful for our neuroscience users.
The big difference we make with all our data in the centre is that we abstract the voxel sampling from the world space, which is what our previous speaker was just talking about. We need to get away from thinking about our pixel- and voxel-based data sets in terms of pixel number 14 versus 53, and talk instead about a world coordinate system in which we can reference all of our data across all of our subjects.

This is a static view; I'm going to do a live demo soon, because I like to live dangerously. It shows an MR image together with an Allen Brain data set that I built a few years back, when I got in trouble from the university for downloading a large number of their image data sets and produced one of these morphometric averages. We realigned it with our MR data at the top, and it means that if you browse through these points, the two views follow each other, because although they're different resolutions, we use the same coordinate system for both.

The system had to be fast or our users just wouldn't use it. This is a lovely scientific graph, but what it shows is that it doesn't matter whether the underlying data set is one gigabyte or twelve gigabytes: the access speed is the same, which is what we wanted, and we've tested this both on mobile and on good Wi-Fi.

We wanted arbitrary color mapping; that was important for some of our users. This is, I think, a one-terabyte scanning electron microscopy data set, and we can overlay colors on top of things, use user-defined color maps, and define the color-map ranges. We can overlay labels on our data sets in real time (I'll show you these things shortly), and we can also display metadata associated with those labels, so when a user clicks on a point we can say what's underneath it. It also has a mobile interface, so it'll work on your iPad or your Android device; you can access the website with either a desktop browser or the mobile system.
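The world-coordinate idea can be made concrete with a small sketch. This is not TissueStack's actual code, just a minimal illustration of the usual affine voxel-to-world mapping (as found in NIfTI or MINC headers, simplified here to scaling and translation); all names and numbers are made up for the example:

```python
import numpy as np

def make_affine(voxel_size_mm, origin_mm):
    """Build a 4x4 voxel-to-world affine for an axis-aligned volume.

    voxel_size_mm: spacing along (x, y, z); origin_mm: world position of
    voxel (0, 0, 0). Real headers can also encode rotations, omitted here.
    """
    affine = np.eye(4)
    affine[:3, :3] = np.diag(voxel_size_mm)
    affine[:3, 3] = origin_mm
    return affine

def voxel_to_world(affine, ijk):
    return (affine @ np.append(ijk, 1.0))[:3]

def world_to_voxel(affine, xyz):
    return (np.linalg.inv(affine) @ np.append(xyz, 1.0))[:3]

# Two co-registered volumes at different resolutions:
mr = make_affine([0.030, 0.030, 0.030], [-5.0, -5.0, -5.0])     # 30 um MRI
atlas = make_affine([0.015, 0.015, 0.015], [-5.0, -5.0, -5.0])  # 15 um atlas

# A click on MR voxel (100, 120, 80) maps to a world point...
point = voxel_to_world(mr, [100, 120, 80])
# ...which lands on a different voxel index in the atlas view.
ijk_atlas = world_to_voxel(atlas, point)
```

Because both views share the one world space, synchronizing them is just this round trip: voxel index to world coordinate in one volume, world coordinate back to voxel index in the other.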
We integrated some features around sharing, which means you can go to a point in the data set and get a URL for that point. The process is then that you talk to Charles Watson or someone and say, "I have no idea what this data is at this point. Can you click on this link and tell me where I've gone wrong in my structural parcellation?" Charles clicks on the link, because he's a nice guy, the web page comes up instantly, and he can tell me about that exact point. This is an average of some of our seven-tesla data so far.

So how does it work? Basically, all the image tiling is done on the server side, but there's a lot of work also done within the browser: the color overlays and those sorts of things. We've now got rid of the Java web-services layer and everything is done in C; that's just a bit of history.

Sharing and federation: the system also allows you to connect to multiple TissueStack servers with a single client. What that means is that you could overlay labels from the Allen Brain, if they happened to run TissueStack (talk to me and I'll show you how), with data running from the CAI. So in my view of the world, you can share data around the world with multiple servers. You'll note the map is in the correct orientation.

As for the underlying data, we natively read NIfTI, MINC, and Bio-Formats, so we cover some of the major data types out there. We can then convert that to a data format we call TissueStack Raw, where essentially we blast the image out in the three most commonly accessed directions; we do that for speed alone. You could then go whole-hog Google Maps, for those who've done any work with GDAL-type systems, and pre-tile the data set into PNG tiles across the entire data set for all the zoom levels you're interested in looking at.
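The pre-tiling step can be pictured with a little arithmetic. This sketch is illustrative, not TissueStack's actual on-disk layout: it assumes 256-pixel square tiles and a Google Maps-style pyramid where each level halves both dimensions, and it counts how many PNG tiles one 2D slice needs per level:

```python
import math

def tile_counts(width, height, tile=256):
    """Number of tiles per zoom level for one 2D slice.

    Level 0 is full resolution; each subsequent level halves both
    dimensions until the slice fits in a single tile.
    """
    levels = []
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if cols == 1 and rows == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

# A hypothetical 20000 x 20000 pixel histology slice:
for w, h, n in tile_counts(20000, 20000):
    print(f"{w:>6} x {h:<6} -> {n} tiles")
```

The point the arithmetic makes is the space-for-speed trade-off mentioned above: the pyramid adds roughly a third more tiles on top of the full-resolution level, but the browser ever fetches only the handful of tiles covering the current viewport.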
Of course, this means there's a speed penalty to pay up front when you pre-tile, and there's also a space penalty, but I think space is pretty cheap at this stage. There are lots of offline tools that go with it, so you can convert your data formats and set up pre-tiling. You can also embed a TissueStack instance into an existing div in your web page using JavaScript; there's an API to do that, so you don't need all the extra things around it and can incorporate just the viewer.

We release everything on GitHub. There's a fairly standard release schedule, and there are pre-built packages for most of the major Linux operating systems for your server. It's GPLv3 licensed: do what you wish with it, just contribute the changes back. We've had some uptake over the years; that's the graph from our blog site, where you can see a nice pretty graph and you can see who uses it. Good on you, America.

So that's the short, static version of the demo. These are all the people involved with the data and the development of it, and now we're going to have some... This is either going to be very exciting, or we can shift on to the next speaker. I'm not sure which way it's going to go; I'll let you know. I'm going to be doing this via my mobile phone, because we've all tried the wireless here. I think this is going to work even on this little screen. Did it work? Yes, it did.

So this is loading over my mobile phone on a 3G connection, and it works pretty well. This is about a two-gigabyte seven-tesla average morphology image, and we can move around, change the different views, zoom in, zoom out, and do all sorts of things. This is some block-face histology data, which we can also load, so we can swap the views and it will load it again. And this is what happens if we overlay a color map on top: we can choose from our list of color maps, and then it will actually do the rendering.
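The color-map rendering just mentioned happens per pixel on the client. A minimal sketch of the idea, not the viewer's actual implementation: grayscale intensities are clipped to a user-defined range, rescaled, and pushed through a lookup table. The LUT and values here are invented for illustration:

```python
import numpy as np

def apply_colormap(gray, lut, vmin, vmax):
    """Map 8-bit grayscale pixels through a 256-entry RGB lookup table.

    vmin/vmax implement the user-defined color-map range: intensities
    are clipped and rescaled before indexing the LUT, mirroring what a
    viewer does tile by tile on the client side.
    """
    g = np.clip(gray.astype(float), vmin, vmax)
    idx = ((g - vmin) / (vmax - vmin) * 255).astype(np.uint8)
    return lut[idx]  # shape (..., 3)

# A toy "hot"-style LUT: black -> red -> yellow -> white.
ramp = np.linspace(0, 3, 256)
lut = np.stack([np.clip(ramp, 0, 1),
                np.clip(ramp - 1, 0, 1),
                np.clip(ramp - 2, 0, 1)], axis=1)
lut = (lut * 255).astype(np.uint8)

tile = np.array([[0, 128], [200, 255]], dtype=np.uint8)
rgb = apply_colormap(tile, lut, vmin=0, vmax=255)
```

Doing this in the browser rather than on the server is what lets the user switch color maps or ranges instantly: the grayscale tiles are already cached, so only this cheap per-pixel remapping has to rerun.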
That particular part of the rendering is done on the client side, so it's fairly quick. This is the serial electron microscopy data set, which is about two terabytes, being loaded over the 3G connection; of course we only load the part we're interested in, so it's quite quick, and we can zoom around and move to where we want to go.

This is the overlay of the labels I was demonstrating before. You'll note that the two views here are synchronized with each other: when we click on one, it's a bit slow, but the command gets through and it updates the top window. Now, what I didn't actually say is that these two views here are completely separate web elements. We've tested this a little, but it's not quite ready for release: you can have these two divs in two separate browsers on different sides of the planet, so the idea is that I can click a point here, it sends a message over a WebSocket connection, and it updates the other view on the other side of the world. Maybe someone wants it. It was an artifact of how we designed it, so we said, all right, cool, let's do it.

So this is the overlay of labels; you can change the intensity and those sorts of things. And this is running on our own server. It's not just us, though; there has been some uptake. Some of you are familiar with the BigBrain data set, which quite a few people in this room are associated with. There are various volumes you can download. I love this one: "NIfTI file too large, won't work." So this is the BigBrain data set, once it refreshes, now loading from Canada; I believe that's where the LORIS server running this is, and Sameer will know. This is, I think, a three-gigabyte histology data set, and it works quite well, so we can zoom in and zoom out. It takes a little while for that to redraw, but it will come.
The previous demos I was showing of all the mouse data were running in a virtual machine in the Australian research cloud, on a single-core VM. TissueStack can also run across a distributed Apache cluster, so if you have huge data sets, you can make that work as well. That's about all from me. I trust we're back on time. Good enough. So thank you very much. There must be questions, I believe. In the back first.

"Very interesting work. Do you plan to extend that for 3D volume stacks as well?"

No. We looked around, and there are lots of toolkits already able to do that. The design decision we made was to write everything in HTML5; it's all based upon a canvas element. That means you can use any of the WebGL toolkits and just add that data in. The coordinate system is all exposed by the API, so the idea is: pick any volume-based representation and just overlay it on top of TissueStack, or vice versa.

"Okay, I was more thinking of querying the data through a REST interface and then rendering it in a native application."

No, it's really made for voxel data only, because we saw that as a gap. There are other systems, I think, for doing what you're after.

"So maybe I missed it: which data set is it? At which spatial resolution is this here?"

I think I clicked on, let me see, the 100-micron isotropic data set.

"Would it also eat the 20-micron?"

Yeah, it's no trouble. We've tested up to two-terabyte data sets with no trouble. It takes a long time to pre-build the data, but you do that once. Sameer is looking worried.

"And this, what you have shown here, is there a service? Who can actually reach this and possibly use it?"

Yes, everything I've demonstrated is available on the open web, at tissuestack.org; it's all there. And this is running out of the LORIS BigBrain site, so that's all publicly available.

"When you show the, let's say, mouse brain there, with the parcellation structures and so on:
from which atlas are those parcellations?"

That's from the AMBMC project, the Australian Mouse Brain Mapping Consortium, which was run in Australia. All those structures are available on the web at imaging.org.au; just download them.

"Great. Just a quick question. This is really great work. Obviously we have some software as well that does some visualization, but I'm actually tempted to go down this road. How complicated is it to install? We talked about it briefly, but are there any hang-ups, or can I just install it in the next hour here? Is that possible?"

We would hope so. The first version you installed at the BigBrain site was version one; if you look at the interface, quite a lot has changed. At that stage the install process was manual. Now there are Debian, Ubuntu, Red Hat Linux, and CentOS packages, so you install one of those and then there are one or two steps to go through, but it's a lot smoother than the original, very bumpy ride you had.

"Okay, and the image format is MINC?"

MINC, NIfTI, Bio-Formats, DICOM.

"Okay. Except where the NIfTI file can't handle things that big, I guess."

Well, look, if you have enough memory in a machine to load a 30-gigabyte NIfTI file, go ahead, it'll work. But the NIfTI libraries require you to load everything into RAM in the first instance.

"Okay, thanks, very nice."

Okay, thank you very much again.