We're going to say that we're ready to begin. Are you ready to begin? All right. Can you do some sort of yelling or something? This room is very, very quiet. He's a ringer; he's my co-worker. All right, so I am Carina Zona. This talk is called Converged Compute: Fast, Secure, or Cheap, Pick 3. I am joined today by, as I said, my colleagues. Lars Butler is also on the ZeroVM team; he's an engineer on the team. We are also doing a workshop immediately after this talk, and he will be leading that along with Egle Sigler and Cody Bunch, who are both from Rackspace Private Cloud, so they'll be here as well. The format is basically: do the talk, then the workshop, and we invite you to get hands-on with the stuff that I'll be giving you. I'm giving the high-level overview of what this stuff is, and then in between we'll do Q&A, so you can squeeze in all the questions about all the things at once. That'll be the middle between these two sessions. So as I said, my name is Carina Zona. You can find me everywhere on the internet as cczona, except in my own Rackspace email, where it's carina.zona at rackspace.com. They're special.

So: secure, fast, or cheap. We have gotten to an interesting point. Containerization is the great new hotness, but it's also forcing some decisions: we either get secure or we get lightness. And that really puts us in an awkward position, right? We're making some hard choices as if we have to. What I really want to talk to you about today is ways we can stop compromising on this stuff. We should be able to have secure and fast and lightweight, and it would be really great if it's cheap too, right? We can accomplish those things. Oh, I forgot to have a remote in my hand. OK, so what is ZeroVM? Well, it's a couple of different things. It's an open-source project, which is sponsored by Rackspace.
They acquired the IP last year and have continued to keep it open source, and there's a number of people behind it at this point; I think we're 12-ish or so. At base, it is a technology for fast and safe execution of untrusted user code. So there is the project, ZeroVM, and then there is the base technology, and that's the nutshell description of the base technology. Taking that down to some buzzwords, here are the key features: it's secure, it's lightweight, it's an application execution environment, it's very scalable, and it does process-level isolation. We'll go back into more detail on each of these, and the reason I want to give you this list is to plant in your mind how these things contrast with other related technologies in the same category.

So, secure execution. It is a secure execution environment. What do I mean by that? Well, a couple of things. NaCl is Native Client. It was developed by Google for Chromium. It is technology for isolating native code so it can safely execute in the browser. They put a lot of time and money into developing that; they've paid some tremendously large bug bounties. We get the wonderful world of open source, so we just take that for free. And we do. NaCl is providing two things. One is validation: before code can even be passed off to ZeroVM at all, NaCl does the work of validating whether this code is safe to run. It does this through static binary validation, and what I mean by that is that processes cannot jump where they shouldn't, they cannot communicate, and they cannot coordinate. If the code violates any of these things, NaCl will not pass it off to ZeroVM. The other thing NaCl does for us is lock down a number of syscalls. The Linux kernel has over 100 syscalls; NaCl reduces that down to a double-digit number. And then we take that even further.
So ZeroVM is essentially a trampoline. It gets passed the application, assuming that NaCl has validated it, and then we lock down the syscalls even further. At this point there are only six syscalls allowed within ZeroVM's environment: pread, pwrite, jail, unjail, fork, and exit. So we're deliberately creating a highly constrained environment, and that also means there is deliberately access to very few resources. If you want to do networking, you can't; that's not the app to write for this. If you want to work with file systems with persistence, you can, but only under a great deal of constraint, and you have to make deliberate choices about that. We call it channels. Channels are your method for choosing to communicate with the kernel. What we're basically doing is creating a virtual file system, an in-memory file system, and you can specify a channel that is treated essentially as just I/O and designate that the application process can write to that particular channel. But all of those are cases where you're choosing to open up the security model.

ZeroVM is really lightweight. And when I say lightweight, people always compare it to other containers, and I'm like, no, no, no. By lightweight I mean: let's compare to old-style VMs, to hypervisors. VMs are so fat, right? They share resources, first of all, which is an exposure vector. They're slow to spin up, sometimes as much as minutes. And they're such a resource hog, on a variety of levels: they're using the network, their use of hardware is really excessive, you have to pass stuff back and forth across the network in the conventional way we use them, and they carry all this resource load, all this stuff that you really don't need for cloud applications for the most part.
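To make channels concrete: a ZeroVM instance is started from a manifest that declares its program and every channel it may touch. The sketch below follows the general shape of the open-source manifest format, but the paths, field values, and limits here are illustrative, not canonical:

```
Node = 1
Version = 20130611
Timeout = 60
Memory = 1073741824,0
Program = /home/user/app.nexe
Channel = /home/user/input.log,/dev/stdin,0,0,1048576,1048576,0,0
Channel = /home/user/output.txt,/dev/stdout,0,0,0,0,1048576,1048576
```

Each Channel line maps a host file to an in-sandbox alias and caps how much the process may read from or write to it; anything not declared here simply does not exist inside the instance.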
So they're really not a great model for what we need to accomplish these days. Containers are the new hotness for totally valid reasons. I'm going to take a bit of water here. They are, though, actually less secure, ironically, than VMs, because they open up more surface area by sharing more resources. So we're actually increasing that contamination risk. And while they're not as excessive as a VM, they're still using far more resources than are necessary. So we have excess, just in the form of less excess. Excellent.

The typical metaphor for containers, especially for something like Docker, which is usually the first thing that comes to mind when you say containers, is a shipping crate, right? Think about what that metaphor implies: stackability, modularity, interchangeability, something really hardened around it. For ZeroVM, the better metaphor is an egg crate. It is about isolating those things. We don't care how they stack on top of each other; that's not the important point. The point is to cradle each of those processes and protect them from the rest of the world, and protect the rest of the world from them as well. Right now we're trying to use containers in ways they were not at all meant for. And so we do this thing of: if I just spend a little time hacking this, I can make Docker what I need it to be. It's so close. It's almost there. You can do that. It is a thing you can do. It is not a great thing to do. With ZeroVM, the kind of lightness I'm talking about here is 75 kilobytes for the total executable file. And what that means is that the spin-up time is about five milliseconds, depending on whether you execute in daemon mode or not. In daemon mode, five milliseconds or less is genuinely realistic.
If not, then 30 or 35 milliseconds is a more realistic number. I do have to clarify that those numbers apply just to ZeroVM. NaCl's validation time is its own amount of overhead, and then of course your application adds whatever overhead you choose to impose. But this is ZeroVM's footprint itself. So it's massively scalable, in part because of this. Oh, sorry. Back one.

All right, I said ZeroVM is a bunch of things. At its core, ZeroVM is just NaCl plus the ZeroVM runtime, called ZRT. And I hope I've already run through some of the ways you get secure, fast, and cheap out of those. We're using a lot fewer resources, and we're doing that very deliberately: for the most part, you're going to have to make explicit choices to use resources. It's fast. It's an extremely constrained environment; you'd have to make a lot of choices to actually lose those benefits. And because we have this lightness to benefit from, we can embed ZeroVM, including inside a data store. That gives us some neat opportunities, which Swift provides. Swift is incredibly scalable, right? So it makes a nice, easy target for doing fun stuff. It's got a great community supporting it, and it's got an API. We use that API to create some middleware that basically marries ZeroVM and Swift, creating some new functionality. It extends the API and allows you to do stuff like MapReduce inside Swift itself. You don't need to move anything outside of the data store. You don't need a compute cluster. You're doing that work on the storage cluster itself. Hey. Okay. So that portion is ZeroCloud, which is another piece of ZeroVM as a project. So we have ZeroVM, the basic technology, NaCl plus ZRT; and then ZeroVM, the technology, combined with Swift via middleware is ZeroCloud.
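To make the MapReduce-inside-Swift idea concrete, here is a minimal sketch of the kind of Python job you might push into the data store, assuming (per the channel model described above) that the object's bytes arrive on an input channel like stdin and only the small result leaves on an output channel; the function and names here are illustrative, not ZeroCloud's actual API:

```python
# A hypothetical "stored procedure": summarize a text object in place,
# so only the tiny result (not the data) ever leaves the storage cluster.
import sys
from collections import Counter

def top_words(text, n=5):
    """Return the n most frequent whitespace-separated words."""
    return Counter(text.lower().split()).most_common(n)

# Under ZeroCloud this would read the object from an input channel, e.g.:
#   for word, count in top_words(sys.stdin.read()): ...
# Run standalone, a small demo:
for word, count in top_words("to be or not to be", n=2):
    print("%s %d" % (word, count))
```

The point is that the query logic is ordinary Python, not a data-store-specific language, and the job's entire footprint is a short script shipped to where the data already lives.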
And that too inherits a lot of those benefits and creates some of its own in terms of secure, fast, and cheap. So let's look at some of that stuff. All right, unfamiliar buttons, sorry. One of the benefits you get out of this is that because you can cross-compile executables for NaCl, anything you're able to cross-compile you can then use essentially natively. So one of the things you can do is use Python as essentially your querying language. You write Python apps to do that work for you, to do the compute within Swift. You could think of those as stored procedures. Swift doesn't have its own stored procedures, but you're getting stored procedures in a language you're hopefully using every day, right? You don't have to learn the special flavored language of a particular data store; you're using knowledge you already have.

So let's look at some use cases. Compute on cold files, we found, is an incredibly useful application. Like I said, we're sponsored by Rackspace, which means we get to work within Rackspace, which means we have access to the obscene amount of data that is produced every day by a large cloud provider. And they gave us this problem: here are tons of varying log files from throughout the company, in all sorts of different formats, just tossed into the data store compressed. What we need is to find a needle in a haystack. Can you do that? Because when we try, it takes a bunch of time to decompress the stuff, we have to move it out to some other place to do that work, and it's taking five hours just to do any search whatsoever on these 17 gigabytes. And that was really just not a realistic amount of time; I mean, you're doing more than one search. So we took that on, and because of cross-compilation issues, we couldn't just run grep in ZeroVM.
So instead we wrote a Python script that essentially accomplishes the same thing. And this is a suspense moment. What we were able to do then is not only that search that was taking five hours with grep; we were also doing the decompression in memory, in ZeroVM, so nothing had to move out at all. So in that unfair fight, decompressed files already available versus compressed files that have to be decompressed in memory, we went from five hours to three minutes. That's the kind of speed benefit we're talking about here.

Next one: text analysis. Project Gutenberg, I assume you've looked and noticed they have a lot of words. We've done some fun analyses on some corpora from Gutenberg; we see a lot of potential there. Image manipulation, video manipulation, we've had a variety of examples. You can do a lot of stuff that's like ImageMagick; we've done things like watermarking, resizing, that kind of stuff. For video, one thing we're doing very successfully is essentially screen-capping: you periodically take a slice out of the video and pull an image out of it. So you can do something somewhere between YouTube and Flickr, in terms of screen-capping to represent what the content is. We had imagined we could do live transcoding of video straight out of the data store. What we found is that that's really, really hard, and it has nothing to do with ZeroVM. It turns out that video is really hard. Surprise. There's a reason there's a whole specialized industry just for dealing with transcoding. It's incredibly hard, and it would be really great to have that kind of expertise on our team. We just don't.
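The team's actual script isn't shown in the talk, but the trick it describes, decompressing in memory and scanning without ever moving the object, can be sketched in a few lines of Python; the function name and sample data here are made up for illustration:

```python
# Sketch of a grep replacement that works on a compressed blob entirely
# in memory: nothing is written to disk and nothing leaves the data store.
import gzip
import io
import re

def search_compressed(blob, pattern):
    """Yield lines matching `pattern` from a gzip-compressed bytes blob."""
    regex = re.compile(pattern)
    with gzip.GzipFile(fileobj=io.BytesIO(blob)) as stream:
        for raw in stream:
            line = raw.decode("utf-8", errors="replace").rstrip("\n")
            if regex.search(line):
                yield line

# Example: find the needle in a small compressed haystack.
logs = gzip.compress(b"INFO ok\nERROR needle found\nINFO done\n")
print(list(search_compressed(logs, r"ERROR")))  # prints ['ERROR needle found']
```

Because the generator streams line by line, memory stays bounded even for large objects, which is what makes this viable inside a constrained instance.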
We have investigated enough to think it is, in fact, viable to do stuff like: let's produce all those mobile formats at the same time. You need high resolution? Great, we can feed all of those out, whatever is being requested in the API, like boom. But we can't accomplish it ourselves. So that's a great project if someone else wants to take it on; it would be, in our minds, very viable.

Auditing: there are so many fields in which you've got regulatory requirements, compliance requirements, things like protecting who has access to files, or being able to prove the tracks back: how did these modifications happen? The security model provided by NaCl gives you a lot of security guarantees there, so that is a great opportunity. We've also been approached by people interested in embedding in things other than Swift, including hardware. Manufacturers of SSDs have been really interested in, hmm, how can I exploit this idea of having a little execution environment right there on my SSD? We're not pursuing that ourselves as a team, but that is another application you could look at. Trying to think of some others. There was another one with embeddability. Oh, yeah: antivirus researchers actually find this intriguing, the ability to be really confident in your isolation and still be able to do execution. So there have been a couple of antivirus companies who've been very curious about the possibilities here. Those are the major use cases we've looked at. I would also love to hear from you if you're thinking of something else that's potentially useful here. We're really curious ourselves. So, typically, as soon as you talk about ZeroVM, you hear: oh, it's like Docker.
A lot of the vocabulary is legitimately the same, and we are containing things, but we're doing it for really different reasons, in really different ways. So we've got this shared vocabulary that sounds like we're all talking about the same thing, and we're completely not. Let's run through it. Obviously the isolation is built on NaCl; Docker is using Linux namespacing. ZeroVM is at its heart an execution environment; Docker is about running isolated apps. The big emphasis with Docker, and one of the reasons it's so popular, is that it's so easy. Ease of use is right there at the top of their feature list, right? We do not offer ease of use. I'm sure that'll be on Twitter. The models are really coming from two different ends of the scale: security first, and we'll build things on that to make it easier; Docker is starting with the philosophy of ease of use, and they'll work on adding more security to that. So there are a lot of ways in which we're similar, but we're coming at it from completely different points of view, and that means that depending on what your particular problem is, one of these is probably a much better fit for you. This is not a matter of there being the right one and the wrong one.

Now, scaling. They're both very scalable, but the difference is we're talking about scaling execution, the ability to scale processes, rather than, say, scaling deployment of apps. So that's a really different scenario in which we're using terms that sound like exactly the same thing. In primary use, you're really talking about a production tool versus a deployment tool, and that's not to say they don't have some Venn-diagram overlap; obviously many people are using Docker for things other than just deployment. But those are the primary contexts in which you see these tools being used.
When the Docker conversation about isolation comes up, it's usually about isolation of an app, right? What we mean is restricting access to everything else. ZeroVM instances are isolated from the kernel. That is the point: not to layer neat little tidy applications on top of the kernel, but to keep everybody away from being able to talk to the kernel. So, a very different form of isolation. One of the strengths I actually didn't mention: determinism. ZeroVM is a completely deterministic environment, and we'll talk about that later as a constraint, too, on what kind of applications you can use with this technology. Docker: ease of use, portability, right? Those are all the things that come up when you talk excitedly about Docker. Because ZeroVM is deterministic, executables are going to run the same way every time: if the inputs are the same, the outputs are going to be the same. And because ZeroVM is also single-threaded, at any given point in time, if the inputs were the same, you can count on the process at that point in time also being the same. So you can get essentially snapshots, whereas the sort of sameness you get with Docker is about things like server templates. Isolation from the kernel, we've talked about that. Ease of use, we've talked about that. There is no process reuse whatsoever. Nothing is reused, unless you're talking about something you persisted to one of those channels; otherwise, that process dies forever and everything with it. That is not to say it's short-lived. The timeout starts out pretty small, but we did some math on the timeout: you can, if you want to, run a ZeroVM instance for 68 years, depending on what your problem is. Academia loves to run things for that long. So: disposable, not necessarily short-lived.
Fine-grained metering: if you can spin up in five milliseconds and you're just executing a process, all of these processes in parallel, essentially an infinite number of processes that you can run in parallel, it becomes really viable to meter in milliseconds as well, instead of in hours or even seconds. We can really get this down. We talked about embeddability; we talked about parallelization. ZeroVM is a much smaller project. We don't have anywhere near the kind of adoption that Docker and other containers like LXC have seen, where institutions like Google and Microsoft and Rackspace and New Relic have really gotten on the bandwagon. We're not in the same place in development; we're not as far along, but we also have a lot of different hard problems we're trying to solve.

Okay, so if Docker isn't exactly in the same space, there actually are a number of projects in a much closer space that are a lot more comparable. One is Joyent's Manta. Suspense again; it's all a trick. Manta is a platform as a service. It is very similar to the attributes I'm describing. One of the things they don't mention is security; it is not starting off with that foundation of, let's wrap all of this in a tidy little bow of isolation. It is also a proprietary service, not open source. So you're looking at something where it may be great, but if you need some flexibility in how you're using it, you're stuck. Hadoop: a lot of times when people ask me for the one sentence, what the heck is this, you kind of go, eh, it's kind of Hadoop-ish. And that's fair, but in some respects ZeroVM is the isolation, right? ZeroCloud is the converged compute. ZeroCloud is in many ways providing capabilities similar to what Hadoop can do, but there are a number of differences. One is that ZeroCloud is not a database.
It is merely some glue between Swift, the data store, and all the features that ZeroVM adds to it. There are also issues with ease of use, right? Have you ever tried doing MapReduce? I mean, you can do MapReduce in Hadoop, right? That's the big selling point. Have you actually done it? There's a lot of pain, no? Mongo: if you love JavaScript, Mongo is awesome. So I'll say.

So, I said there are constraints. There are a bunch of constraints. Most of them are deliberate; a few are just because of where we are in the course of development. Let's talk about those. First of all, it's x86-64 only. Secondly, you have to be able to cross-compile to the NaCl toolchain, and that eliminates a whole lot of things. What you do have available: CPython 2.7 is the one that's been ported. You have C. We actually have a Lua port that passes Lua's tests; I don't know whether anyone's used it. But anything you want to port is going to have to cross-compile, and that means any library with C extensions requires you to cross-compile each of those and all their dependencies, and it gets really complex. That's why we have core Python, but if you want things like SciPy and NumPy, you're kind of off on your own trying to make that happen, because those are really hard ports to do. Surprisingly, though, there's a tremendous amount of work that can be done with nothing but core Python, and we've had a lot of people say: I don't care, it's fine, I'm still able to do all the number crunching I needed to do. So typically you're talking about C, C++, Python, assembly, correct? Yeah, assembly, everyone's writing assembly these days, right?

As I said, it's deterministic. That means all the time and date functions are stubbed out. I had an interesting conversation with the guy who did this. It doesn't return a zero, it doesn't return a null, it doesn't return a random value or a consistent value.
It returns a constant based on, I think, the number of instructions executed at that point. Let's just say it's a number you shouldn't rely on for anything. It does return values, but because of that deterministic environment, if you need randomization, the only way you can get it is by passing in a seed. It's single-threaded, which means your application needs to be as well, or at least able to handle being in a single-threaded environment. Right now, and this is the one constraint that's really not deliberate, MapReduce is limited to 1,000 instances; that is, 1,000 ZeroVMs, 1,000 executing processes. We semi-know why and just have to dig down and fix it. And I want to be clear that that's not the same thing as file handles. Each of those instances can pass around a lot of file handles; in theory, each can pass around 1,000 file handles. No one's actually tried that, so don't call me on that number, but a lot. We'll go with a lot.

So all of these technologies, really, I like doing the addition, right? NaCl plus ZRT plus middleware plus Swift. They're all building blocks. And the great thing about that is, each of them has at least one, two, even three of these benefits: fast, secure, cheap. That means there are a lot of opportunities to build and assemble these things in different ways. For ZeroCloud, we've told you what that particular line of combinations is and what we get out of it, but you can use them as different building blocks to do very different things. Writing Swift middleware, it turns out, really opens a lot of opportunities. The middleware we've written essentially extends the API, which means you can either look at it as an example for the kinds of adaptations you'd like to do, or take advantage of it and extend it further.
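To circle back to the determinism constraint for a moment: because time is stubbed out and there is no entropy source, randomness inside an instance exists only if you supply a seed, and the same seed replays the same run, which is exactly what makes those point-in-time snapshots meaningful. A small sketch of the property, in plain Python outside ZeroVM, with illustrative names:

```python
import random

def simulated_job(seed, steps=5):
    """A stand-in for a ZeroVM process whose only entropy is the seed."""
    rng = random.Random(seed)  # in ZeroVM the seed would arrive as an input
    return [rng.randint(0, 99) for _ in range(steps)]

# Same seed, same inputs -> bit-for-bit identical run, every time.
assert simulated_job(42) == simulated_job(42)
```

Your application has to be written with this in mind: anything that silently depends on wall-clock time or ambient entropy will not behave the way it does elsewhere.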
What I would really hope you would do: we have recently started writing, I should say Lars has recently started writing, tutorials. That's something that's been lacking for a while, because this has been such an R&D kind of project. And we're doing a tutorial today, obviously, next. What I hope you will contribute, if you're looking for something to contribute back, is more examples for tutorials: things that seem relevant to you, or that you think are really creative and interesting uses, to let other people expand their imagination of what the possibilities are with this.

So, as I said, we're starting that workshop soon. How much time do we have before it starts? How are we doing? Oh, great, okay. So why don't you guys come up, and they can explain a little more about what the workshop will be, and then we can do Q&A for this and for that. Does that sound good? All right, I think I don't... oh, there's a break in between. Oh, we have tons of time. All right, hit me. Okay. The mic's up here. Oh, you've got one there too, okay. Can you describe how you can throttle the processes, say in a Swift model, if you have a bunch of these running on object storage nodes? What do you mean by throttle? Make sure that they don't take up all the processing assets on the computer. Go for it, yeah. There's a mic right here, Lars. Is this working? Yeah, cool. At the moment, the scheduling for the orchestration of the ZeroVM instances inside Swift is super simple and super naive, so you could easily just eat everything up. That's something that needs work and optimization. And I should add to that. One thing I forgot to mention is that Rackspace, in addition to funding the project itself, is also funding a bunch of academic research projects at the University of Texas at San Antonio, where they have a cloud big data doctoral program.
So there are a bunch of doctoral candidates working on various aspects of ZeroVM, and scheduling is one of them; that's one of those projects. So that's a couple of years of work being put into making the scheduler considerably less naive. That's part of where the project status is: it's essentially in R&D. I have more questions. Go for it. You do get two, yes. On a process, say you want to parse a big file: what happens if it takes a long time and the HTTP request times out? You have load balancers, you have all these things in the middle where, if there's no activity going back to the client, they'll just drop the connection. Maybe you can describe the life of a request coming in from a client. You know what I'm trying to say? So first I want to give some context: here's what happened. Lars just inherited the ZeroCloud code base; he's one of the new maintainers as of a couple of weeks ago. So we're going to cut him some slack, because while he knows a lot, he doesn't actually know that much about ZeroCloud itself yet. He's been on ZeroVM, but it just so happens that the poor guy is going to be asked questions he's not yet able to answer. Let's see if this is one of them. So, in terms of timeouts and the life cycle of an application, the only timeouts I have seen are the ones that are put on ZeroVM itself. Carina talked about how you can run these things for up to 68 years; that's actually a hard-coded value in the configuration of an instance. Typical values are like 60 seconds or whatever; obviously you'd want to tune that for whatever kind of jobs you're running. Those are really the only timeouts I've seen. Probably there's something else that would happen within Swift and the middleware; I'd have to look deeper into that. That actually answers my question. Okay, cool. Okay, thanks. Thank you. Why don't you stand up so the camera can catch you. Okay, more people.
Come on, I'm being blinded. You have to give some sort of reward. Ask questions. Another ringer. I have that suitcase there; I already demonstrated that. We have t-shirts. Not the red ones, I'm sorry, those are just for the team, but you get lovely blue ones. The red shirts are for the away team. We don't have many red shirts left. There is a reason why he just took over the code base. Someone else wore a red shirt. So the question is: why do you need security around the processes running in the data store? Let's say you're a service provider and you're running a Swift cluster. You want some guarantees that your various users can't talk to each other, intentionally or unintentionally. Technically you could just run Python code right on the host within the Swift cluster, but then you can do bad things. And Cody wants to say something as well. So it goes a little bit beyond just service-provider stuff. Oh, up on the stage. There's a camera pointed at the stage, yes. Stage. Uh-oh. So say you're processing large amounts of unknown data. You want to be able to work on that data. Is the microphone on? It is, it's fine. Okay, you can all hear me, because I can't hear myself. You're good. So: large amounts of data that are not necessarily trusted. You're acquiring them from somewhere, or you need to provide access to said data. One of the most interesting use cases, what got me really interested in this, is quantified-self data. When I go and work out, I wear a heart rate monitor, I wear a power meter, I wear something that tells me my lactate threshold, and so on. That's all really valuable data that I then turn around and upload, unencrypted, to a cloud service, right? We're still in early days, and honestly I don't really care that somebody knows I'm going to die of a heart attack in eight weeks, because that's probably a good thing to know, right?
Maybe they market some baby aspirin to me. But the idea is that we can turn around and, microsecond by microsecond, audit access to that data via these processes, and allow you to run analysis or operations on that data in a way that will not affect the data itself and cannot infect other processes. So if I give you access to my data, your process cannot get access to his data, right? Does that help a little? This is another layer in that model, right? And not everyone has the ability to do that. So if you're collecting a bunch of log files and you need to provide access to three different auditors, and they each need a different section of those log files, but you've just logged them all to one centralized location, you don't have the data segregated that way. You can still provide access via this mechanism.

So, just to give you a little bit of detail about how this actually works inside of Swift. When ZeroCloud receives a request to execute some code on this data, you can basically think of it as shipping the code to the data. The middleware in Swift will figure out where that data is physically located in the cluster, which file represents this data, and it will actually just give access to that file. And of course the node you're running on has data for anyone and everyone else that has an account on the Swift cluster, so you want that process to be given access only to very specific data, and you don't want it to break out, because it's just a process running on one of the Swift nodes. Does that make more sense? Okay, so the middleware has a clue about permissions in Swift, which user has access to what data. So it tightly controls that. What are you doing there? When it starts a ZeroVM, it has to explicitly declare which files it gets access to, and it will only give access to files that that user is authorized to access. We'll get to that a little bit in the workshop.
It's basically just a manifest file that's fed into ZeroVM when it starts. It says: this process is going to run this code, and it has access to this, this, and this file, and that's it. That's the trusted part of the code. It passes execution off to the user code, which can be doing any sort of horrible thing you can imagine, but it only has access to those resources. It can't just open a socket, and it can't just go grab another file that it wasn't explicitly granted access to. Does that help? Cool. Back there. Please use the microphone, for the sake of the recording.

The mic stand is too tall for me. You can tilt it, or you can pull it right off of the mic stand to hold in your hand. In terms of basic information, step by step, how to use this? Because it's a completely new concept for me. Where can I learn that?

zerovm.readthedocs.org. The docs are there. And another question: I have a test with a very, very big data size, petabytes, something like that. Does it work?

I don't know of anyone that's actually done petabytes of data. I can say a little bit about that: some of the UTSA research projects are specifically about creating data stores with ZeroVM for processing big data. I don't know what stage they're at, but that's exactly what they're looking at: processing much more massive data sets. That's not the part of the ZeroVM work we're doing.

Could you do that into the microphone again? Who are they?

Oh, I'm sorry, UTSA is the University of Texas at San Antonio. There's a blog post that gives a little bit of information, and if you want to email me, again it's carina.zona at rackspace.com, or tweet at me, cczona. I'd be happy to give you more information about that. More? Come on, there's always more. Ah.
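The manifest handed to ZeroVM at startup can be sketched roughly as below. This is an assumption-laden illustration: the key names follow the general shape of ZeroVM's plain-text manifests, but exact fields and limits differ by version, and every path here is made up.

```python
# Hypothetical sketch of the plain-text manifest ZeroCloud feeds to
# ZeroVM. Each Channel line maps a host file to an alias inside the
# sandbox; these are the ONLY I/O paths the untrusted process gets.
channels = [
    # (host path resolved by the Swift middleware, alias in the sandbox)
    ("/srv/node/sda1/objects/1234/data.log", "/dev/input"),
    ("/tmp/zvm_job_42/stdout",               "/dev/stdout"),
]

manifest_lines = [
    "Program = /tmp/zvm_job_42/wordcount.nexe",  # validated executable
    "Memory = 4294967296",                       # hard memory cap
    "Timeout = 50",                              # hard time cap, seconds
]
for host_path, alias in channels:
    manifest_lines.append("Channel = %s,%s" % (host_path, alias))

manifest = "\n".join(manifest_lines)
print(manifest)
```

Everything not listed is simply absent from the process's world: no sockets, no other files, which is the "can't break out" guarantee described above.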
Sadly, you're gonna have to go to the back of the room. There we go. So you've mentioned Hadoop and Mongo and other big-data solutions several times. Given the ability to bring the code to the data, do you see the future of ZeroCloud as a building block for distributed databases? Because that's a common theme for basically everything that's very processing-heavy and big data.

That's a very good question, actually. It's the subject of work by a couple of the PhD students at the University of Texas at San Antonio: they're trying to implement a SQL-like query engine on top of this platform. So I would say right now no, but that's something people are very interested in and actively working on.

You're allowed to ask questions too, Aglai. So I don't have a question, but I would like to encourage everyone to stay here for the workshop. I promise you it's going to be really easy. At the end of the workshop you will walk away with ZeroCloud on a USB key, and you will have applications running on ZeroCloud using ZeroVM. It's going to be pretty simple. We'll go from zero to cloud. There's something pithy there about zero to ZeroVM.

Can you put up your slide with your email address, please? Yes, I will. What we can do is go back to the beginning, actually. Close your eyes if you have seizures. So the easiest way to reach me is via Twitter, or you can reach me at carina.zona at rackspace.com. I'm sorry the email address isn't there, but the spelling is. And you're welcome to collect your shirt on the way out. How does this even open? This thing is enormous. I have sizes in women's extra small to men's four extra large, so don't be telling me I don't have your size. Men's start, I think, at small or medium. All the ones with the round collar are the men's; the v-necks are the women's.