Welcome to another edition of RCE. I'm your host, Brock Palen. You can find us online at rce-cast.com. You can also follow me on Twitter at brockpalen, all one word. That's Palen with an E, not with an I. That's been a common mistake lately. I also have here Jeff Squyres from Cisco Systems and Open MPI. And, I keep forgetting, hwloc. He's also hwloc. That's right. So I saw Brice send out an email that he's going to be at SC, so you can meet the hwloc guys and some other guests that we've had on the show before. And we're going to be giving a talk in the Cisco booth. Oh yeah, I'll give a talk about hwloc in the Cisco booth. So come there, get your free Cisco shirt, and if you're lucky, get your free RCE shirt. Oh yeah, yeah, we have the RCE shirts to give away. Both of us will have some. Jeff will be around the Cisco booth, and I will be just floating around all over the place, as Michigan does not have a booth. OK, so our guest today is Daniel Templeton, formerly of Sun, now of Oracle, representing the Oracle Grid Engine, formerly Sun Grid Engine. And hopefully Dan can set us straight on a lot of the confusion around the transition of Sun into Oracle. So Dan, why don't you take a moment to introduce yourself? Sure, so yeah, I've been at Sun for, well, way too long. I drank quite a bit of the Kool-Aid and bled Sun blue for many, many years. I started out in the Grid Engine team as one of the developers and then just kind of couldn't keep my mouth shut and eventually ended up as the product manager, which is where I sit today, now at Oracle. OK, so you've been product manager of SGE, OGE. I've seen Oracle Grid Engine out there, but a lot of people still call it Sun Grid Engine. What is the official name of it right now? Is it Oracle Grid Engine? The official name is Oracle Grid Engine, although all of the binaries are still prefixed with SGE. So if you look, there's sge_qmaster, sge_execd, et cetera. And I don't think we're actually going to change that.
OK, so can you give us a rundown? What is OGE? A scheduler, a resource manager, a product like Globus? What is it? Well, OGE is the alpha and the omega. It is the universal resource manager. The whole idea of Grid Engine and the other similar products that are out there is to take a set of resources and a set of incoming workload and make the best use of those resources to satisfy that workload. And the place where Grid Engine really shines is when you have, say, multiple users from multiple different organizations running multiple different applications, and they're all competing over those resources. And the more users and groups and applications you throw at it, the more interesting Grid Engine gets, because scalability is one of our hallmarks. So what's the history of SGE and OGE? How did you come to be where you are today? Where did you start from, and all that? Yeah, so Grid Engine came about at a company founded in Germany in 1990, maybe 1989, way back when. Grid Engine is a very old, very mature product at this point. Sun picked the company up in 99, I believe it was, and then released the first open source version of it in 2001. So it was actually a closed source product prior to Sun's acquisition of Gridware, and then Sun opened it up in 2001. So we had the product and then also the Grid Engine open source project. And that went on merrily for a while under a variety of names: it had been Sun Grid Engine, Sun Grid Engine Enterprise Edition, Sun N1 Grid Engine, back to Sun Grid Engine. Then the Oracle acquisition happened, and now we are Oracle Grid Engine, and it's actually okay to just colloquially call it Grid Engine. Well, that certainly simplifies things quite a bit. So you mentioned a second ago that scalability is one of your hallmarks. What do you mean by that remark?
Well, so if you go look, for example, at one of our top customers: if you've ever talked to anyone from Sun, in the first three minutes of the conversation they will bring up the Texas Advanced Computing Center. We were all brainwashed to do that. Texas Advanced Computing Center's Ranger system, which debuted I think at number three on the Top500 list, is 63,000 cores running under a single Grid Engine master. Grid Engine is designed for scalability in these cutting-edge HPC-type environments. TACC is the largest system in deployment that I'm aware of, but they are far from the ceiling of the product's capabilities. So when you say scalability, do you mean scalability of cores, or scalability of queue depths, or policy limits, or what exactly do you mean by scalability? Because frequently that word is just kind of bandied about without real strict definitions. Sure, so actually I mean in all categories, although not necessarily all at the same time. So if you look at, for example, we've got a customer in the EDA software space that's doing 30-some-odd million jobs a month in a relatively small cluster. So throughput, we can handle a really decent number. We can handle a really decent queue depth, particularly if you get into parametric or array jobs. Grid Engine does that better than most other schedulers out there. The way we treat an array job, a million-task array job is no different than a one-task array job to us. And so you get really great scalability with regard to array jobs or parametric jobs. But even so, if you talk about a queue depth of a few hundred thousand jobs, that's not a big deal. And yeah, based on cores, for us scalability is really decided by queue depth and the number of machines that you're managing. It really doesn't matter how many slots are on those machines, because slots for us is a very arbitrary concept. So whether these machines are single core or 16 core is irrelevant as far as scalability goes.
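To make the array-job point concrete: a parametric job is stored as a single job plus a task-ID range (what a submission like `qsub -t first-last:step` expresses), and each task works out its own slice of the input from its task ID at run time. A minimal sketch in Python; the slicing helper is my own illustration of the idea, not Grid Engine source:

```python
# Sketch: why a million-task array job is cheap to represent. The scheduler
# keeps one job record plus a (first, last, step) range; each running task
# derives its work item from its task ID (exposed to jobs as SGE_TASK_ID).

def task_ids(first, last, step=1):
    """All task IDs in a first-last:step range (stored as three ints, not a list)."""
    return range(first, last + 1, step)

def my_chunk(task_id, first, step, items_per_task):
    """Which input items a task should process, given its task ID (illustrative)."""
    index = (task_id - first) // step          # 0-based position in the range
    start = index * items_per_task
    return (start, start + items_per_task)

# A million-task range is just three integers until tasks are dispatched.
ids = task_ids(1, 1_000_000)
print(len(ids))                 # 1000000
print(my_chunk(7, 1, 1, 100))   # task 7 handles items 600..699 -> (600, 700)
```

The point of the sketch is that the job's bookkeeping cost is constant regardless of task count, which is why a million-task array job is "no different than a one task array job" from the master's perspective.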
So what about managing dispersed systems? We call it a grid engine, and a lot of people think of grids as dispersed machines, volunteer computing, something like that. How does it compare to something like, say, BOINC or Condor? So, well, it's kind of a philosophical difference, I think, between say Grid Engine and Condor. Grid Engine does not assume that it is strewn out all across the world in the way that some of the grid purists might define grid. And grid was as specifically defined back in the day when Grid Engine was named Grid Engine as cloud is specifically defined today, right? Which is to say it wasn't: there was no specificity to the definition then, and there isn't now. So what we consider grid is being able to put your machines together to aggregate the compute, aggregate the resources, and derive greater business value out of it. We do differ from the grid purists who say it's not a grid unless you've got a machine in London and a machine in Tokyo and the master sitting in California and everybody's sharing work around. And in a large degree that's because, in a practical sense, that's just really hard, not necessarily from the scheduler's perspective but from the making-use-of-it perspective, because it's all about the data, right? You don't want to go ship terabytes of data all over the world; that's just a bad thing. So looking at things like Condor: Condor is more of a, like you said, voluntary system where the nodes are independent and are volunteering to participate in a cluster so that everybody can get some work done, kind of a SETI@home-ish sort of attitude. Whereas Grid Engine assumes that there is a set amount of dedicated resources, this is in a data center, there is a master that owns the machines, and it's more of a union model, right? The orders are passed down from above, and the nodes that are in the compute grid are slaved to the master.
Okay, so would you say that the philosophy is similar to Slurm, Torque, some of the other resource managers that are out there? Absolutely. So can you integrate... you said that you have a scheduler; how complex a scheduling can you do? Grid Engine supports about as complex a scheduling as anyone would be capable of understanding. One of the problems that you can easily get yourself into with Grid Engine is that there are so many knobs and switches and buttons to play with on the scheduler that you can end up with configurations that are either so complex that you can't tell if they're working or so complex that they're not particularly useful. But we support all the good stuff: fair-share scheduling of a couple of different varieties, ticket-based policies, being able to do fine-grained resource quotas, being able to do advance reservation, resource reservation to prevent starvation of large jobs. Pretty much anything that's out there in the other scheduler systems as far as scheduling capabilities, we also support. And going forward, we're looking at doing some really interesting things. We've got a guy from the old Sun Labs group who has done nothing all his life but scheduling work, and we're working with this guy to apply some of his experience and skill set around the mathematical aspects of scheduling to do some really interesting stuff with Grid Engine. We'll see where that goes. So what's the actual architecture of this? Are the queue and scheduler all one process, or are they independent? What actually goes on a compute node? What goes on a submit host? That's actually one of my favorite questions, because I think we have a particularly good answer for that. So with Grid Engine, you've got a master process: the queue master (sge_qmaster) is a multi-threaded daemon. The scheduler is a thread in the qmaster.
It's got a couple of threads for handling incoming communications and a couple of threads for handling events, and so on and so forth. So you've actually got the advantages of a fully multi-threaded master, which not a lot of the other DRM systems have; they haven't gone through that pain. We went through that pain round about 2005. It sucked, but we're on the other side of it now, and we're able to leverage the advantages of having that multi-threaded architecture. On the client side, there is a daemon that runs, and again, it's multi-threaded with regard to communication. So you've got, in total, a master daemon, and then on each execution node there is an execution daemon. There is one port that you open for the master. There is one port that you open for the execution daemon. All very, very simple to keep track of. So let me change direction here a little bit and ask about the elephant here in the room. There's been a bit of a brouhaha on the interwebs about the licensing issues, and then there's been talk of a fork on the open source side. What can you tell us about this? Can you clarify any of the rhetoric that's been going around? There are people talking who are angry and not necessarily talking with their brains. So I wonder if you could just kind of clarify the whole situation for us. I'm not sure that anybody's really all that angry. I think people are just a little bit confused about what's going on, and rightly so. So there are a couple of different issues coming up there. One is: is Grid Engine alive? Absolutely, hell yes. We've got a roadmap coming up that I think is going to be really interesting. We can get into this a little bit later, and I think we will, but it's fairly logically what you would expect a company like Oracle to do with a product like Grid Engine. The other part of it is: okay, it's not free anymore. It hadn't actually been free for a while now. It's been maybe two years that it's not been free.
Sun had this phase that we went through where everything was free. We were going to give it all away for free and make up for it in volume somehow. I'm not sure how that was going to work. And for Grid Engine, it really didn't work, and we actually exited that quite a while back. So Grid Engine has been under a 90-day eval license, as opposed to the free-for-everything-forever license, since 6.2 update 2, I believe. People are only just now discovering that, and I'm not sure how that slipped by. It's not like we kept it a secret. So anyway, that's the second thing. And then the third thing is what's going on with the open source. It's no big secret: if you go out to the open source website and you look at the CVS logs, we haven't checked in anything since the acquisition. That you can take for what it is. I honestly can't say whether we're going to continue contributing to the open source, if we're going to do a delayed commit, if we're going to adopt the MySQL model of doing open core, or if we're going to do something like the Solaris model of delayed commits. Right now we're obviously not doing any commits, and I don't know where we're going to go from here. The forks: there's actually, I think, two forks now, if I'm reading this correctly. I saw it on the mailing list today. So there's the Open Grid Scheduler fork, and I love the guys that are doing the fork, right? One of the great things about Grid Engine has always been the community. And so I'm annoyed that there's a fork, but I love the guys that are doing it, so I can't really be annoyed with it. So Open Grid Scheduler, they're going to try to have a release of that before Supercomputing. And there's another one that just came up. I find this amusing: they call it SGE, the Son of Grid Engine. That's great. Yeah, so go out to the mailing list. They just sent out the link, last night or this morning, for that one.
And so if Grid Engine lives on, if they fork off the open source and it lives on that way, that's fine. The plan, at least for Open Grid Scheduler, is that it's a worst-case-scenario fork. In other words, they're doing the fork just in case Oracle decides not to return to committing to the open source. And at any point that Oracle does finally come back to doing it, they're willing to just give it up and go back to whatever the main source base is that Oracle's offering. Okay, well, that's fair enough. So then let me ask: what do you guys get out of the community? If you haven't really given anything back code-wise, what is Sun slash Oracle's participation? Well, so aside from contributing code, we're still doing all the participation that we always did. The Grid Engine community is built up around the mailing lists on the open source site, and we're still participating full bore in answering questions, helping customers figure stuff out, helping guys get up to speed on Grid Engine on those aliases. So the only thing that's really changed since the acquisition is that we're not pushing the source code out right now to the open source site. As for what we get out of the community: I've been on this team since just after the point where they open sourced, and it's been really interesting to watch the community grow up around these mailing lists. Back when I first joined, it was a mandate that all the developers would listen in on the alias and answer questions, right? We were trying to build this community. And at this point, it's rare that one of the product engineers actually gets to answer a question, because the community is self-supporting at this point. We've got some really impressive folks, and I don't know what their day jobs are, because they know more about Grid Engine than any one person should. And they promptly answer questions 24 hours a day. So we've got a really strong community.
And these are actually the guys who are out there doing the forks too, because they're highly invested in Grid Engine. So it's going to be a little bit interesting to see how the community manages to divide itself among the three current source bases for Grid Engine out there in the open source. And I'm going to do what I can to try to get all of that unified in some sensible way, right? The last thing we want to do is have this thing fracture and fragment into a non-useful community. Okay, so we talked about the status of OGE, SGE, and what some of the features were. Going back to comparing it to some of the other resource managers and schedulers out there, what would you say is the biggest strength of OGE versus some of the other products you've seen? Well, it kind of depends on exactly which one you're comparing against, because they all have their own unique spin on how they approach the problem. The things Grid Engine does really well: awesome at scalability, and obscenely flexible, I would even say. Grid Engine is flexible to the point that there are more things you can tweak than most people understand. It's kind of funny: I've been on this product now for just about eight years, and still, about every month I will come across a feature and go, wow, I had no idea it could do that. There's so much stuff buried in there. And a lot of it is around the ability to configure it in interesting and unusual ways. For example, once upon a time, I used Grid Engine... this is a really inappropriate use of Grid Engine, but it was a fun use. I set up Grid Engine as a way to manage an application server cluster. As the latency on HTTP requests went up, Grid Engine would automatically go start new app server instances. And it was a really odd bastardization of the concept of a grid scheduler, but because of the flexibility that Grid Engine offers, it was not a particularly challenging thing to set up.
Other things that we do well: you know, Grid Engine is zero-touch, right? You don't modify your applications, you don't recompile, you don't relink; you just run them. The applications for the most part don't even know that Grid Engine's there. One of the areas we're doing really interesting work in now is Hadoop. So Hadoop is this great technology that everybody's interested in, but it's a very, very young technology. And it's still kind of at the point where it's neat for developers, but as you start trying to talk to the IT guys about Hadoop, there are some real obvious issues that come up. And oddly, or maybe serendipitously, Grid Engine actually plugs all those holes. So if you use Grid Engine as the framework on which Hadoop runs, most of the issues with running Hadoop in an enterprise IT environment kind of go away. So there are some really interesting opportunities cropping up around Grid Engine and our support for the Hadoop environment. So talk to me, though, shifting back a little bit to HPC. Talk to me about the zero touch with regard to MPI applications. How's your integration with the various MPI implementations out there? So that goes back to the flexibility thing. I am not aware, off the top of my head, of an MPI that we aren't able to support. We effectively have the same... let me rephrase that: Platform LSF effectively has the same MPI integration framework that we do. It is a script-based framework where you can go do whatever you need to do to get an MPI to work. Some of the MPIs, like the ones based on Open MPI, are natively aware of Grid Engine. For the ones that aren't, you just go out, mostly to the open source community, where we've got howtos posted: these are the scripts that you need to put around your MPI implementation to get it to work with Grid Engine. And then we've got customers that are kind of pushing the boundaries on what you can do with MPI, as far as being able to suspend and resume things and what their needs are.
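As a sketch of what those integration scripts typically do: a parallel environment's start script reads the hostfile Grid Engine hands the job (via the `$PE_HOSTFILE` environment variable) and rewrites it into whatever machinefile format the MPI expects. The four-column layout assumed here (host, slot count, queue, binding info) and the helper name are illustrative; check your installation's `sge_pe` documentation for the exact format:

```python
# Sketch of a start_proc_args-style conversion: turn the $PE_HOSTFILE that
# Grid Engine gives a parallel job into an Open MPI-style machinefile.
# Assumes the common 4-column hostfile layout; this is an illustration of
# the script-based integration framework, not SGE source.

def pe_hostfile_to_machinefile(text):
    lines = []
    for raw in text.splitlines():
        if not raw.strip():
            continue
        host, slots = raw.split()[:2]          # first two columns: host, slot count
        lines.append(f"{host} slots={int(slots)}")
    return "\n".join(lines)

sample = """node01 4 all.q@node01 UNDEFINED
node02 2 all.q@node02 UNDEFINED"""
print(pe_hostfile_to_machinefile(sample))
# node01 slots=4
# node02 slots=2
```

A real wrapper would write this out to a temp file and pass it to mpirun; the point is that the whole integration is a small amount of glue scripting rather than code changes in the MPI or the application.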
So MPI is certainly something that we support, and we support it well. So you mentioned that you had used SGE to automatically kick on more of these application server instances. What's entailed in doing that, having SGE kind of monitor the state of something instead of a user driving a queue, pushing jobs into a queue? So I can give you the answer of what it is I actually did, which I think I probably did back in 2004, or I can tell you what the current way to do things is, which is probably the more useful thing to do. So somewhere around 2005, 2006, we introduced this new submodule for Grid Engine called Service Domain Manager. And Service Domain Manager is kind of the corollary to Grid Engine, except for services. So Grid Engine is about making sure your jobs get done. A job is something that goes out, executes, and ends. Whereas a service is something that you hand over and you really don't want it to end. In fact, you might want it to multiply out across other servers, across other hosts, based on the incoming workload. So Service Domain Manager is this thing that sits there and brokers resources among services, and it actually considers Grid Engine to be a service that it brokers. So you could, for example, have an application server cluster that has a Service Domain Manager watching it, and it's sitting beside maybe your Grid Engine cluster. And depending on which one's busier and which one's more important, the resources may migrate from one side to the other. Something that we're going to be coming out with very soon here is a friendlier generic service adapter plug-in for Service Domain Manager, so that you can do that kind of integration with whatever services you have, at a scripting level, without having to write any code. There are a couple of interesting things that Service Domain Manager brings to the table aside from the ability to broker resources among services.
One of them is that it can plug into the cloud automatically, so it is essentially able to treat a cloud service provider such as Amazon EC2 as a service that never has demands of its own but always has free resources to share. And it can also do power management, where essentially you create your own greedy cloud, if you will, that always wants resources, and when it gets them, it powers them down. And when those resources are needed by another service, it can power them back up and hand them back. So you get some really interesting opportunities for cost savings, both by reducing operating costs through turning things off that you don't need at the moment and also by not having to buy the machines in the first place, farming out your peak capacity to the cloud where that's relevant. And you always have to throw in that big caveat, because a hybrid cloud model is far from ubiquitous. Let me ask you another forward-looking question, because a particular feature that I've been working on over the past couple of months for Open MPI is revamping our support for processor and memory affinity, simply because, for example, Cisco even sells servers right now with 64 cores, right? I'm sorry, 32 cores and 64 hyperthreads. Core count is going up. What are you guys doing about that? Well, Grid Engine has had topology-aware scheduling since 6.2 update five, I think it is where we brought it in. Yeah, it was update five. So you have the ability now, when you submit a job, to say, and this is the way I want it bound. And we used all the same sorts of metrics as the Open MPI work, so you can have striding, or you can have linear, or there's a third option that escapes me at the moment, for how you can specify your processor affinity, and that maps down to processor sets in Solaris and Linux. And of course it's subject to the behavioral characteristics of the operating system.
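The linear and striding strategies just mentioned are easy to picture as rules for picking core indices. A rough sketch (the selector functions are my own illustration of the selection rule, not the actual binding implementation, which maps the selection onto the machine's real topology):

```python
# Sketch: the two binding strategies named above, expressed as selectors
# over flat core indices. "Linear" takes n consecutive cores; "striding"
# takes n cores a fixed step apart (e.g. one core per socket, or skipping
# hyperthread siblings).

def linear_binding(n, start=0):
    """n successive cores starting at `start` (a linear:n style request)."""
    return list(range(start, start + n))

def striding_binding(n, step, start=0):
    """n cores, `step` apart (a striding:n:step style request)."""
    return [start + i * step for i in range(n)]

print(linear_binding(4))        # [0, 1, 2, 3]
print(striding_binding(4, 2))   # [0, 2, 4, 6]
```

On a machine where even core indices are the first hardware thread of each core, a stride of 2 is one simple way to avoid landing two ranks on sibling hyperthreads; the operating system then enforces the actual binding.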
So I believe it is on Linux that when you do a processor set, only that thing is allowed to run. No, it's the other way around... oh hell, I don't remember exactly the specifics of it. I think Solaris is more restrictive and better about it than Linux is. Yeah, they do it differently. That's all I remember at the moment. But yeah, so anyway, it maps down to the OS, and the OS is able to then bind the processes to the cores as appropriate. And it also plays well with MPI. So there's an option that you can pass to the binding parameter that says: by the way, this job is an Open MPI job, so instead of trying to go do this yourself, just pass all this information off to Open MPI and let it worry about it. And then how do you guys handle that in terms of scheduling? Like, does your application say, oh, I want every one of my executables to take an entire socket, and so I want 500 sockets, go figure it out? Or do they still have to express it in terms of, say, nodes and cores or whatever? So the syntax expresses how you want to pick cores and sockets on the machines to which you were assigned. It's essentially giving it a template for how to fill in a machine once you're assigned the machine. I see. So: I need four cores on the same socket, or I want the first core of each of the first four sockets, or whatever it is that you want. Or: I just want four cores, and I don't really care where they land. So you specify how, on any given machine, you would like things to be laid out, and we do our best to honor that. And that's actually playing on one of the other strengths of Grid Engine that I failed to mention before, which is the fact that we have an extremely extensible resource model. Basically, anything that you can programmatically measure, we can treat as a resource and schedule against. And the way that we're really doing the core binding is that we're exposing the topology of the machine as a resource.
So when you go look at the machine, say you've got a 16-core machine that's a quad quad-core: if you go look at it, there'll be a topology resource, and the topology resource will say something like SCCCCSCCCCSCCCCSCCCC, this long string that says it's a socket and four cores, and a socket and four cores, and a socket and four cores. And that is what we're using to schedule against. I see. So are you capable of scheduling, say, 17 of those 32 cores to one job, and then another five cores to a different job, and then another seven cores, and so on, and kind of doing a dense population even though they might not be contiguous within the topology of that machine? Are you able to schedule and manage all that? So it depends on your MPI configuration, but yes; that falls under the way that you configure MPI for Grid Engine. It could be that the MPI demands that every machine have the same number of instances, the same number of processes on it, or you could have just a fill-up pattern where you would have some on this machine and then spill over to some on the next, and spill over to some on the next. And in that case, the processor topology would be decided by that template that you provided, and once the template's exhausted, you just fill out whatever else is left. So what about hyperthreads? Can I submit a job and say I'm okay using hyperthreads, or I don't want hyperthreads? So given the flexibility of Grid Engine, I'm certain the answer's yes, but off the top of my head there's nothing built into the product that... well, actually, that's not true. The topology-aware scheduling does. So you might have a topology string that looks like SCTTCTTCTTCTT, right? So it's aware of the hyperthreading on the cores, in which case you can actually schedule against that. And for us, the way that we manage slots, right, slots is just kind of an arbitrary concept.
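Those topology strings are simple to interpret: S marks a socket, C a core on the preceding socket, and T a hardware thread on the preceding core, so counting characters recovers the machine shape. A small illustrative parser, assuming that string form as described in the conversation:

```python
# Sketch: recovering the machine shape from a Grid Engine-style topology
# string, where S = socket, C = core, T = hardware thread. The string
# format here follows the description above; treat it as an illustration.

def parse_topology(topo):
    return {
        "sockets": topo.count("S"),
        "cores": topo.count("C"),
        "threads": topo.count("T"),
    }

# A quad-socket quad-core machine: "SCCCC" repeated four times.
print(parse_topology("SCCCC" * 4))      # {'sockets': 4, 'cores': 16, 'threads': 0}
# One hyperthreaded quad-core socket.
print(parse_topology("S" + "CTT" * 4))  # {'sockets': 1, 'cores': 4, 'threads': 8}
```

Because the topology is just another resource string, the scheduler can match binding requests against it the same way it matches any other resource request.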
So you could have a 16-core machine with one slot or a one-core machine with 128 slots. We don't really care. You just tell us how many jobs you're allowed to run on the machine, and we'll work it out. So does SGE have something so that when I install the daemon that goes on the cluster nodes, it has a way to figure out what's installed, that this is a dual-socket quad-core with hyperthreads and has a memory layout like this? Does it figure all that out? Does it use a library like hwloc or something like that, or did you do it all yourself? I don't believe we use a third-party library to do it. I'm actually not intimately familiar with the core binding source code, but it is handled automatically when you start up the execution daemon on the machine. It goes and figures out what the topology is; it queries the machine. So you mentioned Solaris and Linux. Do you support any other operating systems, or do you support any particular hardware? What's your support matrix like? So we are completely hardware agnostic. We don't care what physical machine you run on. Operating system support: we run on anything that you would care to run it on. So the official support matrix, let's see if I can recall every entry on it correctly. We've got Solaris, and we support every version of Solaris since Solaris 8, I believe; Linux, and we don't care what flavor or what version as long as you've got at least a 2.4 kernel and a glibc of 2.3.2 or better. We support AIX. We support HP-UX, Mac OS, Windows. Am I forgetting one? I think that's our current matrix. We actually just in the last year or so dropped IRIX, finally, because apparently no one cares anymore. And if you go out into the open source, either grab the Grid Engine open source or one of the forks, you get support for just about everything under the sun, everything from Cygwin all the way up through z/OS. z/OS, wow. Man, that's like the big mainframe operating system, right? Yeah, fancy.
Yeah, so that falls into the... I think you had mentioned in our emails offline that you were looking for weird use cases of Grid Engine; that's actually one of them, using Grid Engine on a big machine like a mainframe to do scheduling. So there is more than one single-node Grid Engine cluster out there. So I don't know anything about mainframes, but it strikes me that they would probably come with their own, you know, vendor-supplied resource manager, whatever kind of mechanism for running that stuff. Why would somebody run, you know, Grid Engine to do that on their mainframe? So I think that's a completely valid question. Quite honestly, on the mainframe use case, I'm not entirely certain why they wanted to do that. But there was a customer that wanted to do that on a less-than-mainframe machine, and things like the fair-share scheduling and the advance reservation and such definitely do make sense. And that may even be part of the impetus for using it on a mainframe. Yeah, so quite honestly, I agree. I don't know why you'd be using Grid Engine on a mainframe, but somebody wanted to. So what's the future for OGE? What's the positioning in Oracle's portfolio, or any other features? Yeah, well, you know, Oracle is exceedingly tight-lipped about that sort of thing. So I can't get into very specific details, but we can kind of muse about what logically one could expect from a product like Grid Engine at a company like Oracle. And before we go there, there's one other comment I wanted to throw out there with regard to whether Grid Engine is alive or dead. And that's that we have the ultimate in job security. The boat that Larry used to win the America's Cup: there are two firms that worked on it, BMW Oracle Racing and Cape Horn Engineering, both of which used Grid Engine to do the CFD for his boat. So we're safe. Outstanding. Yeah, exactly, exactly. And we're Larry's best friends now. So going forward, looking at Grid Engine.
All right, so Grid Engine at Sun was always very HPC focused, right? We pushed that scalability boundary. It was about power. It was about having all the control with none of the safeties, because, damn it, we were pushing the cutting edge, and if you weren't bleeding, you weren't doing it right. Looking at Oracle, that's not really the way Oracle does things. Oracle is very much squarely focused on the enterprise. They're very squarely focused on IT. And quite frankly, we could use some of that. So if you look at where we're likely to be going with Grid Engine, the big theme there is enterprising up the product, right? Making it integrate better into an enterprise IT environment, making it friendlier to use. Getting it to the point where you could hand it over to a NOC operator who is doing this as a night job and doesn't know the first thing about Grid Engine or distributed resource managers or jobs or anything else, and who could still look at an interface and figure out what's going on and know when to pick up the phone and call somebody for help. So, getting the product cleaned up, if you will, in a way that makes it more enterprise palatable. One of the other interesting things is that if you look at the rest of the products in the Oracle portfolio, there are some really interesting synergies out there. For example, go look at Oracle Coherence, formerly Tangosol Coherence up to a couple of years ago. Really brilliant technology. It's effectively a data grid. Grid Engine is a compute grid. Put them together, it's chocolate and peanut butter. That one's just dead obvious; we should be doing something with those guys. We landed in the Enterprise Manager organization, actually, specifically under the Ops Center organization. So Ops Center being the systems management tool that came from Sun and is now part of the Oracle story.
So looking at that, I think it's also a fairly brain-dead conclusion, or a drop-dead simple conclusion, that we're going to have some kind of plug-in into Enterprise Manager, such that you can do both your management of the grid environment and your management of all of your systems, and, by the way, the management of your database and your BEA and your everything else, all from one pretty web-based UI. I think that's kind of a foregone conclusion. And there's a handful of other interesting technologies floating around Oracle, and we're finding some odd connections. Like, for example, I did a session at Oracle OpenWorld with the guys from the Oracle data mining team, because they're looking at doing data mining in a cloud environment. And so we actually had a demo of the Oracle data mining software, so a database with the data mining built in, that was launching data mining jobs, which amounted to a SQL*Plus command, through Grid Engine. And Grid Engine was automatically pulling machines out of Amazon EC2, firing up this AMI instance that had the database already baked in, then running that SQL*Plus command against those database instances, and then, when they were done, letting the machines go. That kind of use case, I think, is going to show up more and more as we build out our Grid Engine offering under the Oracle roof. Okay, Dan, well, thank you very much for your time here. Will you personally be at SC? I absolutely will. So Oracle is going to have a booth at Supercomputing. It is going to be largely storage focused, but there will be a cloud station there, and I will be stationed at the cloud station. I'm actually scheduled for booth duty approximately 50% of the conference, but I have no life when I'm at these conferences, so I tend to hang out at the booth unless I'm talking to customers. So if you're looking for me, come find me at the booth. I will probably be there, and I'll be looking for an excuse to not be at the booth. You're scheduled to be there.
Okay, well, we'll all be walking around there, so we'll have to drive by and say hi sometime and actually meet some of our guests face to face. We're always looking forward to that. Actually, Jeff and I will be doing some different-from-normal recording and providing of information on the show after the SC conference. So we'll be cranking out some... yeah, we kind of neglected to mention this up front. We're going to do a few different things this year. So listen for upcoming stuff on RCE from live SC coverage, or at least on-site SC coverage. Yeah, I mean, I'll definitely be tweeting. I'm not very good at it, but I'll be tweeting from my Twitter handle, which you can find on the RCE website, or just brockpalen, all one word. Okay, well, again, thanks for your time, Daniel. I think this was very useful, and I hope it clears up a lot of the confusion out there in the community about what's going on with OGE. Yeah, I hope so as well. Thanks for having me on, guys. Thank you. Okay, and we'll see everybody at SC in about a week.