Welcome to another edition of RCE. Again, this is Brock Palen. You can find us online, with our entire back catalog, at rce-cast.com. Also again, I have Jeff Squyres from Cisco Systems, one of the authors of Open MPI. Jeff, thanks again for your time.

Hey Brock. Yeah, actually it's been a good week here at Open MPI. I know this will go out in a couple of days, but we just released Open MPI 1.7.5 yesterday, so I'm feeling pretty good today. Also, I'm going to be hosting an XSEDE workshop at the University of Michigan, which is open to anybody, XSEDE award, U of M student or not, on OpenACC. So those of you who are interested in programming GPUs, Xeon Phis, or just learning different pragmas, go ahead and check us out. It's April 1st, so it's kind of short notice, but go to xsede.org, that's X-S-E-D-E, to check it out. That's the open accelerator workshop.

Okay, so today our guests are Greg DeKoenigsberg, and Greg, you're going to have to correct me on that, and Vic Iglesias, and they're here to talk about Eucalyptus. So guys, why don't you take a moment to introduce yourselves? Vic, you want to start, or should I start?

My name is Vic Iglesias. I'm the director of quality and release at Eucalyptus Systems, so I battle with myself to make sure the code is as good as it can be and right on time.

My name is Greg DeKoenigsberg. I do community for Eucalyptus, so I care about making sure we're building the right product and making sure the open source community is a part of that process.

So, you know, our focus tends to be high performance computing, scientific computing, so a lot of our listeners probably haven't heard of Eucalyptus. Why don't you guys give us an idea: what is Eucalyptus?

What is Eucalyptus? So Eucalyptus is an acronym, a lot of people don't know that, and it stands for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems, blah blah blah. But it's the Elastic Utility Computing Architecture.
That's the key there. We're basically an open source infrastructure-as-a-service platform, and we do our very best to keep high fidelity with the Amazon Web Services API. So the idea is that if you want your own private cloud, and you want it to be highly AWS compatible, we're the guys you come to.

Now, how did you guys get started? Because, you know, clouds are all the rage these days, but you seem to have a very particular spin, doing private clouds, being AWS compatible and so on. So how did this start?

Oh, let's see. I guess it was a research project at UCSB in, what, 2008? 2009?

Yep, that's right. 2008. It was a professor, Rich Wolski at the University of California, Santa Barbara, and his graduate students, and they wanted to set up their own version of the Amazon Web Services cloud that they could use for their own research and their fellow researchers. So it sort of came out of needs in academia, and they turned around a version 1.0 pretty quickly that was a fair facsimile of some of the more basic functions of AWS. Then they realized, hey, there are a lot of people who might need this, and so it sort of turned into a company from there, and here we are, still plugging away.

So around that idea: as you said, AWS already existed, so there were public offerings. What was their driver for wanting to build a private cloud, and why do you see people building private clouds now?

I think I can take this one. So, you know, the university had a certain amount of hardware allocated for these projects; they received that hardware and weren't given extra money to run stuff in the public cloud. So they had to utilize it as efficiently as possible.
So I think that was a big driver for them writing the software, and we see similar ideas still, where a lot of companies and users have hardware lying around, and they can utilize the cores and the RAM and the storage they have there to run a workload. Whether or not it's all of their workload is irrelevant. They get to leverage the things they've purchased in the past to run their current workloads, and with Eucalyptus you can use any tooling you had for running against AWS, or if you want to pick up a new tool off the shelf that was written for AWS, you can point it at Eucalyptus and use it equally.

Do you see many customers coming to you, or people on the open source portion of the mailing list, because they're concerned about putting that much of their infrastructure into the hands of a third party?

I haven't seen that concern usually, but, you know, we have a lot of land-and-expand type installations, where people will start with a cloud-in-a-box, which is one single node running Eucalyptus, poke around with it, and then add additional compute nodes to it. So there is the possibility to scale this thing from pretty small to pretty large.

Yeah, I don't think... you know, it's difficult to tell people's motivations, right? There are a bunch of possible motivations for private cloud. There's regulatory: for whatever reason they can't put their data into a public service. There are cost considerations. I think whenever you use any ISP or service provider of any kind, you are, quote unquote, putting your data in someone else's hands, right? So it's just a different kind of hosting model in that way. But I think what people do want is the same kind of self-service that AWS provides. They want that inside of their own organizations more than anything else. There's a story I tell called the sysadmin's lament, right?
The sysadmin sort of has this fiefdom, has control over these resources. If you want a system, you have to fill out a form; it goes through a process. The sysadmin basically puts up these kinds of walls to make sure they can manage the complexity of making sure that everyone has compute power. Well, the sysadmin starts having a problem when the end user, rather than having to go to the sysadmin, can just take a credit card out of their wallet and go get a testbed server at AWS. Now that that's reality, sysadmins in the real world are having to scramble and say, oh, well, how do we compete with that? So providing the same kind of self-service capability in a private cloud is something that organizations recognize they need now, and there are lots of ways to do it.

Okay, now part of the value add here, of course, like we said multiple times, is the compatibility with AWS. And you said a very interesting thing a moment ago: that you can take any tool created for AWS and just point it at Eucalyptus and be good to go. Tell me a little bit about how that works. Let's say I go buy this package and it says, oh, you can run your services out on AWS. Is there some central Eucalyptus server that I have on premises, where I just change some server name in the package and I'm good to go?

Right. In essence, what we do is try to mirror the API and its semantics, and we use all the SDKs and a lot of tools to verify that functionality. Once you have one of these tools, in most cases it's presented as an endpoint, and you would change the endpoint URL to point at your Eucalyptus installation and start plugging away.

All right. Now, with the Eucalyptus infrastructure, does that allow me to overflow to AWS if I need to? Like, you know, I have 100 resources and I have 150 requests, so my 100 resources are full. Can it spill over those extra 50 into AWS, that kind of thing?

That is the dream called hybrid cloud.
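The endpoint swap Vic describes can be pictured in a few lines. This is an illustrative sketch only, not Eucalyptus code: the point is that AWS-style tools derive all their requests from one configurable base URL, so pointing them at a private cloud is a one-line change. The hostnames below are made up.

```python
from urllib.parse import urlparse, urlunparse

# A tool written for AWS typically builds every request from one base URL.
AWS_EC2 = "https://ec2.us-east-1.amazonaws.com/"

def repoint(url: str, new_netloc: str, scheme: str = "http") -> str:
    """Swap the endpoint host, keeping the rest of the request URL intact."""
    parts = urlparse(url)
    return urlunparse(parts._replace(scheme=scheme, netloc=new_netloc))

# Point the same tool at a (hypothetical) on-premises Eucalyptus front end:
private = repoint(AWS_EC2, "cloud.example.internal:8773")
print(private)  # http://cloud.example.internal:8773/
```

Everything else the tool does, signing requests, parsing responses, stays the same, which is exactly why API fidelity matters so much to Eucalyptus.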
And I think everyone is sprinting to get to that same place. You can do that today. There are some caveats; there are things you have to do correctly, right? You need to make sure, for instance, that you've got identical images running in both places. And you have to make sure you're using the same subset of services that Eucalyptus supports. AWS has 40-plus services out there, and we support seven of them: EC2, S3, IAM, ELB, Auto Scaling... can you rattle off the others?

Yeah, ELB, Auto Scaling, and CloudWatch are the last three that we added.

Right. Hey, can you give us the one-sentence description of each of those? Because some of our listeners might not be familiar with them.

Yeah. So EC2 is what you would generally think of as the compute side of the house: spinning up virtual machines, and attaching volumes, which are your block devices, to those virtual machines. S3 is your object storage, where you put your files, your images, your data that you want to grab as whole files rather than as blocks. IAM is identity and access management, which goes across all the services; that's how we manage accounts and users and policies and all those kinds of things. ELB is the elastic load balancer, which basically allows you to spin up instances and associate them with a load balancer automatically. Auto Scaling is the ability to use triggers to scale a group of equivalent instances, a cluster maybe in your guys' parlance, up and down based on certain criteria. It could be CPU usage, it could be disk usage, anything like that, and you can scale up and down programmatically. CloudWatch is the service that monitors your instances to provide the data points for Auto Scaling, or it's also possible you just want to use it to monitor a group of instances to see trends over time.

Okay, so you're doing this AWS compatibility. Is this kind of being done in a black box?
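Vic's description of Auto Scaling, a group of equivalent instances scaled between a minimum and maximum when a metric crosses a trigger, can be sketched as a toy policy function. The thresholds and names here are invented for illustration; this is not the Eucalyptus or AWS API:

```python
def desired_capacity(current: int, cpu_pct: float,
                     scale_up_at: float = 80.0, scale_down_at: float = 20.0,
                     minimum: int = 1, maximum: int = 10) -> int:
    """Toy auto-scaling policy: add an instance when average CPU is high,
    remove one when it is low, always staying within [minimum, maximum]."""
    if cpu_pct > scale_up_at:
        current += 1
    elif cpu_pct < scale_down_at:
        current -= 1
    return max(minimum, min(maximum, current))

print(desired_capacity(4, 95.0))   # 5: scale up under load
print(desired_capacity(4, 5.0))    # 3: scale down when idle
print(desired_capacity(10, 99.0))  # 10: capped at the group maximum
```

CloudWatch plays the role of the `cpu_pct` feed in this picture: it supplies the measurements that the scaling policy reacts to.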
What is Eucalyptus as an organization's relationship with Amazon? Are they promoting this idea of having some local, some public, or do they kind of not care? What's their take on this?

Yeah, it's completely orthogonal to Amazon. We have a relationship with Amazon at sort of a high level that allows us to explore some business opportunities together. But the code is open source. We've basically been writing a completely reverse-engineered application stack, based on the API itself, since we started. We don't share any code, we don't share any development resources in that way. So it's completely engineered from the ground up based on the APIs.

So now, you say it's orthogonal, but how much do they care? Do they see you as a competitor? And is there ever a threat someday of doing an Oracle-Java-like thing: oh, you're using our API, so you're infringing on our copyright, blah, blah, blah?

So I'm not a lawyer, and you'd have to ask Amazon that question, not us, for the Amazon perspective. But from our perspective, we have every right to write code based on their API, and current legal precedent upholds that right. It's likely that if there's any patent war to be had in this space, there are plenty of opportunities for that kind of shenanigans completely outside of the API itself. The API is probably the last place where someone is going to throw up some kind of lawsuit, if a lawsuit is to be had. But again, consult your own counsel.

Okay, so right now you guys are based around the AWS APIs and being compatible with them. Do you plan to support, say, OpenStack, or any of the VMware services, or anything like that, to be compatible with tools that use those APIs?

We have a very base-level compatibility with VMware, but really AWS is our focus and will be for the foreseeable future. It's so obviously the dominant cloud provider.
If you've looked at, for instance, the Gartner Magic Quadrant for public cloud, AWS is farther up and to the right than any entity I've ever seen on any Gartner Magic Quadrant. They completely dominate the space. There's nothing in the Eucalyptus architecture that precludes pursuing other APIs at some point, but we think the most important thing is to be the very best at one thing first, and that one thing is the AWS API.

What are some common scenarios that your customers come at you with? You mentioned... I love the phrase you used earlier, what was it, land and expand? But what is driving this? Is it the stuff you cited earlier, such as regulatory, or is it people just experimenting with consolidating IT resources? Why do you see customers exploring the private cloud space?

Well, I think there are myriad reasons why people do this, and the market is still forming itself around different use cases. But one of the ones we see a lot of is continuous integration. This is kind of an engine that drives software development, right? You have a fleet of servers that come up and down hundreds or thousands of times each day in order to test a piece of software, and having that go through a manual process, where people are creating VMs, slows everything down. If you slow down the testing process, you slow down the software development, and you don't ship as quickly. So that's one of the big drivers we've seen for private cloud. This also provides data locality: you have your own repos locally and all that stuff, and your own corporate resources that you can access.

So let's talk about the technology a little bit, how you actually do this. What is the architecture of Eucalyptus? If I'm setting this up, what's the minimum number of machines I need to get going? What do I need to be familiar with? And let's assume I want to set up all seven of the services you guys support.

Right.
So we also try to be easy to use, and to come up out of the box with a lot of functionality. The minimum footprint is one machine. Obviously, if you run us in a single-machine deployment, it's not going to be the most performant ever, but it does let you kick the tires. So you basically install CentOS, and CentOS 6.5 is what we currently support, on a machine. You load up the Eucalyptus packages, configure the config files, and start the service, and you're up and running with those seven services I mentioned. We also have an ISO you can download from our website that gives you CentOS and all the packages, plus a workflow for configuration, that should get you up and running in half an hour.

Now, what kind of software pieces are in there? Are you reusing any notable open source packages inside, or is this all from scratch?

Yeah, we're leveraging a lot of open source software and utilities for orchestrating the VMs. We use KVM to run the virtual machines and libvirt to orchestrate creating those processes. For an open source implementation of EBS, the Elastic Block Store, we use tgt and iscsid. And then we run an OpenJDK JVM for our control components.

So what kind of control do I have to give over to Eucalyptus? Because if I spin up VMs, they need to be given addresses and host names. Does it need to run its own DHCP and all that kind of stuff too?

Right. So in Eucalyptus, as in the Amazon model, when you run a virtual machine it gives you all of those things. You get IPs, and your host name can be provided through the metadata service, so these VMs basically configure themselves. The way we do that on the networking side is with iptables for NAT and firewalling, and a DHCP server for handing out addresses to the VMs.
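The NAT-and-firewall layer Vic mentions can be pictured as translating security-group-style rules into iptables commands. The rule format and chain name below are invented for illustration; this is not the actual Eucalyptus implementation:

```python
def rule_to_iptables(chain: str, proto: str, port: int, source_cidr: str) -> str:
    """Render one security-group-style ingress rule as an iptables command."""
    return (f"iptables -A {chain} -p {proto} --dport {port} "
            f"-s {source_cidr} -j ACCEPT")

# A hypothetical 'web' security group: SSH from one subnet, HTTP from anywhere.
for proto, port, cidr in [("tcp", 22, "10.0.0.0/24"), ("tcp", 80, "0.0.0.0/0")]:
    print(rule_to_iptables("euca-sg-web", proto, port, cidr))
```

The real system layers NAT rules for the public/private address split on top of this, but the basic shape, declarative rules compiled down to the Linux firewall, is the same.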
And then, I'm curious how you do this, having just gone through an AWS workshop: the ELB, the load balancer, how exactly do you guys handle that?

So this is actually a VM that Eucalyptus spins up as kind of the system user, if you will, the administrative side of the house. It spins up a VM when you create a load balancer, and that VM is a prepackaged VM we created with HAProxy and some control code around it that basically hits our service API, figures out which instances it should be load balancing for and with what configuration, and then begins load balancing for those.

Now, how do you orchestrate all the networking stuff? Do you have to integrate with the networking gear that's there, or do you just kind of assume you're given, say, a giant L2 space in which to play, and you can do whatever you want in there?

Right, so in our older modes we have these managed modes, which basically take care of all the networking for you, with the caveat that you've given us at least some L2, sorry, L3 space to chunk up. That basically handles, in the Amazon model, your private and public addresses. Your private addresses end up on the L3 subnet you've handed us, and we chop that up for each security group. And then you also provide Eucalyptus with some public IPs; these public IPs are things that are routable on your current network.

So what about availability, though? I mean, Amazon has the idea of different regions and availability zones. Can I use Eucalyptus and set up my own, like, two split data centers, and say run over there, run over here, and ELB across them, that kind of deal?

So currently we only support one region, which is the Eucalyptus region, but you can set up multiple clouds. We don't currently have any credential federation across the clouds. But what we have seen is people using clusters for that.
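The HAProxy-in-a-VM approach Vic describes boils down to regenerating a load balancer configuration whenever the set of backend instances changes. A deliberately stripped-down sketch of that idea, with invented names; this is not the actual Eucalyptus control code:

```python
def haproxy_backend(name: str, port: int, instances: list[str]) -> str:
    """Render a minimal HAProxy backend section for a set of instance IPs."""
    lines = [f"backend {name}", "    balance roundrobin"]
    for i, ip in enumerate(instances):
        lines.append(f"    server node{i} {ip}:{port} check")
    return "\n".join(lines)

# Instances the control code discovered via the (hypothetical) service API:
print(haproxy_backend("euca-elb-demo", 80, ["10.0.0.11", "10.0.0.12"]))
```

When the registered instances change, the control code rewrites this section and reloads HAProxy; the cloud user only ever sees the ELB API.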
People will use a separate cluster to provide hardware availability, or just to have a separate place, maybe for a different hardware profile, where they can run their VMs.

Now, I want to go back in time a second, because I forgot to ask this question when you were talking about it. You said you use KVM. Are you guys looking at all into containers, which are a feature coming up in recent Linux kernels, or is that outside of AWS and therefore not interesting, at least at this point?

Well, we just haven't heard the customer demand and the use cases around containers, so we haven't invested too much time. We've been tinkering, as most people have, with containers and Docker and such, but we haven't seen anything to drive the product roadmap that way. Having said that, using libvirt would allow us to fairly easily implement Linux containers, because we already have an abstraction layer above that.

Okay, so what does the actual management of this stuff look like? Is it a bunch of Java things? Can those servers be federated to provide availability, or is this kind of a single service right now that orchestrates everything?

So we can split out all of the components. We have five main components: the cloud controller, which handles the services, the web services, the endpoints that you hit; the cluster controller, which handles scheduling and networking; the storage controller, which handles our EBS service, exporting a volume to a VM; the VMware broker, which is the API endpoint we use to get to vSphere and ESXi; and the node controller. The node controller runs on the machines where your instances, your VMs, will run, so that's the one running the KVM hypervisor. All of the components can be set up in a highly available configuration, where you have a redundant component that stays passive until it detects a failure and then comes up.
So the node controllers we don't have high availability for, but we do allow VM migration. If you need to take one out, you can migrate your VMs off of that node and onto another, and then take that node out of service.

Now, you mentioned KVM and open source, but you also mentioned VMware, ESXi and vSphere and such. So what exactly is going on there? Why both?

So, you know, we've seen that a lot of enterprises have paid out licensing fees to VMware, and, like we discussed earlier, people want to use their existing stuff, so we have to overlay on what we see customers have around. We use those ESXi boxes as a kind of node controller, where we can basically spin up the same VM we would have spun up in KVM on a VMware node instead.

Yeah, the purpose of the VMware integration, though, is really for customers who are ready to start moving from VMware to either the public cloud or their own more cloud-based infrastructure. The story there is really a waypoint along the migration path. We're not going to deeply support every feature that VMware supports; I would say we have just enough VMware support to help users move cleanly from the locked-in VMware world to either their own open source private cloud or, as a waypoint, towards AWS.

Going back to the storage a little bit... well, no, let me just ask: what kind of storage do you support, particularly on the management side and then the back-end hardware itself? Do you have direct integration with specific RAID controllers, or are you just looking at it from the Linux file system and above? What's your level of integration there?

We have a few different options there. So yeah, you can use a basic Linux file system; we call that the overlay driver. Basically we're going to drop files down and then export those files.
That was kind of the original design, back in the day, just because, like we said, people want to drop this stuff in and get it running on whatever box they have, and that's the easiest way to do that. We also have the ability to use direct-attached storage. So if you mount a LUN, or have a JBOD or something like that, attached as a block device on your machine, you can use that, and we carve it up with LVM. And we also have SAN integration: NetApp, EMC VNX, EqualLogic, and NetApp cluster mode are the ones we currently support.

Now what about backups? Does the Eucalyptus infrastructure handle backups automatically, or is that something that happens outside of it?

Currently we don't do any automated backups. Generally users are using EBS volumes to hold their data, that's their block device, and snapshots, which are a point-in-time copy of a volume. The snapshot actually gets stored in two places, on the EBS side and in Walrus, so you have redundancy there. As for the object storage, we use DRBD to replicate the object store data across two nodes.

So actually, explain what Walrus is and where that comes from. I just find it funny.

So Walrus is our S3 implementation. In S3, the nomenclature for, like, a folder is a bucket. And if you've seen any of the memes where the walrus has a bucket, that's where the name comes from. You can look it up on Google, walrus, my bucket, and you'll see the pictures of a walrus holding a bucket. That's where that came from.

Outstanding. Sounds like something that needs to go on the show page, Brock.

Well, it's actually kind of funny, because aren't all the buckets in, like, /var/lib, B-U-K-K-I-T?

Yep, yep. We went for it. You can't go halfway when you're going for a meme. No, you've got to own it. You've got to own it. Just use it in your marketing material sometime.
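At its simplest, the S3/Walrus model the guests describe is a two-level namespace: buckets containing named objects that are written and read whole, rather than as blocks the way EBS volumes are. A toy in-memory sketch of those semantics, purely for illustration and nothing to do with the real Walrus code:

```python
class ToyObjectStore:
    """Minimal S3-style semantics: buckets hold whole, named objects."""
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name: str):
        self.buckets.setdefault(name, {})

    def put(self, bucket: str, key: str, data: bytes):
        self.buckets[bucket][key] = data   # objects are written whole...

    def get(self, bucket: str, key: str) -> bytes:
        return self.buckets[bucket][key]   # ...and read back whole

store = ToyObjectStore()
store.create_bucket("bukkit")
store.put("bukkit", "hello.txt", b"i has a bucket")
print(store.get("bukkit", "hello.txt"))  # b'i has a bucket'
```

The contrast with EBS is the point: an object store trades block-level access for a simple whole-file API that is easy to replicate (here, with DRBD) and easy to scale behind a gateway.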
Yeah, still there today, five years later, six years later.

So, expanding on the storage stuff a little bit, since that /var/lib bukkit directory is where all these buckets are created: can I set up many Walrus nodes and kind of have an infinitely large S3-equivalent service? Can I keep stacking more machines that export EBS volumes? And when I snapshot one and make a new one from that snapshot, will it do all the right things moving it between them?

So in the initial design, Walrus itself was to be on a single node, and then have replicated data that could be brought up on another node. In 4.0, which should be released in a month or so, we have a redesigned architecture which uses an object storage gateway. The object storage gateway becomes the endpoint for our S3 implementation and then uses a scalable backend to store the data. The scalable backend that we're supporting out of the box is going to be Riak CS. Riak CS has the ability to scale out, so you just set up your Riak CS cluster and point Eucalyptus at it, and we'll do all the administrative part of putting your objects in and getting them back out.

Do you plan to expand and include something like Ceph, with its whole CRUSH system? Then you could possibly also use that for block devices and everything else later.

Right. The design is specifically there to support those kinds of things: where some backend is scalable and supports the S3 API, we can interact with it. We have seen requests for Ceph and Swift and others, and the design is explicitly meant to support those at some point.

So for the actual S3 service, you're able to request objects by HTTP, but can you make them public, private, do multi-part uploads, all that stuff? Are you generally supporting all of that API and capability? Right.
And in 3.4.2 we support versioning, bucket logging, ACLs and that kind of fine-grained access control. There's no multi-part upload in 3.4.2, but in 4.0, with the object storage gateway and its redesign, we additionally support multi-part upload.

Oh, and getting back to EBS and the block storage capability: in terms of adding capacity, to create more block devices or things like that, what are the options for scaling that?

So currently we recommend using a SAN for a system that needs to scale. The commodity scale-out block storage options right now are not quite stable enough for us to be comfortable putting your data into them. We do have the NetApp cluster mode driver, which allows you to scale at a pretty easy rate by just adding more disks or adding another shelf to your cluster.

All right, let's move away from storage and go into a couple of other things. Just a random question: what's the biggest Eucalyptus deployment you've seen?

We've seen clusters in the 10,000-core range, clusters and installations, and people who run many, many clusters at that scale.

So with that size of a cluster, are they mostly just using EC2, or do they actually tend to use all the features? And is it generally inward-facing services, or are people actually running production public services on these platforms?

At that scale we've seen mostly EC2 usage, and even at Amazon, 80% of the bill there is EC2, so we do expect that to be the main driver. People just don't yet have an application or a use case that can leverage ELB. They're working on getting cloudier, to be able to scale horizontally rather than vertically. So I think we'll see that in the coming months and years, but not quite yet.
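Multi-part upload, which Vic says arrives with the 4.0 object storage gateway, boils down to splitting an object into independently uploaded, numbered parts and reassembling them in order on completion, so parts can arrive in any order or in parallel. A stripped-down sketch of the mechanics, with hypothetical function names rather than the real API:

```python
def split_parts(data: bytes, part_size: int) -> list[bytes]:
    """Split an object into fixed-size parts, as a multi-part upload would."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def complete_upload(parts: dict[int, bytes]) -> bytes:
    """Reassemble parts in part-number order, regardless of arrival order."""
    return b"".join(parts[n] for n in sorted(parts))

data = b"0123456789abcdef"
parts = split_parts(data, 5)  # 4 parts: 5 + 5 + 5 + 1 bytes
# Parts "arrive" out of order, keyed by part number, as in S3:
uploaded = {3: parts[2], 1: parts[0], 2: parts[1], 4: parts[3]}
assert complete_upload(uploaded) == data
print(len(parts))  # 4
```

The real protocol adds an upload ID and per-part ETags for integrity checking, but the ordering-by-part-number idea is the core of it.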
Do you see any HPC deployments on Eucalyptus? Like, people deploying a bunch of EC2 nodes together and using them for actual compute jobs, MPI jobs, things like that?

I think we've actually got some of that going on in the academic space, which is where a lot of that kind of thing tends to happen. I think Indiana and Cornell are both running things along that line. I couldn't tell you any details; they just tell us, hey, we're running HPC. They don't share a lot of details with us, because they don't need to. But it's a platform that's used for those kinds of applications, certainly.

So actually, my experience with Eucalyptus was on FutureGrid. They're running some Eucalyptus there; they're also running OpenStack. You know, they're a testbed platform, they run a bunch of different things, but those nodes have InfiniBand and the whole deal.

Okay, again a direction change. You keep talking about the open-sourceness of Eucalyptus. What's the Eucalyptus community like? Do you actually have developers outside of your organization, or do you just get a random submission of patches here and there? What's your involvement like with both the user community and the developer community?

I would say, generally speaking, most of our engineering, most of our product, comes from internal engineering. It's a complicated product; there's a lot going on, and there's a fair amount you have to know to really get into the code base. That said, we do get patches, primarily from users who have particular pain points they want to address. But more largely speaking, our community is essentially the entire AWS community, right? There are lots and lots of open source tools out there that work with AWS and also work with Eucalyptus, sometimes right out of the box and sometimes with minor tweaks.
So most of our community contributions actually come in the form of patches to various tools in the ecosystem, to make sure they work with Eucalyptus as well as they work with AWS.

All right. Just out of curiosity, this is something I ask a lot of developers, and I love to hear the different responses: what version control system do you guys use, and why?

Git and GitHub, because they're awesome.

Okay. Yeah, I think the move to a public and easily used Git repository was a huge benefit for us. And also, I think it benefited our developers to get on what I like to call the new hotness, the thing that everybody's doing right now. So we're really happy with that decision to move from bzr to Git.

So what are some of the challenges you see customers run into? You know, they're moving to this private cloud to get the flexibility, self-provisioning, all that sort of stuff. But what are some of the problems they run into when they do this?

So what you're still doing here, underneath everything, is running a distributed Linux system, right? And I think a lot of what we see is folks out there who sort of want to build a distributed system at scale, but you still have to do a lot of the basic Linux system administration kind of tasks to do that effectively. So we see a lot of folks who don't necessarily know how to set up and maintain a distributed Linux system underneath. That's one set of problems. Another set of problems is that once they have the private cloud, they're not always sure what to do with it. I think everyone is interested in private cloud, but you really have to write applications to be, to some degree, cloud aware. Netflix is a great example of how to do things the cloud way.
And if you've heard of their Chaos Monkey, for instance, that's a perfect example of an organization that's really committed to doing development the cloud way. Every so often they just turn the Chaos Monkey loose on their infrastructure and it starts blowing stuff away: it blows away instances randomly to see if instances stand back up and do what they're supposed to do. That's a mindset, and users have to learn that mindset to get the full value out of the elasticity that cloud offers. People who see it as just more virtualization don't necessarily see a lot more benefit than that. There are things you have to do to get full value out of a cloud, and people are still learning that, I think.

Here's another infrastructure and developer question. What language do you guys typically write in? Or is it a smattering of different things, because you're spanning so many different stacks and there are different tools for different areas?

We definitely have a smattering. Most of the code base is written in Java; that's the control components, generally. Then we have the C components, which are the ones that orchestrate the node controller and the networking stack; those are pretty lightweight and written in C. And then we have some Perl scripts and some Python for tooling in there as well.

So I want to ask something, because we already asked about containers and you said you haven't had a lot of requests for them. Have you also not had many requests for using this to provision bare metal? Do you ever see anybody do that: I want this AMI image to effectively be put directly on that hardware?

I think we have seen requests for that. It does kind of break the cloudy model, where we expect things to share resources.
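The Chaos Monkey idea Greg describes, randomly terminating instances to verify the system heals itself, is easy to sketch. This toy version (invented instance names, seeded randomness for reproducibility, nothing to do with Netflix's actual tool) kills one instance per round and models a scaling group launching a replacement:

```python
import random

def chaos_round(instances: set[str], rng: random.Random) -> set[str]:
    """Kill one random instance, then let 'auto scaling' replace it."""
    victim = rng.choice(sorted(instances))
    survivors = instances - {victim}
    # A cloud-aware app's scaling group would launch a replacement:
    replacement = f"i-replacement-{victim}"
    return survivors | {replacement}

rng = random.Random(42)  # seeded so the run is reproducible
fleet = {"i-aaa", "i-bbb", "i-ccc"}
for _ in range(3):
    fleet = chaos_round(fleet, rng)
print(len(fleet))  # 3: capacity is restored after every kill
```

The test here is the mindset Greg is pointing at: if capacity is not back to three after a round, the application was not really built the cloud way.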
Basically, in AWS you have an instance type, and that instance type fits within a single node, possibly with many of them running concurrently on a single node. So we would probably have a hard time defining the type of a bare-metal machine if there was a large deviation in the hardware profiles of each node. It's presumably possible; we just haven't seen people go back to bare metal as much.

So what license do you guys distribute Eucalyptus under?

Eucalyptus is distributed under the GPLv3.

And what is your value prop? Are you the paid support? How do you exist as a company?

Yeah, it's essentially the same as many open source software companies: you're paying for the support model. You can get the bits for free and run Eucalyptus perfectly well on your own, but when you need help, you buy a support contract from us, and that's a subscription. Generally, customers who pay are customers who need the support, and they also need an insurance policy, so that they've got one throat to choke if something goes wrong, if they have concerns. And the more an open source product sits underneath everything in your infrastructure, the more mission critical it is, and the more important it is to have someone you can pick up the phone and call on a bad day.

Now, what parts do you actually support? Because you're using a ton of third-party software inside Eucalyptus. How do you draw that line? Do you actually have to go in and fix bugs inside other software packages?

Oh, yeah. The customer doesn't care; the customer is buying support for the whole thing. So we have gone in and found bugs in libvirt, and we have both worked around those kinds of bugs and sent patches for them to multiple projects. That's table stakes for running an open source business that relies on other open source software: you have to be conversant up and down the stack, and we are.

Okay, guys.
Thank you very much for your time. Where can people find more information about Eucalyptus, join the mailing list, and download it?

Well, you can go get Eucalyptus just by going to www.eucalyptus.com, and you can see all of our source code at github.com/eucalyptus. We've got tons of repos out there, and the eucalyptus/eucalyptus repo is the main product. If you go to the GitHub wiki, you can see all the mailing lists. And we hang out on IRC on Freenode; it's #eucalyptus on Freenode.

Okay, guys. Thank you very much.

Thank you, guys.