OK, it's two o'clock. I guess we'll go ahead and get started. So thanks, everybody, for coming to hear about what we're doing with OpenStack on Solaris, and generally OpenStack at Oracle. I'm Dave Miner. I'm the architect in the Solaris organization for system management and deployment and cloud-related things. If you're a Solaris user, you've certainly used things I've worked on, like SMF and the automated installer, the boot environment software that's in Solaris 11, all these sorts of things. The last couple of years, I've spent most of my time working on OpenStack, so it's become a big focus for us. In reality, on the OpenStack team my role is not so much development as running our internal cloud. So, as the team calls me, I'm customer zero. I'm the guy who gets the first look at everything and all the worst bugs right away. So let's get the legal stuff out of the way.

We'll start with some context about what's going on with OpenStack at Oracle generally. It's a big focus for us. We do infrastructure in all sorts of forms between Solaris and Linux and Oracle VM, then all the layers above it, and we do the hardware underneath. So we've got great opportunities to integrate with our products in a lot of ways around something like OpenStack, which has such a vast reach in terms of the functionality and the layers of software that it affects. If you look at a typical OpenStack architecture, we've got integration going on in compute with Oracle Linux and Solaris and Oracle VM, in terms of the hypervisors and management and the OSs that we run on and the guests that we can run. In networking, we've got virtual networking products that are separate from Solaris and usable with Linux, and Solaris has its own virtual networking functions. Storage, well, we all know about ZFS. It's the best thing since sliced bread, right? And we take great advantage of that in building out our cloud functionality around Solaris and integrating it behind Cinder. Hopefully we'll be doing some interesting things with Swift as well as we go along. For Glance image deployment, we've got functionality in Solaris providing images, and generally Oracle has VM templates that we offer. Historically, those have been for the Oracle VM product. Over time, as we start making those things available through things like the Murano app catalog and containers, you'll start seeing those show up as artifacts that you can use with your clouds. So there's a lot going on at Oracle related to OpenStack and making OpenStack a really viable thing to use with all of our products.

In terms of Solaris, which is what we'll talk about for pretty much the rest of this time, since that's what I do: really, our strategy with Solaris, and the strategy that Oracle's followed since acquiring Sun, has been all about building that entire integrated stack, top to bottom, hardware all the way up to applications. We see that in what we're doing around Solaris, where we're co-engineering with the database teams and the Java teams, working with other application teams, and also driving things down into the hardware to support those functions. So it's a highly integrated vertical strategy for horizontally scalable computing. Our most recent Solaris release is 11.3, which came out last fall. As part of Solaris 11.3, we include OpenStack Juno, and we started with Havana back in Solaris 11.2.
If you look at what we do with Solaris generally, a lot of what we're focused on is security and compliance; those are some features that you've seen show up in 11.3. There's also a lot of focus on simplicity. We're really doing a lot of things to try to make the operating system simpler to run. I mean, Solaris is a big, complicated beast; its roots go back 30-plus years. We won't talk about how long I've worked on it. But the reality is that it's a very complex system, and what my colleagues and I have really focused on for most of the last 10 years has been: OK, how do we make that simpler? How do we integrate it better? How do we make it not just a bucket of parts that you're putting together, but a truly easy-to-use operating system that you can deploy quickly and easily?

I alluded to a little of this earlier, but when we look at OpenStack and how it integrates with Solaris: for Nova, right now you can run guests on Solaris that are either regular Solaris zones, the original containers you've seen since Solaris 10, or, more recently, kernel zones, which showed up with Solaris 11.2. Kernel zones are more of a paravirtualized hypervisor, which lets you run a separate kernel from the global zone; the standard native Solaris zones share a kernel. On the networking side, the elastic virtual switch is the virtual networking technology that underlies our Neutron implementation. So if you're running OpenStack on Solaris, there's no nova-network; you're using Neutron, which is definitely more complex, but a lot more powerful. You've got a lot more capabilities there, and we've got a virtual switching technology that we use underneath that. We talked about ZFS with Cinder and Swift there. And for Glance, the unified archives. Unified archives are a feature we introduced in Solaris 11.2 which allows us to take an image of a deployed system, including the contained zones and kernel zones, and redeploy it in other contexts. You can do an exact replication of it, you can do cloning, where you just take one element of an archive and redeploy it elsewhere in a different context, and you can take bare metal, turn it into a virtual machine, or vice versa. So if you used Solaris 10 and Flash Archives in the earlier releases, this is way more powerful. It's one of my brainchildren; it was a really hard project for us to do.

So what does Solaris really provide in an OpenStack context that's kind of unique? I mean, if you look around at this entire community, it's everybody else's Linux. We're the weird ones here. Why would we do that? I think there are a few really unique things that you get out of running OpenStack on Solaris. One, hey, we've gone and built and integrated it all; it's all packaged up. You look at a lot of the deployments out there, and I see a lot of people running things from upstream and having to figure out how to package it up themselves, or just using tarballs and stuff like that. We've gone and done the packaging work necessary to make this really easy to deploy and also easy to upgrade. And that's really a big key. You can get a cloud deployed, but you're going to have to upgrade it eventually. How easy is that going to be? Boot environments are really a great thing when we talk about upgrades, because not all your upgrades are going to go well. Sometimes you're going to have to back off for unanticipated reasons, and boot environments let you do that. It's a pretty unique feature. We're just starting to see some of that sort of thing show up in Linux.
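Just to make the two guest types concrete, here's roughly what they look like from the zones tooling on a Solaris 11 host; under OpenStack the Nova zones driver handles this for you, and the zone names and archive path here are just examples:

    # A native (shared-kernel) zone, the kind that's been around since Solaris 10:
    zonecfg -z ngz0 create

    # A kernel zone, which boots its own kernel separate from the global zone:
    zonecfg -z kz0 create -t SYSsolaris-kz
    zoneadm -z kz0 install
    zoneadm -z kz0 boot

    # A unified archive of the running system (zones included), the kind of
    # artifact that Glance can then serve out as an image:
    archiveadm create /var/tmp/host-clone.uar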
But boot environments are something we've had for 15 years if you go back and look at the roots of it. One of the other neat recent features is immutable zones, which I'll talk about more in a little bit. But this allows us to really lock down the underlying system that's running the OpenStack deployment, and it's something you can also use in the guests that you deploy. And we have a lot of fault resilience built into the system. I won't say fault tolerance, because that's a completely different thing. But for fault resilience, we've had features in the OS like the Service Management Facility since Solaris 10, and these give us some very good capabilities for ensuring that the services that provide your OpenStack control plane stay up, that they're reliable, that we get diagnostics out of them, and that we manage the faults.

When we talk about running the OpenStack services on Solaris, one of the big keys, and one of the things we've done that's fairly unique, is figuring out how to run those with minimal privileges. So in the event that somebody manages to hack your API services, or any of the other 30 or 40 or 50 services you might be running for your OpenStack control plane, that contains the boundary of what they can affect. We don't run these services as root. We run them under reduced privileges, under different user accounts, and only give them the privileges they need.

Also, when we talk about security in OpenStack, what about the data at rest? How do you ensure that's secure? ZFS has built-in encryption, very easy to use. It's available in all of our ZFS Storage Appliances and in Solaris ZFS itself, and it's very easy to configure it under Cinder so that all of your block storage is encrypted. Then there's data-in-motion security. Of course, you can use things like SSL and IPsec for securing your control plane in various ways. But there's also the question, an extra layer to help you if you've got multiple tenants running over the same network, of how we keep the data in motion secure when we're migrating VMs between our compute nodes. We have a secure migration capability built in for migrating zones, so that all of your migrations between the compute nodes are secured on the link. And you can always run, as I mentioned earlier, applications within an immutable zone.

Just a quick sidelight about encryption here. If you haven't had a chance to try out our latest systems, the T7 and M7 systems with the M7 processor, one of the things we'd like to point out is that we can encrypt everything at essentially no cost. All of the common encryption algorithms are baked right into the chip, so we can offload all of that work. You never see it as CPU impact on your compute nodes or any of the other things that happen to be running in your OpenStack environment. And we've got other features, like silicon secured memory, a new feature we've just introduced, which allows us to lock down memory regions so that some of the bugs you've seen out there, some of the more recent brand-name exploits, can't happen in these environments, because we can protect adjacent memory regions. And just to say a little bit about the performance: these are pretty stunning differences on an M7 system versus either Intel or IBM processors. When we're talking a 10 to 30 times difference on common algorithms, that's a very remarkable difference.
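For the data-at-rest piece, encrypting the ZFS dataset that backs your block storage is close to a one-liner. A minimal sketch using Solaris 11 syntax, with an example pool and dataset name:

    # Create an encrypted dataset to hold Cinder volumes; child datasets and
    # zvols created under it inherit the encryption setting:
    zfs create -o encryption=on -o keysource=passphrase,prompt tank/cinder
    zfs get encryption,keysource tank/cinder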
One of the other things I mentioned earlier in Solaris 11.3 is the compliance feature. We've gone and built a new compliance command that you can run on each node to check your system's deployment against different compliance profiles that you might have, such as PCI or some of the federal profiles that are out there. So you can make your systems compliant out of the box, you can check them, and we'll be providing remediation features to automatically remediate problems there. One of the things that's not there in Solaris 11.3 is multi-node compliance, but that's something you're going to see coming in future releases, so you'll be able to monitor your entire cloud for compliance. So those are the general features in Solaris that are really key in deploying and managing an OpenStack cloud environment.

A couple of years ago, when we got to the point where we had builds of Havana that sort of worked, the next thing was, well, how do we know that they really work? We need to start using this stuff ourselves. And there was a lot of opportunity, shall we say, to do that. If you look at how we have historically done engineering in Solaris: if you needed test machines, well, we had a web page. You'd go off and reserve entire systems. And most of us squatted on those; we'd get one and squat on it for months, years at a time. A very inefficient and, relatively speaking, high-cost environment. More recently, we started handing out kernel zones and LDOMs and things like that, but it was still a very static infrastructure in that sense. There wasn't a lot of choice in the sizes of machines you could have, and very little capability to build any complex networking infrastructures you might need, things like that. So we needed to modernize. And like everybody else: hey, here's the cloud, why aren't we doing that? So we've done that, and we've been running it, and I'll talk more about that in a second.

What we've got right now is a small cloud. It's not a huge cloud by any stretch of the imagination. But we're going to build a huge one, really build out to a scale similar to what our customers will want to deploy for private clouds. And generally, our ongoing strategy will provide a good hybrid cloud environment where your on-premises private cloud and the Oracle public cloud products will be able to interoperate, and you'll be able to migrate workloads between them. So we need to build some big clouds that involve Solaris to make this all work out. That's the stage we're in now: we've run the small cloud, now we're going to go big. But sometimes I feel like Tom Cruise in this picture, just lashed onto the side of the airplane that's lumbering along here. It's going to be a hard problem to deal with.

The interesting thing about our cloud, to me, compared to what most other people are doing, is that generally you're running OpenStack, maybe off the upstream tip, but you're probably running on something like an LTS version of the OS underneath. Me, on the other hand, I've got an OS that's changing every two weeks. That's our build cycle in Solaris, and OpenStack is integrated into that. So the goal here is that we upgrade every time, and we find all these problems well before they ever show up at any customers. So everything's a moving piece all the time, which makes it a very challenging thing.
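The compliance checking I mentioned a minute ago is just a per-node command run against a benchmark. A hedged sketch, since the benchmark and profile names depend on what's installed on your release:

    # Assess this node against the Solaris baseline profile, then view the results:
    compliance assess -b solaris -p Baseline
    compliance report

    # A PCI-DSS oriented benchmark is also delivered:
    compliance assess -b pci-dss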
And then the other part of all this is getting an understanding of OpenStack and cloud computing, pulling in the knowledge we gain from running it, and asking: how do we make Solaris work better? How do we get the whole organization really aligned behind building a great cloud computing platform?

So here's the rudimentary diagram of the cloud we started out with a couple of years ago. Six little nodes, a couple of reasonably good sized compute nodes. This was OpenStack Havana, everything running on bare metal, and no HA. This was our sandbox to start with. We did have data link multipathing for the network links, partially to give us additional aggregate capacity, but also to give us a little bit of resilience against switch failures and things like that. But as you can see, it's a very simple layout to start with, and that's where we were.

Now we've gone and built out something that's roughly four to five times bigger at this point in time. What you see here is a collection of systems providing the various pieces. In terms of compute nodes, I think one of the, again, interesting aspects of our cloud that you won't see anywhere else is that it's a multi-architecture cloud. It's x86 and SPARC. Now, we're running a lot of this stuff on x86 right now for historical reasons, in terms of the machines I had available to build things initially and so forth. But in terms of the compute nodes, we're basically 50-50 in capacity between SPARC and x86. And, oh, by the way, we've brought in a ZFS Storage Appliance cluster now to provide our back end for Cinder and the other things that we want to do with storage. So it's gotten a lot more capacity, a lot more performance. We've upgraded the networking along the way; initially it was all one-gig networking, now it's 10-gig. And we're continuously updating everything as we go along.

What we're trying to get to is a real global multi-region, multi-cell architecture, because Solaris development is a highly distributed organization. We have major engineering sites across the US; there's one here in Austin, Colorado, California. I'm actually in Boston. And overseas, we have a bunch of my colleagues in Dublin, Prague, Beijing, various places like that, and big sites in France and the UK as well. So we've got a globally distributed environment that we need to support, and we need to learn how to really run massive-scale clouds there.

So I apologize for this incredibly detailed diagram that my friend Octave drew up for us. But this really shows where we're trying to go, and you can compare it to that initial crude diagram I drew two years ago. This is what our various regions are going to look like. We're in the process of going to OpenStack Kilo, which is going to be showing up in Solaris 11.3 real soon now in one of our support updates. Our next release after that is planned to be Mitaka. We're going to skip Liberty, because we've been on this treadmill of trying to catch up to the community, and really the only way to get there is to hop all the way to Mitaka and get ourselves current. That's going to have some interesting challenges to it. You see, with all of the network links here, we've got a lot of redundancy built in. We're going to have HA load balancing. So in Solaris, we actually have an integrated load balancer that, frankly, we haven't used very much, so we're finding all sorts of interesting things with it.
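Those release hops, and the every-few-weeks updates, ride on the same IPS and boot environment mechanics mentioned earlier. A minimal sketch of one upgrade cycle; the boot environment name below is just an example:

    pkg update        # pulls the new Solaris and OpenStack bits into a fresh boot environment
    beadm list        # shows the current BE and the newly created one
    init 6            # reboot into the new BE

    # If the new bits misbehave, roll back by reactivating the previous BE:
    beadm activate solaris-previous
    init 6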
The load balancer is actually working out better than most of the things that we don't use usually do, so that part's good. But yeah, essentially, we're building an architecture which is all horizontally scaled with load balancers and so forth. The public and private API and back-end nodes will all be things that we run in various container architectures, and I'll talk more about that in a second. But essentially, it's all going to be a very horizontally scaled environment. One of the other interesting things here will be going to Neutron DVR as our networking architecture. Right now, we have kind of a centralized L3 agent architecture, with all of the opportunities for failure that that represents, and going to a full DVR architecture will give us a much better fault management story. And one of the other key points here is that we will have both Linux and Solaris compute nodes in this architecture. Right now, we don't have any Linux compute nodes in our cloud because, if you recall, the virtual networking that we're using under Neutron is the Solaris elastic virtual switch. We're in the process of bringing in OVS, Open vSwitch, as our underlying network virtualization technology. At that point, we'll be able to have full interoperability between Solaris and Linux at the virtual networking layer, using the ML2 plugins the same way. So we can have the cloud that everybody really wants, which is the mixed-OS cloud. Because none of our customers, nor do we, want to deploy, well, here's a Linux cloud, here's a Solaris cloud, and maybe we can tie them together at some level through Keystone and stuff like that. Ultimately, that's not very satisfying.

Just to talk a little bit about how we run our cloud: I've got hundreds of Solaris engineers who are my customers. They tend to be rather piggish about what they think they need in terms of resources. You tell them, well, yeah, you get a 2-gig VM, and they're going, what, a 2-gig VM? I want more than that. So over time, we've spent some time learning what the sweet spot really is in terms of the resources they need. Basically, our tenant model is that each of our users is a tenant. We also have some shared tenants for projects and for other production uses that we're starting to run on the cloud. But essentially, we've got some quotas, and right now we're a little limited on some resources, so they're probably a little lower than you might think. But that's OK; it's still something for everybody to work with. And like everybody else, the onboarding of users into this is an interesting question. We've got a very simple self-service create-user Python script that they go and run, and it creates an account and project and sets up the basic things there. We need to build a GUI for that in some of my really scarce free time.

If we look at how we deploy our infrastructure: right now, we mostly run the various OpenStack infrastructure elements in the global zone on bare metal. That's really not that efficient. I look at the utilization levels we have there, and it's not really the way you should run this at all, especially as we move forward and move a lot of this stuff onto T7, M7-based systems; they're way overpowered for doing that. So as we roll out new generations of the services, we're moving them towards kernel zones and non-global zones, and there are various reasons to choose each of those. Right now, obviously, nova-compute has got to run on bare metal. That's fine.
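That self-service onboarding script, at its core, is doing something along these lines against Keystone and Nova. The commands below are a hedged sketch of the Juno-era CLI with example names and values, not the actual script (the member role name in particular depends on your Keystone setup):

    # Create a per-user project and user, and grant the member role:
    openstack project create dave-project
    openstack user create --project dave-project --password 'example-password' dave
    openstack role add --project dave-project --user dave _member_

    # Set conservative quotas for the new project (values are examples):
    nova quota-update --instances 10 --cores 20 --ram 65536 \
        $(openstack project show -f value -c id dave-project)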
Nova-compute, you could run it in a container, I suppose, and make things more complicated, but frankly, it doesn't really benefit from that. Neutron L3 agents are the other thing that we have to run in the global zone at this point in time, but that's something that over time won't be necessary any longer. So most of the time, what we're going to do is deploy those services in kernel zones, at least in the near future. That gives me the ability to migrate them around between the hardware that I'm using underneath, and it also allows me to be kernel-independent in terms of what the hypervisor is running versus what we're running the services under. Non-global zones are a good choice for this as well, and there are actually some reasons why we would use them: if we've got services that we need to run active-passive with Solaris Cluster, our best option is to run those in non-global zones and use the zone clustering technology that's in Solaris Cluster.

The other thing that we'll be looking at is Docker containers. If you've been paying attention to some of the things we've been talking about with Solaris, Docker is something we've been working on for a while. If you follow the Docker community closely, you've seen some pull requests in the last few months, really the last month, for things to start showing up upstream. In the near future, we'll have Docker as an option for you to run on Solaris. And I've been to a couple of talks already at this summit about people running their infrastructures in Linux containers and using Kubernetes and stuff like that for orchestration. Those are going to be options here as well.

I mentioned the immutability features earlier, and we can dive into that a little bit right now. What's really cool about the immutable zones feature in Solaris is that it's not just a matter of, oh, we make all the file systems read-only and then you can't do anything. That's not useful. When we first started talking about this feature, I said, well, I can't run my infrastructure in that sort of environment, because we need to make changes to the OpenStack configuration files, and occasionally some of the other things in the system, without having to reboot those systems. So if you look at how we do the immutable zones feature, there's actually what we call a trusted path into the system that allows various pieces of the system to be writable while maintaining a largely read-only profile. And we have several different profiles that you can apply to the system, depending on which risks you're most concerned about. Generally, we're working on applying a profile that leaves some of the configuration files writable so that we can do this stuff. What we'll be doing longer-term is making it such that Puppet, and other configuration management tools, but Puppet's the one we bundle with Solaris, will actually be able to run on the trusted path and make changes itself while keeping the other paths locked down. So you can ensure that, yeah, my infrastructure is managed through Puppet; I don't have somebody SSH-ing into one of my pieces of infrastructure and making changes that Puppet then has to go reverse. We can really ensure that we don't have those skews for any length of time.

Talking about the hardware specs: what do we run in our cloud? What kind of hardware? Right now, it's an assortment of stuff.
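Before getting into the hardware, just to pin down the immutable zone profiles mentioned above: they're set per zone with a single property, and the trusted path is how you administer the zone afterwards. A rough sketch with an example zone name:

    # Make a zone's system image immutable but leave configuration files writable;
    # "fixed-configuration" and "strict" are the stricter profiles:
    zonecfg -z api-zone set file-mac-profile=flexible-configuration
    zoneadm -z api-zone reboot

    # Administer an immutable zone over the trusted path when changes are needed:
    zlogin -T api-zone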
There were various things that I had available a couple of years ago that we deployed, and then we started buying additional things as we went along. Right now, on each of our compute nodes we're running anywhere from 10 to 60 instances, based on the sizes that people choose. The flavors that we offer range from 2 gigs of memory up to 32 gigs of memory, with various sizes of block storage to go along with that. So we get a range of usages. But right now, we've basically come to the conclusion that the right level for us to standardize on is a half terabyte of memory per compute node in two-processor configurations. Because, especially on SPARC, the number of threads we've got off of those is pretty massive, and it actually turns out that if I look at the utilization on my compute nodes, it's not that high; it's really hard to keep all those processors busy at this level. So if we went with, say, one terabyte of memory per compute node, we'd have a better balance there. But then I've got the problem of, OK, we need to do maintenance on one of these things; how long is it going to take me to evacuate it? It's the classic fire escape problem of how you get everybody out of the building or off the plane. So it seems like a half terabyte is about the right sweet spot at this point in time.

As I mentioned earlier, we're using ZFS Storage Appliance clusters to back end Cinder. What's interesting in my observations of this: iSCSI was not something we had done a lot with internally. We've had it as part of Solaris for many years, but internally it wasn't something we used very much; we all had local disks, and we used NFS. So iSCSI to me was an unknown going in. To me, it was probably the riskiest part of it; we had no idea how well it was going to work. Turns out it's actually worked very well, surprisingly enough, given how little we had used it. What is interesting to see in our environment is the workload on the storage appliances: it's about 80% writes, since the ZFS caching in the guests is so good that once they're up, there's almost no read activity in terms of basic operations. Now, if you're running a job like, say, building a Solaris source tree, well, yeah, there's a lot of read traffic there, but there's still probably even more write traffic. It ends up being just an interesting data point on the workloads that we see. It does affect how we configure the storage appliances: we really go for Logzillas and stuff like that, and don't worry about the Readzillas so much.

A little bit about our operational environment. This is how we deploy our systems: using the standard Solaris features, the automated installer and IPS for installing systems, and unified archives for recovery and cloning. My original goal was, we don't have any HA, but I need to be able to rebuild any of these nodes within 15 minutes, including booting the thing over the network. We were able to do that, and it's worked out very well. Puppet, which I mentioned earlier, we integrated in Solaris 11.2, and we continue to extend its capabilities in terms of Solaris; you're starting to see some of our work there go back upstream into the public community. We use Solaris RBAC for managing administrative access, so root is a role, and we use it as little as possible.
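Back on the storage side for a second: pointing Cinder at a ZFS Storage Appliance over iSCSI looks roughly like the snippet below, using the ZFSSA iSCSI driver that appeared upstream around Juno. All of the values are placeholders, and exact option names may vary by release, so treat this as a sketch rather than a reference configuration:

    [DEFAULT]
    enabled_backends = zfssa-iscsi

    [zfssa-iscsi]
    volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
    san_ip = 192.0.2.50                  # appliance management address (example)
    san_login = cinder_admin             # example credentials
    san_password = example-password
    zfssa_pool = cloudpool               # example pool and project on the appliance
    zfssa_project = cinder
    zfssa_target_portal = 192.0.2.51:3260
    zfssa_target_interfaces = igb0       # example appliance data interface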
On that RBAC point, we're pretty militant about filing bugs about anything where I have to go use root, because if we're going to provide the level of security and auditability that you require in these environments, RBAC is very key. We also use it for temporary administrative access, for when I've got developers that need to go look at one of my nodes. We have the ability to put that in temporarily and then use some SMF periodic jobs and IPS to revert it when it's no longer needed.

I just wanted to show you an example of what we do with Puppet, some of the things we can do with it, and some of the extensions that we've done to Puppet for Solaris. What you see here is a class for setting up, essentially, a link aggregation for Neutron. You can see us setting up Solaris VLAN objects, IP interfaces, IP address objects, setting up the link aggregation, and customizing some properties in terms of how DLMP is going to detect failures. These are all Solaris-specific extensions to what Puppet does, but this is the kind of thing we're doing.

I mentioned SMF earlier. A lot of our availability story is that SMF detects when my services fail and crash, and it can restart them. We've got good dependency checking and stuff, and I mentioned the service privileges earlier. I'll also show you something that you'll see coming in a future release, which I'm running now: zones as SMF instances. What you see here is actually a fault notification that one of my guests on the cloud has failed, and I get email on that. Actually, I get more of these emails than I would like, because we've got some bugs around that. But essentially, if you've got some workloads that you need to run more as pets rather than cattle on your cloud, we're going to have capabilities here for you to get the monitoring you need about what's going on with them as part of the operating system.

So, kind of wrapping up here: we've been doing this for two years. How has it worked out for us? Generally, it's gone well. I mentioned earlier we'd like to upgrade every two weeks. Well, that turns out not to be possible when your operating system has bugs during development. So we've done 13 upgrades in two years, which is still an awful lot of upgrades, and that's crossed from Havana to Juno, and Juno to Kilo most recently. We had two upgrades that failed; in fact, one last week, where after we rebooted everything, well, we found we were getting panics in our firewall software. So we had to back off. But that's boot environments: all I had to do was activate the old ones, reboot, and we were back to where we were. So I can do these upgrades aggressively and not be too concerned about what might happen. Yeah, we've got a couple hundred users, and as I said, per-user projects and such, so we've got a few more tenants than that. Our availability has actually been pretty good, considering I have no clustering, I don't have HA; there's no HA built yet, and we're going to be building that as we go along. But we're doing pretty well. Of the downtime, it's probably been about 60% upgrades and 40% unplanned outages, and we've had two big ones that were annoying. But that's what happens. And we've filed a lot of bugs. Everybody really enjoys getting a bug tagged with my little tag for those bugs, because they get a lot of scrutiny of, well, this is something we need to fix before release.
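For reference, the Puppet class I described a moment ago is essentially driving the same Solaris networking objects you'd otherwise create by hand with dladm and ipadm. A rough sketch of the equivalent steps, with made-up link names and addresses, and with the DLMP probe property named per the Solaris docs as an assumption:

    # DLMP link aggregation over two physical NICs:
    dladm create-aggr -m dlmp -l net0 -l net1 aggr0

    # Enable probe-based failure detection for DLMP (property name may differ by release):
    dladm set-linkprop -p probe-ip=+ aggr0

    # A VLAN for a Neutron network on top of the aggregation, plus an IP interface and address:
    dladm create-vlan -l aggr0 -v 100 vlan100
    ipadm create-ip vlan100
    ipadm create-addr -T static -a 192.0.2.10/24 vlan100/v4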
So I'll just close with a couple of things about what we're going to be doing with Solaris and OpenStack in future releases. I mentioned Open vSwitch, so we'll be able to do Linux and Solaris compute nodes. We're also going to have an OpenStack installer to really make it easy for you to deploy multiple clouds on a catalog of systems, and that's something we've got largely completed. MySQL Cluster: when we talk about the database back end that's in every OpenStack installation, how do you do HA on your database? Classically, people are using MySQL server and then the various Linux clustering things around that. We've got a set of patches to use MySQL Cluster so you can run an active-active configuration. That's something we're going to be contributing back to the community as soon as we possibly can, as soon as we finish shaking out the bugs. I mentioned the multi-node compliance. And we've got a lot of stuff around analytics; if you've used the ZFS Storage Appliance analytics, things like that are going to be showing up in Solaris, and they're going to be applied to these cloud management capabilities as well. And the last thing I'll mention, something I actually worked on myself, was bringing in cloudbase-init, so we have cloud-init-like capabilities for configuring your guests on Solaris.

So that's what I have. I've got a couple of slides here with, if you want to follow my continuing adventures in doing this, some basic background on getting started with OpenStack for Solaris. We've got a mailing list there where you can get access to us and ask questions and things like that. So I've got about one minute for questions. OK, I snowed you all. Yes, go ahead. Can you get to the microphone?

When you mentioned the MySQL cluster work, did you mean Galera, or your own MySQL Cluster? Galera is already there. Yeah, this will be using the actual MySQL Cluster upstream. So why not Galera? This seems simpler, and it doesn't require extra components. My question is because Galera is becoming stronger in the community and more people are contributing. I think more options for everyone is a good thing, right? Thanks. Any others? One last question, and then they're going to kick me off here.

I was wondering if you could give a little more detail about your HA solution. What types of failures do you monitor and detect and correct? Yeah, so right now, what are we doing about HA? I don't have a lot of monitoring going on yet. There are a lot of things we can do with SMF right now; we get failure notifications on services, things like that. We run some canary jobs to make sure that things are sort of up. Nothing super sophisticated in that respect yet. We've got capabilities with SMF, what SMF calls monitor services, where we can have SMF do much more active monitoring of what's going on with those services. That's something you're going to see us add a lot of capabilities around. So right now it's not quite where we'd like it to be, but that's an area where we're doing a lot of work. OK, thank you, everyone.