Folks have filtered into this intimate room, so we'll go ahead and get started. I'm Rob Esker, the product manager for OpenStack at NetApp. I probably should have asked whether everyone here speaks English — you're welcome to respond in French, but I want to understand what you're saying, so I apologize for having to speak in English. Je ne comprends pas le français.

I want to talk a little bit about OpenStack and NetApp: what we're up to, how we look at it, some of our development effort — more or less the state of the union, or rather the state of the art, of OpenStack and NetApp. Our OpenStack development is part of a group at NetApp called the Cloud Solutions Group, where we're looking at enabling a common data fabric across multiple different endpoints, to try to solve one of the not-yet-fully-solved problems of hybrid cloud, which is fundamentally the movement of data. Data is the thing that's least like a power utility. You've probably heard the metaphors several times: data has gravity, data has inertia. It's comparatively hard to move, whereas compute and network can be turned on like a light switch.

In starting down this path, we looked at what's within NetApp's portfolio of products. Fundamentally, there's a thing most folks associate with NetApp called Data ONTAP. It's the storage operating system that has classically been associated with the hardware we produce, and that same operating system exists in all models of the FAS line — the products NetApp has historically had success with. We're doing some things, which we'll get into in more detail in a second, that change the contexts in which we can deliver Data ONTAP, so it starts to grow beyond those physical pieces of hardware. But it's a great place to start in building a common data fabric, because if you're familiar with Metcalfe's law — and I'm paraphrasing — any given network is only as powerful as the number of nodes within it: the more nodes in a network, the more powerful it generally is. By most measures, Data ONTAP is the most widely deployed commercial storage operating system in the world. I'm not claiming NetApp is the largest provider of storage in the world — we're not — but because all of our platforms have historically been delivered with this common OS, when you actually measure it, ours is the most widely deployed single storage operating system. So it's a good place to start in building a fabric. When you layer on top of that the fact that — I believe it's over 200, and some of the folks in the room could probably confirm — different service providers around the world have built services underpinned by NetApp capabilities, you start to fill out this notion of a common data fabric.

But that's only one part of the picture, so where does OpenStack fit into it? Commonly we're seeing folks deploy OpenStack on-premises to facilitate a hybrid cloud model: I want to build my own set of capabilities that are AWS-like. Maybe I don't necessarily intend to do business with AWS, but increasingly developer tools expect something like an S3, an EBS, or an EC2 endpoint, and you can of course levy that expectation against an OpenStack implementation. So increasingly, OpenStack starts to resemble that common infrastructure-as-a-service control plane.
And it's not always necessarily the open source bits themselves, the reference implementation. For example, you'll see Azure depicted here, if you're familiar with the icons. Microsoft is on record — their product managers have stated it — that over time they'll start supporting OpenStack API endpoints on their services. That isn't to say they're building Azure with OpenStack, or intend to, but rather that you can write code against an OpenStack API and run it against an Azure-hosted capability. We're seeing that kind of thing across the board. There was a contribution — I think it's a bit old now, at least nine months — providing basic API shims for Google Compute Platform, so that in the same way you can levy application logic intended for AWS against an OpenStack cloud via shimmed APIs, you can do that with some subset of Google Compute Platform's capability. That's in StackForge, I believe, and may have made it into some of the upstream OpenStack projects on GitHub as well.

So increasingly we're seeing OpenStack become the choice of the various hyperscale providers — the largest providers of public infrastructure-as-a-service capabilities — and of a collection of others that have that ambition, a number of whom are actually building their public capability on it: Rackspace, hpcloud.com, and SoftLayer has elements of it. SoftLayer is likewise wrapping some of its capabilities with something called JumpGate, I believe, which is a way of taking their existing compute-as-a-service capability and making it accessible via the Nova API — and they're similarly deploying Swift. So increasingly OpenStack is that common control point: either you address everything through the native OpenStack APIs, or OpenStack functions as an alternative to some of these other capabilities through its own API shims, or — at the other end of the spectrum — some of these other existing as-a-service capabilities wrap themselves with an OpenStack API shim. Over time, this looks like it might become the standard control plane across hybrid clouds. We shall see — it may not be the only one — but it's as standard as we can hope for: not a standard ratified by some standards body, but a standard through ubiquity, through common deployment. And of course there are the different flavors of OpenStack cloud you might deploy on your own premises or in a hosted capacity, and other boutique IaaS capabilities that might not be depicted here.

Just a little more about what I'm referring to, if you're not already aware: this is a selection of some of the OpenStack projects and their equivalents amongst AWS services. It goes both ways. You can of course address OpenStack directly via its native APIs, the individual services' APIs. Likewise, you might have intended to deploy against AWS and need to repatriate for reasons of economics, or there's the classic hybrid scenario of bursting out to handle peak load for that once-or-twice-a-year event. Deviating from that for a moment — I'll come back to it in just a second — I want to talk about how NetApp is involved with OpenStack, and by what means. The reality is that we're no strangers to open source: the Data ONTAP operating system I referred to is derived from BSD.
And we regularly have kind of a push-pull relationship with that community: we'll push things into it, and we likewise sync our own sources with it. We've been involved in a number of different places, and we were the first major storage vendor to participate in the OpenStack community — having signed the corporate contributor agreement, provided upstream integrations, and been a charter member of the Foundation at the gold level, which is depicted here as well. So not only have we been involved for a while, we've had the chance, having been involved as long as we have, to mature some of our capabilities. Most of our upstream integrations started around Cinder, and we'll get into a bit more detail about that in just a second. I'm going to move very quickly, so apologies for how fast I'm talking; I'll certainly be available to answer questions afterwards.

The place we start with our platforms is making sure that their distinguishing characteristics — the things they do well — are not hidden by the abstractions, by the service layer, that an OpenStack capability provides. We have customers who will use something like Cinder independently of the rest of OpenStack because they desire a single standardized block storage abstraction, and the generally modular nature of OpenStack allows for that, with certain caveats. That makes sense from a deployer perspective: I don't want to have to rewrite my application or plumbing logic against a different proprietary API if I decide to swap out backends, or if I have multiple backends appropriate for different workloads, use cases, or SLAs. So it makes sense to write against the Cinder API, the standard abstraction. But as an abstraction, it also isn't terribly nuanced in its understanding of the distinguishing characteristics of individual storage systems. So the first place we start in providing integrations is to make sure that the things that are genuinely useful, interesting, and different about NetApp systems are accessible in the context of OpenStack and aren't hidden by the abstractions. I'll get into how we do that in a second — that's what's being shown here.

Going back to this notion of a data fabric and Data ONTAP: clustered Data ONTAP is kind of the latest evolution of it — that's not quite true, and I'll talk about why in just a second. Probably the better way of referring to it is scale-out ONTAP: the ability to take a single capability, a single namespace with distributed characteristics, and scale it horizontally — actually, you can scale it both vertically and horizontally. What we're trying to depict here, a little abstractly, is that ONTAP can reside atop NetApp hardware, of course, which is the classically understood case. It can exist atop our FlexArray flavor of hardware, which basically functions as a gateway to third-party storage, so you can take the capabilities of ONTAP and apply them to some of our competitors' arrays. And likewise, the newest addition — this was just announced last week — is something called Cloud ONTAP, which is basically ONTAP as an instance in a public cloud. Why is that interesting? Well, you get all of the capabilities of Data ONTAP.
You get its ability to support multiple different protocols as a storage server and all of its storage efficiencies, but probably its most interesting characteristic with respect to enabling a hybrid cloud is its thin replication: the well-known, well-hardened, established replication capability we call SnapMirror. So you have the ability to take a workload's data set and move it from your on-premises environment to a hosted provider, or to EC2, in as thin a way as possible in terms of network utilization — to your EC2 instances, or to your instances in any number of different public or private clouds. And that's how we're starting to build out this notion of a common data fabric. (Sorry — I thought I heard a question.)

So let's talk a little bit about the OpenStack integration more specifically. The first place we started, as I mentioned, was with block storage. But I'm going to talk about images — Glance — first, because there we didn't actually have to start anything: it just automatically worked. If you're not already familiar, there are two basic options for deploying Glance: a file backend and an object storage backend. It turns out that unless your images are immense in number, there's a lot of additional complexity in deploying Swift if that's really the only use case for it. Now, it may very well be that you have a generalized object storage requirement and you also want to host your images there — that can make sense. But if the only thing you really want to do is deploy a compute-as-a-service capability, frankly, using the file backend is far more efficient; it makes a lot more sense. And what you basically get, automatically, is the benefit of deduplication. There are some other reasons to use this, which I'll explain in more detail in a second, that have to do with our cloning technology. But given that these are operating system bits, it's not at all uncommon to see 90-plus-percent deduplication rates — there's a lot of commonality from one image type to another. So this is just an automatic quality you get out of the gate.
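To make that concrete, here is a minimal, illustrative sketch of pointing Glance's file backend at a deduplication-enabled NetApp NFS export. The export path and host name are hypothetical, and the configuration group these options live in varies by OpenStack release; treat this as a sketch, not a reference.

    # glance-api.conf -- use the filesystem (file) store rather than Swift
    [DEFAULT]
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images

    # /etc/fstab on the Glance host (hypothetical export): the directory above
    # simply lives on a dedup-enabled NetApp FlexVol exported over NFS
    svm1:/glance_images   /var/lib/glance/images   nfs   defaults   0 0

Images written there are deduplicated by the underlying volume with no Glance-side changes at all.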
Moving on to object storage, there are a couple of things I want to talk about here. First of all, there is Swift itself — the reference implementation, the open source capability described in depth here at this summit. What we do is take one of our platforms, the one I haven't been talking about today, something called E-Series, take a unique characteristic of it, and reduce the 3x storage consumed by default in Swift's consistent-hashing-ring scheme — which is how it protects data — down to roughly 1.3x via a parity scheme. But you might ask: well, I've heard that you can't use RAID with Swift. And that's a generally true statement — it's not typically a good choice, not typically viable. The reason is that Swift is commonly deployed in a capacity-optimized manner: you're using the largest possible spinning disks, with six-terabyte drives now emerging into the market. I don't know the exact numbers, but I know that in the four-terabyte range, in certain scenarios, a rebuild could take upwards of a hundred-plus hours. And when you look at the number of spindles you might be deploying in a given RAID set, if you will, and do the basic math around mean time between failure, you might actually need to plan to run degraded full time, given the use case and the common deployment profile of Swift. So it becomes untenable: that rebuild time basically means you're running degraded, which means less protected, and performance-degraded as well.

We have a kind of cool trick up our sleeves called Dynamic Disk Pools. If you're familiar with CRUSH — and I imagine in this crowd a fair number of folks are — we use that same academic work, that same algorithm, within our E-Series systems to deliver what's essentially a declustered RAID. Long story short, a rebuild takes anywhere between five and ten percent of the time it would otherwise take when you use the Dynamic Disk Pools capability — that's the piece that uses CRUSH. So it mitigates that problem, and now you can effectively use E-Series with a parity scheme — a RAID scheme, if you will — to protect the data and make it immediately consistent, which is a quality Swift itself doesn't have; Swift is an eventually consistent model. The numbers work out to about 1.28x for protection within a single site. And clearly, very commonly, you're also architecting to protect against site failure, so you'll have other copies elsewhere. This doesn't solve that basic problem, but it certainly reduces the number of copies that need to be held at any given site. A common case would be 5x — five total copies of a given object — because in order to protect against site failure you usually want a minimum of two copies at the remote site, so that if you do have a failure you don't have to reconstitute the data across the WAN. That 5x can compress to about 2.6x if you're using E-Series on both ends — roughly two sites at 1.28x each. That helps you scale more broadly and saves you money on operational and environmental costs. This slide is just depicting how it shrinks. Apologies — I got off script with my slides there.

So, block storage. As I referenced earlier, this is actually where we started. We started with the classic mode of the Data ONTAP operating system, called 7-Mode, because at that point in time — and it's still the case — the critical mass of deployed systems in the world were running it. Clustered Data ONTAP is probably a better architectural match for the design center of OpenStack itself, but 7-Mode works, a lot of folks want to use it in an OpenStack deployment, and so we facilitated that. We then moved on very quickly to clustered Data ONTAP, and we offer a lot of different options in terms of providing capacity through the block storage service. The thing that's probably confusing when you first see this is NFS: we're talking about block storage, so what's the relevance? The basics are that the common use case for Cinder is providing persistent storage, either to boot an instance from or to hot-add persistent capacity to an instance. In that scenario, we can rely on the mediating effect of a component called libvirt, if you're familiar with it, and of the hypervisor itself. Long story short, we mount NFS on the compute node, which is where the hypervisor lives, and files end up being treated as virtual block devices, unbeknownst to the instance. All the guest virtual machine knows is that it has a block device, and it gets to interact with it as expected.
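As a rough, illustrative sketch of what that looks like on the Cinder side — the option names are from the NetApp unified driver of roughly this era, and the hostnames, credentials, and paths are placeholders; the NetApp OpenStack Deployment and Operations Guide mentioned later is the authoritative reference — a clustered Data ONTAP NFS backend stanza resembles this:

    # cinder.conf
    [DEFAULT]
    enabled_backends = netapp-cdot-nfs

    [netapp-cdot-nfs]
    volume_backend_name = netapp-cdot-nfs
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    netapp_server_hostname = cluster-mgmt.example.com
    netapp_login = openstack
    netapp_password = secret
    netapp_vserver = svm_openstack
    nfs_shares_config = /etc/cinder/nfs_shares

The file named by nfs_shares_config lists the NFS exports (FlexVols) the driver may place Cinder volumes on; Nova then attaches the resulting file-backed virtual block devices through libvirt exactly as described above.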
The reason we favor NFS here — and to be clear, we support iSCSI as well, and I'll explain that in a second — is that it's vastly more scalable than iSCSI. With iSCSI, on any given storage system, you're going to run out of LUNs and initiators well before you hit any sort of logical limit on the number of files in a given export. A common question we hear is: yes, but I think of NFS as shared storage with a single fan-in problem. Except that when you combine it with clustered Data ONTAP, you're able to supply multiple different storage servers inside a single namespace, and with parallel NFS (pNFS) you can parallelize that I/O so that it takes on more of a distributed characteristic and mitigates the fan-in problem you might classically associate with NFS. Now, that said, we do support iSCSI. For those who might be familiar with Ironic, or with the use of Cinder outside the context of the rest of OpenStack — where you're vending block devices to non-virtualized or otherwise unmanaged entities — you need to supply something that can be mounted in a commonly understood way, something that supports iSCSI mount semantics. So we do of course support iSCSI as well, and we try to treat the two as peers as much as possible, but for scale considerations NFS tends to be the better choice. We did add iSCSI support to our E-Series systems, and you'll see Fibre Channel layered in over this next release — possibly not fully complete until the release after Kilo, the "L" release, if you will.

One of the things we did — I talked earlier about making sure that none of NetApp's distinguishing characteristics are hidden by the abstractions the OpenStack services present — is develop, as an activity within the community, this notion of a storage service catalog, essentially. What you can basically do — and I'll move ahead to this — is deliver any number of different offerings to your tenant base. You define them arbitrarily: red, blue, green, whatever makes sense; here we're depicting different types of workloads it might make sense to build a catalog around. These catalog entries — in Cinder parlance they're called volume types — are composed from the unique characteristics of a given Cinder backend. So, for example, perhaps "archival" is compressed, deduplicated aggressively, and sits on big, slow SATA spindles, something to that effect; perhaps "database" is on all-flash, and perhaps it's replicated because you have high-uptime and DR-type requirements. The point is that you, the deployer of the OpenStack cloud — the Cinder administrator, if you will — get to define administratively what this catalog looks like.
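To give a feel for how a catalog entry is composed, here is a minimal, hypothetical sketch; the extra-spec and QoS key names are illustrative and should be checked against the NetApp OpenStack Deployment and Operations Guide for your release.

    # create the catalog entry (a Cinder volume type) and pin it to a backend
    cinder type-create silver
    cinder type-key silver set volume_backend_name=netapp-cdot-nfs
    # ask the NetApp driver for SnapMirror-protected placement (illustrative extra spec)
    cinder type-key silver set netapp_mirrored="true"
    # optionally attach QoS specs to the type (key names are illustrative)
    cinder qos-create silver-qos maxIOPS=5000
    cinder qos-associate <qos-specs-id> <silver-volume-type-id>

Tenants then simply pick "silver" in Horizon or with --volume-type on the command line, and the scheduler and driver take care of landing the volume on capacity with those characteristics.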
Here's a quick depiction of that. We've established a volume type and QoS specs for a gold/silver/bronze style of capability, and this is just showing, via Horizon, somebody selecting the volume type when they ask for a Cinder volume — hey, give me a Cinder volume, and I'll select the volume type. I won't bore you with the whole demo, and I'm kind of running out of time, but you can see they selected gold, and gold happened. Now let's say they'd selected silver instead; silver in this case is defined as having a replication relationship, so the container is automatically replicated. Then: hey, give me eight instances booted from a volume of type silver. Silver happens — the instances are created, and they're created in a container that is automatically replicated, perhaps to a DR site. That's just an example of the kind of thing you can accomplish, and the key is the flexibility to build the catalog the way it makes sense for your use cases.

Moving on to some of the optimizations we did with both Glance and Cinder, in particular for the case of creating instances. By default, if you're not familiar, when you create a virtual machine in OpenStack — in Nova — it's what's referred to as ephemeral, which means there's no guarantee of data being retained across a boot cycle; you should assume it won't be retained, and it tends to be booted entirely off the local disk on the compute node. That makes things like live migration of a running instance from one place to another problematic. When you use Cinder's boot-from-volume capability with the NetApp Data ONTAP Cinder drivers, you get a different characteristic. One, those instances are by default ephemeral — I'm sorry, persistent; you can treat them as ephemeral if you so desire, it's just a flag you select. Two, the instances come up much, much faster, because you don't have to copy the boot image bits all the way down to the compute node on the first boot of that image type: we clone from Glance. If Glance is on the same volume as the capacity store for Cinder — and when I say volume I'm referring to a NetApp flexible volume, not a Cinder volume — we'll just immediately clone it. If it's not, but it's resident elsewhere on our cluster, we'll do something called copy offload, which means we copy it across the cluster interconnect within the storage system; it doesn't have to wash through the host running the Cinder backend. And if neither of those applies — perhaps Glance is based on Swift or something to that effect — we'll copy it once and we'll cache it. Nova does that as well, in fact, but its cache is local to each compute node, so your cache is only as big as what's cached on that node; when we cache it, it's global to your entire fleet of hypervisors. So instances come up much faster, they're immediately storage efficient until there's a net-new write — any delta of any form from the original image you booted — they're persistent, and it facilitates live migration. For some folks it also allows them to boot their compute nodes stateless, meaning no local disk in them, if that's something you care about. So that's an optimization we added, I guess, two releases ago now, and we've added further optimizations since.
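As a minimal, illustrative sketch of that boot-from-volume flow with the standard clients of this era — the UUIDs, the instance name, and the "gold" type are placeholders:

    # create a bootable Cinder volume of type "gold", cloned from a Glance image
    cinder create --image-id <image-uuid> --volume-type gold --display-name web01-boot 20
    # boot the instance from that persistent volume rather than local ephemeral disk
    nova boot --flavor m1.medium --boot-volume <volume-uuid> web01

When the image and the Cinder capacity store share a FlexVol, that first step is a near-instant clone rather than a copy.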
So, moving on to something new to OpenStack. (Pardon me — I got far away. And that's not my water, so I'm not going to drink it.) Shared file systems — not previously provided for in OpenStack. The problem is, if you view OpenStack as the de facto open infrastructure-as-a-service capability — and we do; I think it's established itself as the winner, if you will, in open infrastructure as a service — then consider that about 65% of storage infrastructure sold in the world as recently as two years ago (apologies, I don't have newer numbers) was to underpin shared file system deployments. Even if that number has changed, it's still a very significant portion of the market. And what you don't have within OpenStack is treatment for shared file systems — delivering them as a service. It's kind of understandable why: there's quite a lot more complexity involved than in delivering something like a block storage service, something we've become very intimate with as we've delivered the capability called Manila.

Manila is more or less what Cinder does, except for shared file systems. And it's built to be agnostic to the file system: CIFS, NFS, GPFS, GlusterFS, you name it — each could be provided for, and in fact is in the process of being provided for; certainly NFS and CIFS from us directly, and some of the others from some of the other folks we've brought in and built a community with. This is now a formally incubated capability, meaning the OpenStack Foundation Technical Committee has assessed it and it's met all the requirements for formal incubation, so it is formally part of the OpenStack fold. This is something that NetApp initially conceived of, designed, prototyped, submitted, and built a community around, and we are continuing to lead it — the Manila project technical lead is NetApp-employed, for example. There's a separate session with a demo and more detail — I think it's actually the one immediately following this one — on Manila, so please catch that if you're interested. And by the way, you can also hack on the bits at github.com/openstack/manila if you want.
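If you want a feel for the user-facing side, here is a minimal, illustrative sketch with the Manila client roughly as it stood around incubation — the share name and CIDR are placeholders, and the CLI was still evolving at this point:

    # ask the shared file system service for a 1 GB NFS share
    manila create NFS 1 --name devshare
    # grant access from a tenant network, then look up the export location to mount
    manila access-allow devshare ip 10.0.0.0/24
    manila show devshare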
Just a bit about what we're seeing. These are numbers we derive from our AutoSupport capability: if a customer of NetApp desires to supply information to NetApp for support purposes, we have a phone-home capability, AutoSupport, that can supply this information to us. Of course, some folks opt out of it; some folks would prefer not to supply it to us. So this is not all-inclusive, but it's a relatively good representation. For probably obvious reasons I've not included the actual numbers — it wouldn't be appropriate to do so for a variety of reasons, not the least of which is that we're in a quiet period ahead of reporting our financial results. We've seen a huge ramp just within this calendar year — 285% growth in deployed systems — and also huge growth in the number of customers deploying OpenStack on NetApp. And the only thing we can actually measure with AutoSupport is the Cinder piece; none of the other things I've talked about are measurable via AutoSupport. Long story short, this is growing really, really rapidly — those are pretty enormous ramp numbers just within this calendar year — and we weren't starting from zero: per the OpenStack Foundation user survey, we were already the most widely deployed commercial storage option for Cinder deployments.

Another quick note about the way we do our integrations: by default, we provide them upstream. That's not to say we can always do so, for intellectual property reasons, but when we can, we provide our integrations into upstream OpenStack. We do that because it's not at all clear to us — or, so far as we can see, to anyone else — what the prevailing options for distributions of OpenStack are ultimately going to be. There are a lot of reasons to do it. It provides freedom: the ability for folks to modify our integrations to taste in their own particular deployments. More commonly, though, it's just the case that some of you may want to deploy Ubuntu, some Red Hat Enterprise Linux OpenStack Platform (RHEL OSP), some Nebula, some SUSE — it goes on. At last count, I think there are something like two dozen readily identifiable OpenStack distributions. Our stuff is in all of them because we are upstream.

Just a little summary of what we've been up to over the last two releases. Some of this I've talked about already. We delivered our E-Series system enablement for Cinder — I only touched on that briefly, but you also have the ability to stand up E-Series behind Cinder and employ that same storage service catalog construct. We layered in the parallel NFS (pNFS) support at the same time as the copy offload capability I mentioned, which was an additional optimization for improved instance creation, new in the Icehouse release. A big emphasis for us is building reference and solution architectures, and we're just starting down a path there that's going to expand fairly rapidly. A big goal when we deliver reference architectures is to provide accompanying automation in hand. So if we give you, for example, a reference architecture for highly available deployment of OpenStack services with a RHEL OSP distribution and NetApp backends, we want to also provide the appropriate accompanying automation to make it so. A reference architecture can be an interesting reading exercise, but wouldn't it be more useful if you could deploy it in a simplified manner? So the goal is to provide Puppet manifests and Chef recipes where possible with each of our reference architectures. In June we got Manila incubated, which was not an insignificant effort. Some of the rest of what you see here — our Puppet and Chef enablement for our platforms — is actually just now rolling out. And we implemented support for a capability we don't have time to get into here called pools — Cinder pools — which dramatically improves behavior on our clustered Data ONTAP systems in particular. We've got a lot planned for Kilo; if you want to hang around afterwards, we can talk about that in more depth.

We do partner, too. I mentioned that we're in any of those different distributions; that's not to say we don't also do explicit partnering activities. This isn't an all-inclusive list, but these are certainly some of the more significant folks. I guess I forgot to mention MetaCloud — or rather, forgot to remove MetaCloud, since they were acquired by Cisco a couple of months ago. We have reference architecture activities with Red Hat and Rackspace, we have ones underway with SUSE, and we have more to come in this particular respect. And I'm particularly interested, if there's anyone in the crowd who'd like to talk afterwards, in whether you're using something other than Puppet or Chef for configuration management — perhaps Ansible or Salt — I'm trying to gauge general interest in that as well. This is in the process of rolling out, like I mentioned: in fact, just two weeks ago the first Puppet device module for clustered Data ONTAP was made available on Puppet Forge, so that's a very new thing.

So, just a little bit in summary about OpenStack and NetApp: we are enabling that common data fabric across multiple different cloud endpoints, which is what facilitates hybrid cloud, and we're just starting on that path.
Our announcements around Cloud ONTAP are an example of fairly directly where we're heading, and there will be more detail on that here at the show; I apologize that I didn't elaborate. Just to give you a picture of where we see OpenStack relative to that: one of the things we see a lot of folks doing is using us — a slice of one of our systems — in a POC. Sometimes they'll use other capabilities; perhaps local disk is used in a variety of different capacities. But when it comes time to support production SLAs, oftentimes there's no second conversation about reliable, high-performance storage backends. So we participate in a lot of POCs, but we also participate in a lot of efforts to change out the storage backend when it becomes necessary to actually keep the lights on, so to speak. And the fact that we're well integrated — a hardened capability, proven over many years, with a lot of additional features — removes a lot of the risk for OpenStack, which of itself has a lot of moving parts.

The fact that you can deploy — and we have a number of folks doing this — infrastructure for both that cloud-native application model and your classic, sort of POSIX-based application infrastructure on a common, efficient, single, proven data infrastructure is something we've seen a lot of folks find attractive. It seems net inefficient to necessarily have two separate environments for that. And some of the things we're doing at NetApp are aimed specifically at bolstering individual OpenStack services so they can be used for more classic application deployments — supporting classic, high SLAs for a different model of application while at the same time facilitating the cloud-native, fundamentally distributed application, maybe built on PaaS, that type of thing.

We are very heavily invested in OpenStack. We were, again, the first to have provided integrations upstream, we're very active within the Cinder community, and we've led Manila to this point; there's more detail in the subsequent Manila session about the community roadmap we're helping to drive. If you're interested in additional details, netapp.com/openstack is probably the best single place to go. There's a particular document there called the NetApp OpenStack Deployment and Operations Guide that covers not just configuration and setup but ongoing care and feeding — how to do this over time. It's in its fifth revision, I think, possibly its sixth. You can find that there, and on Twitter, and also, for those who use it, the development and deployment community can find us on Freenode IRC in #openstack-netapp — happy to talk with you there. There are a number of sessions here as well; of course, there was one earlier today, and most of these are recorded — I think all of them, perhaps — and will be available on YouTube later. Sorry, I went through these too quickly. There's a demo theater session, I think it's tomorrow; we have the Manila session; and we have a use case showcase session later today, where we talk about some of our particular customers and how they're using OpenStack with NetApp. And likewise, our booth is over here — I think it's on the second floor — so we'd love to have conversations with you or answer any questions you have. We'll, of course, be there in force in Vancouver.
So, we still have the rest of the weekend here in Paris, but just looking forward six months, we'll see you there as well. Thank you. And I even actually finished a little early. Thank you.