So let's just give it maybe two minutes for a few other folks to filter in. While they do, I'll introduce myself: I'm Robert Esker, the product manager for OpenStack at NetApp, and I've been working with OpenStack for about three and a half years. If my basic arithmetic is working, this is the seventh design summit I've attended, and it's really interesting to see the progression and growth in the whole of the event. Thanks to the folks in the audience for showing up on day three after two successive nights of late-night parties; I hope everyone took their Advil after the Piston party the night before last.

I guess we'll just go ahead and get started. The intent of this session is to talk about what NetApp has been up to in OpenStack over the last several releases — as I said, I've been involved for the last seven design summits, and our stuff debuted in Essex — where we're at today, and a little bit about where we intend to go. I want to deliver an overview of what we've done and why we think it's useful, and then get into where we're going in the next six-month development time slice, namely Juno. This is intended to be interactive, but being mindful of time I'd like to get through all of the material, so we may pin some questions to the end. If something is terribly pressing, please do stop me along the way. I've also got a number of other folks from NetApp in the audience, so if I can't answer a question, I'm sure we can find a way to get you what you want to know. So again, thanks for attending.

To start, I want to paint a picture of where we see NetApp's product portfolio and the cloud going forward, and where we see OpenStack in particular fitting amidst all of that. We have a number of different capabilities within our portfolio, and if you look at it from a different angle, we've got some interesting intellectual property: the ability to move data in a lightweight way that's opaque to the application layers above it, in a thin manner, that type of thing. We also start from the position that, in the form of our Data ONTAP product, we're deploying the single most utilized storage operating system in the world. You're probably aware NetApp is not the single largest storage company in the world, but for many years virtually all of our innovation was delivered in the form of Data ONTAP, and it's an excellent place to start in building a common data fabric across endpoints, including hyperscale clouds. Today you can land ONTAP systems in colocation facilities with low-latency connections adjacent to, for example, EC2 — that's called NetApp Private Storage for AWS. In the not-distant future you'll also see ONTAP resident as an instance in public clouds — in a variety of hyperscale public clouds, but also in a variety of other places as well. And of course we increasingly see OpenStack as that common infrastructure-as-a-service runtime.
You'll certainly see large-scale clouds built on it. I know some of the folks here this week are talking about, for example, Rackspace, HP Cloud, IBM's ambitions, and a variety of others like CloudWatt in Europe, all building large-scale public clouds that expose OpenStack interfaces. So not only does OpenStack have the potential to become ubiquitous as a standardized, abstract interface to these infrastructure primitives, it also shims in support for some of the hyperscale providers — certainly AWS today, and increasingly others. I noticed there was a project started in the Icehouse time frame that had some primitive support for, for example, the Google Compute platform, and I wouldn't be terribly surprised if Azure showed up; I think I saw something in the news indicating that while they may not actually build on OpenStack, they'll probably start exposing OpenStack interfaces. So we see this as the common infrastructure-as-a-service runtime, for lack of a better term. Private clouds — including ones established in relation to various hyperscale and other clouds — will, we believe, increasingly default to OpenStack, and if not OpenStack itself, then to technology that supports the same set of APIs, so that you can have a common plane sitting atop what we're trying to build: a common data fabric.

That's a big part of it. Now, OpenStack is obviously a thriving open-source community, by some measures the fastest growing in history. We've been at this for a little while: the operating system I just mentioned is originally of BSD origins, we push and pull with upstream frequently, and we're the employers of some of the primary maintainers of, for example, NFS in Linux. So it was very organic for us to end up in OpenStack as early as we did, and to that point we were in fact the first major storage vendor to have joined and to have submitted integrations upstream. I mentioned I've been involved in the last seven design summits; there's been a progression in the delivery of integrations, which we'll get into in more depth shortly. We've been at it for a number of releases, maturing and expanding our offerings. I should also mention that we're in the midst of a hyper-growth phase at NetApp as it applies to OpenStack, so I'd be remiss if I didn't put in a plug: have a look at our job postings.

There are a number of things we do really well — whether it's our E-Series platform or our FAS platform — things that differentiate us from commodity storage in the market, and I have no intention of going through all of them here. When you take a look at something like Cinder, a standardized block storage abstraction, we as a company cannot afford to have those competencies hidden behind that abstraction. At the same time, customers loud and clear want the value of that sort of open, standard abstraction: "I want to write my application logic against this Cinder API, and switch the implementation per use case — whatever makes the most sense to support a certain SLA, a certain cost metric, whatever it is." So we've got a number of things we bring to the table, but we settle upon that abstraction while still getting at the richness of our set of capabilities. That's where we start when it comes to engaging in the community:
making sure that, in terms of development, nothing's left behind. Whether it's various qualities of data protection, storage efficiency, performance assurance, encryption, and so on, those things must be exposed explicitly and made accessible to a tenant — I'll talk about how in just a moment.

So we start with clustered Data ONTAP. I talked about how it can be resident in a number of different contexts. This is a little abstract, but that operating system can sit in front of NetApp disk, it can sit in front of various forms of foreign disk — other companies' arrays — and increasingly it can also sit atop what you might, arguing with the definition, refer to as disk-as-a-service, a.k.a. EBS. So you can imagine a scenario in which we take those capabilities and deliver the richness of ONTAP on top of that. Arguably the legacy of NetApp, at least in the ONTAP space — not so much in the E-Series area — is that we've taken commodity components, knit them together, and provided software capabilities on top to support enterprise-type requirements, and in the last several years service provider and cloud requirements as well.

What's depicted here is what's referred to as a storage virtual machine (SVM). In clustered ONTAP — and let me get into a little more detail here — you have this notion of a virtualized storage controller, virtualized network interfaces called LIFs, and a virtualized disk container, the FlexVol if you will. These can be moved across what's depicted as physical separation — those little vertical lines are the hardware — but we don't care, because we can move them across a cluster. The cluster can grow horizontally, and you can also scale vertically, because we can swap in individual heads for a given workload that doesn't lend itself to being distributed across multiple systems. So you have multiple axes along which you can scale. I don't intend to go through the entire bullet list, but again, we're bringing that richness, that set of capabilities. One important point is that we can support continuous availability — continuous uptime — something we think is particularly valuable when you have a vastly co-mingled, multi-tenant cloud deployed with OpenStack. Your underpinning storage, in our case, need not take any downtime to expand, contract, or frankly move, if you can span the cluster interconnect across locales. Like I said, it's a good place to start, being the most prevalent and most widely deployed single commercial storage operating system in the market.

The places we start are certainly block storage; we've also done some work around image and object storage, and I'll get into that in just a second. This is just an overview — a highly simplified, frankly somewhat trivialized view — of the common OpenStack services and where they map to the NetApp product portfolio. Is there a question, or should we pin it to the end? Or just discuss amongst yourselves — okay. So the first thing is Glance. What does that mean?
I guess the first thing we'll talk about is Data ONTAP; you saw that it can be applied in a variety of different ways across our broader portfolio. There are two primary options for backing Glance: with an object store — Swift — or with a file backend. Very commonly, and I think most logically, that's something like NFS, where you can apply something like NetApp's deduplication capability to aggressively compress the amount of capacity consumed by your image repository. We play tricks with pointers: we fingerprint data on 4 KB boundaries, which is what it actually amounts to, identify commonality, and remove the unnecessary duplication. Of course we make that single shared copy immutable until all references to it are decremented. The point is that with images — imagine a repository where you've got a dozen different varieties of RHEL 7 with different stacks on top of them, or maybe the number is more like 500 — there is so much commonality among the bits within them that you get pretty dramatic deduplication rates. And I'll talk in a minute about why this matters even more when we get to creating instances more rapidly.
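To make the deduplication idea concrete, here's a minimal sketch — not NetApp's actual implementation, just an illustration of the principle — that fingerprints files on 4 KB boundaries and estimates how many blocks are actually unique across a set of images. The image file names are hypothetical.

```python
import hashlib

BLOCK_SIZE = 4096  # the 4 KB dedup granularity discussed above

def fingerprints(path):
    """Yield a hash ("fingerprint") for each 4 KB block of a file."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield hashlib.sha256(block).hexdigest()

def dedup_ratio(image_paths):
    """Logical blocks referenced vs. unique blocks actually stored."""
    logical, unique = 0, set()
    for path in image_paths:
        for fp in fingerprints(path):
            logical += 1
            unique.add(fp)
    return logical / max(len(unique), 1)

# Hypothetical image files: a dozen RHEL variants with different stacks on top.
images = [f"/images/rhel7-variant-{i}.img" for i in range(12)]
# print(f"dedup ratio: {dedup_ratio(images):.1f}x")
```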
The image service can also, of course, be deployed elsewhere. We can take our E-Series systems — EF, by the way, is an all-flash variety of E-Series — and put the Glance bits on top of those as well, though some of the deduplication benefits just described don't apply in that scenario. We also have a future platform, already announced, that is a ground-up, native all-flash array, probably aligned initially to those crown-jewel, classic, somewhat more siloed applications, but to the extent possible we're going to aim for zero-day currency on Cinder support for that all-flash platform.

StorageGRID is a software object storage capability that was hardened over time in very strict, regulation-driven environments — healthcare, for example — and we're taking that hardened set of capabilities and delivering it in a form that's a little more organic to OpenStack use cases, supporting the type of API endpoints you would expect. There will be more to say on that later in the year, but you can certainly imagine it being applicable where object storage is the backend for the image repository.

There's also a really interesting way, in the here and now, to take the Swift reference implementation and deploy it on our E-Series systems. I'll get into more depth on that in a moment — actually, right now. There are some particularly unique qualities — a kind of erasure coding within the E-Series system — that allow us to use more of a traditional parity scheme and dramatically reduce the number of copies associated with typical Swift deployments. If you know much about Swift, you may be familiar with the fact that placement for protection is handled via a consistent hashing ring; the import of that is that within a single site you're talking about three copies, minimally.

You might ask: why not just use a classic parity scheme, a RAID flavor or something to that effect? The answer is typically that object storage deployments tend to be very heavily capacity-optimized — the biggest, fattest, slowest spinning disks possible. Rebuild times run a hundred-plus hours for 4 TB drives, and we're moving into the 5 and 6 TB range, so it's onerous, and frankly the impact is that you're running without the level of protection you need — you're exposed — during that rebuild period. That's why you don't use a traditional parity scheme.

Now, our E-Series system is a very well engineered system with traditional RAS considerations — very dense, optimized for very high throughput — but it doesn't necessarily have a lot of the higher-order storage data management capabilities within it. That's a design decision: it's designed to work really well with various stacks that have that intelligence, and it's a first-class underpinning to those. I'd argue Swift is actually an excellent example. One interesting characteristic beyond what I just described is Dynamic Disk Pools. It's another implementation of the academic work behind the CRUSH algorithm, for those who might be familiar; it amounts to more of a deconstructed or distributed RAID — a form of erasure coding, however you prefer to refer to it. The effect is that rebuild times, per the working rule of thumb from our development arm, are approximately 5% of what a traditional rebuild would take.

So what does that mean? You can now use — or at least obtain the effects of — a classic parity scheme. In terms of numbers: with Swift's default of three copies, to store a terabyte object, or a number of objects amounting to a terabyte, you assign three terabytes of capacity to storing and protecting it, and all of the environmentals — space, power, cooling — that come with that are not an insignificant cost. With this approach I can reduce that to approximately 1.3x (it's actually about 1.28x) of the original data I wanted to store, which is a pretty dramatic difference. To be clear, it's use-case dependent, but very commonly you will want to architect for site resiliency, so you'll need another copy somewhere else — and yes, that's another 1.3x at the other destination. But Swift by default is 3x at one site and usually 2x at the second site, because you wouldn't want to reconstitute across the MAN, and there's a replication activity that occurs among those copies. The other effect of this approach is that we remove a significant inhibitor to the ultimate scale Swift can achieve, by throttling down that replication traffic — after all, we're not making the additional copies within the single site; you do still make the copy for the additional site.
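As a rough back-of-the-envelope comparison, here's a sketch of the arithmetic just described. The 1.28x figure is the approximate overhead cited in the talk rather than something derived from a specific Dynamic Disk Pool layout, so treat the numbers as illustrative.

```python
def capacity_needed(data_tb, overhead_factor):
    """Raw capacity required to store data_tb with a given protection overhead."""
    return data_tb * overhead_factor

data_tb = 1.0                  # one terabyte of objects to store
swift_triple_replica = 3.0     # Swift default: three full copies within one site
eseries_parity = 1.28          # approximate parity/DDP overhead cited above

print(capacity_needed(data_tb, swift_triple_replica))  # 3.0 TB
print(capacity_needed(data_tb, eseries_parity))        # ~1.28 TB

# With a second site for resiliency: Swift commonly adds 2x more,
# while the parity scheme adds roughly another 1.28x.
print(capacity_needed(data_tb, swift_triple_replica + 2.0))  # 5.0 TB total
print(capacity_needed(data_tb, eseries_parity * 2))          # ~2.56 TB total
```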
We've seen Swift deployments fold under the pressure of ongoing replication. It's quite an interesting thing to see: secondhand commodity disks and no-name x86 compute assembled to run Swift, with 40 GbE interconnects — I've only seen that once, but that is where you end up putting a lot of your cost from a hardware perspective. So we reduce that dramatically, and you can achieve greater scale. The other effect is that the consistent hashing ring is an eventually consistent model, whereas in this scheme, within a single site, we're immediately consistent: you get the write acknowledgment and you're protected. That's a reference architecture that debuted just after the summit in Hong Kong, and it's part of our NetApp OpenStack Deployment and Operations Guide. I guess I can't reference them explicitly, but there's a large university somewhere south of the equator that is deploying this at scale now. So it's a pretty interesting capability, we think.

On to block storage. This is where we started, and I think where we've put the most overall effort to date. For those who aren't familiar — and I know some of the audience here is intimately familiar — Cinder is not the data plane; it is a provisioning and control plane, an orchestration activity. There's no impedance to the actual data flow. Just to set the table with that.

There are a number of different options we offer with our clustered Data ONTAP system, which I've just described at a very top level. We support multiple connection methods, and it may at first be a little mystifying why I would even talk about NFS in the context of a block storage service — I'll get to that in a second — but we support parallel NFS (pNFS), NFS, and iSCSI at present. Seven-mode, our classic mode of operation, has a lot of the characteristics of ONTAP I talked about, but not the scale-out characteristics per se. It is available, to be clear, but frankly seven-mode is in sort of a maintenance mode for us; we spend all of our development and innovation on clustered ONTAP and E-Series. Those are our platforms for the future, and frankly they're a better architectural match in both cases than seven-mode is. That said, seven-mode is supported, and we have a lot of production deployments on it. On E-Series, which debuted in Icehouse and is a brand new thing, it's iSCSI presently, and we'll look to expand the array of options there — no pun intended, or maybe it is intended.

Just a little bit about clustered ONTAP: there are different ways you can define the atomicity of the tenant. We talked about the SVM, the storage virtual machine — that virtualized storage controller. Within it there are FlexVols, which are containers that can contract and expand, and within those, at least in our interpretation of Cinder, we create Cinder volumes.
In the case of NFS, to press forward to that: what we basically do is mount the NFS export at the location of the hypervisor, and by virtue of the hypervisor and libvirt we essentially virtualize a file into a block storage device. That's a commonly employed scheme in enterprise virtualization, by the way, and it's vastly more scalable than iSCSI: any given iSCSI system is going to run out of LUNs and initiators well in advance of what you could conceivably contain within a given NFS export. And I should mention this is not, so to speak, your father's NFS — not the NFS you might have known ten or fifteen years ago. When you take something like clustered ONTAP, where you can have multiple nodes supplying I/O — data or I/O engines, if you will — into a single global namespace, and then add the capabilities of parallel NFS, which creates more of a distributed I/O characteristic against that namespace, you don't have the classic fan-in problem or the availability problems associated with it. The point is that we can get vastly more scalable; OpenStack is built to this sort of hyperscale design center, so it made perfect sense for us to go there with NFS.

That said — and you see this increasingly, even this week in the Ironic sessions — there are a variety of use cases where the consumer of Cinder is non-virtualized or bare metal, or perhaps entities external to what is managed by OpenStack. We have a number of deployments like this; in fact, I believe we have one on stage in our third session here today. One of those customers, PayPal — or actually maybe the eBay portion of that group — uses Cinder independently, in places, of the rest of OpenStack. In that scenario, what you need to be able to supply when someone asks for a block device is something with the semantics of iSCSI, something they can actually do something with: "I asked for a block device — give me something I can mount." iSCSI is the clear option there. So we support both, and it's a per-use-case decision.
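Going back to the NFS path for a moment, here's a minimal sketch of roughly what the attach looks like at the hypervisor level: a file sitting on an NFS mount is handed to the guest as a virtio block device through a libvirt disk definition. This is not Nova's actual code; the mount path, volume name, and instance name are hypothetical.

```python
import libvirt  # requires the libvirt-python bindings

# A volume backed by an NFS export is just a file on a mount at the hypervisor;
# libvirt presents that file to the guest as a block device.
DISK_XML = """
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/mnt/cinder-nfs/volume-1234'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # hypothetical instance name
dom.attachDevice(DISK_XML)                    # hot-attach the file as /dev/vdb
```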
Another subtlety: again, mindful of that hyperscale ambition of OpenStack, we don't by default use FlexVol snapshots, because we don't want to co-mingle tenants in a snapshot — from a security perspective that would not be optimal. Maybe if you're talking about DR of the whole cloud, sure, but in a scenario where you have, say, Coke and Pepsi both deriving benefit from a single OpenStack public cloud, you don't want to lock them collectively into a single snapshot. So we actually do per-file cloning — or per-LUN in the iSCSI case — and in the case of a snapshot we mark it as immutable; a clone, of course, is a writable version thereof.

So, a little about what we've done there. I talked earlier about exposing our core competencies — our basic value propositions to the market, the things that make us different from a commodity block device. How do we actually do that? It basically starts with something we've participated in quite a bit within the community: evolving the notion of a Cinder volume type. (By the way, in a subsequent slide I'll talk about a shared-file-system-as-a-service capability that will also have this kind of type construct.) What we end up doing is allowing the administrator — the cloud deployer — to define an entirely arbitrary catalog of capabilities, composing the individual catalog items from the characteristics of the backends. We take things like deduplication and compression and advertise them explicitly as capabilities of a clustered ONTAP backend, and we allow the deployer — the cloud administrator, if you will — to create a volume type that makes sense for their user base, their tenant base. Call it gold, or cats, dogs, and birds, whatever — it will be composed of those capabilities. When the Cinder scheduler receives a request for a new volume with the type parameter appended, it evaluates all the backends available to it and places the provisioning request appropriately.

You can compose these things as an abstract, coalesced set of capabilities — maybe "platinum" means automatically replicated to a certain locale, maybe it means a certain media type, whereas "tin" might be on cheap disk with no QoS associated with it. But you can also be very specific: you could create a type literally called "deduplication", or specify in a type that it ought to go to a backend of a particular kind. The point is there's immense flexibility to build the catalog the way you want it, and that's how we take the things that are different about our systems and make them explicitly accessible to the tenant, so that value can be derived. Maybe it's measures of storage efficiency: because of aggressive compression, deduplication, or thin provisioning, a tenant gets the ten terabytes they asked for logically, but in reality you're consuming a small fraction thereof — and that could actually be the means by which a given service is profitable.

Yes, sir? [Audience question.] No, not exactly. StorageGRID has a variety of capabilities that are outside the scope of what we're talking about here, because we haven't formally launched it with the API support I mentioned. What I described was Swift itself: a reference architecture that basically changes the ring builder — it supplies different parameters to the ring-builder logic that say, don't make the extra copies, because you can inherently trust this storage. We don't make any other modification to Swift at this time. So no, that integration doesn't exist yet.

Back to the Cinder volume type and some of the QoS capabilities associated with it. You can see here we've established gold, silver, and bronze types and assigned certain attributes to them; in this particular case we also established QoS policies — basically per-volume ceilings, if you will. And as shown there, you can create a volume, select the type that makes sense, and magic ensues, right?
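As a concrete sketch of composing that catalog, here's roughly what it might look like through python-cinderclient (the same is doable via the cinder CLI). The extra-spec key names and QoS values shown are illustrative assumptions rather than an authoritative list of the NetApp driver's spec names, and the credentials and endpoint are placeholders.

```python
from cinderclient import client

# Placeholder credentials/endpoint.
cinder = client.Client('2', 'admin', 'secret', 'demo',
                       'http://controller:5000/v2.0')

# Define a "gold" type and advertise backend capabilities as extra specs.
# These key names are illustrative; consult the driver docs for the real ones.
gold = cinder.volume_types.create('gold')
gold.set_keys({'netapp_dedup': 'true',
               'netapp_compression': 'true',
               'netapp_thin_provisioned': 'true'})

# Attach a QoS ceiling (a per-volume limit) to the type.
qos = cinder.qos_specs.create('gold-qos', {'maxIOPS': '5000'})
cinder.qos_specs.associate(qos, gold.id)

# A tenant now simply asks for a volume of that type; the scheduler matches
# the extra specs against backend capabilities and places it appropriately.
vol = cinder.volumes.create(10, name='db-data', volume_type='gold')
```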
Any questions around this? Okay, I'll move past that.

A new thing in Icehouse: our E-Series and EF-Series — again, an all-flash variety of E-Series — are now accessible via Cinder. That's in Icehouse, but we also performed a backport, available on NetApp's GitHub repos, to support its use under Havana and Grizzly as well. Those didn't go upstream into the stable branches, because the criteria there is bug fixes only and this represents a new feature, but it is accessible and in the open, albeit on the NetApp GitHub repos. You might imagine a scenario where you'd like to derive value from the very high throughput of an E-Series system: perhaps I've created a Cinder volume type called "analytics", and maybe I've got Sahara — analytics as a service — deployed on top of it, selecting from a Cinder backend that can support the throughput characteristics it needs.

Here's another depiction with clustered ONTAP systems: in this scenario, "silver" was composed such that it had a replication policy associated with it, and in this case we're actually depicting a boot-from-volume scenario, which I'll get into in more detail in just a second.

Okay, so this next thing I'm going to talk about really bridges Glance, OpenStack Compute, and Cinder: the ability to create new instances, based on our cloning technology, essentially instantaneously. Just a little about the day in the life of a VM — a guest virtual machine, an "instance" in OpenStack parlance — by default in OpenStack. The basics are that a tenant of an OpenStack cloud says, give me an instance of a particular flavor, and Nova — again, a simplification here — interrogates its fleet of hypervisors and plays the Tetris game, if you will, of trying to place it appropriately. In selecting a flavor, which has characteristics like memory and compute, you're of course also selecting an image to boot from: CentOS perhaps, or Ubuntu, CirrOS, maybe something more exotic if you enjoy pain — every option under the sun in the way of images. In interrogating that fleet, Nova determines the best place to put it per the flavor; then it needs to get the OS bits — that image — over to the selected hypervisor so it can actually instantiate and boot it. If the image doesn't already exist in the form of a cached copy on that system, it will curl it — an HTTP copy — over to that location.
You do get the benefit, for subsequent instances created from that same image, of a caching effect within the hypervisor, but it is local to that hypervisor.

What we end up doing instead: consider a scenario where you have Glance on NetApp, and Glance is co-located with your Cinder capacity store. What you can do is boot from volume by default, and there are a couple of things you get from this. One is that when booting from a volume, you're starting with a persistent instance. Now maybe your use case doesn't demand a persistent instance — you can still just select "delete on terminate" and get the effect of an ephemeral instance. But I would argue that it's far easier to go from persistent to ephemeral than the other way around, where you're having to copy out the things you care about into some source of persistence if you suddenly realize you actually did need to keep them. So it makes some sense to start by booting instances in a persistent manner.

But back to the model: if you have Glance located on NFS — or vended via NFS — and you also have the capacity store for Cinder on the same NetApp FlexVol, we will clone from Glance to Cinder, and the storage copy portion — the image copy portion — of the boot process is essentially instantaneous, because again we're playing pointer tricks. No bits are copied; there's no additional capacity overhead until there's a net-new write or an overwrite, so it is effectively instant. Of course, you still have to boot the actual instance itself, so I'm not claiming the whole thing is instant, but the storage portion — the copy portion — is mitigated.

If Glance is not on NetApp, or is elsewhere on an external system — another NetApp system somewhere — we'll make that first copy.
Excuse me — let me take a drink here. We'll make that first copy, and once we've done that, it is cached — but that cache is global to your entire fleet of hypervisors; it's not localized to one compute node. Another benefit of this approach is that you can actually boot your compute nodes stateless, and we have a number of folks who do exactly that. And in booting from this shared-storage, persistent model, I also easily facilitate live migration, for which shared storage underpinnings are generally a requirement. So there are any number of reasons this is interesting. We sometimes call it a rapid cloning capability, and the effect is that you get persistent instances that are storage-efficient and appear much, much faster.

There's an additional optimization that went into Icehouse: if you have Glance on a FlexVol at one end of a cluster and Cinder on a FlexVol at the other end, we have a further layer of optimization that does a copy offload in-band between those nodes, so the data doesn't get washed through the host — just an additional enhancement to this model.
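As a conceptual sketch — not the driver's actual code — of what the rapid-cloning path boils down to: if the Glance image and the new Cinder volume live on the same FlexVol, a pointer-based file clone stands in for the copy; otherwise the first copy is made and cached so the whole fleet benefits afterward. All class, function, and path names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FlexVol:
    name: str

    def file_clone(self, path):
        # Pointer-based clone: no data copied until new writes occur.
        return f"clone-of:{path}"

    def copy_in(self, path):
        # Full copy from a remote location (only on first use of an image).
        return f"{self.name}:{path}"

@dataclass
class Image:
    id: str
    path: str
    flexvol: FlexVol

def create_bootable_volume(image, flexvol, image_cache):
    """Clone-vs-copy decision for boot-from-volume (hypothetical sketch)."""
    if image.flexvol is flexvol:
        return flexvol.file_clone(image.path)             # instant: same FlexVol
    if image.id in image_cache:
        return flexvol.file_clone(image_cache[image.id])  # served from global cache
    cached = flexvol.copy_in(image.path)                  # first copy, then cache it
    image_cache[image.id] = cached
    return flexvol.file_clone(cached)
```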
Now, shared file systems. The thing to start with: if you believe IDC's numbers — and even if they're not entirely accurate, they're directionally correct — something like 65% of all commercial storage sold is delivered to underpin shared file system deployments in some form. Even if you argue it's only 40%, it's an immense component of infrastructure. And if you accept that OpenStack is the de facto open infrastructure-as-a-service capability — and we do — then there's a critical omission: how can I render an infrastructure primitive like a shared file system in an as-a-service model? That's what we've set out to address, and I should point out we'll go into significantly more depth on this topic in the ensuing hour — 10:15, I think. If you know what Cinder is, then you know what Manila is: Manila is Cinder for shared file systems. It actually goes a little further than that — it frankly originated as a fork of Cinder — but there's a lot of additional lift, frankly heavy lifting, associated with delivering shared file systems: orchestrating and coordinating the activity so that things are mapped into the correct authentication and username space, plumbing into the tenant-specific SDN, solving a kind of last-mile problem — "hey, I vended it to you, now do something with it." Those are all things Manila has to contend with that Cinder does not. Manila is a new project — though I should be careful with that; it's accelerated dramatically in the last few months, but it's not entirely new, and we've been at this for a while.

It was basically conceived, designed, and initially implemented in collaboration with a number of other companies, and we're increasingly building a community around it. You'll see us next hour on stage with what you'd probably identify as a number of NetApp's traditional competitors in the market, and the intention there is to build the community and legitimize Manila so that over time it moves toward core status — the community is not going to accept a NetApp-only thing foisted upon it. And of course our interest is clear: we have a lot of compelling capabilities when it comes to shared file systems, and we'd like to see shared file systems represented as a co-equal option alongside block and object. That's certainly very applicable when you talk about moving classic, POSIX-style applications into an as-a-service model with OpenStack. So, a scenario: give me shared access between instances X and Y to an existing share, or create a net-new NFS export that I can coordinate between them.

Manila is a new thing, and it's on StackForge. Here's a little bit of a demo, and I guess as we're getting into this: it is a real thing, not all smoke and mirrors. Granted, it's recorded, but nevertheless a Horizon interface does exist for it. It's not upstream in Icehouse, and it's not yet upstream in Juno proper; it all exists on StackForge. Here's a scenario where we're creating shares; you'll see us map them to particular IP entities, and there are a number of policies you can apply from a security perspective. There's much, much more on this particular topic in the next hour, so maybe defer questions on that.

If you do want to become involved in the community — and we definitely, actively encourage that — we follow all the conventions of the OpenStack community and the other projects and programs: we hold weekly IRC meetings, the code is on StackForge, and it's built with automated testing and gated through the same mechanisms as code submissions to other OpenStack projects. The very clear intention is to get this into an officially incubated status and then into core. I should also state that, independently of that, we are working with distribution providers to deliver this in the form of a tech preview — think of those as somewhat orthogonal efforts. We want to deliver the value of this as quickly as we possibly can, and it is usable in a limited set of use cases today; we expect those use cases, and the accounting for various permutations, to expand rapidly in the coming months. I should also mention that the gentleman sitting here in the front, Ben Swartzlander, is presently the PTL of Manila. So that's shared file systems.
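For a flavor of the tenant-facing workflow Manila exposes, here's a minimal sketch using python-manilaclient — roughly the API-level equivalent of the Horizon demo shown a moment ago. Given how young the project is, treat the exact call signatures as assumptions; the credentials, endpoint, and network range are placeholders.

```python
from manilaclient import client

# Placeholder credentials/endpoint.
manila = client.Client('1', 'demo', 'secret', 'demo',
                       'http://controller:5000/v2.0')

# Ask for a 1 GB NFS share; Manila picks a backend and creates the export.
share = manila.shares.create(share_proto='NFS', size=1, name='shared-scratch')

# Grant access to the instances' network; other access types/policies exist.
manila.shares.allow(share, 'ip', '10.0.0.0/24')

# Once the share is available, its export location can be mounted from
# instances X and Y (re-fetch the share to see it):
# print(manila.shares.get(share.id).export_location)
```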
To recap some of what we've done: I talked about E-Series and EF-Series, and about how we're moving through the progression of Manila. Parallel NFS is something we default to in our Cinder drivers: we evaluate whether the host OS on the compute node can support it, and then we negotiate to whatever version of NFS is appropriate, using pNFS if available — as an example, I believe a contemporary RHEL does support parallel NFS, so we'll make use of it. In some ways you almost want to call parallel NFS something other than NFS, because the name connotes that it's just NFS; certainly it is NFS, but there's enough different about it that you almost want to depart from the legacy connotation. I also talked about the additional optimization for enhanced instance creation.

There are a number of reference architectures we've been actively engaged on, and you'll see a lot more from us in the not-distant future. One of the tenets of how we approach reference architectures is that we want to provide accompanying automation. It's definitely useful and interesting to read through the philosophy and thought process behind deploying OpenStack with NetApp, but wouldn't it be a lot easier if you could just make it so with commonly used configuration management tools — a lot of Puppet, and in the future Chef? That's what some of this work is about.

These are focus areas — things we want to work on and are in the process of planning to attack in the Juno time frame. We're happy to discuss more of that individually, and I guess we can do a little during the Q&A, but since these don't yet exist, take them as an indication of some of our priorities: an expansion in particular into Heat and Ceilometer, and some work in Horizon and Manila. A big item, as I was saying, is deployment automation for the reference architectures, so we can deploy Cinder in an optimal way. Some of this, by the way, has existed in the community for a while, independently of NetApp: if you use Puppet or Chef, it's already the case that you can configure Cinder, in some cases in a highly available way, and plug the right values into cinder.conf, that type of thing. What I'm referring to here goes further: it's actually addressing the controllers — the storage devices themselves — such that you can declare their configuration explicitly via Puppet and Chef. We're working collaboratively with both of those companies to deliver modules and enablement for those platforms. There is some capability on Puppet Forge today for seven-mode; we're going to deliver clustered ONTAP and E-Series support for Puppet, and then Chef — just from a progression perspective, not priority — will follow a little thereafter.

There are a lot of different reasons why we think this makes sense, and some of it is informed by the use cases
we've seen using OpenStack with NetApp. A good way to start a proof of concept is to take one of those SVMs I just talked about — a vFiler, which is the old name in seven-mode, or a storage virtual machine in clustered ONTAP — carve it out, and deploy it to your OpenStack proof of concept. It's a way to get started very quickly. It's very much the case that you can deploy your high-value applications alongside everything else — and I'm not the world's biggest fan of this, I'd argue somewhat tortured, metaphor of pets and cattle — but deploy your pets and your cattle on the same infrastructure. It may very well be that the quote-unquote livestock entity becomes the crown jewels over time, and then you have a seamless infrastructure to which you can attach those higher SLAs and actually support them.

There's also a measure of, frankly, holistic consideration of the effect of your cloud deployment. If I'm storing data three and five — and in the very extreme I've actually heard eleven — times over for protection against various scenarios, what does that mean for your power bill, your sustainability? It's not just the capex of the individual system; what does it take to make this a sustainable business? Do I have the most efficient mechanisms in place to support high SLAs? Do I have the flexibility to support an array of capabilities? We believe we offer those, and we hope to expand upon them in the future.

Some of the things we've been working on, again in the flavor of reference architectures, include the notion of assembling what we call FlexPod — basically Cisco UCS and Nexus with NetApp FAS systems, and I'd guess in the future E-Series systems — such that you have a converged infrastructure. Think of it as cloud in a box; it's maybe not quite that simple, but the point is it's a bit of an easy button for consumption and deployment. We have some preview reference architecture on that, and then we have, separately, a simple RHEL OSP — Red Hat's OpenStack distribution — and NetApp reference architecture that exists independently of those Cisco components, which debuted last month. It's all accessible off of netapp.com/openstack. We're also working on reference architectures with other distributions, so we see different mappings per vertical and use case. A great way to keep track of us: netapp.com/openstack redirects to a community site.
You can ask questions there directly. For those in the development community, you can find us hanging out on IRC, and on Twitter — I'm trying to enforce Twitter discipline on myself; it's not necessarily natural to me. Probably the one canonical document on what we've done and how to use it is the NetApp OpenStack Deployment and Operations Guide, posted at netapp.com/openstack. That document is presently for Havana, but within the next five to seven business days, we believe, you'll see the Icehouse version posted there.

There's a variety of related sessions — some of which have already occurred, to be clear — that we think you ought to know about. Immediately after this one today is a Manila session; again, that's not a NetApp-specific thing, it's a community session. Following that we'll get into best practices and a bit of a user case study with NetApp's internal engineering IT, where we have a vast fleet of systems within a single site — I've heard a number of a hundred thousand instances — that we're moving to OpenStack over time; some of that's already underway, and you'll hear from them. PayPal will also be on stage to discuss their use of OpenStack on NetApp. There are some other sessions as well: for example, Red Hat is extending TripleO and showing deployment of OpenStack systems through Ironic, and Nebula is showing OpenStack integration in their booth.

Also, in the back as you're leaving, we have a survey for feedback if you have a minute. We'll see you in Paris — looking forward to it, our first summit in EMEA. I think that leaves about ten minutes, if I'm not mistaken, for Q&A, so we'll go back to this. Any questions? Is that a bit dense for this time of day? Collectively, no comment. All right — we'll be in the back afterwards, and if there's anything you want to get into in more depth that maybe isn't appropriate for this context, we'll be around to answer questions. I do appreciate your attendance. Thank you.