Okay, I guess we're mic'd up. So good afternoon. It's just about five minutes after noon, so I think we're gonna go ahead and start. Welcome to, let's see, what is the actual title of this? Something to the effect of: this is gonna be a pretty interesting session, possibly even the world's most interesting storage session. So I'm Rob Esker. I spend my day at NetApp on all things product management and strategy around OpenStack and the ecosystem. I've been involved with OpenStack for five and a half years, pretty close to the inception, within a couple months of it. We have on stage with us today, I guess maybe for the first time as one company, Mr. John Griffith. I'll let him introduce himself, but I know he's a humble guy, so I just wanna point out that John's kind of a big deal: he's the founder of the Cinder project, was the project technical lead for a long time, and sat on the technical committee. But John, how about you introduce yourself? So yeah, my name is John. I've been working on OpenStack for almost five years now, not quite, about four and a half, as part of SolidFire, and as Rob mentioned, we now have SolidFire and NetApp merged together to bring something even better and make more change. So we're gonna talk about a few different things. The basic agenda is a preamble on OpenStack and NetApp, how we're engaged, and what we've accomplished, which hopefully sets the table for a discussion where we'll bring a couple of special guests on stage. They're over here, but I'll introduce them when it's time. Thereafter I think we'll make it a little bit more of a conversational discussion on a few points: opportunities, if you will, for OpenStack to improve, and some of what NetApp's thinking in terms of acting upon that. Oh, I guess we're not actually getting the progression on the screen. Okay.
I just wanna point out that from the beginning, NetApp's been involved. That's basically true of SolidFire as well; there was the small matter of SolidFire actually just becoming a company more or less concurrent with OpenStack, so it was essentially born concurrent and grew up alongside the growth of OpenStack. We are heavily invested in the sense that we're deployers of OpenStack ourselves. We have large-scale deployments in both our global engineering cloud and also in our corporate IT side of the house. I mentioned some of Mr. Griffith's accomplishments. We also have the Manila project technical lead. We have elected board representation. And I think most significantly, within the community we've also been the single largest contributor, if you measure this over time via Stackalytics, to both the Cinder and the Manila projects. You'll probably notice on the Manila side a large swath of contributions by Mirantis. That's actually commissioned by NetApp: we engage them in sort of an engineering outsource capacity to augment our capabilities. So the point is, for two out of the three foundational storage services in OpenStack, NetApp's actually a leader. And in the Swift sense, the object storage sense, one of the reasons why we don't engage quite as heavily there is that we have our own object storage platform called StorageGRID Webscale, which we'll touch upon briefly in just a moment. We pretty closely watched the results of the OpenStack Foundation user survey, of course, the most recent version available as of, I think it was last week, possibly the week before. And NetApp, even before SolidFire, was the single most widely deployed backend amongst the commercial storage options for production, and this is specifically listing production deployment. And of course that position is significantly consolidated with the addition of SolidFire.
Very briefly, a story that everyone in the audience, perhaps those who came from SolidFire, might not know. We were so concerned about the growth of SolidFire that for a while NetApp was working on building our own native all-flash array, beyond our All Flash FAS systems, called FlashRay. And we had built a Cinder driver so that upon ship we would have something that might seek to compete with SolidFire. Of course, that problem became solved for us when we actually acquired SolidFire, a deal that closed on February 2nd of this year. I do want to also briefly point out, and I do have the numbers obscured intentionally, but I'll tell you that all of the columns are in the three-digit range. The lower bars are unique customers and the larger bars are unique systems. There's a few things that we draw from this, and this is basically the last year's worth of deployment. This is telemetry we collect from the systems that are able to provide it within a Cinder deployment. So there's really significant growth. Even beyond the Foundation user survey we see a ton of growth, and this is empirically derived. This is not just someone throwing a dart; these are systems that are being deployed in a production capacity and reporting back to us. One thing that's maybe a little more subtle that I do want to point out, though, is there's a pretty significant growth in the ratio of systems to customers. I think I've mentioned this in prior versions of this sort of session: we're now pretty convinced that this is not merely production at an early state. You're starting to see 2x and 3x and 4x organic growth, which is definitely associated with 'we're no longer playing with it, we're actually making this work.'
One of the things that we want to point out: whether it's SolidFire, or the historical enablements we've done with clustered Data ONTAP or E-Series and some of the other NetApp platforms, we always do so upstream. So regardless of which of the distributions you seek to employ, we're there. But likewise we do go deeper and do installer-specific integrations. There's a number of those listed here, and there's a different story behind each one of those logos, but just so you're aware, not only are we upstream, we also go the extra mile to try and improve upon the general user experience, which is so critical when you consider the complexity of deploying OpenStack otherwise. I'm not gonna go into all the specifics, we don't have the time, but our effort is not just the primary storage platforms, SolidFire, Data ONTAP, E-Series. I mentioned StorageGRID; that's an alternative implementation of Swift, a Swift API endpoint itself. AltaVault is a cloud backup appliance that the Cinder backup service can land on. The point is, this is a whole portfolio enablement effort for NetApp, and significantly, we are now delivering a portfolio of capabilities. You might have thought of NetApp historically as a single-platform company; that's definitely not the case. So, Manila. I'm gonna touch upon that very briefly. You'll see that listed; that's a project that NetApp pioneered, that NetApp brought to the table. We built community around it, and now you're starting to see it show up in the distributions. Indeed, in the same user survey I referenced, it shows up as one of the most significant areas of interest for new deployment. Toward that end, there's something that's been missing, though: Manila didn't have a project logo. Is this important? Not sure, but it's kinda cool. What is that thing? It's a jeepney. So at the end of World War II, in the city of Manila, there was suddenly a ton of U.S.
surplus jeeps, which were sold or essentially given to folks, and they essentially became the de facto public transportation method for the city of Manila. They're chromed and colored vividly, and frankly much more attractive than that. That logo was voted upon by the community, by the way, and if you want, you can have a Manila project sticker if you come up to us afterwards. So that's just a bit about NetApp. We have a ton of other sessions, I think it's 17. We have some material here that will direct you to those other sessions; we're not intending to go into a tremendous amount of depth about all of what NetApp's doing in all of the different places. That's covered well in those other sessions and generally at netapp.github.io. We're gonna transition to hearing a little bit from some of our deployers, and for that matter also partners in some respects. The first individual I'd like to bring to the stage is Phil Williams, principal architect for all things storage with Rackspace's private cloud. Phil. Thank you, Rob. Hi, everybody. Rob asked me to come along today and just talk, very much in general, some opinion on storage. I've only got a few minutes, but I could probably go on for days. So without further ado, how do I see storage today in the OpenStack world? My background is in enterprise storage, so I'm going through this sort of mindset change of, hey, there's a different way of doing things. But where are we? The way I see the community, everything is about keeping things simple. A lot of the successful deployments on OpenStack really are built for cloud-native applications. We saw this morning that there's a huge, significant amount of applications out there that just aren't cloud native. They're nowhere near being cloud native. It's going to take a long time for those guys to be able to consume OpenStack. So how can we adapt to what we have right now?
Those guys, those applications, look for high availability within the infrastructure, not necessarily in the applications. It's the whole pets versus cattle debate: how do we petify OpenStack a little bit, almost as an interim? We want those guys to get to that sort of infrastructure, or to work with that infrastructure over time, but how can we help them? How can we move them to OpenStack and make OpenStack successful within the enterprise? Just one thought around infrastructure and building availability, or high availability, into that infrastructure, which is something that's not all that common at the moment. Data is the most important thing in any business. No data, no business. Trying to protect it at the application layer compared to doing it at the storage layer is difficult, it's slow, it's expensive. It takes time to move things around, especially when you start looking at disparate sites where you're building in HA for catastrophic events. Shipping data between sites is horrible. Doing it at the application layer is even worse. So if you can do it at the storage layer and glue that into the rest of OpenStack, that seems like the right way to go. So thinking about the future in terms of things like the Cinder project: how can we start building in the things that enterprise customers are looking for in that infrastructure? Things like clustering. It's nice to have that sea of compute and sea of virtual machines that are loosely coupled, but there are still use cases where we are reliant on voting and quorum disks. So how can we get that deeply integrated into OpenStack? Some bits of it are there today, some bits are not, and there's some rough edges. It's kind of hard work. Replication between sites: I mean, that's a huge, huge thing in the storage world, and it kind of paid the bills for a few years. It's nice charging twice for storing data. But it's important, and enterprises ask for that. So how can we get that deeply integrated?
That then rolls into things like RPOs and RTOs. Swift's eventual consistency just doesn't fly in the enterprise. People are getting the mindset change of 'that's kind of okay, we can work around it,' but how do we deal with it today, and how do we get them actively deployed today? So, Rackspace and NetApp. Quite simply, OpenStack is quite difficult to consume. It is complicated. People like ourselves at Rackspace, and a whole bunch of other ecosystem providers, help people get on board with OpenStack. We try and help everybody on that path from where they are today: how do we get them to being that cloud-ready, native cloud consumer? It's gonna take some time. Rackspace can do all these things along with our friends over at NetApp: within a customer's premises, within your own data center, within a Rackspace data center, in a third-party site. We have a thing called OpenStack Everywhere now. So we can work with our partners, we can do this all together and provide it as a service. And whilst providing that service, we're helping with that journey to being totally cloud ready. A few of the lessons learned: I've said rough edges already, and there definitely are a few out there. One thing I can't stress enough is validate, validate, validate. The CI process for Cinder is fantastic, but it still doesn't capture everything. There's the old gotcha out there that's like, wait a minute, how did this pass and we not realize it? So whether it's using us or doing it yourself, make sure everything gets validated and actually works. And a couple of sort of final thoughts, which I'll leave quite open. The ongoing debate of Fibre Channel and iSCSI. I've come from the enterprise background; I've built some of the largest Fibre Channel networks in the world. Fibre Channel is great, don't get me wrong. But so is Ethernet, and it's so much more flexible: there's so much less cabling, it's reusable, and you can use it for different types of storage, not just block.
So it's a touchy subject with the enterprise guys, but I think they're opening up to: oh, we can do things differently. We've always used Fibre Channel; let's look at a different route. And finally, to converge or not converge. I think eventually we will get there, but when you're operating at scale, it's about using the right tool for the job. You wouldn't try and cram everything into, I don't know, take a car as an example. You wouldn't try and use the same car for every kind of purpose. We kind of go and buy something that's general purpose, but you never get the fastest and the most spacious; they don't fit together. Same with storage, and the same with compute. Distributed storage is hard. It needs very specific hardware capabilities, and if you then try and throw a compute workload on that same hardware, you get contention. We're not quite there yet. Eventually we will converge, but I just wanna throw that one out there. And just finally, a shameless plug. We have the Rackspace Cantina across the street. So if you wanna talk more about storage and what Rackspace is doing with OpenStack, feel free to pop across the street. And there is lunch provided for the first 200 people in it. Thank you. Thanks, Phil. Thank you, Phil. And I'm told that maybe there's even libations to be had at some later stage at the Cantina. Probably, yeah. Yeah, anyway. So briefly, on the topic of Fibre Channel: NetApp has enabled it on ONTAP and E-Series. The primary reason why we did it is for those who already had an investment in it and want to bring it into the OpenStack fold. To be clear, we very much agree that the right place to start greenfield is with iSCSI or NFS. So, next, and we can certainly go into more depth later on in the discussion. Unless, John, you wanted to speak to that? I'll let it go with that. You might even be able to discern what John's feelings are about Fibre Channel from that comment.
The next guest is Chris Ferraro, Senior Cloud Engineer from FICO. Thanks very much. Thank you, Rob. My name's Chris. I'm on the Cloud Engineering team at FICO. I'll go through a couple of things about FICO, how we're using OpenStack, some of our decisions, and get into some storage details. So first of all, who is FICO? I think a lot of you might know us by the FICO score. FICO's been around for a long time, since the 50s, doing a lot of data analytics and a lot of financial-related, data-driven type products, which get into some customer data. One other product you might be aware of: when your bank calls you and says, hey, can you validate these last five transactions, it's probably a FICO product that you're talking to there. And we make a lot of decisions: 98% of credit-related decisions are made by FICO, and 2.5 billion credit cards are protected by our fraud systems. So we've been around for a while, and we cover a lot of the financial market. FICO has in the last couple of years decided to make a move to the cloud, and that's based on a few decisions. We've historically been an on-premise company. Products have sat inside major financial institutions, and there was a desire to move into different markets that weren't really accepting of these huge appliances sitting inside their data centers: moving towards more of a SaaS-type model, packaging up some of our applications into a platform that our customers can log into, hosted by us, and using our tools to massage and get more information out of their data. So it's a move away from the traditional on-premise technology, and it also moves towards a more simplified environment for our customers as well as for FICO's operational support. And we found lower risk and cost overall, and it gets us into areas that we weren't able to get into easily, because of the scalability of SaaS as opposed to more specific hardware- and appliance-related things.
So the first application, or first platform, that we're targeting for OpenStack in this new infrastructure environment is the FICO Analytic Cloud. It allows our customers to use our tools, and we have a marketplace where tools are available. They can pick and choose which ones they want their data to interact with, and they pipe their data in and get information out of that raw data. We have chosen OpenStack as our platform to run this environment, and it pretty much checked all the boxes for why we selected it. There is the scalability aspect. We were leveraging a lot of existing skill sets: the engineering team knew that as our design matured it would be handed off to the operations team, and we were very aware that we wanted to have the operations team working with similar technologies to the ones they were already supporting, if we could manage that. The automated deployments for the environment, the easy ability to scale, predictable costs, and guaranteed interoperability were important factors in these decisions. So now I'll go into how our design looks currently. It's gone through many iterations over the last couple of years that we've been doing this. This gets into a little more detail, but we have load balancers at the front for the OpenStack APIs. We're pretty much standardized on Cisco UCS hardware to run compute and storage. Phil brought up converged infrastructures: we are doing hyper-converged for storage and compute in this design, with C220s as the controller layer. And to manage some of the hyper-converged contention between storage and compute, we're relying on cgroups to set up boundaries for who can run where in the environment.
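That kind of cgroup boundary can be sketched roughly as follows. This is an illustrative libcgroup-style cgconfig.conf fragment, not FICO's actual configuration; the group name and the limits are hypothetical:

```ini
# Hypothetical cgconfig.conf fragment (illustrative only, not FICO's config).
# Caps the resources a co-located storage daemon can claim so that compute
# workloads on the same hyper-converged node are not starved.
group storage {
    cpu {
        # Relative CPU weight; other groups default to 1024
        cpu.shares = 512;
    }
    memory {
        # Hard ceiling (8 GiB) on memory for the storage processes
        memory.limit_in_bytes = 8589934592;
    }
}
```

Services would then be launched into the group (for example via cgexec or a cgrules.conf entry) so the kernel enforces the split between the storage and compute sides of the node.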
We also have tiered storage, which takes advantage of Ceph as a general-purpose storage solution, and we found that SolidFire fit our need where Ceph, as an all-in-one, doesn't handle all workloads equally. SolidFire does a really good job of providing a bit more power behind our storage, providing low latency and fantastic performance. So SolidFire comes in where we have high-performance needs and particular SLAs to meet, to provide solid performance and a consistent expectation for our customers. We went with Ceph because it's the open source, SDS-type technology for general-purpose block storage. They do have CephFS, which they've been improving in the last releases, and object storage, which we haven't gotten into; that's on the plan, but we're looking at Swift for future services. Ceph was chosen as it can be optimized pretty nicely for small and large deployments. Our model is more smaller deployments that can be scaled when necessary. We're dropping a particular environment into, say, a new geographical region; it might start out small but will need to grow as more customers come on and more demand is there, and Ceph allows us to do that. There's already a tight integration with OpenStack. But that being said, Ceph doesn't meet the demands of all our application requirements. So we started looking at SolidFire about a year and a half ago, maybe a little longer, and it was very impressive technology, and they were able to fill that high-performance need that Ceph wasn't able to provide for us. The dedupe and compression are very helpful, especially in a couple of our use cases, and replication was also something we were looking for, for site-to-site replication and things of that nature. So what were some of the reasons we went with SolidFire?
A lot of the same reasons that we went with OpenStack: the deployment speed, we can automate the deployments much like our OpenStack environments. The integrations were really strong with OpenStack as well as VMware. We aren't just running OpenStack in our environments; in more legacy FICO environments we still have VMware, and SolidFire and our NetApp storage integrate really well with VMware. VDI is a good example of where SolidFire works fantastically well; the dedupe in the VDI environment is pretty strong. And the simplicity of setting up the storage environment: I think there's like three or four things you have to enter, you pick an SLA or performance metric, and boom, you're done. It's real easy to deploy and gets up really fast. It also has the scale-out, so like I said, if we start with a smaller deployment in a particular location, we can scale out the OpenStack pieces as well as storage, individually or tied together, as we go. These are some of our use cases for our storage, and what we were looking for, again; these are specific FICO use cases. We have VMware integration, we have OpenStack integration, VDI is using it heavily. High-performance computing, again: data analytics applications are requiring very fast storage, and it handles that, no problem. And the DR replication between sites or between devices is very straightforward and strong. What are we working on? These are some things that we have planned, what we're looking forward to in the future coming out of OpenStack and other projects. As I said before, we've been interested in Swift as a service for our developers, and we'll be looking at that very soon. Shared storage volumes in Cinder: that's a database requirement in a lot of cases, a specific use case there that we're hoping for more support of in future Cinder releases.
The tooling: a lot of the operational aspects of OpenStack are still getting sorted out. More advanced tooling to understand how your environment is running, and once it's deployed, how do you operationally keep it up and running? Those are pieces that we've been working on, and we're looking forward to some of the advances that the OpenStack community is coming out with. And we're always thinking of how our services can be provided internally to our developers, and to whoever might be using the platform: how self-service can we get it? It shifts a lot of the burden and improves the time for deployments. I think it was mentioned before how someone puts in a request and it could take however long to get it. We wanna shift that more towards the end users, so they can deploy and know that it's gonna be deployed in X number of minutes or seconds. So part of our design is always the self-service aspect, and that's it for me. Thank you. All right, so I think I should probably do a little bit of a time check. There are just a few things I wanted to talk about before we run out of time. Please, Mr. Griffith. So, it kind of dawned on me: some of you may know me, some of you may not, and some of you may have heard this before, but I thought I should give you a little more background on part of what these guys are all talking about, especially in the SolidFire situation, right? So, SolidFire came about out of Rackspace. Our founder was at Rackspace trying to solve block storage there and came up with the idea of SolidFire. We're a scale-out clustered storage system: start with four nodes, scale out horizontally, no-downtime upgrades, all that good stuff, right? All these things that you think about when you think about cloudy and OpenStack and stuff like that.
Some of the things that Chris was talking about in terms of performance: what's actually interesting is we're not talking about just being super fast, or being like the über storage device that blows the doors off of everything. We're fast, and that's all good, but our key is actually quality of service. We let you dial in the minimum and maximum IOPS that you want your storage to have. So what we're doing is taking a pool of performance across the entire cluster, and we're allowing you to specify what that is going to be for each volume, and you can do that dynamically. As your workloads change and your demand changes and stuff like that, you can modify that on the fly. So if you've heard of the noisy neighbor problem, and you probably have if you're doing cloud stuff, that's the whole point: it's to solve the noisy neighbor problem. The other thing, back to NetApp and SolidFire being together: it's kind of an interesting journey for me. Rob and I have done a lot of talks over the years, and we've talked a lot about different technologies, like Fibre Channel versus iSCSI, replication, all these different features, and we've usually disagreed. I'm the iSCSI, bare-bones, keep-it-simple cloud storage guy, and Rob has the other perspective. What's most interesting, and I think most valuable, about NetApp and SolidFire coming together is that now you have both perspectives covered. You can actually get whichever one suits your needs best; we're gonna have something in our portfolio now that is gonna be a perfect fit for you, and I think that's what's most exciting and most interesting. Yeah, we have occasionally agreed on a few things. We have, yeah. There are a few things I think we would like to cover that sort of address the future state of OpenStack.
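The quality-of-service model John describes can be sketched in a few lines. This is an illustrative sketch only: the class and method names here are hypothetical, not the SolidFire or Cinder API. It just captures the idea of a cluster-wide pool of IOPS from which each volume gets a guaranteed minimum and a cap, retunable on the fly:

```python
# Illustrative sketch only; hypothetical names, not the SolidFire API.
# Models a cluster-wide pool of IOPS: each volume gets a guaranteed minimum
# and a maximum cap, and settings can be retuned as workloads change.

class QosPool:
    def __init__(self, total_iops):
        self.total_iops = total_iops   # aggregate performance of the cluster
        self.volumes = {}              # name -> (min_iops, max_iops)

    def committed_min(self):
        """Sum of all minimum-IOPS guarantees currently promised."""
        return sum(mn for mn, _ in self.volumes.values())

    def set_qos(self, name, min_iops, max_iops):
        """Create or retune a volume's QoS; refuse to overcommit minimums."""
        if min_iops > max_iops:
            raise ValueError("min must not exceed max")
        others = self.committed_min() - self.volumes.get(name, (0, 0))[0]
        if others + min_iops > self.total_iops:
            raise ValueError("cannot guarantee this minimum: pool exhausted")
        self.volumes[name] = (min_iops, max_iops)

pool = QosPool(total_iops=100_000)
pool.set_qos("db-vol", min_iops=20_000, max_iops=50_000)
pool.set_qos("web-vol", min_iops=5_000, max_iops=15_000)
# Retune dynamically as the workload changes, without recreating the volume.
pool.set_qos("db-vol", min_iops=30_000, max_iops=60_000)
print(pool.committed_min())  # 35000
```

In an OpenStack deployment this kind of policy would typically surface through Cinder volume types and QoS specs rather than direct calls; the point of the sketch is just the pool-accounting rule that keeps minimum guarantees, the thing that solves the noisy neighbor problem, from being overcommitted.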
I guess it's a little bit of a discussion on the state of the art now and some of the gaps, and I definitely would invite our guests to chime in on this as well. That's a huge topic; there's no shortage of things to get into, but I think we wanted to talk a little bit specifically, and I'm gonna fast-forward past some of this because we're running out of time, about not just OpenStack but also the containers ecosystem that seems to be exploding, not just adjacent to it but at times on top of OpenStack as well. There's a few things that we're pretty interested in trying to move these collective communities toward. We're heavily involved in OpenStack; we described that. We're members of the Cloud Native Computing Foundation, if you're not familiar with that. Kubernetes was contributed by Google to it, so we're hopeful that will have a standardizing effect on some elements of containers, but there are certainly other container technologies out there. What we're hoping to do is provide capabilities that are common across them, economize on some of the work that's been done in Cinder, and avoid reinvention of the wheel, if you will. Actually, I'm just gonna go back briefly to an example of some of the work that we've done at an early stage, and John, I don't know if you wanna get into this, because you were engaged in it directly: the NetApp Docker volume plugin that was announced last week. Yeah, so for those that are using containers and wanna use storage inside of containers: as of Docker 1.10, or 1.9 actually, you have the ability now to do storage management and attachment and things like that. While we were at SolidFire, I was doing a number of things to get us a driver that works there, and at the same time NetApp has a whole team of folks doing some really cool things, delivering a plugin that has the full range of NetApp portfolio devices underneath it.
So that's kind of cool. And then also, kind of an advertisement for myself: I'm giving a talk tomorrow on using Cinder as a backend for Docker as well. Which was sort of the point, and thanks for the segue: Cinder exists, and right now you're seeing, within these different communities, folks contemplating what it would look like to provision block storage and maybe provide differentiated access to different things. Of course, you're familiar with Cinder; that exists. So how can we bring Cinder to them? That's kind of what we were alluding to in the last slide. Docker, if you're familiar with it, is dead simple to deploy. Cinder, maybe not quite there. At least it's in a slightly different form. If you're using SolidFire, it's pretty simple. Well, and I'm referring to the Cinder service itself. So why can't you do 'pip install cinder'? Right. With no conf file: you're directed through your command line UI, you provide the relevant parameters, and there you go. I mean, that's sort of the bar that's set by Docker. So how can we make it synonymous with that? There are some areas where, particularly during the design summit tracks this week, we're interested in hopefully moving community consensus. Cinder can be used independently, standalone. Now, it's not as simple as that today, but there are those who have. In fact, a large online auction house in particular kind of pioneered that same thing with Manila. So let's actually use it beyond where the rest of OpenStack itself would be deployed, perhaps for containers independently. Software-defined storage is a huge topic, of course, generally. We heard our guests here talk a little bit about, well, I don't know if you guys refer to it as software-defined storage, but you've certainly referred to Ceph, and I guess that resembles most of the characteristics assigned to software-defined storage. In NetApp's portfolio there is Data ONTAP.
That's what powers the filers, the things you probably most classically associate with NetApp. It's available in a software-defined form, as a virtual machine. It becomes a question, though, of how I actually build an elastic service with a fleet of those things. How do I manage their life cycle, such that when a Cinder or a Manila backend starts to reach resource exhaustion, I can get more of the quantity of them, of the quality that's being requested? This lends itself to the notion of a software-defined storage controller, which is really quite interesting. SolidFire in some ways doesn't need this quite as much as the rest, in the sense that it has some autonomous auto-scaling capabilities that are just inherent to the platform itself. But even then, there's still a point where you hit the maximum possible and you need to contemplate how to get to 2x and 3x and 4x. So we're pretty interested, and have been having active discussions within this community and others, around the establishment of a software-defined storage controller project. This week in particular, I think, will be pretty interesting in trying to reach some conclusions. Most of those discussions are informal within the design summit track. And you may hear some of the discussion around this not just from NetApp, but from some of the folks in the community we've been talking about it with. You sort of alluded to the bimodal IT tension. Apparently I'm on the far enterprise end of the spectrum, which is the first time I've been accused of that. It is, but the reality is there is a tension. I don't know if you can speak to it; I know you've lived this the hard way in Cinder. So I'm actually a little more on an extreme, and that's why I made the comment I did earlier. I should stand all the way over here. Yeah, and it's not that Rob is over on that extreme either. But I am definitely pretty far on the other side.
I'm a big believer that everything should be automated, everything should be software defined, everything should be simple and resilient. And that's really the crux of it. Things like Fibre Channel and monolithic APIs, add-on packages, add-on features and stuff: that is kind of against the philosophy that I have and that I think SolidFire has, and that's where some of that comes from. But the reality is there are two sides to it. There are both sets of customers, and there is demand on both sides of that fence in terms of what people want. So that's why I said I think it's so interesting that NetApp and SolidFire would come together, because you have both extremes at this point, right? So yeah, I don't know how many customer conversations, deployer conversations, I've had, maybe 300-plus over the last few years. I don't know, I think that might be right, 350. So this is certainly not my term, and I'm not sure where it came from, but there's this notion that OpenStack is a snowflake. Every deployment is a snowflake. It's all hexagonal and frozen water, but boy, they sure do look a lot different from one to the next. I think the distributions are kind of solving for that to an extent, making it somewhat more deterministic and repeatable. Surely the DefCore effort may actually influence that over time; we'll see, I guess. But the problem there is that on one end you've got folks who look at OpenStack as a foil or an alternative to, if you will, the incumbent enterprise virtualization stack, to be politically correct. And that's being done successfully, and I think there probably are folks in the room who are represented amongst them. And on the other end, you have the fully cloud natives, scale forevermore, and yes, there's that spectrum.
And so within our own effort, we tried to appeal to both ends of the spectrum, and we came to the point where it became apparent we needed a portfolio that appealed to all of those different ends of the spectrum. Apparently we've been collectively a little too long-winded, so we've run over; there's a guy gesticulating madly in the back telling me to wrap it up. So please do catch us afterwards if you wanna know about any of our other sessions that go into more depth; we've got some directions to those. And we also have handy-dandy laptop decals for the Manila project. So thanks very much, appreciate your time. Thanks everyone.