Good morning. Good morning. Welcome to the Cisco sponsored track room. My name is Gary, and I'm going to be the host for all five sessions here today. I hope you can join us for the other four, if they make sense for what you're looking for. We're going to kick off the day with a great session on the MetaCloud solution: MetaCloud, OpenStack in the enterprise. We've got three members of our team who are going to be giving our presentation today. Are we starting with George? OK, we're going to start it off with George Sieb, who is a systems engineer at Cisco. Then we're going to jump to Jason, who is one of our software solution architects. And Chris Revere is going to bring up the rear. I hadn't even planned that, but it worked out really well. So with that being said, gentlemen, take it away. Great. Thank you, Gary. Good morning, everyone. So we're going to talk to you today about MetaCloud, how it fits in the enterprise, and some lessons learned. Is this working? Yeah. So quickly, I'm going to go over how Cisco is approaching the portfolio in this space. Then we're going to dive into the meat of the topic, and that's the enterprise-focused cloud. And finally, we'll have some time at the end for questions, so please save those up, come up with some good ones, and stump these guys. So when Cisco was looking at the cloud space and how we approached it, we asked IDC to go out and really dig deep with customers. They were able to talk to over 11,000 directors and above about their cloud implementations, and then focused in on 6,000 of those and went really deep. And what they saw was a difference between the many people who are doing ad hoc implementations, spending a lot of budget implementing hybrid clouds, and those that have an optimized cloud. 
And you can see some of the numbers here, with people able to double the amount of budget that goes strategic, that really focuses on top-line revenue growth. And that's by moving up the stack and really being able to do cloud native and focus on the applications. So some commonalities, and some good news for all of us here at least, is that they saw a strong correlation between those that are doing OpenStack and those that have these optimized clouds. Hybrid cloud was a strong correlation as well. A very high percentage of them are doing DevOps and cloud native, and are doing fog computing too, extending that cloud out to the edge, right into the IoT. The downside is that only 3% of the companies surveyed really had that optimized cloud. Most companies weren't able to focus on the cloud native space because they're still trying to get the infrastructure working. So Cisco's strategy is really about getting to that optimized cloud and enabling customers to focus on cloud native and applications. Another piece that the survey drew out about those optimized clouds is the secure hybrid cloud. Security is in Cisco's DNA, and we see that no matter what your environment, whether you're looking to deploy on-premises or public, security needs to be part of that strategy. So Cisco's approach encompasses two avenues. Obviously, build it yourself: we believe a lot of customers are able to do that successfully, and a lot of customers want to do that. Cisco supports that with a lot of technologies, from switching, with our Nexus line that integrates well with OpenStack, to security, with our Firepower-enabled next-generation firewalls, et cetera. We also have validated designs that we've done in conjunction with the likes of Red Hat to work with our networking and UCS solutions. But a lot of customers also want to buy their cloud environment. We work through partners there, and then we also have the MetaCloud offering. 
So just at a high level, some of the big portfolio items. If you look at the hybrid cloud solutions, we have CloudCenter, which is the former CliQr application, and that's really about moving workloads between multi-clouds and hybrid clouds. Then there's the MetaCloud offering, which we're going to dive deeper into here, which is that fully managed private cloud offering. Then we have our managed platforms, such as Meraki. This is one of the most successful, fastest-growing businesses Cisco has ever launched, and it's really about simplicity for deployment and management, but also about being able to extract useful data about all of those end devices. So analytics is obviously top of mind and kind of encompasses all of these, and we have products like Tetration, which allow you to pull that analytics data, use machine learning to do intelligent things, and do forensic-type activities on all that data. And then, of course, we have plenty of SaaS offerings: OpenDNS, which many of you may use; Cisco WebEx, the collaboration tool; and finally our Spark collaboration tool. So that kind of covers the portfolio. Let's focus down on the MetaCloud piece, and we'll have Jason kick that off. All right. Thank you. You bet. Good morning, everybody. So MetaCloud, OpenStack for the enterprise: it's not just a tagline, even though it's on a PowerPoint. This next section is not about architecture or anything like that; we're going to get into some technical depth about the reality of what we mean by OpenStack for the enterprise, right? We kind of debated the name of this. You could call it carrier-grade OpenStack, OpenStack for grown-ups, not your dad's OpenStack. But we'll really take a look at what differentiates us in the enterprise space and the things that we do that essentially nobody else does. 
So the reality with cloud adoption, and part of the reason OpenStack has gotten slow enterprise adoption and some black eyes in the enterprise space, is that everybody rushed off to a cloud strategy. OpenStack, the Linux of the cloud, appealed to those shops that embraced open source technology. A lot of do-it-yourself science experiments came up, and there were a lot of failures. There was this high level of expectations: we're going to get cloud, our time to market and time to value are going to be so much better, DevOps and pipelines and toolchains, it's going to be this panacea. So the Ferrari assembly line was where the bar was set. Unfortunately, most enterprises that went to go try and execute on that model, and we had a talk yesterday about broken stack, hit all of these complexities: the staffing, the operational model, the technical hurdles, the culture, and the dogma, all that stuff. So you end up with what enterprises really want versus what they get. We're at an inflection point in OpenStack adoption. We've passed the tipping point to viability, but we are at a point where people are kind of rebooting their strategy. Everybody's seen this slide probably several hundred times: 95% of private clouds fail. So what makes MetaCloud different? I'm going to go through these fast, and we'll go into each one in depth later. The consumption model: DIY, versus distro-based, versus a fully managed offering. We'll take a close look at the control plane, networking, and storage, some things that we do differently there for scale, stability, and availability. Then we'll dig deeper into what we mean by enterprise scale, how far we scale, and how we do some of that scaling. We'll look at the SLA and the stability of the cloud; at TCO, ROI, use cases, and regulatory as well; and at how we do upgrades and updates. And finally, we'll take a look at curation, telemetry, and visibility, some enhancements that we've made to OpenStack in general. 
So before we get into the differentiating factors of MetaCloud, we'll just take 30 seconds here to look at what MetaCloud is. In summary, if you roll up all of the MetaCloud presentation decks, I describe it in four different buckets. It's an OpenStack-powered private cloud that Cisco got through an acquisition about two years ago now, in November, when they acquired MetaCloud. You might remember it as Cisco OpenStack Private Cloud, then Metapod, and now we're back to MetaCloud. So we have kind of a CI/CD pipeline on our naming convention for MetaCloud, but I think we're staying with MetaCloud now. It's a solution that's deployed behind your firewall, in your data center, a colo data center, or a partner DC of your choice that's working with us. It's fully managed by Cisco, with a four-nines platform uptime SLA; not just the API, but the entire platform. And it's delivered as a service. So from planning onwards, we're very prescriptive and opinionated about how we deploy, support, manage, and operate OpenStack. Yet at the same time, we're very consultative and collaborative about how we size it: helping you with aggregates and zones and metadata and flavors, and creating tiers of storage and things. So there's a lot of flexibility in the solution where we can have it, and a lot of opinionated configuration where we need to be diligent to maintain the SLA. But it runs from the planning stage, through the design and sizing, through deployment, through the monitoring and management of the solution after it's in, and then maintenance and capacity planning and growth. We'll look at some of the curation of that, and at how we do the management and monitoring. But it's really a concierge service. I don't have the slides here, but truly, after I've done customer deployments, I get added to the list of tickets between the customer and our ops team, and there you see the visibility that we have into the system. 
You'll see things like, you know, DIMM 6 on hypervisor 12 is failing; we'd like to migrate all of your VMs off of that, tag that hypervisor out of the scheduler, triage it, and repair it. So we have a very deep level of visibility where we can gather this data and execute on it as well. The first thing we're talking about is the OpenStack consumption model. It really falls into three buckets: do-it-yourself OpenStack, an OpenStack distro, or Cisco MetaCloud, the fully managed offering. With each of those, you look at things like SLA: typically no on the first two, while we offer the four-nines SLA. The production timeline goes from unpredictable, to more predictable, to the most predictable in terms of how we do updates and upgrades and deliver that as a service for you. And the operational complexity goes from high, to medium, to low. So the managed consumption model is much easier for enterprises to consume. You don't have to find staff, acquire staff, retain staff, or worry about poaching or pay disparity or things like that. With a delivered-as-a-service private cloud, what makes it different is that it's turnkey. Once the hardware is up and accessible, and Chris will go through the control plane and look at some of the out-of-band things, but once the hardware is accessible, we can have the entire system deployed, configured, and turned over in 10 business days, roughly two weeks or less. Again, no training cost for existing staff: if your cloud admin or cloud operator can log in to AWS and create VMs, that's the user experience we're going for when we deploy a private cloud at your site. So, again, no risk of having your talent poached, no lost time with hiring or training, and, as part of the delivered-as-a-service model, no talent acquisition cost. Chris is gonna come up and talk about some of the technical details around the control plane. Thanks, Jason. 
So we've talked about MetaCloud at a high level, but to start diving a little bit deeper, we're gonna take a look at the architecture. First and foremost, we have the MetaCloud control plane, and I'm gonna go into a little detail to describe that. To start off, we have two Cisco ASRs that are essentially responsible for any routing that takes place across the cluster. There are two Nexus 9Ks for the switching fabric. There's an ISR 2901, which hooks us up to the outside world via an out-of-band connection to our MetaCloud NOC, where we're able to monitor the environment, do the provisioning, et cetera. And one thing to note is that this is a very robust HA configuration. There are three UCS servers, which are running the majority of the OpenStack services in a very HA manner. What do I mean by that? You can essentially have any sort of failure here, and the cloud will still be operational; all the APIs will be functional. So we have a four-nines SLA, as was previously mentioned, and what we're essentially guaranteeing is the operation of the cloud: the provisioning of VMs, the network running smoothly. And we actually take it a level further with our operations team: we're automating a number of synthetic transactions, doing things like spinning up virtual machines on a regular basis and spinning them down. So we're always monitoring that environment, making sure everything's operational, and that's how we can help ensure a four-nines SLA. The other thing that I think is quite unique is that this four-nines SLA is across the entire stack. What does that mean? It's a managed service. If you look at this control plane right here, it's essentially Cisco prescribed. Cisco is managing the network, the switching, the routing, and the three UCS servers that are running the OpenStack services. Cisco is responsible for those. It's essentially Cisco's OpenStack. 
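To put that four-nines availability figure in perspective, here's the simple downtime arithmetic behind it. These numbers are just math on the 99.99% percentage, not MetaCloud's contractual terms.

```python
# Downtime budget implied by a "four nines" (99.99%) availability SLA.
# Pure arithmetic for illustration; not the contractual SLA terms.

def downtime_minutes(availability: float, days: float) -> float:
    """Minutes of allowed downtime over a window of `days` days."""
    return (1.0 - availability) * days * 24 * 60

per_year = downtime_minutes(0.9999, 365)   # ~52.6 minutes per year
per_month = downtime_minutes(0.9999, 30)   # ~4.3 minutes per 30-day month

print(f"four nines allows ~{per_year:.1f} min/year, ~{per_month:.1f} min/month")
```

That's less than five minutes of platform downtime a month, which is why the synthetic transactions and 24/7 monitoring described above matter so much.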
So Cisco is basically on the hook for everything. One of the things that I think is attractive about this is that customers get one finger to point, right? Anything starts to happen, whether it's a network issue, an OpenStack issue, or any sort of hardware issue, you get one number to call. You don't have to worry about, oh, let me contact the OpenStack vendor, or maybe the network vendor, or whoever's making my servers. You get one number to call. Now, this control plane right here is extremely scalable and robust, but it doesn't actually have any compute or storage capacity, and that's why I've introduced this down here. This is where we have the hypervisors, starting with as few as seven hypervisor nodes. And this configuration can scale up to 400 physical hypervisors for an availability zone; we simply add a pair of switches to accommodate 40 servers at a time. So it's a very robust, scalable architecture. And the nice thing about these hypervisor nodes is that, while we're prescriptive on the control plane, for the hypervisor nodes you can actually bring your own server. We're not picky there about what vendor it is, or obviously there's UCS servers. There's a number of different storage options available. First and foremost, there's just ephemeral storage, where whatever kind of drives we have in these servers, we treat as local ephemeral storage. You have instances, and in any sort of outage you lose the data on that specific node. We also have converged block storage, where we use Ceph, and this again is completely managed by Cisco as part of the solution, so you don't have to worry about the best practices of configuring Ceph for performance, et cetera. Jason will go into a little more detail on that later. We also have external block storage options, where we work with a number of different vendors, including SolidFire, NetApp, Pure Storage, and Nimble Storage. 
As we've done some of our testing, we've encountered scalability issues at certain points, and we're able to tune for performance as we go through our testing phase. So we've made sure that when any services are deployed for the storage, you're assured it's gonna be completely scalable and reliable; you don't have to worry about how many nodes you're adding or whether things are gonna continue to perform. And we've recently added object storage capability by partnering with SwiftStack. So it really depends on your storage options, and like Jason mentioned at the beginning, we'll actually work with you as a team to understand your storage options and your different networking options, and we'll make sure that we build the cloud right the first time around. So what do I mean by that? We talked a little bit about the networking, where essentially the routers run HSRP and the Nexus 9Ks are configured very HA. You can have any sort of failure in that control plane, whether it be a router going down, lost power, a lost CPU, et cetera, and the cloud will continue to operate 100%. And here's what that buys us in the networking. This is just a simple screenshot from Horizon, and what you can see here is a virtual router, which is connected to my external public network. These different colors represent different virtual networks that I've set up, and I've taken a screenshot of one specific network here. You can actually see there are four interfaces on this virtual router. One of those is obviously the public interface, and then we have three additional interfaces for that network. What that's doing on the back end is that those three interfaces are tied to the physical links: each hypervisor is connected to both switches, and both switches are connected to the routers, whatever your preference is. 
So yeah, this is essentially very robust HA. If you were paying attention in the keynote the first day, when Mark went up there with scissors and started cutting cables, I think that redundancy was being handled at the software level to some degree; we already have some of that functionality baked in here, so at any point in time, everything's completely redundant. This is also extremely scalable, being hardware based. We kinda call it hardware-assisted Neutron, by offloading the networking to the routers. Just as an example, you could easily have a couple hundred hypervisors, each running 50 instances or so, with a number of connections each, and we can easily support 200,000 flows and significantly more than that. That's something you're not gonna typically see in a software-implemented Neutron solution. And actually, I think Jason has some more interesting stories on the storage, so I'll flip it over to him for a few. There's just one slide, so stay around. So Chris is gonna talk about curation a little later, how we select projects and how we augment OpenStack and fix some of these gaps, particularly around scale and stability. But as he was saying, I'm not sure who in the room has deployed OpenStack themselves, in a lab or in prod or in dev? Okay, so quite a few of you. Not sure if you've tried a software-only model for Neutron, and how well that's worked or how well you've load tested it. But what we found was that, since MetaCloud's been around since 2011, we had customers on nova-network; we already had customers at multiple hundred hypervisors per AZ. When we went to Neutron, we had to adhere to the level of scale that we had with nova-network, and the two just don't scale the same. There are challenges there. With Neutron, putting hundreds of thousands of flows in a software-only model, if you haven't tried it yet: it doesn't work. 
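As a back-of-envelope check on where those flow counts come from: the hypervisor and instance counts follow the example in the talk, while the average of 20 concurrent connections per instance is an illustrative assumption, not a measured MetaCloud figure.

```python
# Rough flow-count arithmetic for the "couple hundred hypervisors" example.
# flows_per_instance is an assumed average, chosen for illustration only.

hypervisors = 200
instances_per_hypervisor = 50
flows_per_instance = 20   # assumed average concurrent connections per VM

total_instances = hypervisors * instances_per_hypervisor   # 10,000 VMs
total_flows = total_instances * flows_per_instance         # 200,000 flows

print(f"{total_instances} instances -> {total_flows} concurrent flows")
```

Even with conservative per-VM connection counts, a mid-sized cloud lands in the hundreds of thousands of flows, which is the regime where offloading to router ASICs starts to matter.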
I mean, there are things that should be up in the ASICs, and that's what we're doing here. Similarly with Ceph. Chris mentioned the control plane. We have a bunch of Cinder drivers that are supported and cross-certified, and we're finding things like the SolidFire image caching. The majority of this code we're contributing back to trunk: when we find an issue with the SolidFire Cinder driver and Glance, we fix it and put it back in trunk. We're still maintaining the Cisco Powered certification. So this is not VIO; this is not a locked-in version of OpenStack. We're still maintaining trunk API parity. But if you look at storage, it's another example where there are a lot of DIY install guides, and our competitors are doing Ceph, and most of what those recommend is a centralized Ceph model: three or five nodes of Ceph. That probably works okay in a 20-node, maybe up to a 50-node hypervisor model. In a centralized Ceph model, you take typically three to five nodes dedicated to Ceph. Similar to Neutron, where you might have dedicated software nodes for Neutron, here you have dedicated Ceph nodes. A not-uncommon configuration is to put 50 to 60 four-to-six-terabyte drives in that three-node cluster, each node running 320 terabytes. Again, not sure if you've tried a failover scenario with nearly a petabyte of Ceph: one node goes down, and you're stuck with replication traffic on the other two nodes. Oddly enough, it's not the network bandwidth that kills it, it's the CPU that spikes, trying to figure out which blocks to replicate, and essentially you can't even log into the box; the CPU goes to 100% for an indefinite period of time while it's trying to rebalance. Alternatively, the way we do Ceph is a lot more work on our end, but it's really the only way to deploy it at scale with stability, which is a fully distributed model of Ceph. 
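The failure-domain difference between the two layouts can be made concrete with quick arithmetic. The centralized figures follow the talk (three nodes at ~320 TB each); the distributed figures assume the same raw capacity spread across 30 hyperconverged nodes at ~32 TB each, which is an illustrative sketch of the model described next.

```python
# Centralized (3 x ~320 TB) versus distributed (30 x ~32 TB) Ceph layouts.
# Node counts and sizes are illustrative; both hold the same raw capacity.

centralized_nodes, centralized_tb_per_node = 3, 320
distributed_nodes, distributed_tb_per_node = 30, 32

centralized_raw = centralized_nodes * centralized_tb_per_node   # 960 TB raw
distributed_raw = distributed_nodes * distributed_tb_per_node   # 960 TB raw

# If one node fails, roughly how much data must each surviving node
# absorb while the cluster re-replicates?
centralized_per_survivor = centralized_tb_per_node / (centralized_nodes - 1)
distributed_per_survivor = distributed_tb_per_node / (distributed_nodes - 1)

print(f"centralized: ~{centralized_per_survivor:.1f} TB per surviving node")
print(f"distributed: ~{distributed_per_survivor:.1f} TB per surviving node")
```

The raw capacity is identical, but a node loss in the centralized layout dumps around 160 TB of rebalance work on each of two survivors, versus roughly a terabyte spread across 29 nodes in the distributed layout. That's the CPU spike described above.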
When we do Ceph, every time you add a compute node, you're adding the raw storage of that node to the cluster, and we're putting two 480-gig performance SSDs on the front of that for journal and cache. What you end up with is a much smaller failure domain. So in model A, which a lot of folks are doing today, and again, they haven't run into issues if they're running 25-hypervisor clouds, you have three nodes at 320 terabytes per node. The ten 32-terabyte boxes that you see on the right represent only one of those 320-terabyte Ceph nodes on the left. So imagine, and it's kind of a moot point: would you rather be reading and writing to 30 nodes, each with a 32-terabyte failure domain, a terabyte of cache on each, and two 10-gig connections per node, or have all of that concentrated on three nodes? The model on the left is much easier to deploy, obviously. The model on the right actually works at scale. So, just taking a little bit of a step back and looking at the enterprise and scalability: we talked in a little bit of detail about how a single availability zone with our control plane can scale upwards of 400 physical nodes, with that four-nines SLA, again, across the entire stack. Looking a little into our success, I think this is an interesting fact. We talked at the beginning about how Gartner said 95% of private clouds fail. We actually have a 0% MetaCloud churn rate, which I think is quite unique. Of all the deployments we've done where we've actually gotten MetaCloud up and running, we haven't lost a single customer. In fact, a number of our customers have scaled out additional hundreds of nodes across multiple availability zones and different geographies. So I think that's one of the key statistics here. And in terms of general ROI, one of our customers has upwards of 400 nodes running big data workloads with Hadoop. 
They actually realized five times the capacity for less cost than they previously had with AWS. In terms of upgrades and updates, I think this is another differentiator: we've had in-place upgrades since the Grizzly release. So we have customers that maybe started with Grizzly, and most recently we're based on Red Hat OSP 8 with the Liberty release. Basically, when you have MetaCloud, you're guaranteed that you're gonna be fully supported; we've even been able to upgrade customers from nova-network to Neutron, and we keep moving people forward. Essentially minimal downtime, seamless updates and upgrades. We tend to be about six months behind trunk. That gives us the ability, and I'll talk about this a little later with our curated OpenStack, to do six-to-eight-week milestone releases for things like patches, and we'll do critical security updates, et cetera. And with regard to compliance, we're now SOC 1 compliant, SOC 2 compliant, and ISO 27001 compliant, and we'll be PCI compliant very soon, with Ernst & Young auditing us for that. Now, what do we mean by curation? We've talked about that a little bit. This is an example; I'm not sure how many of you have seen it. I think it was in Tokyo that the OpenStack Foundation announced the Project Navigator on the website, so this is actually from OpenStack.org. What I think is absolutely phenomenal about this is that they list the maturity of the various OpenStack projects out there. We've seen there are so many new projects, some have mascots, et cetera; they're coming out all the time. And there are various ways to rank these projects. So if you look at something like, and we get this all the time when we're talking to prospects, people say, oh, I want Zaqar, I need that messaging service for my cloud. 
And you pull this up and you say, all right, let's look at the maturity level as rated by the Foundation and all the users. The maturity is one out of eight: probably not the most mature module out there. Hoping the Zaqar PTL isn't in here. We also look at things like the age: how long has that module been around, right? Zaqar's been around two years, and the adoption rate is a whopping 1%. So we actually see that as introducing some risk into your environment. If you're building a cloud, maybe it's DIY, et cetera, and you're saying, okay, I want to use this module, and 1% of the community is using it, right? Are you prepared to support that? How are you going to handle updates for that, patching for that? How's it going to interoperate with the other modules? That's actually something to consider. Whereas if you look at something like Nova or Horizon, adopted by pretty much everyone, extremely mature, eight out of eight, and they've been around since the dawn of OpenStack. So what we've done is taken all that information, together with the history that we've had running thousands of hypervisors for a wide variety of global clients, and said, okay, let's analyze all that data. This is what's included in the MetaCloud offering. You'll notice that it's actually a subset of all the different modules. Obviously we add more over time, based on customer feedback, the maturity of the modules, et cetera. But the idea is that we're only deploying stable configs of mature modules. And because we're maybe lagging by six months, it gives us time to see which modules are being adopted. Can we make improvements to those? Are they really stable and robust? And this is actually how we're able to offer that four-nines SLA across the entire stack. 
So basically you're assured that, whatever functionality you're using in your cloud, there's that four-nines SLA that we're constantly monitoring, and you're guaranteed that all of these modules are going to work, and continue to work in a scalable fashion as your cloud grows. We talked a little bit about the Neutron networking; how we've differentiated ourselves there is really the hardware-assisted model, which we think is much more robust, easily scaling to hundreds of thousands of flows. Instead of Ceilometer for telemetry and visibility, we can actually show you how we've done extensive modification to Horizon to provide our clients visibility into the hypervisors, the control plane, the VM performance, et cetera. So we've actually opted out there, and you can see here that Ceilometer is not the most mature module either. And we've managed to do this while maintaining full certification with the OpenStack Foundation. So let's look at the enhancements to the telemetry. To give you some examples, we thought it would be a good idea to show some screenshots. As I mentioned, we've extensively modified the Horizon dashboard. When you spin up an instance, we are actually monitoring that instance, and users can see things like CPU, disk, and I/O metrics over various periods of time, and we maintain that data for a year. So anytime you've launched an instance, and lots of times you're troubleshooting things where you don't know what apps the customer might be running within specific instances, you can get that detailed view of the performance of the virtual machine. Storage, I think, is quite an interesting one. When I first started using OpenStack, I was kind of baffled that I couldn't figure out how much Ceph was used and available. And if you managed to fill it up, using a lot of the space, or you have some sort of failure, it's not the most pleasant experience. 
So we actually provide complete visibility into the storage, and this is supplemented by us monitoring hundreds and hundreds of probes across the environment, with alerts on the back end. If we start to see an anomaly or a hardware failure, our ops team, 24/7, is gonna get alerted on that and proactively take action. But in addition to that, we're providing you visibility into that environment yourself. So whatever sort of storage you have, you can monitor the OSDs and their performance, whether they're up or down, et cetera, as well as having visibility into the controller and hypervisor metrics. While it's a managed service, we're still providing pretty much complete visibility into those controller and hypervisor nodes. You can see here that I can easily see how much physical memory is in the machine, the CPU stats, the memory utilization, and this is for the actual bare metal OS that we're looking at here. Drilling down into specific servers, we can see a system-level overview. Again, for either controller or hypervisor, we're able to see things like uptime, CPU, the different NICs and how much data's been transferred on each, the disk usage, and the different partitions and how much space is free and available. As well as things like the network details: which NICs have packet errors, drops, et cetera. The running processes on a specific machine as well. And the disk details, drilling into each one of these: we can look at the individual partitions and see how much space each mount point has, how much is free and used, et cetera. And one of the things that's more interesting is taking this to the next level. Because, as I mentioned, we're monitoring hundreds and hundreds of metrics, we actually give you the ability to extend that visibility by integrating it with your own tools. 
So using something like Grafana, here's an example of a bespoke dashboard that we've been able to build if I wanna look into specific metrics within my environment. Now I can use this to integrate with my existing operational tools. You can see here we have a detailed view of MHV, that's one of our hypervisors. I can drill down and say, okay, I wanna look at a chart of the different types of CPU usage and that performance over time, and again, over any sort of period. I can then set bespoke alerts, et cetera, on that. As well as things like load average. And something that I think is pretty cool, just a simple example, is looking at the different types of flavors that are in use. As an admin of my cloud, if I have, you know, X hundred VMs, I can quickly see how many of those are large, tiny, et cetera, and when those were launched. So that's just one example of the kind of bespoke dashboards. And Jason, I will let you wrap it up. So, one of the things that you saw in the slide: we keep that data for a year. Some of the folks deploying Ceilometer today for telemetry are able to deploy it in production, but because Ceilometer is writing back to SQL rather than a time-series database, you end up with both a capacity and a load issue. So they either reduce the number of metrics they're collecting or only keep the data for a few days at a time. With a real time-series database, we're able to keep it for a year. Just a quick call-out on that. So, in summary: we went through the Cisco cloud portfolio at a high level, then focused on MetaCloud differentiation, particularly the architecture, the scale, the enhancements, visibility, and our curation process. We've got about five minutes for questions. This is, not John Kelly, that was from yesterday, George Sybe; he goes by John as well. So we do have time for a couple of minutes of Q&A. Any questions? 
You've got a booming voice, okay. So today we've certified Cinder drivers for Pure, Nimble, the generic NFS driver, NetApp, and SolidFire — that's for block — and then SwiftStack for object storage. SolidFire has been with us the longest; we have the most customers deployed on that. But our roadmap is in large part driven by our customers. We'll start talking to someone and they'll say, you know, I love the MetaCloud idea, but I really like Pure, or, we already have an existing investment in Nimble. And our team will go through and look at the driver and its maturity — I mean, look at the code. In the case of Pure and Nimble and SolidFire, we'll deploy a MetaCloud dev environment to them, or they'll deploy gear on site for us, and we'll go through a whole certification process. We do an on-site certification if someone wants to deploy quickly, and then that matures into a true certified driver where we're doing cross-certification on every release and things like that. So all the ones that you see up there are deployed in production today. Long answer, but yeah — NetApp, Nimble, Pure, and SolidFire, good. Yeah, right, there you go. Yes. You seem to have removed Ceilometer, so how does your Heat auto-scaling work? Good question. So the auto-scaling construct is still there, but we're not triggering off of Ceilometer alerts and alarms. I mean, you can auto-scale by sending straight — you know, curling straight — to the API, but the triggered auto-scaling with Ceilometer doesn't exist. What customers do is either have triggers that reach out and look at collectd or Monit, or they're using something outside. I mean, that's a good point. It is a gap, but it was a sacrifice we had to make until Ceilometer is mature enough to scale to the level that we need. So that was the trade-off we had to make for that.
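The "curl straight to the API" pattern described above can be sketched like this: poll a load metric yourself (from collectd, Monit, or anything else) and, when it crosses a threshold, POST to the pre-signed webhook URL that a Heat `OS::Heat::ScalingPolicy` exposes. The threshold value and the way the load figure is obtained here are stand-ins, not a MetaCloud-specific recipe.

```python
# Sketch: externally triggered Heat scale-out without Ceilometer.
# A scaling policy in a Heat template exposes a pre-signed webhook URL;
# POSTing to it fires the scaling action. The 0.8 threshold and the
# hard-coded load value below are illustrative stand-ins for whatever
# your collectd/Monit check reports.
import urllib.request


def should_scale_out(load_per_cpu, threshold=0.8):
    """Decide whether the current load justifies adding an instance."""
    return load_per_cpu > threshold


def trigger(webhook_url):
    """Fire the pre-signed Heat scaling webhook (empty POST body)."""
    req = urllib.request.Request(webhook_url, data=b"", method="POST")
    return urllib.request.urlopen(req)


if __name__ == "__main__":
    load = 0.93  # stand-in for a value read from collectd/Monit
    if should_scale_out(load):
        # trigger("https://heat.example.com/.../pre-signed-webhook")
        print("would signal scale-out")
```

This is essentially what Ceilometer's alarm service would do for you automatically; running the check yourself is the workaround until the telemetry back end scales to the level discussed above.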
Now, we are contributing code back upstream to trunk to write a Ceilometer wrapper for Graphite, so that you can still talk to the Ceilometer API, but on the back end, instead of Ceilometer talking to SQL, it's talking to Graphite. So in the Cisco broadcast domain, all the auto-scaling is done via Heat? In the Cisco broadcast domain? SPVSS. Oh, yeah. Well, yeah. So MetaCloud — I don't know if you're at Cisco or not, but okay, thanks for that. I appreciate the loaded question. So we've got verticals in Cisco, right? We've got Mercury and CIS and VSO and MetaCloud. With the testing on SPVSS, they did use Heat. Yeah. Well, I mean, SPVSS testing with MetaCloud did use Heat, but it wasn't automatically triggered, since we obviously don't have Ceilometer. We have a Fusion project to bring those together — my straight man in the back is making me uncomfortable and making me explain a little more about the portfolio. So part of the Fusion project was moving from Canonical to Red Hat. The other part was taking bespoke versions of OpenStack and bringing them together in a collaborative effort between CIS and Mercury, which is an on-prem version of OpenStack, and MetaCloud. So I can speak to Ceph; he can probably answer the SolidFire question. I know we've got multiple production deployments of SolidFire. And it's some crazy feedback — it's the plate in my head or something. Ceph: about a petabyte raw. Now, when we move to the I-release, we're getting a little more forgiving around that. We have a CVD on how many spindles we put in versus how many SSDs we put in, all that kind of stuff. But around a petabyte raw — around 350 TB — was our standard before we went to the I-release of Ceph. So we're probably going to be a little bit more flexible and forgiving on that. The only deployment of Ceph we do is hyperconverged; we don't do centralized Ceph. So if we have multiple hundreds of nodes, we'll decrease the drive count or drive size to fit within that one petabyte raw.
And again, with the I-release, we're going to be a little bit more forgiving with that. Or in a 30-node cluster with 840 drives. Do you happen to know what our largest SolidFire deployment is? It's large. We're going to have to cut this one short, everybody — we have another session starting in about five minutes. So we do have a drawing for each session today. I think you probably got a little card as you came in the door. If you want to fill that out, we're going to toss them in the bucket. We have a very cool — I should be better prepared for this — Philips Bluetooth speaker that we'd like to give out to somebody here today. So if you want to fill out those cards, send them to the end of the aisle.