All right, I appreciate y'all coming out today and spending some time with us. My name's Ed Bailey. I'm at Dell, part of our Extreme Scale Infrastructure group, our hyperscale group that has traditionally been known as DCS, or Data Center Solutions. It's all the same team; we're part of the architecture team within that group. Today we're going to touch a little bit on some of the trends we're seeing coming out of what we call our hyperscale space, some of the things we're looking at as we see other trends in the market, and then touch, at a glancing blow, on some of the infrastructure elements. Then Ali is going to walk us through the elements of systems management, resource management, and composability where we're really focused on advancing our innovation and our technologies. We have a demo down in the Marketplace where you can see this live, so I certainly want to encourage you to come see us down there and talk to us after this so you can get a better understanding of what we're trying to do. And we absolutely want your feedback as we work in this space. So, some of the shifting trends. Some of these are buzzwords you're all very familiar with. Pretty much everyone in our space, even in some of the emerging spaces, is in some form of cloud. We're seeing customers starting to do proofs of concept, trying to understand how they're going to transition workloads and transition their environments. Then there are the software-defined elements; it's software-defined pretty much everything at this point. How do we enable that? How do we facilitate it and make it easy, flexible, agile, and useful? On the open side of things, we are very engaged in open management. You'll hear words like Redfish; I'm sure you're all familiar with it.
We'd love to show you that demo in the Marketplace as well; we have it running on the product we'll touch on today. On the open side, it's really about understanding, from a customer's perspective, which elements of open they really want to grasp and make a key tenet of their deployment and their infrastructure, and which ones out there maybe don't add as much value. One thing we know for certain is that the customers we're talking to are really shifting away from the idea of buying monolithic systems: "I'm going to buy five more servers, five more of this, ten more of that." We're really talking about resources now, elements of compute and elements of storage. How do I get those into my environment and get them deployed and provisioned as quickly as possible? In the space we were born out of and engage in day to day, the hyperscale space, that's how things are bought, and that's where we want to operate. We want to ship fully integrated, validated racks, ready to roll into place: hook up the management, hook up the power, get it deployed, get it provisioned, put it to work. That's our focus when we're looking at these types of elements, and the platform we've been doing this with in our space for a while is the DSS 9000. I'll touch on what we've enabled with it, but first, a little groundwork. What we're focused on from a hardware perspective are infrastructures that give us the flexibility and agility to get the right SKUs, the right configs, and the right resources for changing workloads and changing customer requirements, but to do it with consistency in the infrastructure, in the management, and in the elements that touch our provisioning tools, our deployment tools, and our data centers, so that we're not reinventing the wheel every time there's a shift in demand, a shift in workload, or a shift in use cases.
Again, we're focused on the RESTful and secure pieces of management, on open management, where we're looking not only at technologies to implement in the future, but at what matters today. How do we generate that flexibility and agility today? This is something that will be coming in the later part of this year. We have a half rack of this in the demo in the Marketplace, so please come see us. So with that, Ali is going to run us through the solution aspect of this, where we're talking in terms of systems and resource management.

I'm Ali Yosef. I'm in the Extreme Scale Infrastructure group at Dell. Ed touched upon the hardware, so what I want to touch upon is the full solution, which goes beyond the hardware. There are a lot of pieces out there. When we started building this demo, the goal was to deploy an OpenStack workload on our infrastructure in an easy and simple fashion. We started with the DSS 9000 as our choice of hardware, for a reason I'll touch on a little later, but there is also this work around Intel Rack Scale Architecture. Rack Scale Architecture is about managing a pool of resources, whether that's storage, compute, or networking, and, in an almost magical fashion, taking that pool of resources and creating a workload out of it. And there is the release of Pod Manager, which is an integral part of the Rack Scale Architecture; we'll touch on that a little later as well. I mentioned the DSS 9000 and why we use it. One critical piece of the DSS 9000 is the Rack Manager, a controller in the rack that knows everything about the rack: its resources, all the chassis, the servers, and so on. Pod Manager can talk to it, and it does so through the Redfish API, a very easy, simple-to-use HTTP-based API. That's the new Redfish standard defined by the DMTF standards body. Now, how do we bring all of this together?
I mean, we want to use this infrastructure and interact with it. We partnered with AMI to build the front-end tools and the CLI, and to extend dashboards like Horizon. So with that, I'll go to the next slide. I mentioned Pod Manager, right? Pod Manager is the main piece of the Rack Scale Architecture. The best way to think about it is as a set of Linux services you can install on a physical or virtual machine. As I mentioned, it manages these pools of resources, and it exposes a front-end API for the outside world to consume and to request things from Pod Manager. On the back end, it talks to those pools of resources: network, storage, compute. In the case of our Rack Manager, for example, it talks to the Rack Manager to get those resources through the Redfish API. So that, at a high level, is what Pod Manager is.

Hey, a question: we've talked about Redfish, right? What's the big deal about Redfish? If I can summarize it in a few points: it's a modern API, the new API to manage data center hardware. It's simple to use; it's HTTP, so you can send curl commands and the like. It's secure. And it can apply to a single node or scale to thousands of nodes, so it fits a single machine and scales to a data center with thousands of nodes. One last thing on Pod Manager: you see these PSMEs. Every pool of resources needs to implement a PSME that conforms to the Pod Manager API and architecture so that Pod Manager can talk to it; in our case, that interface is Redfish to our rack-level manager. So what I want to touch on now is: we had these building blocks. How did we use them? What did we get out of them? One thing that was obvious was the discovery piece. I can discover everything in my system out of band: all my servers, the chassis, everything in it.
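As an aside, the "simple, HTTP-like" point is easy to make concrete. The sketch below walks a Redfish service root the way a client would after `GET /redfish/v1/`; the URLs and property names follow the DMTF Redfish spec, but the payload is a canned, trimmed sample rather than a response from real hardware:

```python
import json

# Trimmed sample of what GET /redfish/v1/ returns from a Redfish
# service. Property names follow the DMTF Redfish spec; the values
# are canned for illustration, not read from a live BMC.
service_root = json.loads("""
{
  "@odata.id": "/redfish/v1/",
  "RedfishVersion": "1.0.0",
  "Systems":  {"@odata.id": "/redfish/v1/Systems"},
  "Chassis":  {"@odata.id": "/redfish/v1/Chassis"},
  "Managers": {"@odata.id": "/redfish/v1/Managers"}
}
""")

def collection_url(root: dict, name: str) -> str:
    """Follow an @odata.id hyperlink out of a Redfish resource."""
    return root[name]["@odata.id"]

# A client discovers everything by walking these links: the Systems
# collection for compute nodes, Chassis for enclosures, and so on.
print(collection_url(service_root, "Systems"))
```

Against a live service the same walk is just a series of authenticated HTTP GETs, which is why plain curl commands work equally well.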
I didn't have to have the systems in a certain state; they were just added to my rack, and I could discover them and know exactly what I had in the rack. That was huge, right? And beyond discovery, I can do more: I can take a whole inventory of the rack. I can get information about each node, like the CPUs, memory, and hard drives. That is huge. Out of band, I don't need to boot into an OS, I don't need to PXE boot. That's one of the challenges: sometimes when we're discovering systems, we rely on booting into an OS, a bootstrap image or whatever it is, to discover them. And for whatever reason, sometimes it doesn't boot; it's not set up to PXE, or the boot settings aren't configured correctly, or whatever. Being able to discover and control that part out of band is huge. As I mentioned, the inventory is huge as well: being able to get the CPU, memory, and hard drive information, have it aggregated, and be able to work with it. And that's where the Rack Scale Architecture comes in: this ability to compose a node, to build a server out of that pool of resources, to take that aggregate of CPU, memory, and hard drives, put it together, and create a server for a workload. So now I have my server, and I need to configure it. That's a big thing: being able to set BIOS settings out of band, to configure the RAID, or just to power-control the nodes. That's the Rack Manager, with the ability to control everything out of band. And finally, once I'm done, I can deploy it. I can provision my nodes, and through the single management interface I can tell my nodes to boot to the provisioning server, whether that's Fuel or Ironic or whatever it is.

Where else do we see configuration going?
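The aggregation step just described, pulling CPU, memory, and drive figures per node and rolling them up into one view of the pool, is a small piece of code once the Redfish inventory is in hand. A sketch with canned data (`ProcessorSummary` and `MemorySummary` are real Redfish ComputerSystem properties; the numbers are invented):

```python
# Canned inventory for two discovered nodes, shaped the way the
# Redfish ComputerSystem resource reports it. The property names
# come from the spec; the values are made up for this sketch.
nodes = [
    {"Id": "node1",
     "ProcessorSummary": {"Count": 2},
     "MemorySummary": {"TotalSystemMemoryGiB": 128}},
    {"Id": "node2",
     "ProcessorSummary": {"Count": 1},
     "MemorySummary": {"TotalSystemMemoryGiB": 64}},
]

def pool_totals(systems: list) -> dict:
    """Roll per-node inventory up into one view of the resource pool."""
    return {
        "nodes": len(systems),
        "cpus": sum(s["ProcessorSummary"]["Count"] for s in systems),
        "memory_gib": sum(s["MemorySummary"]["TotalSystemMemoryGiB"]
                          for s in systems),
    }

print(pool_totals(nodes))  # {'nodes': 2, 'cpus': 3, 'memory_gib': 192}
```

This is essentially what Pod Manager maintains for you across the rack, so a compose request can be matched against what is actually free in the pool.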
I think one other thing that could be interesting for configuration: what if there were a way to do a BIOS update or a firmware update through that single IP, the Rack Manager we have in there? I think that would be huge. Another piece: what if we could point the nodes to some ISO image and tell them to boot to it and run diagnostics? So there are things we can add to this. But even today, just setting BIOS settings, configuring the RAID, and power-controlling the nodes is huge. I was in one of the earlier keynotes, and there are going to be something like 400 million servers in the next few years. Managing that is obviously a huge challenge, so anything that can help that ecosystem is great. What I'm showing you here is a snapshot of an extension to the Horizon dashboard, work we partnered with AMI on: this MegaRAC plugin to Horizon. I don't know if you can see it in the back, but the top table shows nodes that have been composed, built, and made part of my OpenStack cluster, and at the bottom you see the list of resources, the pool of resources available for me to build from and pick from; call it a bag of goodies. Another example is an OpenStack integration project, Fuel. By the way, I love Fuel. I use Fuel, and it's a simple tool to use. Rack Scale Architecture can play a role here too, where it discovers the systems. I understand Fuel has its own discovery process, but if we can discover the systems reliably and hand them to Fuel, basically everything that's discovered shows up in a Rack Scale Architecture tab, and when I'm ready to add a system to my OpenStack cluster, I PXE boot it to Fuel and have Fuel take over from that point on. I talked about composability, right?
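The boot-control pieces mentioned above, pointing a node at PXE, at an ISO, or just power-cycling it, each come down to one small Redfish request body. A sketch (the `BootSourceOverride*` properties and `ResetType` values are from the Redfish ComputerSystem schema; nothing here talks to real hardware):

```python
def boot_override_patch(target: str = "Pxe") -> dict:
    """PATCH body for a ComputerSystem: boot from `target` on the
    next restart only. "Pxe" points the node at the provisioning
    server; "Cd" would point it at a mounted ISO for diagnostics."""
    return {"Boot": {"BootSourceOverrideEnabled": "Once",
                     "BootSourceOverrideTarget": target}}

def reset_post() -> dict:
    """POST body for the ComputerSystem.Reset action."""
    return {"ResetType": "ForceRestart"}

# In practice: PATCH .../Systems/<id> with the first body, then POST
# the second to .../Systems/<id>/Actions/ComputerSystem.Reset
print(boot_override_patch("Pxe"))
```

Because the override is "Once", the node falls back to its normal boot order after provisioning, which is exactly the hand-off to Fuel or Ironic described above.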
That's the big part of Intel Rack Scale Architecture: the ability to compose a node, to build a server for my workload. Composing a node can be as simple as a single CLI command that says, "Give me a node with the defaults, just give me a node out of the pool of resources." Or it can be advanced, based on my workload's requirements: I can say I need it with this many cores, this many hard drives, and so on. There is a MegaRAC composer from AMI that does this advanced filtering, and I'll be happy to show it to you at our booth in the demo. So when it's all said and done, I went through a few steps, configured a node, sent it to my provisioning tool, and boom, my node was added to OpenStack. That was huge to me: just using these simple tools to quickly deploy, whether it was a Nova compute node or a Cinder backend. That's what the live demo is about.

How does the DSS 9000, or just its systems-management hardware, aid what you've been trying to do here? The big thing in the DSS 9000, as I was mentioning, is building a workload that fits OpenStack. The DSS 9000 has this mix of compute and storage with a uniform way of managing it, a single point of management, a single IP, and being able to pick from that pool was huge: having configuration SKUs that fit my OpenStack, and just sending a few commands, picking from the pool, and extending my OpenStack cluster.

So, you've been asking me all the questions; let me ask you one. Obviously I love the rack-management part, and that is huge, but the DSS 9000 has more than just rack-level management. Could you elaborate on that a bit?

Well, what we're talking about, again, is logical resources. When we're talking to customers, in a lot of our environments, frankly, we have customers that have never even seen the hardware, right?
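Stepping back to the compose call described above: the "default everything" request and the advanced, filtered request differ only in how much goes into the allocation body. A sketch of building such a body (the field names are illustrative of the Rack Scale Pod Manager compose API, not copied from a particular release):

```python
import json

def compose_request(cores: int = None, memory_gib: int = None,
                    drive_gib: int = None) -> dict:
    """Build a node-allocation body. An empty body asks for a node of
    'default everything'; each argument adds a filter on the pool.
    Field names here are illustrative, not a spec reference."""
    body = {}
    if cores is not None:
        body["Processors"] = [{"TotalCores": cores}]
    if memory_gib is not None:
        body["Memory"] = [{"CapacityMiB": memory_gib * 1024}]
    if drive_gib is not None:
        body["LocalDrives"] = [{"CapacityGiB": drive_gib}]
    return body

# A default node versus one sized for a storage-heavy backend.
print(json.dumps(compose_request()))
print(json.dumps(compose_request(cores=8, memory_gib=64, drive_gib=960)))
```

Pod Manager matches the requested filters against the discovered pool and hands back a composed node, which is then configured and PXE-booted to the provisioning tool as described above.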
We're talking logical resources: compute, storage, other elements. So again, the things we're focused on, certainly from an open perspective, are the elements the customer engages with, and that starts with the management pieces Ali talked about. But we need the tangible goods, the elements we can operationalize from a deployment perspective, and that give us flexibility as workloads and needs shift. What you're seeing here is a high-level overview of an infrastructure that gives us flexibility with regard to the sled elements, the physical elements of this thing, but a common experience with regard to the power and management pieces. This is something we've had in certain spaces for a little while now, and we're looking to bring it to the broader market along with a lot of the elements Ali touched on.

Another question, Ed. In one of your earlier slides, you mentioned this trend from 1U and 2U designs to rack-scale design. Is that a trend you see continuing?

Yeah, we think so. When we're talking in terms of how we buy resources and how we get them deployed, we need to make that easier and faster. We need to capitalize on the dollars we're putting into the infrastructure as quickly as possible. So we absolutely see that continuing, and we expect it to keep growing pretty aggressively. So where do we see this going? It's just going to continue from a capability perspective: more integration into OpenStack environments, more capability from a resource-management perspective in how we discover these things. Intel has done a great job pushing us with Rack Scale Architecture, and there is a longer-term vision for all this.
The ability to take advantage of a lot of those elements today, frankly, from an asset-management and composability perspective, is real, and it's out there, and we're continuing to build on it. There's also the ability to use this for bare-metal provisioning. Our focus is to do this out of band; we want to enable it with as few touches as possible in the provisioning process. So the out-of-band element, the ability to leverage Redfish and open systems management, and having the hardware facilitate that, is absolutely critical to our vision here. And, at the risk of being redundant, we want to carry that forward into future technologies: as we look at disaggregation of other elements of the hardware, we're trying to put the foundation in place so we can take advantage of it as quickly as possible. So, in summary, why is this a focus? We were at the expo last night for the little happy hour, and frankly, I think we surprised a few people with what we were showing and what we're trying to do. This is a priority for us. Continuing to enable solutions, be it hardware, systems, or resource management, in OpenStack environments is absolutely a priority, as is our commitment with Intel on Rack Scale Architecture and continuing to evolve it. Version 1.2 is something we're working through now, and we're excited about more announcements throughout the course of this year; really, this is just the beginning on that front. And then, with all this, there's the broader Dell from a deployment-services standpoint. The ability to do systems and solutions like this while leveraging the broader Dell organization, if you will, is something we're excited about too. It gives us significant capability when we're talking about having multiple data centers and gear on the edge; being able to push things closer to regional users is a focus for us as well.
And so with that, we can do a little Q&A here. We really would like to invite you down to see us at the expo; that's where Ali can walk you through the demo live. As I mentioned before, we have Redfish running live there, and we have the hardware elements live there as well. I also want to mention that Mrittika Ganguli from Intel, one of the architects of the Rack Scale Architecture, will be at our booth to answer any questions about it. Two to three p.m. today? Two to three p.m. today, she'll be there. All right, we really appreciate your time, and we appreciate you all showing up to see us today. Thank you.