Hello, I'm a storyteller, so I'm going to spend a little time talking through the stories and the reasons why Dell has done what it's done internally, what we're seeing as an industry end to end, and then work through to what we should be seeing going forward. If you look at where we've been, it was interesting because, believe it or not, Uber and Dell did not work on the slides together, but what you find is we really have gone from closed systems. If you look at the mainframe systems and the client-server architectures we've been through, we've really been moving to open ecosystems. It started all the way down at the ASICs, where you saw customized proprietary ASICs, and you still see some today for niche applications or specifically for cost reasons, but you're seeing merchant silicon really emerge, and you're seeing the pace of those silicon advances increase. It used to be you could count on a new generation about every three years; now you're seeing something come out about every year, and you're seeing the interleaving of different competitive vendors in that space. So the timing and the pace of development in open networking has certainly changed, which presents a challenge if you're building hardware systems, because you've got to find a way to keep up with the investment that's occurring. And then you had open standards around hardware. You saw some of what was released through OCP, where there are standards for how you build hardware. Some of that is certainly taken into account, so you get standards for how you're going to build a hardware platform top to bottom, and how you're going to be positioned to accelerate that transition as you see the chipsets evolve. And then you get into the networking space.
One of the challenges we've always had in the networking space is this: if you looked at the innovation that chipset vendors were bringing, whether from one company or across multiple companies, the challenge was embedding that innovation quickly and bringing it to market. To do that, a lot of times somebody would say, hey, there's a new chipset, or we want to cost-down this platform, and you had to hop between chipset families. And what would happen is you would go back and say, hey, that's about 30 engineers and about six months of work. If you really had innovation, it might be longer, maybe a year. But by the time you did that, it was often too late in the market. So you were always having to judge and figure out how to move forward. You no longer really have the luxury of doing that. You have to have interfaces that are open and standardized. But the key is that while you're trying to do an open, standardized interface, you can't eliminate differentiation. You must enable the chipsets to differentiate through the networking hardware and open up new ideas and new concepts. So you have to manage that standard in such a way that it has extensibility, but you may also have to have a path that allows differentiation for the chipsets, and then allows that differentiation to evolve into a standard within the industry. If you look, for example, at SAI, the Switch Abstraction Interface, and what was done with it through OCP, that was a great initial effort. I think Facebook, Microsoft, and Dell were all involved in that effort. You've seen it continue to evolve and broker some of the relationships between chipset vendors, and it will have to continue to move forward. But the idea is that you want to open up those systems, and you want to open them up with regard to the hardware. Dell took a good initial step about three years ago.
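The value of a standardized chipset interface can be sketched in a few lines. This is only an illustration of the idea, not the actual SAI C API: all class and method names here are hypothetical. The network OS programs one abstract interface, each silicon vendor supplies an adapter behind it, and hopping between chipset families stops being a multi-month rewrite.

```python
from abc import ABC, abstractmethod

class SwitchAsic(ABC):
    """Abstract chipset interface the NOS programs against (illustrative)."""
    @abstractmethod
    def create_route(self, prefix: str, next_hop: str) -> None: ...

class VendorAAsic(SwitchAsic):
    """One vendor's adapter; a different chip only needs a new adapter."""
    def __init__(self) -> None:
        self.routes: dict[str, str] = {}

    def create_route(self, prefix: str, next_hop: str) -> None:
        # A real adapter would call into the vendor SDK here.
        self.routes[prefix] = next_hop

def program_routes(asic: SwitchAsic, routes: dict[str, str]) -> None:
    # NOS-side code is written once against the abstract interface,
    # so swapping silicon doesn't touch this layer.
    for prefix, nh in routes.items():
        asic.create_route(prefix, nh)

asic = VendorAAsic()
program_routes(asic, {"10.0.0.0/24": "192.0.2.1"})
print(asic.routes)  # {'10.0.0.0/24': '192.0.2.1'}
```

The differentiation the talk mentions still fits this shape: a vendor adapter can expose extra capabilities beyond the base interface, and the ones that prove out can later be folded into the standard.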
We separated the hardware and the software. Networking systems were typically closed; what we did was allow other operating systems to run on that hardware. And I think you've started seeing announcements that follow suit. I think there was an announcement from Cisco as well that is starting to look at options for separating hardware and software, and I think you're going to continue to see that move forward. So you now have open systems on the hardware side, you have the ability to move your innovation forward quickly by standardizing that interface, and then you have the open software. The next thing that happens is you start to look at the controller side. There are some systems with specific needs where you want to introduce a controller, and those controllers then have to be integrated into those open systems. You're seeing some of the announcements come out about various controllers, and you will see those integrated, some in a standard way, some in a non-standard way, but you would hope that would certainly evolve. You will have the option for those standard controllers. And then it becomes about the orchestration and the tools that sit on top. The reality is you don't want your networking system dictating the tools in your solution. If you have unique, customized versions of Linux, or customized tool sets that lock you into managing the network that way, you lose the ability to manage your systems end to end, so that your server, storage, and networking systems can be combined into an end-to-end solution that lets you, as a company, bring differentiated services and product offerings to your customers. Otherwise it presents an additional expense, and you're not focused on the things that can bring you the highest value.
So part of the objective in all of this, in an open networking industry, is to simplify this top to bottom. And you're seeing that in terms of the simplification of protocols and the simplification of platforms as they come to market. What you have to pay attention to in this whole thing is why people open source. There are reasons people open source. Some of it is speed, time to market; they're looking to the community to help drive that speed and some of that agility. There are reasons to open source to try to establish your product or your offering as a standard, or to control or slow down pieces of the industry. There are reasons to open source because the integration effort is complex. If you look at all the VNFs, integrating all those VNFs and getting that testing done is hard; if you're trying to keep that close to your vest, it's a complex process. So the ability to open it up to the community, open up those testing standards, and open up the ability for the community as a whole to build in the natural integration points, so that those VNFs can come through and you can get the service chaining functioning correctly, gives you an opportunity. Security is going to be a big one. If you have a modified Linux version being built by your vendor, then in all likelihood, when there's a security issue, you're going back to that vendor and asking them to give you a build that secures your infrastructure and fixes that gap. If you're running an unmodified kernel, the expectation is you can take a fix released in the industry, apply it, and move on. Unmodified kernels allow you to build containers the same way you would for your server systems, and you can treat your tools the same way you do on the servers. So you have to look at security and how that's going to work. Sometimes it comes down to the DNA of companies: is it in their DNA to do work in an open environment?
The economics, and just frankly, how open are you really? Is it an open set of APIs that you can then program to? Is it openness as in "here, take the code," happy to work with you, and then is there a support model behind that? It doesn't do a company a lot of good if you open source all the code but, as a company looking to pull those open source opportunities in, they can't use it: do you have the ability to support it, to integrate it, to build on top of it? Are you bound by resource limitations? Then you're going to have to fill the gap on the back side of this. So, the journey forward. Some of this is Dell's perspective, but I want to try to stick to and talk a lot about OpenSwitch as well. We certainly see merchant silicon continuing to emerge; the idea is to continue to bring that forward. This is some of the silicon that's been invested in, and I assure you there are more silicon vendors out there. I know Barefoot is out there, Innovium is out there. Cavium, and some of what they've now done with Mellanox, is another option, as is Marvell. So I think you're going to continue to see opportunities for innovation in the silicon. For Dell, we built on top of that with the S-Series and Z-Series, specifically focused on the data center market and end-to-end solutions. And then we've given the option on those hardware platforms, as we go up the stack, to offer alternative software versions. So you can get something from Big Switch, one of our partners; Cumulus is a partner of ours, as are IP Infusion and Pluribus. At the same time, Dell's thought process was to build our own OS from scratch and do it the right way. And then we stepped into OpenSwitch. What ended up happening with OpenSwitch is there was an opportunity to really move the industry forward. We had a conversation with Michael Dell.
And he agreed to let us open source the base of the operating system. The reason we didn't initially open the Layer 2/Layer 3 stacks is, one, we felt there were companies better positioned to provide that innovation at Layer 2/Layer 3. We felt that by treating Layer 2/Layer 3 as an application and open sourcing the platform side, you could open up to a broader industry, allow innovation from other players in the hardware industry to also drive those operating systems forward, and that's what we wanted to get done. The OpenSwitch effort you've seen from Dell and the other companies involved in that space has been focused on building the full stack; Dell has primarily been dealing with the lower half of it. Sitting on top of that, we have the integration with NSX and OpenDaylight, and then a series of automation tool sets coming on top. These are intended for mission-critical deployments. I will tell you that OPX, as a base, is running in a tier-one carrier. It's running with Quagga on top, with some modifications they have done, and it's supported on the back end by Dell. So there is hardened code out there right now, available in the open source community. You can pick it up, drop it on a hardware platform, deal with the hardware drivers, and move forward; you're just grabbing something like Quagga or FRR and integrating the tool sets on top. It's expandable: the point of that expandability is not to lock it in, and to make sure the use cases can be abstracted. If you look at some of the open source that's out there right now, one of the things you're finding is that it can really be intended for specific use cases. The idea is to continue to drive it forward in broader environments. And then composable stacks on top. I'm just going to flip through two last quick charts.
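The composability idea can be made concrete with a small sketch. The service names below are illustrative stand-ins, not actual OPX package names: the point is that an operator declares only the protocol services needed, and anything else simply never lands on the switch.

```python
# Hypothetical catalog of installable protocol services on an open NOS.
AVAILABLE_SERVICES = {"bgp", "ospf", "isis", "vrrp", "mlag", "lldp"}

def compose_stack(required):
    """Return the minimal, validated set of services to deploy.

    Anything not requested is simply never installed, so it consumes
    no space, needs no configuration, and exposes no attack surface.
    """
    unknown = set(required) - AVAILABLE_SERVICES
    if unknown:
        raise ValueError(f"unknown services: {sorted(unknown)}")
    return sorted(set(required))

# A leaf switch that only needs BGP and LLDP pulls exactly those two.
print(compose_stack(["bgp", "lldp"]))  # ['bgp', 'lldp']
```

In practice the same selection would be driven by the package manager of the unmodified Debian base the talk describes, rather than a Python function.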
But the idea is to abstract away the services on top and enable the innovation, while keeping it consistent between what's in the open source community and what Dell has. So now, with that hardened proposition of the hardened lower layers, what we announced today is that you have the opportunity to go out and get commercially hardened, deployed Layer 2/Layer 3 software from Metaswitch, a company that's been around for about 35 years, software that's been hardened by the market, and put it on top of that open source base. And we offer the ability to support that end to end. It's the first case of a composable infrastructure, meaning you can go out there, look at specific sets of services, and pick and choose the service you want on that stack. You can get that service and ignore the rest of the protocols, so they're not taking up space in your systems, you're not dealing with any security risks associated with them, and you're not dealing with the overhead of maintaining and configuring the other protocols. You can focus down on what you need. It's the first opportunity to really have a hardened stack, composable on an infrastructure, that can be supported by a company. And then finally, on top of that, you have the automation framework. The goal is to keep that as open as possible. These integrations are occurring, and they're not driven by Dell; a number of them have simply occurred in the open market, building on top of those open interfaces. The goal is to enable that. You have the Python programming, you have the C programmability, the REST interfaces. The idea is to open up the innovation and allow that innovation to come into the broader market. So that's what Dell's doing and where we see the market going. There are huge opportunities going forward, and we look forward to working with the broader community to continue to move it forward.
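The automation point above is the kind of thing a few lines of Python make tangible. This sketch builds (but does not send) a REST call toggling an interface's admin state; the hostname, URL path, and JSON field names are hypothetical, loosely in the style of RESTCONF-like management APIs, not the API of any specific NOS.

```python
import json
import urllib.request

def set_interface_admin_state(switch: str, interface: str, up: bool):
    """Build a REST request toggling an interface's admin state.

    Returned unsent so the caller can add auth headers, batch calls,
    or hand it to whatever orchestration layer sits on top.
    """
    body = json.dumps({"admin-status": "up" if up else "down"}).encode()
    return urllib.request.Request(
        # Hypothetical endpoint shape; a real NOS documents its own paths.
        url=f"https://{switch}/restconf/data/interfaces/{interface}",
        data=body,
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )

req = set_interface_admin_state("leaf1.example.net", "eth0", up=True)
print(req.get_method(), req.full_url)
# PATCH https://leaf1.example.net/restconf/data/interfaces/eth0
```

Because the interface is plain HTTPS and JSON, the same server-side automation tooling a team already runs can drive the network, which is exactly the "don't let the network dictate your tools" point.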
Thank you.