Good morning, everybody. I'll give a short presentation about the Dell EMC philosophy behind our OpenStack solution, what we are doing, and all of the flexibility we built into it. So let's start with our philosophy. We have a lot of different components within the portfolio: the servers, the storage, the networking, and the open source pieces such as OpenStack, Ceph, Kubernetes and so on. All of that has to be put together into a cohesive solution so it operates properly for you and you don't have to deal with all of the integration issues. Our idea is that we provide help with that, but we keep everything open so we prevent any lock-in. So we start with open architectures. We continuously integrate the latest versions across the portfolio, all of the goodness which happens on a regular basis. We provide the ability to transition your solution in the field from the current release to the next one whenever it becomes available, on your terms and on your timeline, not when things just happen, and to tailor it to whatever workload and application you would like to run. As you have seen over the last several OpenStack Summits, there is a transition from generic solutions targeted at public and private cloud to more tailored solutions: tailored for NFV workloads for various telcos, or optimized specifically for HPC or for big data. And all of them, in order to really provide the performance you would expect out of an optimized solution, require specialized hardware, be it FPGAs, GPUs, smart NICs and so on. And that hardware has to be integrated into the solution and maintained as part of the solution. So we took on some of that effort to make life a little bit simpler: we created the idea of a profile which is optimized for a specific type of workload.
And then we created management of those profiles, so it automatically sets up and manages your entire vertical stack: from the application, which is yours, but also how that application is deployed and managed, through the OpenStack configuration, to the operating system configuration optimized for it, down to the hardware and the hardware configuration optimized for it. And not just at deployment time, but over the whole lifecycle of the solution. So at a high level: on the bottom we have a variety of different options within the Dell EMC hardware portfolio. The next level up, we partner with, in our view, the best open source distribution vendor, Red Hat, so we use Red Hat OpenStack Platform, KVM, Ceph storage and various other plugins which are optimized for that. The next layer up becomes more optional as you go: real-time Linux, DPDK, SR-IOV and various other extensions, OpenShift on top of OpenStack, and then various other components on top of that, with common management on the side. You can start small and scale up or down to your heart's desire. So what advantages does this provide you? First of all, it's integrated: a disaggregated solution, but integrated into one cohesive entity, which de-risks your operation from day one to day N. It provides long-term lifecycle support, and it gives you a way to do customization. Either you do the customization yourself, since everything is open, or you partner with Dell EMC professional services or Red Hat professional services for consulting, optimization, and customization for your needs. And as I mentioned before, we made life substantially easier by creating workload-optimized configurations, including the total management for them. So this is, at a high level, what the solution consists of. We have a variety of different servers.
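The vertical-stack idea above can be sketched as data: a profile ties together settings at the hardware, OS, and OpenStack layers. This is only an illustrative sketch in Python; the class, field names, and values are hypothetical and do not reflect the actual JetPack profile schema.

```python
# Illustrative sketch of a workload-optimized profile spanning the stack.
# All names and values are hypothetical, not the real JetPack format.
from dataclasses import dataclass, field

@dataclass
class WorkloadProfile:
    name: str
    bios_settings: dict = field(default_factory=dict)   # hardware layer
    kernel_args: dict = field(default_factory=dict)     # operating system layer
    nova_overrides: dict = field(default_factory=dict)  # OpenStack layer

def render_kernel_args(profile: WorkloadProfile) -> str:
    """Flatten the OS-layer settings into a kernel command-line fragment."""
    return " ".join(f"{k}={v}" for k, v in profile.kernel_args.items())

# Example NFV-style profile: CPU isolation and hugepages, SR-IOV in BIOS.
nfv = WorkloadProfile(
    name="nfv",
    bios_settings={"ProcVirtualization": "Enabled", "SriovGlobalEnable": "Enabled"},
    kernel_args={"isolcpus": "2-23", "hugepagesz": "1G", "hugepages": 64},
    nova_overrides={"cpu_dedicated_set": "2-23"},
)

print(render_kernel_args(nfv))  # isolcpus=2-23 hugepagesz=1G hugepages=64
```

The point of keeping all three layers in one object is that a single source of truth can drive deployment and later lifecycle operations consistently.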
We offer a variety of different Dell EMC networking options with a variety of different operating systems as part of that, because even our networking is disaggregated: you can have Cumulus, you can have Big Switch, you can have a variety of different options. In this specific version of our release, which is release 13 of our OpenStack solution with Red Hat, we integrated the latest 25 gig networking and transitioned the entire solution to 25 gig networking. You can still have 10 gig networking, and even one gig networking if there is a need for it, but the base solution moved to 25 gig. We have the latest versions of all of the components; we optionally support, integrated from the start, the SC Series storage from Dell, and all of the latest things including the new BOSS hardware support, where we separate out a built-in RAID-1 pair of M.2 disks for the operating system so it is completely separated from all of the other storage on the nodes, which can then be configured for whatever is specific to the workload. Again, a variety of different networking, and a couple of predefined profiles: the CSP profile for communication service providers, specifically targeted at and optimized for NFV workloads, and another predefined workload profile, which I'll go into in more detail, which is more tailored to web-based applications but obviously can be used for generic things; and then a variety of different optimizations built into the NFV profile specifically for NFV workloads. All of the latest things: even the latest Linux, Red Hat Enterprise Linux 7.6, with the latest Ceph, both of which came out less than a month ago. So we are trying to be very fast in that regard: as soon as new software comes out, as well as all of the firmware for the Dell EMC components, we have it integrated into the solution to make your life easier so you can transition to it if you want to.
And of course we do all of the certification, from certification of OpenStack itself onward, so we run third-party CI for everything we have: for all of the different options for our storage and for all of the different servers. And we have a variety of different drivers which we support for Ironic, from the default IPMI one to the iDRAC one and the Redfish one, and we are trying to transition all of the management of our servers to Redfish as Redfish itself matures. So let me spend a minute or two talking about our automation, and JetPack specifically, and why we went that route. We looked at what was available, and we were part of TripleO right from the start, trying to optimize and simplify deployment and management of the solution from day one to the end. We realized that while it works quite well initially, longer term things need a lot more maturity, and we were part of the community working to improve that, starting with updates from one release to the next, but also developing the fast-forward update solution so you can update on your own terms, whenever you want to, rather than trying to stay on top of the OpenStack releases. But what we realized very quickly is that as soon as you start optimizing the entire stack for an individual workload, the situation becomes much more difficult, and you don't have a simple way of handling that. And that's where we started working on a very thin shim on top of it, which allows us to maintain the management of the entire solution in a consistent way.
It improves the deployment time by a large factor. We deploy our solution in the lab as part of our continuous test and integration: for the full rack it's about two hours for deployment from scratch, and it actually took a little bit longer to do the upgrades than to do the initial deployment, surprisingly enough, but we're talking about hours to go through this process. And we do all of that for specific workloads, so we know that when you deploy a specific predefined solution, that operation just works. We can do the same for other workloads specific to your needs, and we have a professional services team ready to support you on this. As I mentioned, it's a layer on top of TripleO, and it relies on Ironic to do the bare metal provisioning and bare metal lifecycle management; the profile is layered on top of that as a specification of how the configuration is supposed to be done through the entire stack of the solution. So why do you care about that optimization? Well, you do care, because as soon as you start looking at a specific workload, you're in a dilemma: you would like to take advantage of the latest hardware or the latest optimization available everywhere in the stack, but how do you ensure that your hardware configurations are all set in place and not modified? You need to pick the ones you really want, and now it's up to you to integrate them into the solution, make sure they are maintained, and, depending on which kind of workload you run, have the specific settings in place.
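The "set in place and not modified" concern above amounts to drift detection: comparing what a node actually reports against what its profile expects. A minimal sketch, with hypothetical setting names and no claim about how JetPack itself implements this:

```python
# Illustrative configuration-drift check: compare settings reported by a node
# against the profile's expected values. Setting names are hypothetical.
def find_drift(expected: dict, actual: dict) -> dict:
    """Return {setting: (expected, actual)} for every mismatch or missing key."""
    return {key: (want, actual.get(key))
            for key, want in expected.items()
            if actual.get(key) != want}

expected = {"ProcVirtualization": "Enabled", "SriovGlobalEnable": "Enabled"}
reported = {"ProcVirtualization": "Enabled", "SriovGlobalEnable": "Disabled"}

print(find_drift(expected, reported))
# {'SriovGlobalEnable': ('Enabled', 'Disabled')}
```

Running a check like this as part of continuous operation is what turns a one-time deployment setting into a maintained property of the solution.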
And then of course once you choose the hardware, you have to go through the process: okay, what are all of the hardware configuration parameters I need for that, what are the specific things I need in the operating system to make sure I take full advantage of it, and then you have to go to your workloads and make sure they're handled properly, from the OpenStack configuration to the Kubernetes configuration and in some cases your workload configuration. And then the biggest problem is that somebody has to orchestrate all of that as part of your continuous support. This is where the idea of the profile came through: we played with it, developed it for a couple of specific workloads, and moved it forward. So the project is called JetPack, to accelerate your moving forward. It's under the Apache 2 license; here is a GitHub pointer to it. We have a built-in notion of a profile which is optimized for a workload. A couple of profiles are predefined there, so you don't have to deal with them; you can still modify them as you move to different hardware options and so on, but the base ones are there, and of course you can create your own profile for your own needs. We are working tightly with the TripleO community and seeing at what point in time it will be appropriate to start moving some of these separate repos into TripleO, but at this juncture it's not in the game yet. One of the things JetPack does is actually utilize some of the features of Ironic to do configuration of the hardware through Ironic, because TripleO does not allow you that yet. So there are several touch points which need to be handled before it can be merged with the rest of OpenStack.
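To make the "configure hardware through Ironic" point concrete: Ironic's BIOS interface exposes an `apply_configuration` cleaning step that takes a list of name/value settings. The step shape below follows Ironic's documented cleaning API; the surrounding helper and the example setting are illustrative, not JetPack's actual code.

```python
# Sketch: turn a profile's BIOS settings into an Ironic manual-cleaning
# request body. The step format ("bios" interface, "apply_configuration"
# with a "settings" list) is Ironic's documented shape; the helper itself
# is a hypothetical example.
def bios_clean_steps(bios_settings: dict) -> list:
    return [{
        "interface": "bios",
        "step": "apply_configuration",
        "args": {"settings": [{"name": k, "value": v}
                              for k, v in bios_settings.items()]},
    }]

steps = bios_clean_steps({"ProcVirtualization": "Enabled"})
print(steps[0]["args"]["settings"])
# [{'name': 'ProcVirtualization', 'value': 'Enabled'}]
```

A payload like this would then be submitted when moving a node through Ironic's cleaning state, which is exactly the touch point the talk describes as not yet exposed through TripleO.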
Again, this is joint work between mostly Red Hat and Dell at this time, and again, the reason for that is that we take the best of the hardware vendor and the best of the open source distribution and bring them together, and that provides you the best solution you can get. I'll skip this slide, since it's the same information I provided before. So what is a profile? A profile basically provides the information about which features you would like in a specific configuration and how you want to define them. Here is a sample, for example for NFV, of all of the different things we went through, making sure they are configured correctly and managed correctly through the lifecycle of the solution. In this release, which we call JetStream 13.0, we went to the next level of optimization in JetPack: we created a simple, well, not so simple, spreadsheet which predefines all of the networking. You do it ahead of time, because that's how you define what your network looks like, for every network you expose: the public API network, the provisioning networks, the networks for storage and storage clustering for Ceph. You define that as part of your discussion with your data center, or whichever location you're going to be running OpenStack in. You predefine it, and out of that it extracts all of the information and populates all of the configuration files throughout the solution. Again, this is just a simplification so you can ensure that you don't have any errors when you deploy and manage it, and you have consistency across all of the nodes and across all of the parts of the solution.
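The "define networks once, populate everywhere" flow above can be sketched in a few lines: parse a small table (standing in for the predefined spreadsheet) and derive the per-network values that configuration files would consume. The column names and the .1-gateway convention are assumptions for illustration, not the actual JetStream spreadsheet format.

```python
# Illustrative version of the predefined-networking spreadsheet: parse a
# tiny CSV and derive per-network values for configuration files.
# Column names and the gateway convention are hypothetical.
import csv
import io
import ipaddress

sheet = """network,vlan,cidr
provisioning,110,172.16.0.0/24
storage,120,172.17.0.0/24
storage_cluster,130,172.18.0.0/24
"""

networks = {}
for row in csv.DictReader(io.StringIO(sheet)):
    net = ipaddress.ip_network(row["cidr"])
    networks[row["network"]] = {
        "vlan": int(row["vlan"]),
        "gateway": str(net.network_address + 1),  # assume .1 as the gateway
        "netmask": str(net.netmask),
    }

print(networks["storage"])
# {'vlan': 120, 'gateway': '172.17.0.1', 'netmask': '255.255.255.0'}
```

Deriving every file from one table is what gives the consistency across nodes the talk describes: a VLAN or subnet typo can only happen in one place.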
So the key takeaway: when you choose your solution, our recommendation is to choose the right partner, and that partner is both a hardware and a software partner; for various reasons we chose the best of the open source distributions and the best of the hardware vendors to create the solution. Minimize the risks over the lifecycle of the solution, from deployment to continuous operation in the field; again, choose the right partners to do that. Stay open source for the extensibility and flexibility you want, and optimize your solution for your workload, because as you provide your platform to your customers, first and foremost they will expect that it performs just like a hardware-optimized solution targeted specifically for one workload. So I'll stop here. If you have any questions I'll be very happy to answer them; I'll be in the booth right over there if you have more questions. And if you look back there, this is our Red Hat OpenStack solution in the box, in a Mobile Rack, running 10 nodes; everything I told you here is running there. Thank you.