variety of options you have with Dell Technologies solutions for OpenStack. So our approach is a little bit different from others. We decided, especially for telco-oriented solutions, that we don't want any legacy pieces, so right from the start we built everything on an open architecture, using modular, open-source-based platforms and scaling on demand.

When we looked at how people are going to use this for telco clouds, we decided to concentrate on four pillars. The first is what we need to do to modernize the telecom network as we go from 4G to 5G; there is a variety of things there, but mostly it is about moving to NFV and multi-cloud at the edge. The second is the transformation of IT on top of that: cloud-native applications, replication, protection, automation and various other things. The third is how a telco provides services for its customers, meaning the services and the rich environments they build on top, so we concentrate on creating the solutions that let telcos and hosters provide them. And the final piece, where we don't play much of a role but which is required for any of this work going forward, is that you have to change the knowledge of your people, because a new skill set is needed to operate in the new environment, with the new innovation and new operating models, as you go through the transition.

Based on that, we'll talk about several of the platforms we provide for customers. First, the Linux-based platforms. We have several options, but I'll talk about the Red Hat-based one, where we have a joint solution with Red Hat that combines the flexibility of Red Hat OpenStack Platform with the best choices of hardware components to build the whole solution in a flexible and open way. The goal is to keep refreshing it so customers always have the latest options while staying on an infrastructure with long-term support that keeps moving forward. Right now we are still on the Queens release of Red Hat OpenStack Platform, version 13, which is the latest long-term-supported one, and we moved all of the components forward to the latest version of Ceph, which I'll talk about in a minute, plus the latest generation of server, storage and networking gear.

I'll actually skip a couple of those. The version we released most recently had a couple of main changes. Obviously we picked up the latest version of Red Hat OpenStack Platform, and since that's not a major release, it mostly concentrates on adding a few pieces of functionality that were not in place yet, like multi-attach for storage. We moved the solution to the latest generation of Dell servers, based on Intel Cascade Lake. We also decided to demonstrate OVS offload, not SR-IOV but OVS offload on the NICs themselves, fully integrated with the rest of OpenStack. So we moved to the Mellanox ConnectX-5 cards, moved to 100 Gb networking, and have full integration with the hardware-based NIC offload, which makes a significant difference in performance. It's still a little bit immature, so we keep it as a tech preview.
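To give a sense of what the offloaded path looks like from the tenant side, here is a minimal sketch using the OpenStack SDK, not our validated deployment steps: the cloud entry, network, image and flavor names are placeholders, and the binding-profile hint follows the upstream OVS hardware offload documentation rather than anything specific to this solution.

```python
# Sketch: request a port that can be bound to an OVS hardware-offload
# (switchdev) capable VF instead of a classic SR-IOV port, then boot a
# server on it. All names and the "overcloud" cloud entry are placeholders.
import openstack

conn = openstack.connect(cloud="overcloud")  # assumes a clouds.yaml entry

network = conn.network.find_network("private-net")

# vnic_type=direct plus the switchdev capability hint is how the upstream
# docs distinguish an offload-capable port from plain SR-IOV.
port = conn.network.create_port(
    network_id=network.id,
    binding_vnic_type="direct",
    binding_profile={"capabilities": ["switchdev"]},
    name="offloaded-port",
)

server = conn.compute.create_server(
    name="offload-test-vm",
    image_id=conn.image.find_image("rhel8").id,
    flavor_id=conn.compute.find_flavor("m1.large").id,
    networks=[{"port": port.id}],
)
conn.compute.wait_for_server(server)
```

Note that setting binding attributes on a port is normally an admin-level operation, which is one of the reasons this path is easier to validate in a controlled reference configuration.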
Coming back to why it is still a tech preview: for example, we still see some strange behavior, where for the offload to work properly an additional DNS service has to go through the Mellanox cards, so we have to have Mellanox cards on the controller nodes, not just on the compute nodes. We go through all of that work to make sure the solution operates before we hand it to customers. We moved to Red Hat Satellite 6.5, basically to enable a simpler, repeatable process, so you can deploy exactly what has been validated by us instead of pulling in some other assortment of bits. We moved to different server components for the Ceph disks so we can bypass RAID altogether and get better performance there, and we moved to BlueStore, the latest-generation Ceph backend, which bypasses the filesystem and writes directly to the block devices.

As I mentioned, we have the freedom to build a variety of different types of solutions, so we offer the same solution as HCI, where Ceph and compute run on the same nodes; we actually make slightly different hardware choices there for better optimization, and that's part of that solution. We also released the first version of the edge solution following the same model. It's a little different from the classical one: the control plane stays in the core, and only compute nodes go out to the satellite sites. Again, you can't scale it too far, because the distances will not allow it, and each distributed site has to be its own availability zone, so there are restrictions on how many nodes you can have in an availability zone and how many compute nodes you can have in a distributed site. The main point is that we use L3 routing for the multiple different networks on the same physical infrastructure, and we use the same Ironic to configure and manage the underlying hardware for the compute nodes. A little bit on the networking: as I said, all of the different networks, the provisioning network, the management network and the tenant networks, go through L3. Within the same site they communicate directly, but when you go between sites, or between a site and the core, you have to go through the core network.

As I mentioned, all of our solutions are based on Ironic to manage the underlying hardware. This is a very active project for us and we have been running it for quite some time. On the right-hand side is some of the gear we continuously run as our third-party CI for the upstream Ironic project. Not surprisingly, we run it against a variety of different hardware at the same time, and, more interestingly for us, against multiple different drivers at the same time: every patch is tested with the iDRAC driver, the IPMI driver and the Redfish driver, all three running simultaneously. We also test them in different boot modes, both legacy BIOS and the newer UEFI, which lets us catch problems earlier and keeps anything broken from getting through the system. We are doing a lot of work here right now; we actually decided early on that over the long term we would like to transition to the Redfish APIs, and in the Train release we landed what we call the hybrid driver, which routes whatever functionality Redfish can handle through Redfish.
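Purely as an illustration, and not the exact enrollment our tooling automates, registering a node with the redfish hardware type through the OpenStack SDK looks roughly like this; the BMC address, credentials, system ID and MAC address are placeholders, and the driver_info keys should be checked against the Ironic documentation for your release.

```python
# Sketch: enroll a bare metal node with the redfish hardware type and
# advertise UEFI boot as a capability. All values below are placeholders.
import openstack

conn = openstack.connect(cloud="undercloud")  # assumes a clouds.yaml entry

node = conn.baremetal.create_node(
    name="edge-compute-01",
    driver="redfish",
    driver_info={
        "redfish_address": "https://192.0.2.10",
        "redfish_system_id": "/redfish/v1/Systems/System.Embedded.1",
        "redfish_username": "root",
        "redfish_password": "calvin",
        "redfish_verify_ca": False,
    },
    # Boot mode can be exposed as a capability so scheduling and deploy
    # steps pick UEFI instead of legacy BIOS.
    properties={"capabilities": "boot_mode:uefi"},
)

# Register the provisioning NIC and move the node into the manageable state.
conn.baremetal.create_port(node_id=node.id, address="aa:bb:cc:dd:ee:01")
conn.baremetal.set_node_provision_state(node, "manage")
```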
And of course we test all of the latest generations of hardware and continuously do validation and chase down bugs. One other thing to mention is that we are the first vendor with the Red Hat bare metal certification, which basically ensures that Red Hat can provide full support for it. We'll have a lot more information at our booth, so feel free to stop by and we'll tell you more about what is planned and what we're working on.

As I mentioned, we have a variety of options: not only the Red Hat-based one, and, as I said before, the Canonical and SUSE ones, but also a VMware one. VMware is a little more interesting because VIO, VMware Integrated OpenStack, runs as VMs on the VMware gear itself. That allows a much simpler management model, because you're managing VMs while the rest of the underlying infrastructure runs on vSphere, which is very well understood, is already in place in a lot of IT organizations, and has full integration with the analytics, storage and everything else. Just like all of our solutions, we continuously upgrade it with the latest and greatest and release on a periodic basis. VIO is now at version 5.1, we picked up the latest generation of Dell servers, again Cascade Lake, and we have a couple of variations depending on how you want to use it. You can have either a three-pod system, with a minimum of 12 nodes and scaling up to multiple racks, or a two-pod system; the real question is whether you want a dedicated edge pod as part of your solution or not, but the infrastructure and the tooling are the same, and the same analytics tools for managing your applications are also included.

I'll skip most of this; it is basically how we design each cluster. The fundamental piece is that we use NSX-T for the control plane, overlaid on the physical network, so we are using the software-defined networking that is one of VMware's fundamental building blocks, and we can span it across leaf and spine and scale it across multiple sites.

For OpenStack storage, on the VMware side we obviously use this as the easiest way to integrate things. For Red Hat we use the latest version of Ceph, and we have multiple Ceph architectures depending on whether you want object storage, balanced block storage, or block storage optimized for the highest performance. Again, just like with the other solutions, whenever there is a new version of the software or a new generation of hardware worth taking advantage of, we decide whether it is worth doing a refresh of the solution; this is the refresh we did when we moved to Ceph BlueStore and, at the same time, to the latest generation of Dell servers. We have all of the performance data for that, and all of those configurations flow into our OpenStack integrated solution. We also continuously maintain and improve all of the Dell EMC storage options for OpenStack, and here is the latest generation of each of those and the capabilities we were able to land in the Train release.
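To make the multi-attach capability mentioned earlier concrete, here is a small sketch of how a tenant would typically consume it with the OpenStack SDK. The volume type property is the standard Cinder multiattach spec; the cloud entry, server names and sizes are placeholders, so treat this as an illustration rather than a recipe for any particular backend.

```python
# Sketch: create a multi-attach capable volume type, create a volume of
# that type, and attach the same volume to two servers. Placeholder names.
import openstack

conn = openstack.connect(cloud="overcloud")  # assumes a clouds.yaml entry

# Creating volume types normally requires admin credentials.
vol_type = conn.block_storage.create_type(
    name="multiattach",
    extra_specs={"multiattach": "<is> True"},
)

volume = conn.block_storage.create_volume(
    name="shared-data",
    size=100,  # GiB
    volume_type=vol_type.name,
)
conn.block_storage.wait_for_status(volume, status="available")

# The same volume can now be attached to more than one server.
for server_name in ("app-node-1", "app-node-2"):
    server = conn.compute.find_server(server_name)
    conn.compute.create_volume_attachment(server, volume_id=volume.id)
```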
All of them are certified with the various distributions, and all of them support both Manila and Cinder. The fundamental thing we have been doing lately is moving to multi-attach support, along the lines sketched above, across all of the solutions, and we are also adding multipath support, which is not formally part of OpenStack but is required to really get better performance and higher availability.

We also have a container-based solution. This is a little outside of OpenStack, but it is related to the family of solutions we are doing with Red Hat. It is a version of OpenShift, actually a slightly dated one, 3.11; we are working on the next generation based on 4.x, but that is still a little immature and not released yet, so this is the latest version available. The fundamental thing here is that, while we have multiple solutions for virtual environments where the containers run either on the VMware platform or on top of OpenStack for Red Hat, this is the one where we run OpenShift on bare metal to give you better resiliency and better performance.

So let me stop at this point. We have all sorts of information at our booth, so please stop by; we'll be happy to talk to you, and whatever kind of workload and use cases you have in mind, we have a solution for it. Thank you.