Thank you all very much. I want to follow up the last presentation by setting the stage for how all of the networking open source projects are coming together inside the Linux Foundation, and how this affects how we end up in a containerized world. I want to start by reminding everyone of all that had to be invented over the last seven years as the networking industry has come into open source. What was just discussed directly in the use cases really was this orange bubble, from service orchestration and SDN to virtualized networking. All of SDN can really be defined by that one word, programmability, between the two, and that is something that has taken years and years for the networking industry to build in open source and standardize. Moving forward from there, the incorporation of analytics and data lakes, which is now available inside the Linux Foundation, has come out of this. But in addition, where the industry is going is adding a policy layer and then a platform for the applications, including bringing all of these networking services into Cloud Foundry.

So let's step through this a bit. The original notion of software-defined networking, which focused just on config and provisioning, was far too simplistic for what we really needed to do in the network. We realized there was no notion of resource management, no notion of the endpoints or computers involved, and that creating virtual overlays is necessary but not sufficient for what we need to do in the network. What we needed to add, and what has been added through a project called PNDA.io, is an analytics framework that takes the telemetry and data coming out of compute, network, storage and security and brings it into a data lake, where those events can be correlated to trigger other events inside the SDN world. What this really comes down to is that all of the operational state of networking and of compute had to change. You've seen this in the move away from legacy polling mechanisms inside networking to pushing this data in real time directly into that data lake, and in sum that's what the networking industry refers to as telemetry. This catalyzed the PNDA.io organization and community inside the Linux Foundation, and that is now a project working with OPNFV and soon with ONAP and that orchestration piece, which I'll show you shortly.

But what's interesting is that software-defined networking is bigger than config and provisioning, and bigger than data analytics itself. It has to have end-to-end resource management. The point is that today's scheduler and orchestration frameworks can at best perform compute and storage scheduling, but there is no optimization of that scheduling across compute, networking and storage. For effective service placement, container placement or hypervisor placement, these pieces need to be optimized together, and it has been impossible to date to do that. So what we've done on top of PNDA is use machine learning algorithms. But rather than just talking about the type of models used, what we've realized is that mechanistic models of the network, of compute, of storage and of security are impossible to create with any accuracy.
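To make the idea of optimizing placement across compute, networking and storage together a little more concrete, here is a minimal sketch of joint scoring. The node attributes, weights and field names are invented for this illustration; they are not taken from PNDA, ONAP or any scheduler mentioned in this talk.

```python
def placement_score(node, workload, weights=(0.4, 0.35, 0.25)):
    """Higher is better: combines normalized compute, network and storage terms."""
    w_cpu, w_net, w_sto = weights
    cpu_headroom = max(0.0, 1.0 - (node["cpu_used"] + workload["cpu"]) / node["cpu_cap"])
    # The network term rewards hosts whose fabric segment already carries traffic
    # to this workload's peers (traffic awareness), so chatty components land close together.
    net_affinity = node["local_peer_traffic"] / max(1.0, workload["total_traffic"])
    sto_headroom = max(0.0, 1.0 - (node["iops_used"] + workload["iops"]) / node["iops_cap"])
    return w_cpu * cpu_headroom + w_net * net_affinity + w_sto * sto_headroom


def place(workload, nodes):
    """Pick the candidate host with the best joint score."""
    return max(nodes, key=lambda n: placement_score(n, workload))


if __name__ == "__main__":
    nodes = [
        {"name": "rack1-n1", "cpu_used": 20, "cpu_cap": 64,
         "iops_used": 500, "iops_cap": 2000, "local_peer_traffic": 800},
        {"name": "rack2-n3", "cpu_used": 4, "cpu_cap": 64,
         "iops_used": 1800, "iops_cap": 2000, "local_peer_traffic": 50},
    ]
    workload = {"cpu": 8, "iops": 300, "total_traffic": 1000}
    print(place(workload, nodes)["name"])  # rack1-n1 wins on network affinity and storage headroom
```

The only point of the sketch is that the network term sits in the same objective as the compute and storage terms, rather than being an afterthought.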
This is where stochastic and differential machine learning algorithms come in: by observing and analyzing how the applications relate to one another, and by including traffic awareness in the analytics, they optimize the placement of those workloads. We're seeing up to a 30% gain in the efficiency of data center equipment, whether in a CORD architecture or an enterprise application architecture, based on that learned placement optimization from looking at compute, networking and storage together.

But what's also interesting is that creating an SDN-controlled network, getting the data out of the network, and creating a feedback loop between those pieces, which is what's been worked on over the last couple of years, still doesn't build the service that's necessary. What's missing is identity, and the policy to drive the microsegmentation and the service chains to be created inside that data center or inside that CORD architecture we just heard about. Using identity as an input variable is now one of the newest things that's been added into this orchestration architecture, and it is part of the ONAP architecture at the Linux Foundation. Identity is not just about whether we're using a certain address for the device, or its cellular identifier, or its IP address; it's really a multivariate normalization of user devices, geolocation, network topology location, time of day, and so on that come together to define the identity of that user and that application. When you have identity and can add it in, you can start to orchestrate microsegments from a wireless access point or from a data center interconnect router: microservices across that network, linked to a specific service chain for all the devices associated with a particular user. In the networking world there's been a large conversation about the use of microsegmentation across the WAN, across the edge, and in the data center, but without identity there's nothing to drive the linkage of a user to the applications, or a user to those services, except direct IT, operator and administrator control. By adding identity and policy, you can create a much larger self-driving system.

This is also believed to be the path: using OpenID, an open source project and community, linked into the orchestration framework, is how we can rapidly onboard the Internet of Things, all those devices that are going to be attached to the network, without requiring further user onboarding and allowing for automatic bootstrapping of the system itself. By understanding the device and its role in the network, the security microsegments or security policy can be directly applied to those devices. To bind these together, a policy engine needs to emerge. Many policy engines exist in the network today, but none of them sit on top of SDN-orchestrated control. What this means is that as there is an enterprise IT function, and as devices and users are onboarded onto the network, the policy can be pushed into the SDN controller, into the network and into the application services themselves. This also drives the analytics to be associated with the policy, with the user, and with the binding of these pieces together. This block has just emerged in ONAP inside the Linux Foundation as well, but overall, driving network services and application placement this way doesn't exist yet.
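As a toy illustration of treating identity as a multivariate normalization rather than an address lookup, here is a sketch in which a handful of observed attributes are folded into one confidence score that then selects a microsegment. The attribute names, weights, thresholds and segment names are all invented for this example; they are not the ONAP or OpenID data model.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    user: str
    device_type: str    # e.g. "laptop", "sensor"
    geolocation: str    # e.g. "campus", "remote"
    topology_site: str  # where it attached in the network topology
    hour_of_day: int

def identity_confidence(obs, history):
    """Normalize several signals into one confidence value in [0, 1]."""
    score = 0.0
    score += 0.4 if obs.device_type in history.get("devices", set()) else 0.0
    score += 0.3 if obs.geolocation == history.get("usual_location") else 0.0
    score += 0.2 if obs.topology_site in history.get("usual_sites", set()) else 0.0
    score += 0.1 if obs.hour_of_day in history.get("active_hours", range(24)) else 0.0
    return score

def microsegment_for(obs, history):
    """Map the identity decision onto a segment an SDN controller could enforce."""
    if obs.device_type == "sensor":
        return "iot-restricted"      # bootstrap IoT devices into a narrow segment
    if identity_confidence(obs, history) >= 0.7:
        return f"user-{obs.user}"    # high confidence: per-user segment, full services
    return "quarantine"              # low confidence: limited reach until verified

if __name__ == "__main__":
    history = {"devices": {"laptop"}, "usual_location": "campus",
               "usual_sites": {"bldg7-ap3"}, "active_hours": range(8, 19)}
    print(microsegment_for(Observation("alice", "laptop", "campus", "bldg7-ap3", 10), history))
```

The point is only that the segment assignment is driven by who and what the endpoint appears to be, not by which IP address it happens to hold.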
What you get when you do this is a whole new way of orchestrating a network and the services on it. By associating users, the network, stored objects and applications, you can actually see security violations, for example when an individual is not supposed to be able to decrypt a critical financial document, as in this case. What this turns into is a policy language that can be written in human-readable form, as you can see here, where the finance team can only access secure documents when on campus; I'll show a small sketch of evaluating a rule like that in a moment. In previous incarnations of SDN and networking this was all done through address management. Now it can be done through binding and onboarding of the users, and onboarding of the documents, applications or services themselves, lending itself again to a human-readable way of expressing what you want to have happen on the network.

So when we look again at this wheel of what has to be invented and what has been invented: we've done well with SDN, orchestration is emerging, virtual networking with FD.io (pronounced "fido") is there, analytics has come through, and policy management for resource allocation and optimization and the binding of identity is what's emerging next on the networking and orchestration side of the Linux Foundation.

Changing topics a bit: the previous talk, and what you've seen inside the Linux Foundation to date, has been all about the orchestration of hypervisors and, in some cases, bare metal as well. But as we know, containerized services are rapidly emerging in the industry. They allow optimal componentization and breaking up of the application suite, but they lead to very large challenges in the networking world, and I want to discuss those with you next. As has been discussed in the industry, the development process, application architecture, deployment and packaging, and the infrastructure around it have changed almost completely, and everyone in this room is well aware of it, all the way from DevOps work to the notion of serverless computing, and for me that includes edge computing, fog computing and the use of very, very large data centers. What this looks like on the networking side is a very diverse set of communication that does not look like a service chain. It looks like full-mesh communication between containers, and expressing the networking policy for containers in a traditional service chain architecture will not work. So there is a massive amount of work that needs to happen inside the Cloud Native Computing Foundation, inside Kubernetes, and inside the networking domains as well, so that it can be expressed all the way up to the PaaS layer in communities like Cloud Foundry. My point here is that the way the networking world has viewed creating networking services and virtualized network functions is directly analogous to a three-tier application architecture: a service chain is created, hypervisors are launched and policies are placed in between. That's a classic three-tier architecture, but that three-tier architecture just doesn't work in a mesh like this.
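Coming back to the human-readable rule quoted earlier, "the finance team can only access secure documents when on campus," here is the small evaluation sketch I mentioned. The rule encoding, attribute names and default-deny behavior are made up for illustration and do not reflect any particular policy engine.

```python
RULES = [
    # (subject group, object label, required condition, decision)
    ("finance", "secure-document", lambda ctx: ctx["location"] == "campus", "allow"),
]

def decide(user_groups, object_label, ctx):
    """Evaluate the first rule that binds one of the user's groups to the object."""
    for group, label, condition, decision in RULES:
        if group in user_groups and label == object_label:
            return decision if condition(ctx) else "deny"
    return "deny"   # default-deny when no rule binds the user to the object

print(decide({"finance"}, "secure-document", {"location": "campus"}))   # allow
print(decide({"finance"}, "secure-document", {"location": "remote"}))   # deny
```

Note that the decision binds a group of users to a labeled object and a context condition; no addresses appear anywhere in the rule.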
Stepping this forward, we have to remember that policy binds identity to those services, and what's really changing in the stack is that as applications are launched, the rules of the policy need to be pushed, with intent, into the stack itself, versus having the infrastructure hold the policy and placing applications into it. As easy as that is for me to say, it is a fundamentally different mode of building a stack than what we've done in the networking world and in the infrastructure world, and that in itself is the challenge. To create this, during application composition, as we know with the OCI framework, you start describing the application and the rules necessary for that application. That becomes the policy source, so that when you go to launch or execute those containers the rules flow in with them, and I'll describe this a bit more in the stack in just a second. But this fundamental change, that the application has metadata and rules associated with it that are driven with intent into the infrastructure, is a fundamentally different architecture than the service chain I just showed.

So what are we doing about this? Inside the Linux Foundation there are a number of projects, as you can see, that have emerged very recently at the different layers of the stack as they appear in an IT infrastructure stack, from orchestrating and configuring the hardware all the way up to the PaaS layer with Cloud Foundry, and this is one way of representing how those projects come together. When we look at this with one particular use case, and instead of looking at the mobility CORD case I'm actually showing a video workload or video service case here, we have to realize that the stacks on the previous slide get directly represented, just written a little differently, in this case as well: what are the resources, what's the physical topology, and so on. There's been a lot of discussion, and we have to realize that every line between each of these elements requires some config and provisioning, as well as analytics coming out of it. But the goal of ONAP, the goal of OPNFV, and the goal of all the work we've done in networking inside the Linux Foundation is to remove the complexity below that red line. What I'm trying to say is that the notion discussed in the industry of the full-stack developer is in fact a fallacy. The goal of all this infrastructure is in fact the no-stack developer, in which everything below the red line becomes intent-driven networking, with the rules passed from the application on top. We have quite a way to go, but the goal is really to remove the complexity of this part of the network and this part of the IT stack from the application developer and from the operator. To show the difference, you can see the classic VM-based stack on the right-hand side and the classic container stack on the left-hand side here, and what's really interesting about this is the separation of lifecycle management from policy management at the top.
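To make the "rules travel with the application" idea from a moment ago a little more concrete, here is a hypothetical sketch: an application descriptor carries its network intent as metadata, and the launcher hands that intent to the policy point before the containers come up. The descriptor format and the push_policy and start_container functions are stand-ins invented for this example, not any project's actual API.

```python
app_descriptor = {
    "name": "video-service",
    "containers": ["transcoder", "packager", "origin"],
    "network_intent": [
        {"from": "packager", "to": "origin", "allow": ["tcp/443"]},
        {"from": "transcoder", "to": "packager", "allow": ["tcp/8080"]},
        # anything not listed is implicitly denied
    ],
}

def push_policy(intent):
    # Stand-in for handing the rules to whatever policy point sits above
    # the SDN controller / network plug-in in the stack.
    for rule in intent:
        print(f"policy: allow {rule['from']} -> {rule['to']} on {rule['allow']}")

def start_container(name):
    print(f"launching container: {name}")

def launch(descriptor):
    push_policy(descriptor["network_intent"])   # the intent flows in with the app...
    for c in descriptor["containers"]:
        start_container(c)                      # ...before the workloads come up

launch(app_descriptor)
```

The contrast with the service-chain model is that the infrastructure never holds the policy on its own; the policy arrives as part of the application it describes.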
In the classic VM-based stack, looking at OPNFV in this case, the policy point in the middle is OpenDaylight. On the left-hand side, the container stack, what's interesting is that there is no policy point yet available; I show one open source project called Contiv, and there are several other open source projects in the space, but that policy point becomes the way that compute, network, storage and security end up getting optimized together, and that doesn't exist in a uniform way inside the container stack today. What this enables us to do underneath is use projects like FD.io, a forwarding plane running in user space, and have it be configured and operated at the pace of the applications running in Kubernetes, for example. As this comes together inside the Linux Foundation, there are a number of different elements being worked on toward building out this stack. These are some of those open source projects, and this has actually been built and was shown at the ANGA conference just about a month ago: building out the networking pieces as if they were a generic application in a container-based stack. A number of pieces had to come together. The policy piece had to be built, a network plug-in on top of FD.io had to be built, and these are now all open sourced, again as experimentation, toward driving and building an active networking stack as part of the container ecosystem, to be able to deploy these applications readily.

When we look at these different open source solution components, we have to realize that, looking down from the top at Cloud Foundry, there is no tie in Diego or in BOSH to the networking pieces below. So how can we get the rules that we want to drive all the way from the PaaS layer down through that infrastructure? Cloud Foundry has picked up the notion of OCI, and picked up CNI and a number of other open source projects within its architecture, to start to move toward the architecture I'm describing, in which the rules go with the application and get driven into the overall stack, and it allows virtualized functions, whether running on bare metal, a hypervisor or containers, to be fully orchestrated across compute, networking, storage and security. That's the overall goal, and we have quite a bit of work to do. Nonetheless, a number of these projects are working very closely together, and one pairing is ONAP and OPNFV, as I mentioned and as was mentioned in the previous talk. The components of ONAP on top, which perform management, orchestration and design of a service, can technically be built on top of OPNFV, which can then config, provision and provide analytics for the infrastructure and for the services being deployed. Thankfully a number of these projects are coming together, and what was proposed last week at the OPNFV Summit was a cross-community infrastructure and the ability to test out this entire piece moving forward, so that was a fantastic outcome last week. As we look at how ONAP, the Open Network Automation Platform, is moving forward, these are all the components of that piece, and this is enabling companies like China Unicom to build their core architecture over time and continue to add identity, policy, analytics and even billing as a piece of it. But these are all the pieces that need to come together.
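Since CNI came up a couple of times, here is a bare-bones illustration of the plug-in contract a network plug-in, for example one programming an FD.io/VPP user-space data plane, has to satisfy: the runtime executes the plug-in with CNI_COMMAND set, passes the network configuration as JSON on stdin, and reads a JSON result from stdout. This stub only returns canned values and does not program any data plane.

```python
import json, os, sys

def main():
    command = os.environ.get("CNI_COMMAND", "VERSION")
    conf = json.load(sys.stdin) if command in ("ADD", "DEL", "CHECK") else {}

    if command == "VERSION":
        json.dump({"cniVersion": "0.4.0",
                   "supportedVersions": ["0.3.1", "0.4.0"]}, sys.stdout)
    elif command == "ADD":
        # A real plug-in would create the container interface in the forwarding
        # plane here and report the addresses it actually assigned.
        json.dump({"cniVersion": conf.get("cniVersion", "0.4.0"),
                   "interfaces": [{"name": os.environ.get("CNI_IFNAME", "eth0")}],
                   "ips": [{"version": "4", "address": "10.10.0.5/24"}]}, sys.stdout)
    elif command in ("DEL", "CHECK"):
        pass  # nothing to tear down or verify in this stub
    else:
        sys.exit(f"unsupported CNI_COMMAND: {command}")

if __name__ == "__main__":
    main()
```

The contract is deliberately small, which is what lets the same plug-in model sit underneath Kubernetes and, as just mentioned, increasingly underneath Cloud Foundry as well.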
What's being proposed is evolving ONAP to take on some of these new elements and projects, which I won't read through here; these are some of the projects that are part of the first releases of ONAP itself. The good thing you can see here is that a number of the networking pieces that have been built, from SDN control through the protocols, virtual networking, telemetry and analytics, are embedded and able to be integrated as part of this overall architecture. DCAE, for example, which is the analytics engine of ONAP, is having PNDA built in with it, as I mentioned, and if we can bring OPNFV in underneath, we then have the stack to orchestrate those services. So where this is heading, and the big conversation going on in the networking community of the Linux Foundation, is really about pulling these all together underneath an open networking umbrella, so that the architecture can come together and there aren't a number of competing satellites of technology, although we continue to foster and desire new approaches to solve this problem, but rather a stack that can actually be built and operated, with the industry on a common trajectory to use bare metal, hypervisors and containers in the same orchestrated workflow, all with the same policy, including identity, associated with it.

So the industry has come a very long way, and the networking industry in particular has come a very long way in the last five to seven years, as it has been working not only in standards but now within the Linux Foundation. We've built service lifecycle management, OPNFV to orchestrate the hardware, and a data platform, and what's being added to the mix now is that linkage of the identity of things and people to compute, networking, storage and security through a policy engine, across bare metal, hypervisors and containers. It's been a ton of work, and there's a long way to go. The Linux Foundation has been paramount in working toward achieving these goals, and this is where we're headed as a networking community. So thank you very much.