All right, ready? OK, we'll kick this off and keep it pretty brief. We've got just a few minutes to talk about how we're simplifying, orchestrating, and automating, and also bridging across the physical and virtual network components. My name is Jennifer Lin, and I lead product management at what is now Juniper Networks and was previously Contrail Systems, a company Juniper acquired in December of last year. We've got some colleagues at a booth over there showing both sides of this equation: what Juniper proper has been doing in the broader switching business, as well as what we're bringing to the table with the virtual software overlay.

We really believe there's an evolution going on. Many of our colleagues in the networking industry have been talking about this change for some time, and we would violently agree. It's an evolution between the traditional physical networking infrastructure, what we're calling here the physical underlay, which obviously needs to be very resilient and very secure, and the software overlay, where we can push a lot of the complexity up to the software layer and drive new levels of automation for very dynamic application environments. At Contrail, many of the folks had joined from web-scale companies, where they were able to control much of their own application environments. It's much more difficult for enterprises, which have to deal with lots of legacy applications alongside a growing set of new, emerging apps. But we really believe at Juniper that we need to build up from where we've come from. So how do we make this transition while still interoperating with the network that's out there, and make sure we can deploy IP services across broader network services as well as things like security and load-balancing services?
So we embrace the infrastructure that's highly scaled today, the infrastructure many enterprises and service providers count on to run their mission-critical environments. We've really thought about how to bridge these two worlds to drive better levels of control and management: how do we do better real-time network orchestration and dynamic service chaining in an environment where virtual machines are moving around and applications are ephemeral, bursting up and down? A lot of the talks this morning looked at network traffic profiles and showed that a much more elastic architecture is where we're going. You can't statically configure the physical network, set it and forget it, and keep it there.

So we push a lot of this complexity up to the software layer, in what we're showing here as the virtual software overlay. A couple of key pieces sit in what we're calling our controller. We have a highly federated control plane implementation, based on work that started in the IETF standards discussions over two years ago, when people saw this transition happening. How do we define an east-west control plane that is, number one, highly scalable and self-resilient, like many of the large-scale IP networks today? And how do we do the configuration management and real-time analytics that network people feel very blind without today? If you're trapped in Layer 2 VLAN segments, or you don't have a native Layer 3 fabric underneath, then things like traceroute, network diagnostics, and looking at the latency between the various tiers of your network become very difficult. We've also abstracted at a higher layer to define a virtual network.
Kireeti Kompella, our Chief Technology Officer, has been talking at many international conferences about this notion of software-defined networking as a compiler: we want to be able to define a high-level business rule and push it down so that the controller can turn it into configurations at the machine level. And to do automation, we then need a real-time feedback loop that makes the automation loop much more dynamic. Keeping track of the real-time topology of the physical and virtual network over time is really the power of the end state we're going after.

In terms of Juniper and OpenStack, many of you may have seen the announcement yesterday: Juniper is now a Gold Member of the OpenStack community, and we really look forward to contributing both on the technical side and on the awareness side. When we talk to customers now, the key thing they're looking for is choice and flexibility. They do not want to be locked into proprietary, verticalized systems, and there's been a lot of that over the years, both in broader IT organizations and in applications. Now, as this all gets democratized, we've contributed on the Quantum side with our physical infrastructure, so we now have Quantum plug-ins for our EX and QFX switches. Today they're Layer 2, and we'll start to see convergence with some of the native Layer 3 architectures we're putting forward, like the Contrail Layer 3 overlay solution. We're pushing native Layer 3 capabilities all the way down to the host, and we're making certain assumptions about where we believe these architectures are going. We're currently in beta with that right now.
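The "compiler" analogy above can be made concrete with a small sketch. This is purely illustrative (the rule format, function names, and topology structure are all invented here, not Contrail's actual API): a high-level business rule is "compiled" into low-level per-device configuration, and the feedback loop simply recompiles when the topology changes.

```python
# Hypothetical sketch of "SDN as a compiler": one high-level rule is
# translated into per-device config lines, and recompiled whenever the
# controller's view of the topology changes. Illustrative names only.

def compile_rule(rule, topology):
    """Turn one business-level rule into config lines for each device."""
    configs = {}
    for device, vms in topology.items():
        lines = []
        for vm in vms:
            if vm["tier"] == rule["dst_tier"]:
                # Allow traffic from the source tier to this VM.
                lines.append(
                    f"permit {rule['src_tier']} -> {vm['ip']}:{rule['port']}"
                )
        configs[device] = lines
    return configs

# Business-level intent: the web tier may reach the db tier on port 3306.
rule = {"src_tier": "web", "dst_tier": "db", "port": 3306}

topology = {
    "host-a": [{"ip": "10.0.1.5", "tier": "db"}],
    "host-b": [{"ip": "10.0.2.7", "tier": "web"}],
}

print(compile_rule(rule, topology))

# The feedback loop: a new db VM appears on host-b, so we just recompile;
# nobody edits device configuration by hand.
topology["host-b"].append({"ip": "10.0.1.9", "tier": "db"})
print(compile_rule(rule, topology))
```

The point of the analogy is that operators state intent once, at the level of tiers and policies, while the per-device output is regenerated mechanically from the current topology.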
We've moved past some early field trials, and we're getting very positive validation from very large-scale customers, both service providers and large enterprises, as well as emerging companies, like SaaS and online gaming companies, who feel the pain of very dynamic application environments first. Our intention, obviously, is to define each of these components in a loosely coupled way, with clean and well-behaved interfaces between them. Over time, as this evolves, you'll see tighter integration at the solution level, but we'll continue to have a scale-out architecture so that each component stays modular.

One major change has hit the network industry. Before, when things were siloed, everything was verticalized, and systems were defined tightly within racks, VLAN segmentation was fine. Then we started seeing examples where it broke down. The web-scale guys saw the challenges first, but the assumption in a large-scale data center is increasingly that you have a flat IP fabric with any-to-any connectivity. It's no longer about a top-of-rack switch with high levels of oversubscription; it's about enabling each endpoint to communicate as efficiently as possible and reach any other node without bottlenecks. On top of that, we need a different way to segment the network. And when we talk about virtual networks: networking has been doing virtual private networks for some time, and that has scaled. We're essentially pulling many of those principles into the data center, using technologies like BGP and VRF tagging from MPLS networks within the data center. We're trying to solve two problems. One: how do we leverage mature capabilities in a way that we know is scalable and interoperable with the rest of the wide-area networks and service provider carrier networks?
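The VPN principle being borrowed here can be sketched in a few lines. In BGP/MPLS L3VPNs, each VRF exports its routes tagged with a route target, and a VRF installs only the routes whose targets it imports; the tags, not the physical topology, decide who sees what. This toy version assumes invented data structures and is not vendor code:

```python
# Illustrative sketch of route-target-based segmentation, the L3VPN
# mechanism being pulled into the data center. BGP carries all routes;
# each VRF installs only those whose route target it imports.

def import_routes(vrf, advertised):
    """Install only routes whose route target this VRF imports."""
    return [r["prefix"] for r in advertised if r["rt"] in vrf["import"]]

# Routes advertised across the shared fabric, each tagged with a target.
advertised = [
    {"prefix": "10.1.0.0/24", "rt": "target:64512:100"},  # red network
    {"prefix": "10.2.0.0/24", "rt": "target:64512:200"},  # blue network
]

red_vrf = {"import": {"target:64512:100"}}
blue_vrf = {"import": {"target:64512:200"}}

print(import_routes(red_vrf, advertised))   # only the red prefix
print(import_routes(blue_vrf, advertised))  # only the blue prefix
```

Two tenants share one flat fabric, yet each sees only its own routes, which is exactly the segmentation model the overlay reuses.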
And at the same time, we recognize that application paradigms are much more dynamic, and that the operational and organizational business models are quite different. With a DevOps-type organizational structure, we spend just as much time talking to cloud administrators as we do to network engineers. And if we can get to a meet-in-the-middle approach, with a lot of this convergence and a lot of what the OpenStack community is driving, we can get past small pilots and really interoperate quickly with large-scale networks.

Part of that abstraction is defining a virtual network that is not tied to the physical topology in any way. When we define a virtual network, it's a logical construct, and we apply a policy to it. Say your virtual network is the back-end database tier of a three-tier web application. Once you set that policy, you never need to do any manual reconfiguration: as virtual machines spin up and down, or as things move around, the network now has a system-level view and pushes configuration updates back down to the individual elements. On the switching side, previously you would have two workflow paths, with the server administrator using Puppet and tools like that for automation and configuration management down to the servers and the virtual machines on them. As this convergence happens, it's a natural extension to ask: how do we eliminate what is today a separate, siloed process for the network administrator, who may be pushing configuration changes or making IOS CLI changes to individual switches, and use that same workflow pattern through something like a Puppet master?
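The "no manual reconfiguration" claim amounts to a reconciliation loop: the controller holds a system-level view (which VM belongs to which virtual network, which host runs which VM) and derives per-host state from it on every change. A minimal sketch, with all names invented for illustration:

```python
# Hedged sketch of policy-driven reconfiguration: per-host network state
# is *derived* from the system-level view, so a VM migration changes the
# inputs and the per-host state is simply recomputed, never hand-edited.

def desired_host_state(placements, membership):
    """For each host, which virtual networks need a local VRF?"""
    state = {}
    for vm, host in placements.items():
        state.setdefault(host, set()).add(membership[vm])
    return state

# System-level view: VM-to-virtual-network policy and VM placement.
membership = {"vm1": "db-net", "vm2": "web-net"}
placements = {"vm1": "host-a", "vm2": "host-b"}

print(desired_host_state(placements, membership))

# vm1 migrates to host-b; the controller recomputes both hosts' state.
placements["vm1"] = "host-b"
print(desired_host_state(placements, membership))
```

After the migration, host-b automatically needs both virtual networks and host-a needs none; no operator touched a switch.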
So Juniper has been working very closely with the folks at Puppet to get this same workflow in a streamlined way: how do we enable automation and orchestration across virtual machines, and expose the network as a service and as an abstracted layer, so that we don't create these silos? Then, as you go through diagnostics, there's obviously the trouble-ticketing process. We understand that it's been painful and has slowed businesses' ability to roll out services quickly. So we're addressing a lot of that, both in how we're evolving our products, making switches much more programmable, and in recognizing that there's choice in orchestration models, so we can keep each of these pieces loosely coupled.

In terms of the virtual overlay, even when Contrail was a startup, we were looking at what's going on and how customers are trying to solve some of the biggest network challenges. Although we were a small team, we had folks from Google, Facebook, and Microsoft Azure who had seen these very large-scale data centers and the kinds of challenges they run into. One of the first changes on the physical network, as I mentioned, was to enable a much flatter, simpler IP fabric in the data center. Many of them, for instance, had started to move BGP into the top of rack and enable a native Layer 3, flat IP fabric in the data center. That obviously moves a lot of the hard coding and configuration management up into the virtual layer. The virtual network overlay, then, is a logical abstraction that lets us define how services are applied. And as I mentioned, if things move around, capacity changes, or there's latency in front of the database tier, we can make those changes. The next layer essentially shows this scale-out control plane.
In this particular case, through work done over two years ago in the Internet Engineering Task Force, the standards body for the internet, there was a cross-industry effort to define how we could use BGP as a control plane and push a lot of the concepts around L3 VPNs down into the end systems. So we use BGP for east-west federation, which allows us to talk natively to existing WAN gateways, to many of the routers and Layer 3 devices out there, and to many of the service devices. As I mentioned, we use concepts like VRF tagging so that we avoid flow-by-flow inspection and the scale challenges of managing the network flow by flow. We're essentially putting a very efficient label on top, in the IP header, so we can use the same tricks folks have learned over the past decade about how to make these networks very scalable.

Above that, we're showing a lot of this capability in terms of real-time configuration updates. How do we enable better instrumentation all the way down to the host, so that we can do better real-time analytics? Things like doing a packet capture or a port mirror, and pulling that information into a time-series database so we can see what happened in the network. We hear increasingly that if cloud administrators make a decision that doesn't play nicely with the network, the network folks lose a lot of the visibility they used to have in their existing IP network. We want to restore that level of visibility and even exceed it. There's lots of new analytics we can do now with much more granular data coming from the network. We now have a footprint in the host, so it's not just about the first-hop router or the switches. And because everything is natively networked, we can do very rich diagnostics and analytics.
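The scale argument against flow-by-flow state can be shown in miniature. With a label in the packet, the data plane does one lookup in a small, fixed table per packet, independent of how many flows exist. The table contents and field names below are invented for illustration; this is not a real forwarding plane:

```python
# Rough sketch of label-based forwarding: a small table keyed by an
# MPLS-style label maps each packet to a VRF and next hop. State grows
# with the number of virtual networks and endpoints, not with the
# number of flows. Purely illustrative.

label_table = {
    16: {"vrf": "db-net", "nexthop": "10.0.1.5"},
    17: {"vrf": "web-net", "nexthop": "10.0.2.7"},
}

def forward(packet):
    """One table lookup per packet, regardless of flow count."""
    entry = label_table[packet["label"]]
    return (entry["vrf"], entry["nexthop"])

print(forward({"label": 16, "payload": "query"}))
# Millions of distinct flows carrying label 16 all hit this same entry;
# no per-flow rule is ever installed.
```

This is the "same trick" from wide-area MPLS networks: push the classification decision to the edge once, then switch on the label everywhere else.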
As a network administrator, you could run a Hadoop job on a lot of this data; it's not just about managing logs. The second use case, really an extension of the first (the virtualized data center), is that many cloud service providers and carriers want to use their installed base of Layer 3 VPNs or Ethernet VPNs, both of which use BGP as the control plane, to extend a private enterprise cloud into a public cloud, while still letting the enterprise see that as an extension of its virtual private network. We announced a partnership with Cloudscaling yesterday, and much of what we'll contribute there is the Quantum network orchestration piece, plus interoperability for virtual private cloud, so that an enterprise customer can get AWS-like functionality while we solve a lot of the issues around floating IPs, some of the security challenges, and so on.

This is becoming a more and more prevalent request from our enterprise customers. The service providers, obviously, are very interested in adding value there without reinventing the wheel; to them, it can be an extension of their most successful and profitable services, like their Layer 3 VPN services. That gives us an opportunity to talk about federation of cloud networks, and that's really what this is about. Networking companies understand how to work with ecosystems to ensure interoperability. From the beginning, IP was defined as the ultimate abstraction: whatever the wired or wireless media underneath, and whatever the application on top, we could ensure cross-vendor interoperability in a way that would scale, and if, for instance, one router goes down, we drive eventual consistency so we don't bring the whole system down.
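The "Hadoop job on network data" idea reduces to a map-reduce-style aggregation over flow records collected from every host. A toy, in-memory version (field names are illustrative; at scale the same shape would run as a distributed job):

```python
# Toy version of network analytics over host-collected flow records:
# map each record to its virtual network, reduce by summing bytes.
# At production scale this aggregation would be a Hadoop-style job.
from collections import Counter

flows = [
    {"vnet": "db-net", "bytes": 1200},
    {"vnet": "web-net", "bytes": 300},
    {"vnet": "db-net", "bytes": 800},
]

totals = Counter()
for record in flows:                       # "map" to a key
    totals[record["vnet"]] += record["bytes"]  # "reduce" by summing

print(dict(totals))  # per-virtual-network traffic totals
```

Because the overlay has a footprint in every host, the input records can be far more granular than what a first-hop router alone would export.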
What we've seen with a lot of SDN, and with early network orchestration, is that we've created a new single point of failure, and that's unacceptable for many of the large-scale customers who have seen this movie before. We've already learned those lessons, and we do not want to repeat them. So how do we build, on top of a robust physical underlay, a virtual software overlay that addresses the new challenges that dynamic applications, cloud enablement, and mobility are bringing to us as network companies, and how do we then work with the broader ecosystem? Please swing by our booth and let us know what you think, or bring us any questions that come up. Thanks for joining. Thank you.