So, hello everyone. Thank you for coming. My name is Mario Sanchez, and I'm a research scientist at Hewlett Packard Labs. The work I'm showing today is joint work with my colleagues Joon-Myung and Ying, who unfortunately couldn't make it today, and it's related to a dynamic network fabric for NFV.

Over the past few years we have seen a lot of the NFV use cases that we've all defined move away from proof-of-concept implementations in a lab and become full-blown solutions actually implemented on OpenStack. But independent of the use case, what we invariably end up having is an architecture like the one shown in the figure, where these solutions are overlaid on top of a physical network topology that interconnects different points of presence, or PoPs. In most cases these PoPs are OpenStack instances, data-center instantiations of OpenStack where we have compute power and VNFs that we can instantiate to solve the specific use case we're trying to tackle. So what we normally have is an overlay network running on top of an underlay network that, in most cases, we have no visibility into. Of course we do that for a lot of good reasons, since we're trying to hide complexity from the overlay, but the problem is that we are then limited in what we can do, or how we can react, when something happens in the underlay. So if we want to maximize the utilization of the underlay, it would be good to have at least some visibility into what's happening there.

In today's talk I'm going to show a little bit of the work we've been doing in the lab, and at HPE more broadly, on getting some visibility into this underlay: how we can obtain these insights using latency measurements taken at the overlay, and use them to guide what's happening on the overlay itself. So let's imagine for a moment that we had some visibility into the underlay. What sort of benefits would we get in any of these NFV solutions? With the same capacity in the underlay, the same capacity in the entire network, if we have visibility into where congestion is happening, we might be able to get a lot more performance and do a lot more in terms of workload, depending on what's happening in the underlay. We would also get better operational management of this network. And all of these concepts apply equally to many of the different multi-site NFV solutions, for example the vCPE, vIMS, or vEPC use cases.

So what we're going to discuss today is what we call an Enhanced Smart Virtual Network Fabric for NFV and service providers. We provide three different components that work together to provide an enhanced experience for these services, but because of time constraints I'm only going to concentrate today on what we can do at the overlay network and how we can see what's happening on the underlay.
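To give a flavor of what taking latency measurements at the overlay can look like, here is a minimal sketch of periodic latency probing between PoPs over the overlay. It is a hypothetical stand-in, not the actual HPE implementation; the peer addresses, the probe port, and the UDP echo convention are all made-up examples.

```python
# Minimal sketch: periodic overlay latency probes between PoPs.
# Hypothetical example, not the actual HPE implementation.
import socket
import time
from typing import Optional

PEERS = {"pop-b": "10.0.1.2", "pop-c": "10.0.2.2"}  # overlay addresses (examples)
PROBE_PORT = 9999  # assumed port where each peer runs a trivial UDP echo

def probe_rtt(addr: str, timeout: float = 1.0) -> Optional[float]:
    """Send one UDP probe over the overlay and return the RTT in seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sent = time.monotonic()
        sock.sendto(b"probe", (addr, PROBE_PORT))
        sock.recvfrom(64)                 # wait for the echo from the peer
        return time.monotonic() - sent
    except socket.timeout:
        return None                       # lost probe: no sample this round
    finally:
        sock.close()

# Each PoP keeps a time series of RTT samples per overlay path; the PoPs
# then exchange these series with each other for the correlation step.
samples = {name: [] for name in PEERS}
for _ in range(10):
    for name, addr in PEERS.items():
        rtt = probe_rtt(addr)
        if rtt is not None:
            samples[name].append(rtt)
    time.sleep(1.0)
```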
If you have questions about any of the other components, the extended version of this talk covers them, so please grab me at the end and I'm happy to discuss them with you. The main idea is that we have these three new components in the system, on top of HPE's NFV solution. On one side we have what we call the underlay-aware overlay, which is the main topic of this talk, where we infer what's happening on the underlay by looking at measurements we take at the overlay. We use analytics to figure out where congestion is happening in the network and whether we can route traffic away from the congestion to maximize performance. The second component is bandwidth-aware resilient VNF placement, where, depending on what's happening on the network overall and on the demands from the end users, we figure out which of the PoPs interconnected in our solution we should pick to instantiate a new service function chain (SFC), for example one that's going to serve a specific flow. The third component is the SLA monitoring and management, where we figure out, on a per-flow basis, whether the specific service function chain that's going to serve the traffic actually meets the SLA requirements for that flow.

I'm going to look at this underlay-aware overlay (UAO) technology through a specific use case, the vCPE. I'm sure you're aware of the presentation from Barasson yesterday that explained the implementation of a universal CPE, which is pretty close to exactly what we're doing. For those of you who are not familiar with the vCPE use case, the idea is to remove the constraint of having a physical appliance, the CPE, at the customer premises and replace it with a device that we can configure remotely from the network, and on which we can instantiate network functions as needed. What this slide calls the first case, the enterprise or virtual CPE, we then take one step further: in some cases the device itself is not capable of instantiating long or convoluted service function chains, so we figure out where in the network to place those service function chains and route the traffic to that specific instantiation of the chain, so that the flow can go in, be served through the service function chain, and then go out to the internet. We call this a cloudified vCPE solution, where we are able to do all of these things dynamically in the network.

So we normally have a multi-site topology; in this case we're looking at four different PoPs, each an independent instantiation of OpenStack, and we're showcasing just one underlay topology of the network, with a whole bunch of core switches that carry the traffic between the different PoPs. Again, we are only using knowledge that we can collect from the overlay network, without looking at the specifics of the underlay, to guide our traffic selection.
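As a rough illustration of the kind of analytics the underlay-aware overlay relies on (the mechanics are described in more detail in the next part), here is a sketch that correlates per-path latency time series and flags overlay path pairs that are likely to share a queue in the underlay. The `series` input and the 0.8 threshold are assumptions for illustration, not values from the actual system.

```python
# Sketch: infer shared underlay bottlenecks from overlay latency series.
# Assumes each entry of `series` is the RTT time series collected for
# one overlay path between a pair of PoPs (as in the probing sketch).
import numpy as np

def correlated_pairs(series: dict[str, list[float]],
                     threshold: float = 0.8) -> list[tuple[str, str, float]]:
    """Return overlay path pairs whose latency series are strongly
    correlated, i.e. likely to share at least one underlay queue."""
    names = sorted(series)
    corr = np.corrcoef(np.array([series[n] for n in names]))
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if corr[i, j] >= threshold:
                pairs.append((names[i], names[j], float(corr[i, j])))
    return pairs

# Example: paths A-B and A-C spike at the same time, so they probably
# share an underlay queue; B-C does not.
series = {
    "A-B": [1.0, 1.1, 5.0, 5.2, 1.0, 4.8],
    "A-C": [0.9, 1.0, 4.9, 5.1, 1.1, 5.0],
    "B-C": [1.2, 1.1, 1.0, 1.2, 1.1, 1.0],
}
print(correlated_pairs(series))   # [('A-B', 'A-C', 0.99...)]
```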
So depending on what we're trying to do, with the information we obtain, which I'll go into in detail on the next slide, we can do several things. For example, if a customer wants to create a new chain, we can use the information collected by these three modules to figure out which of the PoPs should serve that service function chain, and on the fly we can create more instantiations of the service function chain in other PoPs. And if we have multiple options for these service function chains across the network, then we need to figure out which one is best suited to serve a specific flow in real time.

Concentrating on this underlay-aware overlay, the nitty-gritty of the idea is the following (I'll come back to this slide later). For each of the PoPs instantiated in our solution, we collect latency information between each pair of PoPs in the network. In this case we have three different PoPs, and we collect this information using probes at the overlay level. All of the PoPs exchange this latency information with each other, and using it they are able to compute a correlation matrix, which normally looks something like this in the UI we've developed for it. Depending on what's happening in the network and on how much traffic there is, we can estimate how much the underlay overlaps for two specific paths. The idea is that if there is an increase in the end-to-end latency of two specific paths at the same time, then we can infer with some certainty that there is at least one queue in the underlay that is shared between those paths. And that means there is some possibility that, if we can avoid those paths when instantiating a new service function chain to serve a flow, we can reduce the potential congestion in the network.

So we end up with this underlay-aware overlay technology where we take measurements at the overlay and use that information to make routing decisions about which paths should serve a specific flow in the network. This routing optimization is aware of joint paths: given that we can now infer which paths cross each other on the underlay, we can try to avoid those paths whenever possible, and if it's not possible to avoid them, this knowledge can guide us in creating new service function chains on other paths; a small sketch of this selection follows below.

And with that, I think I'm at 10 minutes. Thank you so much. If you have any questions, I can answer them now, and if not, please grab me at the end.
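As a closing illustration of the joint-path-aware selection just described, here is a hypothetical sketch that picks, among candidate overlay paths for a new service function chain, the one with the weakest inferred underlay overlap with paths already carrying traffic. The `joint` input reuses the output of the correlation sketch above, and the notion of which paths are "busy" is a made-up simplification.

```python
# Sketch: joint-path-aware selection among candidate SFC placements.
# Hypothetical; not the actual routing optimization from the talk.

def pick_path(candidates: list[str],
              busy: set[str],
              joint: list[tuple[str, str, float]]) -> str:
    """Prefer the candidate overlay path that shares no inferred underlay
    queue with any busy path; otherwise fall back to the candidate with
    the weakest correlation to the busy paths."""
    def worst_overlap(path: str) -> float:
        scores = [c for a, b, c in joint
                  if (a == path and b in busy) or (b == path and a in busy)]
        return max(scores, default=0.0)
    return min(candidates, key=worst_overlap)

joint = [("A-B", "A-C", 0.99)]   # inferred shared bottleneck from above
print(pick_path(["A-B", "B-C"], busy={"A-C"}, joint=joint))  # -> "B-C"
```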