Good morning, everyone. Glad you guys stuck it out for the final day. I hope everybody's been enjoying it. My name is Pete Cruz. I am the group manager for product and technical marketing at VMware, focused on VMware Integrated OpenStack. Today we're going to spend a little bit of time talking about VMware and OpenStack, what we're doing in the OpenStack space, some of the goals, and what types of things we're looking at as we move forward. At the end, we'll talk a little bit about some of the key areas, namely the telco and NFV space. So to begin: we have a lot of customers that are engaged in OpenStack initiatives. We've obviously been a contributor for a very long time, a Foundation partner since early 2013. Across all of our products, regardless of the technology, the space, or the requirements, there are some things we try to do: really simplify how people can deploy, operate, and manage clouds. Over the past couple of years — how many folks here have read the OpenStack user survey reports? Great. The April survey shows there continue to be challenges people face as they try to do OpenStack deployments in their environments. We at VMware look at these and ask: how can we help alleviate those challenges and move people toward success in deploying OpenStack in their environments? Obviously, the first is to actually build a cloud: get the framework deployed and tied into an underlying infrastructure so your developers can start consuming that infrastructure through the OpenStack APIs.
The second is monitoring and visibility — not just into the workloads that are running and how you manage them, and in an OpenStack environment those can become very dynamic if they're in production and being programmatically created through the API, but also into what's happening in the OpenStack framework itself: all of the different components, the controllers, the interactions between the different projects within OpenStack. Obviously, if there's something wrong, how do I troubleshoot? How do I dive into what's going on at those different layers so I can resolve issues quickly and get my users back up and consuming infrastructure through OpenStack? Then there's scaling up and down: being able to do that programmatically and quickly, scaling up resources as you need them and returning them to the cloud when they're no longer needed. And obviously, upgrading and patching the environment. How many folks have gone through the process of upgrading and patching OpenStack, say moving from Kilo to Liberty, or skipping a release? Still a significant challenge. So we look at these types of challenges, and we really want to help our customers. Ultimately, our goal is to let you put an OpenStack framework on top of very resilient, very scalable, highly featured, production-ready infrastructure — compute, networking, and storage. But also make it easier: if you're a VMware customer and you already have that infrastructure, you can leverage the expertise and the tools that you have, and we can help alleviate these challenges. So VMware Integrated OpenStack is an approach where we really want to deliver the OpenStack value to our customers.
But we also want to simplify the operations: easily deploy the OpenStack framework, get to production as soon as possible, let you configure and manage it simply, troubleshoot it, monitor it, get visibility into how it's going, and make the day-to-day operations of managing a cloud framework easier. We also want to take that underlying VMware infrastructure, as I mentioned, and expose all of its differentiated capabilities. If you're running production workloads, things like HA, DRS, vMotion; advanced networking features like load balancing, firewalls, security groups, policies, micro-segmentation; and from a storage perspective, all of the great things you can do with QoS and storage policies — we expose all of that up to the users through the OpenStack APIs and let you take advantage of it. At the end of the day, it's still a very standard, production-ready, fully featured OpenStack cloud. So what is VMware Integrated OpenStack? How many people here are already familiar with VIO? All right, so VIO is what we would call an integrated product approach. What that means is we've delivered a fully featured, DefCore-compliant OpenStack distribution. We've optimized the key drivers: Nova against vSphere for compute resources, as I said, extracting all of the production-level, enterprise-grade capabilities of the compute layer; NSX for networking, to provide all the advanced networking features you'd need to drive workloads into production; and any of the vSphere datastores — any third-party storage you have connected into vCenter, as well as Virtual SAN. We deliver that standard OpenStack framework as an OVA. We also include a management server, which provides all of the capabilities to easily deploy, configure, patch, manage, and update the overall release.
And it's fully supported by VMware, not just at the infrastructure level but also at the OpenStack level. All of the work we do in supporting our customers, we feed back into the community as well, but we will support you from the top down. We have a growing number of customers using VIO in production today. If any of you have been at VMworld or watched any of the recordings, Nike was on stage. They're running somewhere over 5,000 VMs in production, running their entire e-commerce and e-retail site. From the time they started the project and began deploying OpenStack on top of the infrastructure, it was 10 weeks to full production, and they were able to get into production in time for their Air Jordan release. They have four employees currently running the complete OpenStack cloud. They leverage vSphere HA — these are production workloads, so they want that HA — as well as vMotion to protect workloads: in case of resource contention, or as demand on their applications scales up, they can move things around and make sure everything still performs properly. And they use the built-in automated patching, so when issues come up and we resolve them, it's an automated patching process. On stage, they talked about running billions of dollars of revenue over OpenStack orchestration. Extremely powerful, and it shows the strength of what OpenStack can do for organizations with really dynamic environments that require the capabilities OpenStack brings to the table. Amadeus — we have a friend here from Amadeus. They're running, I think, around 200-plus VMs; my number might be off. Obviously it's online travel reservations. They're leveraging vSphere clustering for compute and NSX for networking. And they have a very high volume of concurrent workloads running at any given time.
So being able to support that underlying infrastructure from a scale and performance perspective is critical. And then HedgeServ, which is basically a hedge fund services organization in the States. They have a very strong CI/CD pipeline and have been using the product for quite some time — they started with our initial version, which was Icehouse-based. As I mentioned before, one of the big challenges is being able to upgrade and patch the OpenStack framework. They were able to use our automated upgrade utility to upgrade on their own: they went from Icehouse to Kilo without any involvement from us or PSO or anyone else, just using the built-in tools in VMware Integrated OpenStack. And they leverage vSphere to reliably run the Windows VMs in their environment. A key value proposition for them was being able to run not just new cloud-native applications on the OpenStack cloud, but also their existing legacy applications, on top of the same infrastructure. It was very important that they had the same infrastructure — the same support, the same tool set, the same team managing both types of applications. So I'll give you a little history — as I said, where we've been and where we're going. VIO as a distribution has been out since the beginning of 2015. It was Icehouse-based. It was the first iteration to really expose what we were doing through the Nova, Neutron, and Cinder drivers up into the OpenStack APIs. We built in those day-two operation workflows for patching, upgrading, and those types of things. In Q3, 2.0 rolled out and we moved to Kilo. We provided the utility to do seamless, automated updates — upgrading from previous OpenStack builds to new builds, but also the ability to roll back.
So if you tested it out and said, okay, wait a minute, this build or this release is not quite up to par yet, I'm going to go back to my known configuration — you can easily roll back and continue operations. None of the currently running workloads are affected by this operation. We built in backup and recovery, and support for customization: if you made changes or customizations in the environment, those are preserved through an upgrade. In June of this year, we rolled out 2.5, and with 2.5 we really started to ask: what are some of the other challenges customers face when deploying OpenStack? One of them is the resource footprint of the OpenStack framework itself. There are a lot of modules and a lot of interactions between them. Our initial release was probably 15 VMs in a full HA environment: load balancers, controllers, clustered database tiers, RabbitMQ server components, et cetera. So what we did was reduce the size of the footprint by over 50% without compromising any performance or scale. That allowed our customers to consume less cloud infrastructure for deploying the stack itself. It also reduces complexity: the fewer moving parts you have, the less complexity there is in figuring out what's going on in an environment. Our customers also wanted to take advantage of the work they'd already done, like VM templates. If they already had VM templates for machines, we can now easily import them, convert them over to Glance images, and provide those to your users to start consuming — without having to start from scratch with an empty cloud, let's say.
As I mentioned, with that architecture we improved the scalability and performance of the environment while reducing the overall footprint. And we introduced built-in troubleshooting tools. We have an API profiler that looks at all of the calls that go over RabbitMQ, as well as all of the inter-module conversations, to see where performance is degrading and which calls are causing issues in the overall environment. It can also look at each of the different services — Nova, Cinder, Heat, Glance, Horizon, Keystone, et cetera — to see whether there are issues in any of the modules within the framework, so you can quickly start to troubleshoot. I'll make another point here: it's also tied into all of the management tools that we have — Log Insight, providing log data analytics about what's going on in the overall environment, not just for the workloads but also for the framework. There are custom-built management packs for those, as well as for vROps — vRealize Operations, our tool for monitoring performance, health, compliance, and risk. That provides visibility, again, not just into the workloads but also into the OpenStack framework. And we introduced NFV features. Obviously I could already set up tenants and set quotas on the number of machines they can run, but now I'm able to set up users and define capacity — reserve a certain amount of capacity on a per-tenant basis and say, okay, you can have this much storage, this much compute, these types of networks, et cetera. So you can get more granular in controlling what you offer those tenants inside of OpenStack. 3.0 was just released in September. With 3.0, VIO is now based on Mitaka — and again, you can move from Kilo to Mitaka in a seamless upgrade process.
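The per-tenant capacity idea described above — a hard carve-out that reservations cannot exceed — can be sketched in a few lines. This is purely illustrative: the class and method names are invented, not the VIO or Nova quota API.

```python
# Illustrative sketch (not the VIO API) of per-tenant capacity reservation:
# the admin carves out a guaranteed slice of compute/storage per tenant,
# and a request is rejected outright if any resource would exceed the slice.

class TenantQuota:
    def __init__(self, vcpus, ram_gb, storage_gb):
        self.limits = {"vcpus": vcpus, "ram_gb": ram_gb, "storage_gb": storage_gb}
        self.used = {"vcpus": 0, "ram_gb": 0, "storage_gb": 0}

    def reserve(self, vcpus=0, ram_gb=0, storage_gb=0):
        request = {"vcpus": vcpus, "ram_gb": ram_gb, "storage_gb": storage_gb}
        # All-or-nothing: if any single resource would overflow, reserve nothing.
        for key, amount in request.items():
            if self.used[key] + amount > self.limits[key]:
                return False
        for key, amount in request.items():
            self.used[key] += amount
        return True

tenant = TenantQuota(vcpus=16, ram_gb=64, storage_gb=500)
print(tenant.reserve(vcpus=8, ram_gb=32, storage_gb=200))   # True
print(tenant.reserve(vcpus=10, ram_gb=16, storage_gb=100))  # False: vCPUs exceeded
```

The all-or-nothing check mirrors how a scheduler has to behave: a VM that fits on CPU but not on RAM still cannot be placed.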
And if you're still here throughout the day, we'd love to have you come down to the booth and we can show you the whole process. Continuing on reducing the overall footprint — that full HA architecture we reduced by 50% — we also offer what we call compact mode. For organizations that have relaxed HA requirements for the framework itself, or that want to offer OpenStack for remote offices, branches, or smaller organizations, you can deploy the full OpenStack framework on two VMs, on one host. It really reduces the overall footprint — still fully featured, still providing the performance you need — but it lets you quickly get things up and running and minimizes the infrastructure required to run the OpenStack framework. Building on the VM template importing, to take advantage of existing work: we have a lot of customers with existing workloads running in vSphere. They might be database servers, test environments, existing projects, et cetera. What they wanted was to take those VMs, bring them in, and start managing them with the OpenStack APIs — all of the day-two operations: power on, power off, resize, those types of things. So you can now import those running workloads into OpenStack, and they come up as Nova instances inside of OpenStack. Again, rather than starting from scratch with an empty cloud and OpenStack sitting on top of it, you can take advantage of the work you've already done; any development projects already ongoing, you can easily bring under OpenStack's control.
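Conceptually, the VM import described above is a mapping exercise: existing vSphere inventory records become Nova-managed instance records without redeploying anything. A hypothetical sketch of that mapping — all field names here are invented for illustration, not the actual VIO import format:

```python
# Hypothetical sketch of what a VM import conceptually does: existing
# vSphere VMs become Nova-managed instances in place. Field names are
# illustrative, not the real VIO import schema.

def import_vsphere_vms(vsphere_vms, project_id):
    """Map vSphere VM inventory records onto Nova-style instance records."""
    instances = []
    for vm in vsphere_vms:
        instances.append({
            "name": vm["name"],
            "project_id": project_id,
            "status": "ACTIVE" if vm["powered_on"] else "SHUTOFF",
            "vcpus": vm["num_cpu"],
            "ram_mb": vm["memory_mb"],
            "managed_by": "openstack",  # day-two ops now go through the Nova API
        })
    return instances

inventory = [
    {"name": "db-01", "powered_on": True, "num_cpu": 4, "memory_mb": 8192},
    {"name": "test-07", "powered_on": False, "num_cpu": 2, "memory_mb": 4096},
]
imported = import_vsphere_vms(inventory, project_id="dev-project")
print(imported[0]["status"])  # ACTIVE
```

The key point the mapping captures: power state, sizing, and ownership carry over, so the workload keeps running while its control plane changes.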
Our vRealize Automation suite provides a policy-driven self-service catalog — automated, governed capabilities for infrastructure and application provisioning. We have a lot of customers with both environments who want the two to work together. One use case is automatically provisioning an OpenStack tenant through the self-service catalog: a developer clicks a catalog button for a project, and it automatically goes to the back end and sets up that tenant with the quotas, the capacity requirements, the project name, et cetera — all automatically, so you don't have to do it manually through Horizon or what have you. Another is bringing Heat templates into the service catalog: if you're moving workloads from development into test and production, those testing organizations can take that Heat template, click a button, and deploy the full environment. And then of course, we're continuing to build on NFV features — I'll talk about NFV in a little bit — building out capabilities like enhanced multi-tenancy for network function virtualization. Just to touch on it a little, as I mentioned earlier, VMware is very active in the community, and has been for quite some time — a Gold member of the Foundation since 2013. We have over 23 developers inside of VMware solely focused on OpenStack. To date, we have almost 5,800 commits and over 1.4 million lines of code written and submitted into the core upstream OpenStack releases, and over 26,000 patches submitted and reviewed. So we are very active in the community. And our NSBU, the Networking and Security Business Unit — they're the former Nicira team, the founders of the Neutron project.
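The catalog-driven tenant flow above — click a button, get a project with quotas already applied — can be sketched as one idempotence-checked step. The function and field names are illustrative, not the vRealize Automation or Keystone API:

```python
# Rough sketch of the one-click tenant provisioning flow described above:
# a catalog request creates an OpenStack project with its quotas in a
# single step. Names are illustrative, not the vRA/Keystone API.

def provision_tenant(name, quotas, existing_projects):
    """Create a project record with quotas; refuse duplicate names."""
    if name in existing_projects:
        raise ValueError(f"project {name!r} already exists")
    project = {"name": name, "enabled": True, "quotas": dict(quotas)}
    existing_projects[name] = project
    return project

projects = {}
dev = provision_tenant(
    "mobile-app-dev",
    quotas={"instances": 20, "cores": 40, "ram_mb": 81920, "floating_ips": 5},
    existing_projects=projects,
)
print(dev["quotas"]["cores"])  # 40
```

The point of the single entry point is governance: every tenant comes into existence with its limits attached, instead of someone setting quotas by hand in Horizon afterward.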
So just to reiterate, VMware is very committed to OpenStack. And obviously, if anybody saw the keynote yesterday, we were part of the 16-vendor panel proving out OpenStack's interoperability across all of these different distributions and infrastructure technologies. I can't state enough how important OpenStack is for us at VMware. So what's on the horizon? Again, taking a look at that April user survey: containers are obviously a huge area of interest for a lot of organizations, especially from a development perspective — being able to package up a full environment, easily move it around, and reduce the complexity developers have to deal with when moving code from system to system. The other key thing is software-defined networking and network function virtualization; I'll talk a little bit about that as well. So first we'll touch on container orchestration within VMware Integrated OpenStack. We have a lot of work going on currently around how we support running containers on an OpenStack framework on top of VMware infrastructure. One of the approaches we're taking is really similar to the approach we took for OpenStack. All of those key challenges I talked about earlier exist from a container perspective too. Once you start to move from containers on a laptop in development into test and production, there are a lot of challenges. There are a lot of tools out there to help from an orchestration perspective — a lot of projects: Magnum, Kubernetes, Mesosphere, et cetera. And there are a number of products out there to solve the visibility problem: how many do I have, affinity and anti-affinity rules, all of those types of things.
But we also take the approach that, for those types of tools, we want to provide the same operational value we brought with VIO: being able to easily deploy them, configure them, upgrade and patch them — all of the operations around the tool sets you use for driving the cloud framework. Now, from a telco or NFV perspective: there's a growing shift in telco organizations and mobile operators. I don't know how many folks here are familiar with what happens when your plane lands and your phone tries to connect to a network. Do you know what goes on in the back end? It's a pretty complex process. You're connecting up — you've got your packet gateways, your multimedia engines, your authentication services to validate who you are and what services you have access to. It all has to go back through and configure the gateways and the routers to make sure you have the right services available to you, at the right rates, when you connect. So there's a lot of back-end work. Each one of these pieces — the GGSN, the packet gateways, the MMEs, all of these engines — has traditionally been a purpose-built box. Big, fixed, large boxes that fill up a data center. So what happens when demand grows dramatically? Say it's the FIFA World Cup, and there are 50 million people in an area where there are normally five million. How do these organizations scale without having to put in more boxes to handle it? That's really where network function virtualization comes into play. Again, these boxes are very large and very fixed, and it's not exactly easy to bring in new ones.
So what does the virtualized landscape look like? It's basically taking those boxes, those network functions, and virtualizing them — what we call network function virtualization. Those functions become images that run on a virtual infrastructure, just like any other application. That makes it a lot easier to add services to the environment to support demand, and to autoscale up and down: new services can be delivered quickly, and you can scale them easily. But traditionally, the telco engineer is familiar with the big boxes, not necessarily with the virtualization layer. So there's a lot of collaboration now between the traditional telco folks and the virtual infrastructure folks to bring those two worlds together and provide that value. Now, that brings some challenges — self-service, for one. If I'm looking at an immediate increase in demand, I want to be able to programmatically scale up or add new services to the environment. And OpenStack is really a key enabler in network function virtualization. We see a lot of telcos looking at OpenStack for just that reason: it's a common API through which I can drive all of these different gateways. Typically those boxes come from many different vendors, but with that one API you can programmatically execute as demand increases — spin up new infrastructure, spin up new systems, tear them down, build the networks up. Networking between all of these systems when they come up is critical, both for security and for making sure everything gets routed. If you think about the complexity of that, having it all simplified by laying OpenStack on top of that virtualized infrastructure really helps them move forward as a business. So we look at VMware, right?
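The threshold-driven scaling idea above — watch demand, decide how many VNF instances to run, and drive the result through the OpenStack APIs — reduces to a small sizing function. This is a toy model under assumed numbers (sessions per instance, pool bounds), not any real operator's policy:

```python
# Toy sketch of demand-driven VNF autoscaling: size the instance pool to
# the current session load, within fixed bounds. In practice the result
# would be pushed through the OpenStack APIs (Heat/Nova); the numbers
# here are assumptions for illustration.

def desired_instances(sessions, sessions_per_instance,
                      min_instances=2, max_instances=50):
    """Return how many VNF instances the pool should run right now."""
    needed = -(-sessions // sessions_per_instance)  # ceiling division
    return max(min_instances, min(needed, max_instances))

# Normal load vs. a World-Cup-style spike:
print(desired_instances(sessions=9_000, sessions_per_instance=5_000))    # 2
print(desired_instances(sessions=240_000, sessions_per_instance=5_000))  # 48
```

The floor keeps redundancy during quiet hours; the ceiling protects the shared infrastructure from a runaway trigger — both are the "thresholds and triggers" the monitoring layer watches.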
VMware provides that virtual infrastructure management layer — OpenStack on top of the compute, storage, and networking that make up the network function virtualization infrastructure. And there are some key benefits from an NFV perspective when we look at OpenStack. It helps folks doing NFV accelerate and deploy new services faster. It automates the management of that infrastructure: they can't afford to have somebody sitting in a room waiting to push a button when someone says, hey, we need more services, more gateways, more bandwidth, more pipes. Clicking a button isn't fast enough. They need to monitor, look for thresholds and triggers, and automatically drive the programmatic APIs to spin up new infrastructure as needed. And it's an open, carrier-grade platform. VMware spends a lot of time and effort building out the robustness, scale, and performance of the underlying virtual infrastructure — compute, networking with NSX, and storage with our storage partners. A very robust, solid infrastructure is required to deliver that type of environment and those types of services. And it's proven in production: we have a number of enterprise customers running in production, and customers in the telco space, like Joyn, running their production business on OpenStack on VMware infrastructure. So we have Jason here. My colleague Jason is on our telco NFV team, and he's going to go through a quick demo setup for you. You wanna use these slides, or — I put your slides in the back, HD monitor right here. Mic's on? Okay, one second. Anybody in the audience actually working with any kind of telco NFV deployment? Nobody?
Majority of the people are deploying for IT, DevOps. So what I thought I would do is start by going through a couple of slides and then jump into the demo. Again, my name is Jason Soviak. I'm the lead solution architect for the NFV business unit. As Pete mentioned, I want to talk to you a little bit about telco NFV — give you a brief overview of what we're actually doing in this space, a little more in depth — and then I'm going to go through a demo. We've got a couple of new additions that are very exciting as far as telco NFV goes. I'm going to talk a lot about what the network can do — how we can gain a lot more insight, to help you deliver services quicker and resolve problems a lot faster than you typically could with just traditional networking approaches. How many people here are more on the networking side, and how many more on the compute side? Who's on the networking side? So we've got a few, and most everybody else is going to be on the software side. I've been in networking for 18 years, worked for some large carriers and large vendors, and it's always been a challenge to get insight into the network itself. It's always been difficult. That's part of the reason SDN came out — the network itself was so fragile. And now we have the ability to program and push networks out, push whole workloads out. If you look at what we're doing in the telco space with NFV, we start off by being ETSI compliant — we follow the ETSI standards as far as architecture goes. This is a high-level block diagram of that architecture. vCloud NFV sits on the bottom, and a variety of different components are part of it — compute, network, and storage make up that piece.
Then on the VIM layer, that's where we use VIO, and we also have vCloud Director. In our management stack, we have vRealize Operations; Log Insight for different types of log messages and ingestion of a lot of unstructured data; and then the new product we're going to talk about a lot today, vRealize Network Insight. At the top, of course, we have the VNF workloads, and the OSS/BSS at the top as well. But for the majority, where we're playing is in that bottom portion. So what are we really trying to accomplish inside of telco NFV? Basically, we're focused on four major things. Number one: for telcos, earning revenue is at the top of the list — for any company, telco or not. So we want to help them accelerate deployment of those services. We want to create a robust management and operations system that makes them ready for day two: it's great that you can deploy a whole new infrastructure and have everything ready to go, but you also have to be able to manage it. So we're heavily focused on that piece. Carrier-grade platform: again, we're standards compliant with respect to ETSI, and we have several deployments out there proven in production. So from a high-level perspective, the way I like to think about it is: we want to help you deploy your infrastructure very quickly to support your services; then help you deploy new services, whether that's an IMS service, an EPC-type service, whatever that workload might be; and then help you manage that infrastructure a lot better than you could without the right tools and capabilities. So before I get into the demo, I want to give you an idea of what you're actually looking at.
This is a high-level picture of our telco vCloud NFV infrastructure, with the VNFs on top. In this particular example, what we're showing is a telco IMS workload deployment. At the very top of the diagram, you see the different VMs that make up the VNF: a DNS server, a DCM for license management, a SAS server, an HSS cache server, Perimeta — which is the SBC — plus a P-CSCF, and then the I-CSCF. On the infrastructure side, from a VMware perspective, we have everything you need to support that, starting with the distributed logical router coupled with the Edge Services Gateway, if you need that for scale and fan-out. We have NSX — here's the NSX Controller — to help us on the SDN side. And inside the management cluster, we have a combination of products: VIO is part of this, vRealize Operations for operations management, Network Insight, Log Insight, et cetera. All of these pieces make up the management block. What I'm going to go through in this demo uses this as a baseline topology, and I like to think of it in two planes. One is the infrastructure plane — it could be two routers or 200,000 routers; it's the transport to get from point A to point B. Then there's the services plane — what everybody is either paying for or trying to get access to, whether you're a DB administrator, DevOps, or a telco. So I break things apart into those two planes. As I mentioned, with this particular deployment, we had VIO and OpenStack installed, and then we came back and built a Heat template specifically for this telco workload.
So once we had the infrastructure deployed, we came back and said, let's enable services on top so you can start generating revenue. We built a Heat template working with Metaswitch, and that allowed us to deploy this entire package in one shot. In this particular topology — the same components you saw on the previous diagram — I just wanted to give you a full shot of what it looked like for us to deploy. Deploying this workload came down to about three LAN segments and about seven VMs that make up the actual workload. So I'm going to get into the demo in just a minute; I want to give you a precursor to what you're about to see, and that's 360-degree service visibility. I mentioned we're heavily focused on the management and operations piece, because we want to help customers deploy quickly, manage quickly, and, any time particular issues come up, help them remediate. What you see right here is a custom dashboard built to show service-to-infrastructure component relationships. At the very high level, you have a voice service running, and underneath it the infrastructure and components that enable it. If anything happens that would render that voice service useless — say the network goes down — it raises an error. That's basically what I'm showing here at a high level: there are three badges — the health badge, the risk badge, and the efficiency badge. Right now the health badge is red. The reason is that the transport at the top left is down. That's what it's saying here: the SP transport is down, and therefore the service provider's voice service is down.
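The single-shot Heat deployment described above bundles the networks and VMs into one template. A trimmed-down, hypothetical sketch of that template's shape, modeled as a Python dict (the resource names are invented for illustration; `OS::Neutron::Net` and `OS::Nova::Server` are real Heat resource types):

```python
# Hypothetical, trimmed-down shape of a Heat-style template for the IMS
# workload described above: three LAN segments and seven VMs deployed in
# one shot. Resource names are invented; the OS::* types are real Heat types.

ims_template = {
    "heat_template_version": "2015-04-30",
    "resources": {
        # Three LAN segments
        "mgmt_net":   {"type": "OS::Neutron::Net"},
        "signal_net": {"type": "OS::Neutron::Net"},
        "media_net":  {"type": "OS::Neutron::Net"},
        # Seven VMs making up the VNF
        **{name: {"type": "OS::Nova::Server"}
           for name in ["dns", "dcm", "sas", "hss_cache",
                        "perimeta_sbc", "pcscf", "icscf"]},
    },
}

def count_resources(template, resource_type):
    """Count resources of a given type in a Heat-style template dict."""
    return sum(1 for r in template["resources"].values()
               if r["type"] == resource_type)

print(count_resources(ims_template, "OS::Nova::Server"))  # 7
print(count_resources(ims_template, "OS::Neutron::Net"))  # 3
```

The value of the template approach is exactly this: the whole seven-VM, three-network topology is one declarative artifact, so "deploy the package" is one stack-create call instead of ten manual steps.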
So we have a correlation capability between the infrastructure and the application at the top. The second piece is probably gonna be for all the networking guys, and even if you're an application guy, when your application goes down you're trying to troubleshoot. Everybody's gotta troubleshoot: how do we get from point A to point B? Everybody has to take a look at this. So what are our traditional mechanisms to do that? We get inside, we do pings, we do traceroutes, we do record routes, we do path MTU discovery; we have a bunch of different techniques. Well, what if I could give you one platform that would be able to give you visibility into the overlay and into the underlay? As networks themselves are starting to scale out well beyond VLANs, and VXLAN has taken over as the predominant choice with 16 million different segments, how do you actually get visibility? Is virtual networking actually easier, or is it harder to troubleshoot? Can you find the links? Can you find the path? That's what Network Insight is gonna give you the ability to do. What you're seeing right here before the demo, just to get your mind oriented: these three green boxes are all ESXi hosts. And what I'm looking at is connectivity between two VMs. It doesn't matter where the VMs are; as long as I have access to ping those VMs, I have the ability to do this path trace. What you're seeing here is a new level of visibility: there are two VXLAN segments here, there's also one VLAN segment going in between three different hosts, with multiple firewalls, an edge router, a gateway router, and on the outside a physical switch. So not only can you tell what the path is from point A to point B, I can look at it not just at the logical layer, but at the physical layer.
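The "16 million segments" figure comes straight from the ID field widths in the respective specs: 802.1Q VLAN IDs are 12 bits, while VXLAN Network Identifiers (VNIs, RFC 7348) are 24 bits. A quick check of the arithmetic:

```python
# Segment capacity from the ID field widths:
# 802.1Q VLAN ID = 12 bits, VXLAN VNI (RFC 7348) = 24 bits.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS       # 4,096 possible VLANs
vxlan_segments = 2 ** VXLAN_VNI_BITS    # 16,777,216 possible VXLAN segments

print(f"VLANs:  {vlan_segments:,}")    # prints "VLANs:  4,096"
print(f"VXLANs: {vxlan_segments:,}")   # prints "VXLANs: 16,777,216"
```

So VXLAN gives you 4,096 times the segment space of VLANs, which is exactly why visibility tooling that understands the overlay matters at that scale.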
And at every hop along the way in this path, you can see every single port that traffic goes in and out of, on whatever device it is. And for me, my background is heavy in networking, 17, 18 years; to get this level of visibility is just astonishing. And this is only one thing that it can do; we'll look at some examples. This is just a second screenshot before we get into the demo. This particular picture right here is showing you the physical topology of whatever is connected to a particular segment. In this case, what I did was a topology view. I said, okay, we've got a bunch of layer 3 topologies, we've got VXLAN segments, we've got VLAN segments; just give me a topology. So I pulled up this particular one, and what it's showing you is really interesting. Hopefully you'll be able to see it: on the outside, the different colored boxes are where the ESXi hosts are. So right now we've got three different ESXi hosts. The next level underneath is all the VMs; whatever VMs are connected to the segment, you see them right away. The next layer underneath that is the firewall. So if you have NSX or you have a distributed firewall deployed, that's what the circle going all the way around the segment is, because it's protecting the whole segment. The next level is gonna be the VXLAN segment or the actual distributed virtual switch. And then after that you've got physical interfaces, and in the middle you've got whatever top-of-rack switches you're using in the data center. So this is actually giving you the full connectivity picture. All you have to do is log in, click on a particular device, and it's gonna give you all the information. You don't have to worry about trying to troubleshoot it, record a path, look at the routes; you're able to see all of that right here. And there's a host of other information that you can see.
On the right-hand side, I just took this screenshot out of my lab. It's showing you some of the events that are going on, some of the layer 2 metrics. You can see one-way traffic, two-way traffic. You can see the MAC addresses, you can see the routes, you can see the application path, you can see a five-tuple flow through the segment. Lots of information you can see. So again, just to highlight this: we're heavily focused on the operations side, especially with this new addition. You've already got great visibility with vSphere. We can already get into the virtual networking piece; we can trace, we can see a lot of things that are happening on the compute and the storage side. With this level of visibility on the networking side, there's no other tool that I've seen yet that can go down to that kind of level for correlation. So let me pull up the demo here one second. Any questions on that? Any questions on the networking side, the application side, what we're doing with VIO? I built a couple of different videos, in a couple of different flavors, in case people wanted to ask me some questions. Okay. So the first thing, like I said, I was trying to set the context earlier: think of this particular topology, think of this as a regular telco, someone that's actually trying to deploy a service, or it doesn't matter if it's an app or DB type of service that somebody's trying to deploy. What I've got right here is the underlying infrastructure: we've got routing, switching, compute, storage, et cetera. But on top, we have the application. The application is what telcos are gonna actually make the money from as they start to deploy these different services.
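The flow records mentioned above are keyed by the classic five tuple: source IP, destination IP, protocol, source port, and destination port, and the tool rolls packets up into per-flow statistics. As a minimal sketch of that idea (the record layout and function name here are illustrative, not any VMware API):

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Sum bytes per five-tuple flow from an iterable of packet records."""
    flows = defaultdict(int)
    for pkt in packets:
        # The five tuple uniquely identifies a flow through the segment.
        key = (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
               pkt["src_port"], pkt["dst_port"])
        flows[key] += pkt["bytes"]
    return dict(flows)

# Two RTP-style packets belonging to the same flow (illustrative data).
packets = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "proto": "UDP",
     "src_port": 16384, "dst_port": 5060, "bytes": 200},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "proto": "UDP",
     "src_port": 16384, "dst_port": 5060, "bytes": 180},
]

flows = aggregate_flows(packets)
print(flows)  # one flow keyed by its five tuple, 380 bytes total
```

Aggregating this way is what lets a dashboard show "a five-tuple flow through the segment" rather than a raw packet stream.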
Now from an end-user perspective, both you and I, we pick up the phone, we make a call, the call doesn't go through, we try again, it doesn't work, we pick up another phone, we try to call again. But there's always this separation, and I drew this picture like this on purpose: there's always this separation between the application level and the infrastructure. Well, what if there was a way for us to actually bring some pieces together, have deeper visibility at that application level and then on the infrastructure side all the way through? So that's what I'm gonna go through and let you take a look at here. So right now inside the lab, like I mentioned, we have this service-level dashboard. This is showing us the critical aspects of the service, whether it's up or it's down. Right now the service is down; the health badge at the top left is red. That means that something in the transport or one of the service components went down, and that's the error message that we see at the top left. It says the transport for service provider number one is down, but we don't have any idea why. Even the people looking at the application level wouldn't know what's going on with the application either. They just know that, hey, the application is down, so where do I start troubleshooting? From the bottom up? The top down? And so that's what I'm looking at right here, to give you an alternative way. So as we go through this, in this particular picture, I'm gonna start looking from the network level down, start to troubleshoot, and let you take a look at what this looks like.
So right now, what's happening in this demo is I have two different virtual machines, and basically what I'm doing is I'm simulating a mobile subscriber who's coming in through the access network and trying to get to the voice components to make a call. So on the left-hand side, what you see is a mobile subscriber, and he's coming in through a VLAN segment and hitting the edge gateway. That's what's happening right here. So this user at the top left is gonna come across this VLAN segment, and I have full visibility into this whole thing as it's going on. He's gonna go across this VLAN segment, go up here to the Edge services gateway, then connect to a DLR, and then come back around and connect over to a P-CSCF, right? And so that's what you're actually seeing with this trace. With this level of visibility in Network Insight, you can actually see I'm going through each one of the hops, each one of the different boxes; the light blue all the way around represents a port. So I can actually see which NICs are connected in the entire path, all the way through the infrastructure, from the point where the customer comes in to the point where he goes out. So after we go through the trace here, I'm showing you what it looks like before I actually break it. And on the right-hand side, this is some of the information that I was talking about before; there's a whole list of information that you can actually see. This shows you some of the events that are happening in the environment. So you can see in real time, if something was happening in your environment, you could see right away: configuration changes, SpoofGuard issues in the network, security issues, heartbeat issues going on with your infrastructure. You can see statistics.
I've got a shot here of the actual statistics, one-way and two-way statistics, and at the bottom left, I'm starting to show what are basically micro-segment flows. These are five-tuple information, source and destination with port numbers, that you can actually see through the infrastructure. So I'm gonna log in to vRealize Operations, which, as I mentioned, is our key platform for operations management. What you're seeing here is what we already know is going on: at the very top level, we had the service that's red. The components underneath it are also red, because I failed some links within the infrastructure. So vRealize Operations is showing you which routers are there, which virtual machines are there, and it's showing you a topology on the right-hand side over here. From this one topology, we can go straight into syslog servers or go straight into configuring other elements that are part of the environment. On the right-hand side, to prove that everything was working correctly, these are actual flows for voice going through the network. So you can see I've got RTP streams, calls going between entities. And as you can see in the second graph, the traffic is failing; it's dropping off on the red line. And so that's why everything is red. So our goal is really just to figure it out, because we would have just gotten a ticket saying that the voice service was down. So one of the things that we do, straight from vRealize Operations, is log into Log Insight to see exactly what was going on. And with this log, because we failed it ourselves, this was actually just a configuration change, and Log Insight catches that it was a configuration error here. And we're also gonna see the same thing whenever we go into the infrastructure.
So what we did is we went back, and while we knew that something had failed, we didn't know the whole extent of what it was. But this gives you an idea of what I was talking about before: the link has failed between entities, and now the VM can't find its way back. So we're going through and troubleshooting to figure it out; the dotted line is just saying the VM was trying to get out through its first-hop gateway, couldn't get out, and doesn't know exactly where it's going to. So right out of the gate, you actually know where your problem is within the infrastructure, because it's down. Okay, I go into the next level of detail and look at the topology itself with the links. So you get a very detailed view of the topology; you can see exactly where the links are within this infrastructure. I'm giving you an example of what the VXLAN segment would look like, so you can see the VXLAN segment and all the details that are there. And then vRealize Network Insight identifies that the problem was an interface down, and that's what this bottom part down here is, where the edge is showing that it's down. So you get to see all of the information from a single point: the network information, the five-tuple information all the way through the infrastructure. So then I just go back and enable the interface, and the interface comes back up on the DLR. I go back and trace through the entire network to prove that the network is actually up and working. And then we monitor traffic through vRealize Operations; we can monitor all the statistics associated with the transport links, Ethernet, et cetera. But this will be the end of the demo here.
But if you have any questions about vRealize Operations, what we're doing with VIO, how we're integrating it to deploy telco workloads, how we're monitoring the environment and trying to reduce the amount of time that it actually takes to troubleshoot, just come and talk to me.