Hello everyone, this is Abdullah Said, and today we will cover Cisco Virtualized Infrastructure Manager, prepared for the 5G era with Cisco 5G support. To begin, we would like to start with the safe harbor statement. Then, Cisco UltraCloud, the 5G architecture. This is our message: an evolutionary jump delivering advanced automation, higher resiliency, greater security and deployment simplicity to service provider infrastructure.

The key cloud-native benefits for 5G are the lightweight footprint, increased service velocity, state separation, the service mesh, increased security, improved performance and hardware efficiency, and scalability and availability. All of these benefits translate into easier upgrades, faster time to market, faster security response and true scalability.

Cisco UltraCloud consists of four pillars: microservices, containers, DevOps and continuous delivery. Microservices, as you know, are decoupled software services, individually deployed and lifecycle managed. Kubernetes is the orchestrator used for automation, scheduling and scaling. On top of that, we need continuous delivery: automated continuous integration, validation and delivery for containers. DevOps automates and manages rapid deployment, isolates production changes, and deploys once validated.

If we move to the Cisco cloud-native architecture, we have the Subscriber Microservices Infrastructure, so-called SMI. This runs over the Cisco Virtualized Infrastructure Manager, CVIM. It provides a vertical stack designed for high performance and low latency. In this vertical architecture, the first layer of the stack is the infrastructure, the OpenStack layer, which is created and deployed by CVIM. The next layer is the cloud infrastructure itself: Docker, Kubernetes, Istio, and Helm as the chart manager. On top of that, we have the common application infrastructure: Prometheus, Grafana, and Jaeger for open tracing.
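To make the layering concrete, here is a minimal sketch of the vertical stack just described, from the CVIM-deployed OpenStack at the bottom to the 5G network functions on top. This is purely illustrative; the data layout is our own, not a Cisco format, and the component lists are taken from the talk.

```python
from typing import Optional

# Bottom-to-top layers of the vertical stack described in the talk.
STACK = [
    ("infrastructure", ["OpenStack (deployed by CVIM)"]),
    ("cloud infrastructure", ["Docker", "Kubernetes", "Istio", "Helm"]),
    ("common application infrastructure", ["Prometheus", "Grafana", "Jaeger"]),
    ("5G network functions", ["SMF", "PCF", "NRF"]),
]

def layer_below(component: str) -> Optional[str]:
    """Return the name of the layer directly beneath the one holding `component`."""
    for i, (name, components) in enumerate(STACK):
        if component in components:
            return STACK[i - 1][0] if i > 0 else None
    raise KeyError(component)
```

Walking this structure upward reproduces the dependency order the talk describes: Kubernetes sits on the CVIM-built infrastructure, and the 5G NFs sit on the common application infrastructure.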
Then, above that, we have the layer that contains the 5G network functions, like SMF, PCF, NRF and so on. All of that is orchestrated, managed and deployed by the Cisco orchestration layer: NSO, the Network Services Orchestrator, and ESC, the Elastic Services Controller, which make sure that all of these layers, and the full lifecycle of the hardware and software, are orchestrated and managed properly.

Here we compare against what came before cloud native: monolithic software, where all the state and the application live in a single process. In the microservices and container platform, however, we can decouple the software layers from the services and from the state itself. So we can have stateless applications, then the service applications, then the common layer, and on top of that the front-end layers, which provide the interfaces for logging, tracing and the northbound interface.

The Cisco Subscriber Microservices Infrastructure, also called SMI, is made of five pillars. The first one is the SMI cluster manager, which is responsible for managing pod deployments in the cluster, configuration, health monitoring, resource scheduling and lifecycle management. Then we have the Ops Center for all network functions, which provides the common API for deployment, configuration and management to enable automation. The third one is the common execution environment, which is shared by all applications for non-application functions; it is used for data storage, telemetry and alarming. Then we have the Cisco service mesh, the intelligent service mesh that connects microservices and containers for the applications and steers traffic between the containers. And finally the 5G database, the common database layer, built for high performance and low latency, especially for applications like 5G and cable.

And now we move to J to cover the CVIM part. Thank you, Abdullah. So with that, let's take a look at CVIM.
So Cisco VIM is Cisco's telco platform, OpenStack based, and this is the platform with which we hope to enable you and telco operators to achieve what we think you are trying to achieve. So what do we think that is? As with any company, it's all about maximizing your revenue while keeping costs down and minimizing risk. And when we talk about something as complex as a telco cloud platform, it's almost obvious these days that the only way to achieve this properly is through extensive automation. That's exactly what Cisco VIM will do for you.

When we take a look at what it takes to build and operate a telco cloud, going from a bunch of servers all the way to a fully up-and-running cloud, there's a whole list of things that need to happen. Now, this slide is intentionally quite busy, but that is just to show everything Cisco VIM will do for you. All of these things need to happen, but most of them can be, and will be, automated in the context of Cisco VIM. With Cisco VIM, we only expose to the end user, the operator, the admins, those activities that are actually meaningful. So in the end, the amount of time you as an operator or administrator need to spend dealing with the telco cloud itself is as little as possible, and you can spend your time where it matters: with the workloads, the mobility workloads, whatever VNFs and applications you want to run. Because in the end, that's where the money is; those applications and VNFs are what drive your business. So if we can enable you to maximize your resources and use them where it matters, we are setting you up for success. And the way we intend to do that is by automating most, if not all, of these tasks.
To take an example: if we want to apply a patch on a cloud, a telco cloud, which can be 10, 20, 100 nodes, everything is automated. You just say: look, this is the destination version I want to go to. Then the installer, the orchestration engine and the lifecycle management tools will automatically download all the necessary software comprised in this particular patch. They will push it toward the necessary nodes. They will start and stop the processes in an order that makes sense, so that we don't have any downtime where it's not needed. And everything will be configured and set up correctly, then validated and tested automatically. So at the end, when the process finishes and reports back to the administrator, saying: look, the patch was applied successfully, you actually have the confidence and the trust that this has happened as it was supposed to happen.

When we look at the different deployment models with CVIM, you have quite a lot of flexibility in choosing how we are going to deploy this, what the form factor is going to be, and how many physical resources we are going to dedicate to running our instance of the telco cloud. Going from right to left, on the right we have the traditional full pod, where you have a dedicated set of servers for controllers, three or more dedicated servers for storage, and everything else for compute, along with the management server, which these days can be virtualized as well, so it doesn't have to be a physical server. This is usually what we find in core data centers, where you have, first of all, a lot of workloads to run, and at the same time the physical space to run all of these servers. If you go more toward the middle, you can say: look, there are use cases where it actually makes sense to have more, smaller pods instead of fewer, bigger pods.
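The patching flow just described (push software, restart nodes one at a time so capacity never drops more than necessary, then validate) can be sketched as a small simulation. The node representation and the per-node three-step sequence are our own simplification for illustration, not CVIM's actual internals.

```python
def rolling_patch(nodes, target_version, validate):
    """Patch `nodes` (dicts with 'name'/'version'/'up') one at a time.

    At any moment, at most one node is down, which is what keeps the
    cloud serviceable throughout the upgrade.
    """
    patched = []
    for node in nodes:
        node["up"] = False                # stop services on this node only
        node["version"] = target_version  # push and install the patch payload
        node["up"] = True                 # restart services on the node
        if not validate(node):            # automated post-patch validation
            raise RuntimeError(f"validation failed on {node['name']}")
        patched.append(node["name"])
    return patched
```

A run over three nodes patches them strictly in sequence, and a failing validation callback would stop the rollout at the offending node rather than continuing blindly.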
And these pods are typically closer to the edge. The edge can mean many different things; in this case it usually means closer to the source of your user traffic. With these micro pods we reduce the hardware footprint, so that you get more optimization in the sense of: given a certain number of physical servers, on how many of them can we actually run workloads? So with the micro pod, what we have done is collapse the control, storage and compute planes all onto the same servers. But we still take three of each, just to make sure that we keep the HA, the redundancy and the reliability that you have come to expect in telco networks, in service provider and communication provider networks, where everything needs to remain up no matter what.

Then, when we go to a somewhat more optimized version, probably even closer to the edge, and typically this is also where you will start to see virtual radio access network workloads, we get what we call the edge pod. An edge pod is basically an optimization of the micro pod in the sense that we have taken out the storage component. So with an edge pod we no longer run local storage; we depend on a centralized storage cluster for our edge compute nodes to gain access to their image repositories. And usually this is fine, especially in the context of virtualized network functions, which typically don't need persistent storage and only need storage when they are actually booting up an instance, so that we can grab the image. And then something that we are about to release is what we call the nano pod, which is basically a telco cloud inside a single server; that's a one-server form factor.
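A back-of-the-envelope model makes the trade-off between the four form factors concrete. The role counts follow the talk (full pod: dedicated controllers and three or more storage nodes with the rest as compute; micro and edge pods: converged nodes that can all host workloads; nano pod: one server); exact CVIM sizing rules may differ, so treat this as a sketch.

```python
def usable_compute(pod_type: str, total_servers: int) -> int:
    """Rough count of servers that can actually run workloads, per pod type."""
    if pod_type == "full":
        # 3 dedicated controllers + 3 dedicated storage nodes (minimums from the talk)
        return max(total_servers - 6, 0)
    if pod_type == "micro":
        # control/storage/compute collapsed onto converged nodes: all servers compute
        return total_servers
    if pod_type == "edge":
        # like micro, but storage is remote, so again every server computes
        return total_servers
    if pod_type == "nano":
        # single-server form factor
        return 1
    raise ValueError(f"unknown pod type: {pod_type}")
```

This is why the smaller form factors matter at the edge: a 3-server micro or edge pod spends zero servers on dedicated control or storage roles, while a full pod only pays off once the site is large enough to absorb the six dedicated nodes.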
Now, to deploy Cisco VIM: next to the different pod types, we also have a whole set of different ways of deploying the entire stack, because just having OpenStack is obviously not enough. As awesome as OpenStack is, we need a bunch of tooling in the ecosystem to make everything work, because it's not just the telco cloud platform that needs to be automated; we need to be able to automate everything around it. So in Model 1, basically everything is Cisco: the hardware, the software, the different VNFs, and the automation in the form of the VNF manager, ESC, and the NFV orchestrator, NSO. But this is not the only way you can use Cisco VIM. With Cisco VIM we have very close to an a-la-carte way of deploying your entire stack, where you can basically plug and play the different components. So in Model 2 you say: look, we still use most of the components, but we will also run third-party VNFs. And actually, if I look at the Cisco VIM deployments that I am aware of, and I am aware of most of them, in many if not all of them there are at least some VNFs which are not Cisco; in some of them, even most of the VNFs are non-Cisco at this point in time. Now, with Model 3 we are also going to be introducing some non-UCS servers. And then with Model 4, we are not just going to be taking third-party VNFs and third-party hardware, but we also introduce the capability to run more cloud-native, if you want to call them that, containerized network functions, which can be Cisco and non-Cisco CNFs. So with that, I hope to have given you a very short introduction to what Cisco VIM is. This is not the first time we have talked about Cisco VIM at OpenStack events, so I'm sure you will find other recordings, videos and sessions from previous events in case you have more interest and more appetite to learn about Cisco VIM. Thank you very much.

Hello everyone, this is Abdullah Said, and today we will cover how to utilize
the Cisco Virtualized Infrastructure Manager, so-called CVIM, to deploy the Cisco 5G standalone packet core. In our demo we will show how to take full advantage of the Cisco automated ecosystem to fully deploy a 5G standalone packet core on top of the OpenStack layer that was created by CVIM, including the creation of the virtual machines, the Kubernetes cluster, the 5G network functions, and all day-0 and day-1 configuration. Our demo starts from the OpenStack that was created by Cisco CVIM. We will also use a single dashboard, which is used just to trigger the configuration; NSO, the Network Services Orchestrator, which is used to orchestrate and send all the configuration; and Cisco ESC, the Elastic Services Controller, which is utilized to create and configure all the virtual machines. So the single dashboard sends a REST API call to Cisco NSO, then Cisco NSO sends NETCONF to Cisco ESC, and Cisco ESC sends REST API calls to OpenStack to start creating all the virtual machines: the SMI cluster manager, the Kubernetes masters, the Kubernetes workers, and the UPF as well. The next step is to create the 5G network functions; that is done by sending a REST API call to Cisco NSO to start the day-1 configuration. Cisco NSO then communicates with the SMI and with the UPF: it installs the day-1 configuration on the UPF, and installs all the 5G packet core network functions like AMF, SMF, PCF and NRF.

Let's go to our actual demo. This is the OpenStack infrastructure that was created by Cisco CVIM, and as you can see, we allocated one project for our demo, for our 5G packet core SA installation. This is our project, and from the hypervisor point of view we have 7 nodes allocated to it; this is where we will create all our virtual network functions. Right now we have only 2 instances: NSO, the Network Services Orchestrator, and ESC, the Cisco
Elastic Services Controller. The networks were already created in advance and allocated for this project; this is the networking that we will use for our 5G packet core. If you look here, this is the topology we have right now, and as you can see, only 2 network elements are created, which will be used for the creation of the VNFs. So this is what we have right now: NSO and ESC. If we go to the service manager dashboard, nothing is configured yet, so we don't have any devices except the ESC. If we go to the services, you can see there are zero deployments; nothing is installed yet except, of course, the ESC, which is the one that will be used for the installation. At the ESC level, in its dashboard, you can see there is only one tenant created, but still no deployments.

So we start the deployment from NSO, and as soon as NSO triggers ESC to create all the virtual machines in the cluster, we start to see our instances being created in OpenStack. Let's give it some time and do a refresh here. As you can see, some deployments start to be displayed at the ESC level: these are the masters. At the OpenStack level as well, if we refresh, you can see more instances coming up. In the ESC portal you can see more and more deployments starting; there will be multiple deployments, one created for each VM. So let's see how many instances we have: we have our workers, we have our etcds, we have the masters, and we have the SMI deployer, which is the cluster manager. And we see all the workers again; all of them are active, all of them are created. If we go to ESC, you can see all of them starting to become active, and you can see all our deployments there. If we go to the network topology, you can see that all the instances are now created and attached to the network, and you can see
all of the new ones; they are created now and should be active. As you can see, 3 devices are connected: the SMI cluster, which is the Kubernetes cluster, and the UPF running StarOS; and we already had the ESC from the beginning. You can see how many deployments and services we have here. You can also check the service manager to see what was deployed from the NSO side. First of all, you can check the SMI cluster, and you can see that we have a plan for the SMI. This is the plan, and this is the whole SMI cluster that was orchestrated and created by NSO: 85 components, including all the elements of the Kubernetes cluster, which will be the base on which we later install all our 5G network functions. The first component is the CEE, the common execution environment, which is used for KPIs, alarms and monitoring of all the 5G NFs. You can see as well that the NRF is already created, and we can check what was created from the NSO side: multiple components were created for the NRF, and the same for the PCF and the SMF. Let's check the PCF side: for the PCF, 7 components were created and are monitored by NSO, and the same for the SMF. So now the orchestrator, NSO, is looking after all of these components, and we can see all of them, which we created under one slice: 31 components in total.

As the final step, we can check that all the network functions were created by logging in to Grafana. Grafana is our dashboard, where we can see all the network functions and get their KPIs and graphs. As you can see, we can list all the pods created and all the Kubernetes elements; mainly you can see all the nodes here, and this is the master. We can also go back to check the other network functions, like the SMF. Here you can see some graphs and KPIs collected from the SMF itself, related to hardware resources and KPI requests. And again, here are some graphs collected from the AMF. So basically you can see all the network
functions: AMF, PCF, SMF. And that was all from my side. Thank you.
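The demo's orchestration flow (dashboard to NSO over REST, NSO to ESC over NETCONF, ESC to OpenStack over REST, with OpenStack then booting the VMs) can be summarized as a simple trace. The hop list and VM names follow the talk; the function and the message shapes are invented purely for illustration and are not an actual NSO or ESC API.

```python
# VM roles created in the demo, per the talk.
VMS = ["smi-cluster-manager", "k8s-master", "k8s-worker", "upf"]

def deploy_chain(vm_names):
    """Return the protocol hops of the demo flow and the instances
    OpenStack would create at the end of the chain."""
    hops = [
        ("dashboard", "NSO", "REST"),   # operator triggers the service
        ("NSO", "ESC", "NETCONF"),      # orchestrator drives the VNF manager
        ("ESC", "OpenStack", "REST"),   # VNF manager asks the VIM for VMs
    ]
    created = [f"instance:{name}" for name in vm_names]
    return hops, created
```

Reading the hops top to bottom reproduces the sequence shown in the demo: one trigger at the dashboard fans out, layer by layer, into the full set of OpenStack instances, after which the day-1 configuration installs the 5G network functions on the resulting cluster.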