Hello everyone, this is Abdullah Said, and today we will cover Cisco Virtualized Infrastructure Manager, prepared for the 5G era with Cisco 5G support. As a start, we would like to begin with the safe harbor statement. Then the Cisco UltraCloud 5G architecture. This is our message: an evolutionary jump delivering advanced automation, higher resiliency, greater security, and deployment simplicity to service provider infrastructure.

The key cloud-native benefits for 5G are a lightweight footprint, increased service velocity, state separation, a service mesh, increased security, improved performance and hardware efficiency, and scalability and availability. All of these benefits translate into easier upgrades, faster time to market, faster security response, and true scalability.

Cisco UltraCloud consists of four pillars: microservices, containers, DevOps, and continuous delivery. Microservices, as you know, are decoupled software services, individually deployed and lifecycle-managed. Kubernetes is the orchestrator used for automation, scheduling, and scaling. On top of that, we need continuous delivery: automated continuous integration, validation, and availability for containers. DevOps means automating and managing rapid deployment, isolating production changes, and deploying once validated.

If we move to the Cisco cloud-native architecture, we have the Subscriber Microservices Infrastructure, so called SMI. This runs over Cisco Virtualized Infrastructure Manager, CVIM. It provides a vertical stack designed for high-performance, low-latency 5G. In this vertical architecture, the first layer is the infrastructure, OpenStack, which is created and deployed by CVIM. The next layer is the cloud infrastructure itself, which is Docker, Kubernetes, Istio, and Helm as the chart manager. On top of that, we have the common application infrastructure: Prometheus, Grafana, and Jaeger for open tracing.
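As a concrete taste of that common application infrastructure, here is a minimal sketch, not from the talk itself, of how an operator might query the Prometheus layer for pod health. The host name and the metric selector are assumptions for illustration, but `/api/v1/query` is Prometheus's standard instant-query endpoint.

```python
# Illustrative sketch: building a Prometheus instant-query URL.
# The base URL and metric selector below are hypothetical examples.
from urllib.parse import urlencode

def build_prometheus_query_url(base_url: str, promql: str) -> str:
    """Build an instant-query URL for the Prometheus HTTP API (/api/v1/query)."""
    return f"{base_url}/api/v1/query?{urlencode({'query': promql})}"

# Hypothetical endpoint; in a real cluster this would be the Prometheus service.
url = build_prometheus_query_url("http://prometheus.example:9090",
                                 'up{job="kubelet"}')
```

Sending an HTTP GET to a URL built this way returns a JSON body whose `data.result` lists each matching time series, which is exactly what dashboards like Grafana do under the hood.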
Then after that, we have the layer which contains the 5G network functions like SMF, PCF, NRF, and so on. All of that is orchestrated, managed, and deployed by the Cisco orchestration layer, which is NSO, the Network Services Orchestrator, and ESC, the Elastic Services Controller, which make sure all these layers and the full lifecycle of the hardware and software are orchestrated and managed properly.

Here we compare against what came before cloud native: monolithic software, where all the state and application logic live in a single process. With a microservices and container platform, we can decouple the software layers from the services and from the state itself. So we can have stateless applications, then the service applications, then the common layer, and after that the front-end layers, which provide the interfaces for logging, tracing, and the northbound interface.

Cisco Subscriber Microservices Infrastructure, so called SMI, is made of five pillars. The first one is the SMI Cluster Manager, which is responsible for managing pod deployments in the cluster, configuration, health monitoring, resource scheduling, and lifecycle management. Then we have the Ops Center for all network functions, which is the common API for deployment, configuration, and management to enable automation. The third is the Common Execution Environment, shared by all applications for non-application functions; it is used for data storage, telemetry, and alarming. Then we have the Cisco Service Mesh, the intelligent service mesh that connects microservices and containers for applications and steers traffic between the containers. And then the 5G database, the common database layer built for high performance and low latency, especially for applications like 5G and cable. And now we move to J to cover the CVIM part. Thank you, Abdullah. So with that, let's take a look at CVIM.
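To make the state-separation idea above concrete, here is a toy Python sketch, not SMI code: the "network function" keeps no session state of its own, so any replica can serve any subscriber, because state lives in a shared store. In SMI that role is played by the common database layer; a plain dict stands in for it here, and all names are hypothetical.

```python
# Toy illustration of state separation: stateless service replicas
# sharing an external session store (a dict standing in for a database).
class SessionStore:
    """Stands in for the external common database layer."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class StatelessNF:
    """Any replica can handle any subscriber, because session state
    is read from and written to the shared store, not kept in memory."""
    def __init__(self, store: SessionStore):
        self.store = store
    def handle(self, subscriber: str, event: str) -> str:
        history = self.store.get(subscriber) or []
        history.append(event)
        self.store.put(subscriber, history)
        return f"{subscriber}: {len(history)} events"

store = SessionStore()
replica_a, replica_b = StatelessNF(store), StatelessNF(store)
replica_a.handle("imsi-001", "attach")
# A different replica picks up the same subscriber mid-session:
result = replica_b.handle("imsi-001", "session-create")
```

Because the replicas hold no state, they can be killed, restarted, or scaled out freely, which is what makes the easier upgrades and true scalability mentioned earlier possible.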
So Cisco VIM is Cisco's telco cloud platform, OpenStack based, and this is the platform with which we hope to enable you and telco operators to achieve what we think they are trying to achieve. So what do we think that is? Obviously, as with any company, it's all about maximizing revenue while keeping costs down and minimizing risk. When we talk about something as complex as a telco cloud platform, it's almost obvious these days that the only way to do this properly is through extensive automation, and that's exactly what Cisco VIM will do for you.

When we take a look at what it takes to build and operate a telco cloud, going from a bunch of servers all the way to a fully up-and-running cloud, there is a whole list of things that need to happen. Now, this slide is intentionally quite busy, just to show all the things Cisco VIM will do for you. All of these things need to happen, but most of them can be, and will be, automated in the context of Cisco VIM. With Cisco VIM, we only expose those activities to the end user, the operator, the admins, that are actually meaningful. That way, the amount of time that you as an operator and administrator need to spend dealing with the telco cloud itself is as little as possible, so that you can spend your time where it matters: with the workloads, the mobility workloads, whatever VNFs you want to run, whatever applications you want to run, because in the end that's where the money is. Those applications, those VNFs, are what drive your business. So if we can enable you to maximize your resources and use them where it matters, then we are setting you up for success. The way we intend to do that is by automating most or all of these tasks.
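As a toy illustration of what that automation looks like for one of these tasks, here is a minimal Python sketch, not Cisco VIM code, of a rolling-update pattern: patch one node at a time, validate it automatically, and stop before touching further nodes if validation fails. The function names and stub callbacks are assumptions for illustration.

```python
# Minimal rolling-update sketch: patch nodes one at a time with
# automated post-patch validation, aborting on the first failure.
def rolling_patch(nodes, apply_patch, validate):
    """apply_patch(node) installs the patch; validate(node) returns True/False."""
    patched = []
    for node in nodes:
        apply_patch(node)            # drain, push software, restart services
        if not validate(node):       # automated post-patch health check
            return {"status": "failed", "failed_node": node, "patched": patched}
        patched.append(node)
    return {"status": "success", "patched": patched}

# Stub callbacks standing in for the real installer and health checks:
result = rolling_patch(
    ["compute-1", "compute-2", "compute-3"],
    apply_patch=lambda node: None,
    validate=lambda node: True,
)
```

The point of the pattern is the one captured in the `validate` step: the operator only ever sees "patch applied successfully" after every node has passed its automated checks, which is where the confidence and trust come from.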
To take an example: if we want to apply a patch on the telco cloud, which can be 10, 20, or 100 nodes, everything is automated. You just say, look, this is the destination version I want to go to, and then the installer, the orchestrator, the engine, and the lifecycle management tools will automatically download all the necessary software comprised in this particular patch, push it toward the necessary nodes, and start and stop the processes in a way which makes sense, so that we don't have any downtime where it's not needed. Everything will be configured, set up correctly, validated automatically, and tested automatically. So at the end, when the process finishes and it reports back to the administrator saying, look, patch applied successfully, you will actually have the confidence and the trust that this has actually happened as it was supposed to happen.

When we take a look at the different deployment models with CVIM, we actually have quite a lot of flexibility to choose how we are going to deploy this, what the form factor is going to be, and the amount of physical resources that we are going to dedicate to running our instance of this telco cloud. If we go from right to left, then on the right we have the traditional full pod, where you have a dedicated set of servers for controllers, three or more dedicated servers for storage, and everything else for compute, along with the management server, which these days can be virtualized as well, so that doesn't have to be a physical server. This is usually what we find in core data centers, where you have, first of all, a lot of workloads to run, and at the same time you also have the physical space to run all of these servers.

If you go more toward the middle, there are use cases where it actually makes sense to have, instead of fewer bigger pods, more smaller pods, and these pods are typically closer toward the edge. The edge can mean many different things; in this case it usually means closer to the source of your user traffic. With these micro pods we basically reduce the hardware footprint, so that, given a certain amount of physical servers, you can optimize how many of those servers can actually run workloads. With the micro pod, what we have done is collapse the control, storage, and compute planes all onto the same servers, but we still take three of each, just to make sure that we still have the HA, the redundancy, and the reliability that you have come to expect in telco and service provider networks, where everything needs to stay up no matter what.

Then we go to a somewhat more optimized version, probably even closer to the edge, and typically this is also where you will start to see virtual radio access network workloads: that's what we call the edge pod. An edge pod is basically an optimization of the micro pod in the sense that we have taken out the storage component. With an edge pod we no longer run local storage, and we depend on a centralized storage cluster for our edge compute nodes to gain access to their image repositories. Usually this is fine, especially in the context of virtualized network functions, which typically don't need persistent storage and only need storage when they are actually booting up the instance, so that we can grab the image from it. Then something we are about to release is what we call the nano pod, which is basically a telco cloud inside a single server: a one-server form factor.

Now, next to these different pod types, you also have a whole set of different ways of deploying the entire stack, because just having OpenStack, as awesome as OpenStack is, is obviously not enough. We need a bunch of tooling in the ecosystem to make everything work, because it's not just the telco cloud platform that needs to be automated: we need to be able to automate the system, the hardware, the software, and the different VNFs, with automation in the form of the VNF manager, ESC, and the NFV orchestrator, NSO. But this is not the only way you can use Cisco VIM. With Cisco VIM we have pretty much an a-la-carte way of deploying your entire stack, where you can basically plug and play the different components. So in model 2, we still use most of that, but we also run third-party VNFs; and actually, if I look at the Cisco VIM deployments that I'm aware of, and I'm aware of most of them, in many if not all of them they run at least some VNFs which are not Cisco VNFs, and in some of them even most are not Cisco VNFs at this point in time. Now with model 3 we are also going to be introducing compute servers from UCS, and then with model 4 we are not just taking third-party VNFs and third-party hardware, but we also introduce the capability to run more cloud-native, if you want to go in that direction, containerized network functions, which can be both Cisco and non-Cisco CNFs.

So with that, I hope to have given you a very short introduction into what Cisco VIM is. This is not the first time we have talked about Cisco VIM at OpenStack events, so I'm sure you will find other recordings, videos, and sessions from previous events in case you have more interest and more appetite to learn more about Cisco VIM. Thank you very much.

Hello everyone, this is Abdullah Said, and today we will cover how to utilize Cisco Virtualized Infrastructure Manager, so called CVIM, to deploy the Cisco 5G standalone packet core. In this demo we will show how to take full advantage of the Cisco automated ecosystem to fully deploy a 5G standalone packet core on top of the OpenStack layer created by CVIM, including creation of the virtual machines, the Kubernetes cluster, the 5G network functions, and all Day 0 and Day 1 configuration. So our demo
will start from the existing infrastructure, the OpenStack environment created by Cisco CVIM. We will also use a single dashboard, which will be used just to trigger the configuration; NSO, the Network Services Orchestrator, which will be used to orchestrate and send all the configuration; and Cisco ESC, the Elastic Services Controller, which will be utilized to deploy and configure all the virtual machines. The single dashboard sends a REST API call to Cisco NSO; Cisco NSO then passes it on to Cisco ESC; and Cisco ESC sends REST API calls to OpenStack to start creating all the virtual machines: the SMI Cluster Manager, the Kubernetes masters, the Kubernetes workers, and the UPF as well. The next step is to create the 5G network functions. That is done by sending a REST API call to Cisco NSO to start the Day 1 configuration, so Cisco NSO communicates with the SMI and with the UPF: it installs the Day 1 configuration on the UPF and installs all the 5G packet core network functions like AMF, SMF, PCF, and NRF.

Let's go to our actual demo. This is the OpenStack infrastructure created by Cisco CVIM, and as you can see here, we allocated one project for our demo, for our 5G packet core SA installation. This is our project, and as you can see, from the hypervisor point of view we have 7 nodes allocated to it; this is where we will create all our virtual network functions. Right now we have only 2 instances: NSO, the Network Services Orchestrator, and ESC, the Cisco Elastic Services Controller. The networks were already created in advance for this project; this is the networking we will use for our 5G packet core. If you look here, this is the topology that we have right now, and as you can see, only 2 network elements are created, which will be used for creation of the VNFs. So this is what we have right now: NSO and ESC in the service manager, as you see in this deployment.

If we go to the dashboard, there is nothing configured yet; we don't have any devices except the ESC. If we go to the services, you can see there are still zero deployments; nothing is installed here except, of course, the ESC, which is the one that will be used for the installation. At the ESC level, in the dashboard, you see only one tenant is created, but there is still no deployment. So we start the deployment from the NSO, and as soon as NSO triggers ESC to create all the virtual machines in the cluster, we start to see our instances being created in OpenStack. Let's give it some time and do a refresh here. As you can see, some deployments start to be displayed at the ESC level: the masters. At the OpenStack level as well, if you refresh, you can see more instances coming up. In the ESC portal, you can see more and more of the deployments starting; we have multiple deployments being created, one for each VM. So let's see how many instances we have: we have our workers, we have our etcds, we have the masters, and we have the SMI deployer, which is the cluster manager. We see all the workers again; all of them are active, all of them are created. If we go to the ESC, you can see all of them starting to become active there; you can see all our deployments in ESC. If we go to the network topology, you can see that all instances are now created and attached to the networks, and all of them should be active now. As you can see, three devices are connecting: the SMI cluster, which is the Kubernetes cluster, the UPF running StarOS, and the ESC, which we already had from the beginning. You can see how many deployments we have here and how many services there are.

You can also check the service manager to see what was deployed from the NSO side. First of all, you can check the SMI cluster: you can see we have a plan for the SMI, and this is the whole SMI cluster which was orchestrated and created by NSO. As you can see, there are 85 components of the SMI cluster, including all the elements which will be the base on which we later install all our 5G. The first component here is the CEE, the Common Execution Environment, which is used for KPIs, alarms, and monitoring of all the 5G NFs. You can see as well that the NRF is already created, and we can check what was created from NSO: multiple components were created for the NRF, and likewise for the PCF and SMF. Let's check the PCF side: 7 components were created and are monitored by NSO, and the same goes for the SMF. So now the orchestrator, NSO, is looking after all these components, and we can see all of them, which we created under one slice: 31 components.

At the final step, we can check that all the network functions were created by logging in to Grafana. Grafana is our dashboard, where we can see all the network functions and get the KPIs and graphs from them. As you can see, we can list all the pods created and all the Kubernetes elements; mainly you can see all the nodes here, and this is the master. We can go back as well to check the other network functions, like the SMF: here you can see some graphs and KPIs collected from the SMF itself, related to the hardware resources and KPI requests. And these are also some graphs collected from the AMF. So basically you can see the graphs for all the network functions: AMF, PCF, SMF, and NRF. And that was all from my side. Thank you.
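As a closing illustration of that final verification step, here is a hypothetical Python sketch, not the actual NSO or Grafana output, that checks whether all the expected 5G network functions came up. The pod-name matching convention and the input list are assumptions for illustration.

```python
# Hypothetical post-deployment check: confirm every required 5G network
# function has at least one pod whose name mentions it.
REQUIRED_NFS = {"amf", "smf", "pcf", "nrf", "upf"}

def missing_network_functions(deployed_pods):
    """Return the set of required NFs with no pod whose name mentions them."""
    deployed = {nf for nf in REQUIRED_NFS
                for pod in deployed_pods
                if nf in pod.lower()}
    return REQUIRED_NFS - deployed

# Illustrative pod names, shaped like what a Kubernetes listing might show:
pods = ["amf-0", "smf-0", "pcf-0", "nrf-0", "upf-data-0"]
gaps = missing_network_functions(pods)  # empty set means the core is complete
```

In a real deployment this kind of check would run against the live pod list (or a Prometheus `up` query) rather than a hardcoded list, but the pass/fail logic is the same.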