Okay, let's get started. Hello everyone, thank you very much for joining us for today's CNCF webinar, "How to migrate NF or VNF to CNF without vendor lock-in." I am Jerry Fallon and I will be moderating today's webinar. We would like to welcome our presenters today: Greg Sikora, VP of Business Development at OVOO, Rafał Myśliwiec, Software Engineer at OVOO, and Paweł Kulpa, Software Engineer at OVOO. Just a few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that is in violation of the Code of Conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to our presenters for today's presentation.

Okay, thanks Jerry. So my name is Greg Sikora, and I will be going through this presentation today with Rafał and Paweł. Today we're going to talk about migrating a network function to a cloud native function without vendor lock-in. How does it work? We'll see.

Our speakers: as mentioned, myself, Greg Sikora. I am an open source evangelist in the telco world; I was personally responsible for the introduction of open source based platforms in operator environments. Privately I am an ultra marathon runner, and I am a telecom and cloud expert with 20-plus years of experience and a blockchain enthusiast. Rafał Myśliwiec is our Swiss army knife of messaging, experienced in software engineering and an expert in legacy protocol migration. And Paweł Kulpa is responsible here for cloudification; he is our master chef of cloudification.

What we do best is cloud: as you see, we know how to do cloud with OpenStack, Kubernetes, Ansible, Chef, Terraform and so on. The company was founded in 2012, and since the beginning we have been focused on telco. Telco is our domain, we do telco best, and we have a few products; here are some examples. On top of that we do other services, like development and consulting in blockchain, artificial intelligence, machine learning, data mining and so on.

We love open source. As our CEO Dominic used to say, we love open source because it gives us the freedom to cook delicious dishes using the best ingredients. It's quite easy to find open source projects which fulfill your needs, but the most important thing is to know how to connect them together and use them to provide a solution which is carrier grade, or telco grade, and can provide an SLA. And we have this experience: we did more than 15 big migrations using open source projects. Our open source based solution handles more than 30 million SMS per day, rated online, and we are handling more than 150 million users on open source.

But before we move forward, I'd like to tell you about the market and technology drivers which push us in this direction. Definitely "smart everything." 5G is not just the next G; it's completely different and brings new demands. We can work and play in the cloud. We have huge demands for data, gigabits per second, augmented reality, virtual reality, and new use cases like vehicle-to-vehicle communication which require extremely low latency.
On the other hand, we have a lot of IoT devices which communicate with each other, sending small portions of data. And we have home customers who are using huge pipes for ultra HD video, online gaming and so on. So as you see, you are not able to fulfill all those demands using one pattern. That's why you have to be flexible and provide technology which is ready for different use cases.

As you see, we see a path between network functions and cloud native functions, and this is a kind of evolution to the cloud. Before 2015, most services and applications were deployed as native network functions, usually delivered as a solution placed on bare metal in the service provider's data center. Then we had the NFV hype, and most of those legacy solutions were somehow migrated to VNFs, virtual network functions, but it wasn't disruptive; it was just squeezing and fitting the legacy solution into a virtualized one. Cloud native functions are a completely different story: the solution has to be redesigned to be able to work in such an environment. Nowadays we have some kind of hybrid deployments, because we have to fulfill the requirements of legacy services and legacy protocols. In the future, everything will be purely cloud, using RESTful APIs to communicate with each other.

And how to do this evolution? We have to focus on different areas. I mentioned a few of them, like the development process: you are definitely not able to do this in a traditional waterfall way. You have to be DevOps and run a continuous integration, continuous delivery process. The application has to be migrated from monolithic to microservices: it has to be cut into smaller portions, and those smaller portions, the microservices, have to be containerized to be ready for the cloud. This is more or less how we see this evolution, and on the following slides we're going to show how to do that step by step.

Based on this, we worked out an architecture blueprint for such services, which can be deployed either in a service provider's backend data center or at the edge. And here is the architecture blueprint. I'd like to mention one important point. Previously, as I mentioned, network equipment providers who wanted to migrate from a network function to a virtual network function just migrated the service as such. Migration to a cloud native function requires redesign and shifting part of the functionality, part of the service, to open source projects, and here we are showing how to do that. For high availability and automatic deployment we are using products like Ansible, Terraform, Docker, Kubernetes and Chef, so we can do automatic deployment on infrastructure-as-a-service or even bare metal; depending on the service provider's needs and demands, we can use different layers. On top of that, we have to manage the access layer and the communication with other services and platforms, using either HTTP RESTful APIs or SS7 protocols like CAMEL and MAP, plus Diameter, SIP and SMPP. And then we have the services, and we don't need any middleware here like the earlier JAIN SLEE or SIP servers; we can use frameworks like Akka or Spring Boot to implement a service and use Kubernetes to orchestrate those services. On top of that, we need a data layer, either for session replication or for persistent data, using solutions like Hazelcast and CockroachDB, and Ceph for block storage.
So I wanted to make my part extremely short, because the most exciting part starts right now, so we will move forward to the demo part. Rafał, the mic is yours.

Thanks, Greg. So our demo will show the steps needed to migrate from a virtual machine environment to a cloud environment. Some of them will be redesign slides; as Greg mentioned earlier, those will be presented verbally, and other steps will be shown in a practical way by Paweł, by which I mean the creation of the Kubernetes cluster and other parts of our migration work. So at the beginning we have a single-instance, monolithic application, a hard-to-maintain one, which is very complex: it's not an easy, stateless application, but contains a lot of advanced mechanisms such as caching and queuing. Most importantly, it handles the integration with telco protocols such as SIP, MAP, SMPP and Diameter, and making those ready to cooperate with the 5G world in a cloud native environment is the kind of thing all these network function providers were afraid of. At the end we want to achieve a distributed, microservice oriented system which is integrated with the Kubernetes stack. Next slide, please.

We will base our demo on one of our solutions, the OVOO Messaging Gateway, which handles such functions as SMSC, IP-SM gateway and SMS gateway. The reason we selected it is that it has the greatest variety of telco protocol integrations: SIP, SMPP, MAP, Diameter with OCS and so on. And it is a kind of replacement for the earlier SMSCs which existed on the market, which were non-distributed and worked in a legacy NF or VNF way.

So when it comes to the initial infrastructure, what do we have right now? At the bottom we have the external infrastructure, the client layer. And at the upper level we have our local infrastructure, which is what we have inside our virtual machines, everything in OpenStack; as you see, every service sits on a separate virtual machine. Also, the services are monolithic: they handle logic and API within single deployable units, they have a separate cache layer, a separate queuing layer, and separate connection points with the SIP, MAP, SMPP and Diameter clients. That is very hard to maintain and causes problems when we want to scale up or scale down or change versions; it really requires interventions, not only from our side but also from the external side. We also have a management database, which is MySQL. It is very good for us, but when we move to a cloud native environment, we want a distributed database.

Step number one is to replace the old caching mechanism, which makes our system single-stated, and the separate queuing layer, and make both shared and distributed. So we will use Hazelcast as the caching service, for caching the state and the configuration management; here you could also use Aerospike or Redis or any other distributed caching system one can imagine. For the queuing layer we are using Kafka, because it fits us best, replacing the old Java queues.
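As a rough illustration of this first step, the shared cache and queue can be brought up as Kubernetes services with Helm. A minimal sketch, assuming the vendors' public charts and illustrative release and namespace names:

```sh
# Step 1 sketch: shared, distributed cache and queue (names are illustrative)
helm repo add hazelcast https://hazelcast-charts.hazelcast.com
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Hazelcast cluster for the shared state and configuration cache
helm install cache hazelcast/hazelcast \
  --namespace messaging-gateway --create-namespace \
  --set cluster.memberCount=3

# Kafka for the shared queuing layer, replacing the old in-process queues
helm install queue bitnami/kafka \
  --namespace messaging-gateway \
  --set replicaCount=3
```

Any other distributed cache, Aerospike or Redis for example, would slot in the same way; the point is that state and queues move out of the application process into shared services.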
In step number two, now that we have a distributed service, we still have the problem that the logic is coupled to the access layer. So we will isolate the access layer: we are building API gateways for each protocol separately, making them connect directly to the client and keeping all the internal communication over HTTP. So now, when we want to add a node, remove a node or change versions, the client has no awareness of this. He's completely unaware and agnostic of what's going on inside our infrastructure, and that's an advantage for both sides.

In step number three, as I said earlier, we want to migrate from MySQL to some kind of distributed database which also handles SQL, so that our migration would not be that hard, and which is transactional. It is important for us that the distributed database be transactional, because of requirements from many clients. That's why we chose to go with CockroachDB, which is one of the databases recommended by the CNCF. It's not only distributed but also easily horizontally scalable and fault tolerant. When it comes to the migration, we didn't have any problems on the access layer, because the only things to do were to swap the drivers and make some minor changes in SQL statements; those were needed only because the driver changes from MySQL to PostgreSQL, and they are small. We also had to move the database triggers into the application; in some deployments this can be a very, very tricky part, but for us it was not the most crucial thing. And now I'm passing the voice to Paweł, who will present how to build the Kubernetes cluster as step number four.

Okay, so first we need the Kubernetes cluster. Our cluster contains three masters and three workers. For storage and volumes we are using Ceph, and we deploy our Kubernetes cluster on OpenStack. So let's go build the cluster. Before the webinar I recorded a video of creating the cluster and deploying the application on Kubernetes, so we can start. Could you go to step four? Okay, this video. First we create a cluster in Rancher: we type a name for the cluster and copy the registration command, which we must run on every node with the proper role flag. To create the virtual machines on OpenStack we use Heat with cloud-init: we add the command to cloud-init, to run after the machine is ready, and we set the proper node role parameter. So we provision the virtual machines on our OpenStack, and after some time the machines are ready; this is the last machine, worker number three. Now on our OpenStack dashboard we see six virtual machines, three masters and three workers, and in Rancher we have a properly provisioned Kubernetes cluster. Okay, next slide.

We have SS7 protocols running over SCTP, so we need to enable SCTP in our Kubernetes cluster, because SCTP isn't enabled by default in Kubernetes. To add SCTP to our cluster, we go to Rancher, edit the cluster, and add the SCTP support feature gate to the kube-api extra args. Now our cluster is updating, and after some time we have a working Kubernetes cluster with SCTP.
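The feature-gate change applied through the Rancher UI corresponds, roughly, to a fragment like this in the cluster's RKE configuration. A sketch, assuming the Kubernetes versions of that era (before 1.19, SCTP support sat behind a feature gate):

```yaml
# Rancher: Edit Cluster -> "Edit as YAML" -- enable SCTP for SS7/SIGTRAN traffic
services:
  kube-api:
    extra_args:
      feature-gates: "SCTPSupport=true"
  # depending on the Kubernetes version, kubelet (and kube-proxy)
  # may need the same feature gate for SCTP Services to work
  kubelet:
    extra_args:
      feature-gates: "SCTPSupport=true"
```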
The standalone SIP stack and the SS7/SCTP layer required adaptation to properly expose SIP and SS7 toward the external infrastructure: a minor SCTP update to establish the association on a node port, while the SS7 layers above are not impacted. In the SIP stack there are a lot of changes in message processing, because IP addresses and ports appear in different headers, like Via and Route; these headers may contain the worker's IP address or a VIP address. Additional modifications are necessary to achieve redundancy and load balancing.

On our cluster we install the necessary components, which are the Elastic Stack, Cassandra and Kafka. To install the open source software we are using Helm, and before that we prepared the proper values files. Now, with Helm, we install Kafka. We go to Cassandra and deploy it using Helm in the messaging-gateway namespace. Then we install the Elastic Stack, starting with Filebeat in the monitoring namespace, Elasticsearch also in the monitoring namespace, and lastly Kibana, to visualize everything, again in the monitoring namespace. And now we have a ready Kubernetes cluster with the open source software installed using Helm.

On to the logic services. Originally they were installed on pure virtual machines, so there was no demand for them to be dockerized, but now that we want to move to Kubernetes, we had to do it. We created a Docker configuration and Docker images for every service that we have, including the API gateways. Our CI/CD pipelines also had to be reconfigured in order to integrate with our new Docker registries and the new Kubernetes cluster. So now Jenkins builds an image, pushes it to the Docker registry, and then deploys it to the Kubernetes cluster, only when needed. So after that, coming to the next step.

Okay, we create a ConfigMap. A ConfigMap allows us to separate the configuration files from the image content: instead of putting all the config files inside the image, we create a ConfigMap. This example shows the config for the SS7 MAP gateway and the configuration for the messaging gateway. Coming back to the Kubernetes configuration: now that we have our ConfigMaps and our Docker configuration, we are ready to build our first YAML files so that our services can be properly installed in the Kubernetes environment. In these files we can configure the parameters needed for our service to work properly: for example volumes and volume connections, open ports, and all the other features provided to us by Kubernetes that we want to use, for example replication scenarios and replica counts.

And the last step is to actually create the application logic services, which will also be shown in a short video. Okay, now we create the messaging gateway deployment; you can start the video. Give me a second. Okay, we are creating the messaging gateway deployment in the messaging-gateway namespace, and we create a service for that deployment. We see we have one pod with the messaging gateway, which is running. But we need to have three pods, so we change the replica count to three and apply it with kubectl. And after a couple of seconds, you can see this is real, we have a replica count of three, and we have actual redundancy of the messaging gateway. So that's it; I think all the steps are discussed.
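Put together, the ConfigMap and deployment steps come down to manifests roughly like the following. A sketch only: the names, image, port and mount path are illustrative, not the actual product configuration.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: messaging-gateway-config
  namespace: messaging-gateway
data:
  mgw.conf: |
    # service configuration kept outside the image (contents assumed)
    http.port=8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messaging-gateway
  namespace: messaging-gateway
spec:
  replicas: 3                     # scaled from 1 to 3 in the demo
  selector:
    matchLabels:
      app: messaging-gateway
  template:
    metadata:
      labels:
        app: messaging-gateway
    spec:
      containers:
        - name: messaging-gateway
          image: registry.example.com/messaging-gateway:1.0   # hypothetical registry/tag
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config
              mountPath: /opt/mgw/conf                        # hypothetical path
      volumes:
        - name: config
          configMap:
            name: messaging-gateway-config
```

The replica change shown in the video is then either an edit of `spec.replicas` followed by `kubectl apply -f`, or simply `kubectl scale deployment/messaging-gateway --replicas=3 -n messaging-gateway`.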
Now I will summarize the technical part. Before the migration, what we had was a non-scalable, unmanageable monolith with no shared state and no shared queues, so the logic layer was not distributed at all. We had manual MySQL replication, master to slave and slave to master, which was non-transactional and gave us problems with synchronization and so on. We had a situation where each client had to adjust their config after our internal infrastructure changed. The virtual machines added a resource utilization overhead, not only for our services but also for the operations and maintenance stack, so for Elastic, for Prometheus and so on. And we had to do manual deployment and manual scaling of the services. Actually, the only thing we could do was to automate it with some kind of Ansible scripts and so on, but, you know, that is also a piece of code that has to be maintained; it is not provided as a ready-to-go service the way it is now with the Kubernetes environment and the Kubernetes community.

After the migration, we have a logically simple architecture. We have fault tolerance for basically all of our services, most importantly for our logic services: when some kind of disruption appears on our logic services, the pods will be restarted, and no configuration changes will be required from the client side. We have distributed configuration, and the resource utilization overhead on our side is gone. We also have ready-to-go configuration from the vendors for the operations and maintenance stack, so for Elastic, for Logstash, for Kibana, Grafana and Prometheus and so on, and even for CockroachDB: we have ready-to-go config recipes from the vendors, together with the Kubernetes YAML files to integrate, so we don't have to invent anything, because this integration with Kubernetes is pretty straightforward and managed by the vendor. So no worries there, which is most important for us.

And for the telco world, we proved that integration with the sometimes legacy telco protocols such as SIP, MAP, SMPP and Diameter is possible and can be done with a little extra effort, which we managed to do within the Kubernetes environment, including the Kubernetes network. Coming back to the effects: these numbers are only for our migration, so they can differ completely depending on the different elements you're running. But for us, in a scenario where we were running an SMS campaign at 1000 TPS, which exercises all of our internal services, we achieved the presented reduction in computation resources.

Okay, so guys, thanks a lot. In my opinion it was excellent; I didn't see the full demo before, so I have to say it was amazing. As we presented, the problem with legacy deployments is the lack of service separation. Usually, for instance, even if you have an SDP, a service delivery platform, you have many services running on the same platform: monolithic, big deployments; heavy regression testing, which consumes a huge amount of time and resources from both the vendor and customer perspective; complicated deployment procedures, usually manual, with a lot of coordination and so on. On top of that you have limited automatic scalability, or even no automatic scalability; usually it is manual, and you have to buy either new hardware, software or licenses. Thanks to the cloud native approach, we can get rid of all of that.
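The "ready to go" vendor recipes Rafał mentions are, in practice, the vendors' published Helm charts. A hedged sketch of how the monitoring stack from the demo can be installed; the chart names are the public ones, while release names, values and versions are assumptions:

```sh
helm repo add elastic https://helm.elastic.co
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

kubectl create namespace monitoring
helm install elasticsearch elastic/elasticsearch --namespace monitoring
helm install filebeat elastic/filebeat --namespace monitoring
helm install kibana elastic/kibana --namespace monitoring
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
```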
And right now I wanted to ask: why migrate to CNF, and especially why base it on open source?

First, from the technical perspective, we can switch to a DevOps mode, where we have quick updates and smooth upgrades, and we can even upgrade a single piece of software, something that wasn't possible and we couldn't imagine before. The well-known A/B testing from the web world can be applied here as well: we can imagine updating just one pod and verifying on a small portion of traffic that it works, and we are able to do that. We get automatic scalability and availability out of the box; this is what Kubernetes gives us, once we manage to tailor the legacy protocols and fit our service into pods, so we can take advantage of this approach. Testing and configuration are simplified and automated, so we can use continuous integration and continuous delivery pipelines and apply automatic testing there as well. And as Rafał mentioned, we have a rich operations and maintenance toolset which we can use, with ready-to-use configuration and so on.

On the business side: definitely cost-effectiveness and time to market. I know these are buzzwords, but we showed that and we proved that. I remember that before we started playing with Kubernetes, our engineers were a bit scared and said, okay, it will take a long time, we have to change our processes and so on; but at the end of the day we see that it's straightforward. Once you do it, the next projects and next services become really straightforward, and you can do them very quickly. No vendor lock-in: we are using a whole bunch of open source projects, and we don't have to pay any royalties or licenses; we can use community-driven projects, with no need to buy licenses or subscriptions. So, if you know what and how to use, you can rule the world. And, last but not least, service providers can even contribute to the open source development community. We have such examples, for instance T-Mobile Poland, which contributed to ONF and helped develop an EPC for a use case which is important for them. They did it, and now this work can be used by others; other service providers and operators can reuse this work and add some portion of functionality, and this is something which benefits the whole community.

And lastly, because you don't see a clear statement here about "without vendor lock-in", let me come back to this picture. As you see, let me use some annotation, yes. As you see, here we have a monolithic application, and previously the service really was monolithic: we added the whole operations and maintenance stack, we used OpenStack, we had automation and so on, but the application was still monolithic, with huge logic inside, and it was quite heavy. What we did: we removed, where is my pen, yes, we removed a huge portion of functionality. We used Kubernetes, which means high availability, orchestration, automatic scalability, resilience. We used other open source projects for session replication, so we don't have to reimplement that again. We used open source projects for the database and for storage, and we are likewise using ready-made patterns for deploying the operations and maintenance stack. Some work is needed to separate the legacy protocols, and at the end of the day you can concentrate on the business logic, which is just a part of the solution. And even if you say, "I don't want to use the OVOO solution, I want to use a different solution," you can replace this part as well.
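The "verify a small portion of traffic" idea Greg describes maps naturally onto a canary-style rollout in Kubernetes. One hedged way to sketch it, with all names and tags illustrative, is a second Deployment whose pods share the Service's label selector, so roughly one pod in four receives traffic:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messaging-gateway-canary
  namespace: messaging-gateway
spec:
  replicas: 1                     # one canary pod next to the three stable pods
  selector:
    matchLabels:
      app: messaging-gateway
      track: canary
  template:
    metadata:
      labels:
        app: messaging-gateway    # matched by the existing Service selector
        track: canary
    spec:
      containers:
        - name: messaging-gateway
          image: registry.example.com/messaging-gateway:1.1-rc1   # hypothetical new version
```

If the canary misbehaves, deleting the canary Deployment restores the previous state; if it behaves, the stable Deployment's image is updated and the canary removed.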
This is something important to mention: the service provider, the operator, has freedom, and from our perspective we have to justify and show that we are a valuable partner and vendor, and we want to be as good as possible to provide the best solution. Okay guys, thanks a lot; it was a pleasure to present our achievements and our solution, and right now we have time for Q&A.

Alright, well, thank you very much everyone for that wonderful presentation. We have about nine minutes for questions, so please feel free to drop them into the Q&A section. With reference to a VNF, where a specification file for it has connection points, VDUs and virtual links: how are these components translated to CNF, or in your implementation? Rafał? I need time to understand the question. Jerry, can you please repeat? I also have some problems understanding the root of this question. Sure: with reference to a VNF, where a specification file for it has connection points, VDUs and virtual links, how are these components translated to CNF, or in your implementation?

Maybe in general, because I'm not sure I fully understand the question, but as we've shown, in a virtual environment, for instance in our case OpenStack, we can use automatic deployment, and we can consume resources like computation, memory and storage from the virtualized environment. If we want to migrate to a different virtualized environment, we have to repeat this work and use different automation and deployment patterns, for instance if you want to migrate to a public cloud or a VMware based solution. In the case of a CNF, we provide the solution as something that runs as pods in a Kubernetes cluster, and regardless of where that Kubernetes cluster runs, we can use the same deployment patterns. I hope that answers the question the right way.

He added a little more specification to the question: for a specification file which has TOSCA templates for the VNF. Okay, Paweł, maybe this is a question for you, because in the case of a VNF we are using Chef, am I right? For the VNF we are using Heat. We create a template file, for example for the keypair, for the network, for the tenant and all that stuff, and in this case, for our OpenStack Kubernetes cluster, we are using this tool to create the virtual machines, and our Kubernetes cluster is put on those virtual machines. The VNF template files are used to create the instances needed to create the cluster. Okay.

Do you have any use cases of migration to CNF in OSS telecom domains? So this example is for a real-time service, the messaging gateway. The other example, which we didn't mention, is OCS: we have our OCS, an online charging system, migrated to CNF as well. But we are not an OSS provider; we are rather concentrated on real-time use cases like prepaid, like MMTel, OCS and so on.

Where can we get support for CNF deployment? Support you can get from OVOO, because, thanks, this is a really important question. We can cook and prepare the most delicious dish from open source projects, and of course everyone can do that; there is no restriction on that. But what OVOO, and other vendors who know how to do this, can give you is an SLA and full responsibility. We can give you five nines and a carrier grade SLA for support of the solution. And this is the added value of companies like OVOO.
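For reference, the Heat-plus-cloud-init pattern Paweł describes might look roughly like this. A minimal sketch only: the image, flavor, network and the Rancher registration command are all placeholders to be replaced with real values:

```yaml
heat_template_version: 2016-10-14
description: One Kubernetes node VM, registered to Rancher via cloud-init (sketch)
parameters:
  node_role:
    type: string
    default: "--worker"           # or "--etcd --controlplane" for a master
resources:
  k8s_node:
    type: OS::Nova::Server
    properties:
      image: ubuntu-18.04         # assumed image
      flavor: m1.large            # assumed flavor
      networks:
        - network: k8s-net        # assumed Neutron network
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #cloud-config
            runcmd:
              # registration command copied from the Rancher UI, role appended
              - docker run -d --privileged --restart=unless-stopped --net=host rancher/rancher-agent:v2.x --server https://rancher.example.com --token <TOKEN> $ROLE
          params:
            $ROLE: { get_param: node_role }
```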
Okay, do we have any other questions at all? Anyone? We have about three minutes left. Is it possible to deploy a CNF directly on OpenStack with Kubernetes integrated, instead of in a VM? Yeah, this is actually our example; let me come back to this slide. Maybe it's not shown here, but in our lab we have OpenStack and a few Kubernetes clusters, and one of the clusters contains the messaging gateway; this is the deployment we selected. Depending on the needs and the use case you'd like to achieve, we can recommend deploying Kubernetes on OpenStack or on bare metal: OpenStack if you want flexibility, bare metal if you want efficiency. For instance, an EPC gateway like the PGW should run on bare metal, because we need efficiency and have to be as close as possible to the CPU.

Okay, do we have any other questions at all? We have less than a minute left. When there are multiple VIMs, say a VNF deployed as a VM on OpenStack and others as CNFs on Kubernetes, can we make an NS out of it? An NS? What do you mean? Network service. Network service, Paweł? I don't understand this question at all. Jerry, can you repeat or rephrase the question? Yeah, sure: when there are multiple VIMs, say a VNF deployed as a VM on OpenStack and others as CNFs on Kubernetes, can we make a network service out of it? On our OpenStack we are using one network service, and our clusters are communicating over this network, and that's it. Again, from my perspective, I'm also not sure I understand the question in the right way, but as Paweł presented, we have OpenStack, we are using the VMs provided by OpenStack to deploy the workers and masters, and we are using Ceph for storage. And what are we using for networking here, Paweł? For networking here we are using Flannel. Okay. So this works as such: our network uses Neutron, and our cluster uses Flannel, and we could also use Calico or a different network provider for our Kubernetes cluster.

Okay, well, thank you all for a wonderful presentation and for a great Q&A session. That is all the time we have for today. Today's webinar will be available later today on the CNCF webinar page at cncf.io/webinars. Thank you to our presenters and to all of you for joining us today. Everyone take care, and we will see you all next time. Okay, thank you very much, it was a pleasure to present. Thank you. Thank you. Bye.