Good afternoon, everyone, and thank you so much for joining us here. I hope you have been enjoying OSSNA 2018 so far; the weather in Vancouver is also very nice. Today my talk focuses on the opportunities that AI and machine learning open up, and how they change our world as cloud native applications evolve. I really hope that by the end of this talk you will take something away from it.

Quickly moving on to the agenda. First, I will walk you through what cloud native applications really mean to a telecom service provider. Then I will introduce the concept of a closed loop, starting with a static closed loop, and then we will see how to write dynamic policies and a dynamic closed loop, which is assisted by AI and machine learning. Next, I will walk you through some practical AI and machine learning use cases and show you an architecture that drives orchestration, analytics, and AI and machine learning together. At the end, we will look at the open source projects in the orchestration space and the AI and machine learning space that will help the community move forward. So let's dive into it.

The picture on the screen is where I will start, with a brief history of our telecommunications applications. Some of you might come from a web scale application background, and some of you may have experience with telecom applications. As Jim mentioned when he introduced me, the biggest challenge in the telecommunications industry right now is that the applications we work with are really complex: they are hard to automate and hard to bring into the agile world of orchestration, where workloads are portable and scale in and out. So I want to introduce you to some of our challenges.

The physical network functions you see on this side of the picture used to bundle a number of sub-functions, each represented inside one big box. At that time, when telecommunications vendors were pitching their services, the criterion for building a physical network function was a single giant box with tightly coupled internal sub-functions running on proprietary hardware. A vendor would say that this particular box could support 50 million subscribers. The code was simply not written with cloud concepts in mind, so it was not cloud native and not something that could run in today's world. That made it very hard for us to move into the world of NFV.

Then NFV started coming into the picture and changed things. The vendors helped us move these functions off dedicated hardware, so a function could now run on a cloud. But the internal composition of that code could not be changed overnight, so we were still stuck in that phase. Today we are trying to automate what is called NFV 1.0, and the challenges we face are, of course, enormous. To represent an application like that, you need to write a very complex TOSCA template, the VNFD packages are very complex, and you have to rely on proprietary interfaces to scale the function in and out and to perform configuration. The solution to these problems is our evolution toward cloud native applications.
I was really happy to attend a seminar hosted by the Linux Foundation a few days ago. The Cloud Native Computing Foundation is helping drive this vision across the industry with a number of vendors, so that we can really deliver a world where all these internal functions are represented as microservices, with generic APIs to scale them in and out, developed in a more open manner. They will be thin, lightweight, and microservices based, and we are looking at a world of automation with much better agility and ease.

Once that happens, the landscape of service delivery for telecom service providers will change drastically. With the evolution of 5G, and with network slicing supported from our orchestration layer, we can represent a whole network slice with a TOSCA template, or use APIs to lifecycle-manage such a slice. We are looking at a software defined, agile service delivery model in which each offer, product, or service can be represented by a single slice, and the tiny micro-VNFs inside it will be agile and quick to respond to your automation templates and orchestration workflows. So I am really looking forward to the evolution of cloud native and the realization of that vision.

The next important thing that has to happen, before we can even build the foundation for AI, machine learning, orchestration, or automation, is bringing our APIs onto a common API gateway. We have different tools across the organization: testing tools, APIs from our cloud infrastructure, APIs from orchestration and analytics. If we are working with AI and machine learning, we are developing models, and those models can also be called through APIs. Mobile edge computing, security, all of these different domains can work together much better if there is an API gateway where they can cross-communicate. So the availability of APIs, and the management of those APIs on a common gateway, sets the foundation for AI and machine learning.

Now I will walk you through a simple example of a static closed loop; this is closed loop orchestration 101. In this example, a virtual firewall is serving production traffic, and we have associated a policy with it that says: when utilization hits 80%, perform a scale-out action. So the policy has two constructs, a condition and an action, and right now it is a static policy. Let's see how the whole thing works. We start at a stage where the analytics for this virtual firewall look smooth. Gradually, production traffic grows and we reach the 80% mark, so the static policy executes and performs a scale-out operation, and we are back to our healthy, happy state. This is called a reactive response: the static policy received a trigger and executed a scale-out operation in an almost purely reactive manner.

Now we are moving toward a stage where we build an AI application. Why do we really want to do that? Because in the future, far too many triggers will hit us from complex analytics streams, and the problem we just solved was almost real time in nature.
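To make that static closed loop concrete, here is a minimal sketch in Python. This is not any real orchestrator's API: the threshold, the `scale_out_firewall` function, and the utilization stream are all illustrative assumptions; a real implementation would consume an analytics event stream and call the orchestrator's scaling interface.

```python
# Minimal sketch of a static closed loop: a fixed condition/action policy
# evaluated against a metrics stream. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class StaticPolicy:
    condition: Callable[[float], bool]  # e.g. "utilization >= 80%"
    action: Callable[[], None]          # e.g. "scale out the virtual firewall"

def scale_out_firewall() -> None:
    # In a real system this would invoke the orchestrator's scaling API.
    print("Scaling out virtual firewall: adding one instance")

policy = StaticPolicy(
    condition=lambda utilization: utilization >= 0.80,
    action=scale_out_firewall,
)

# Reactive evaluation: the trigger arrives first, then we act.
for utilization in [0.45, 0.62, 0.81]:  # stand-in for an analytics stream
    if policy.condition(utilization):
        policy.action()
```

The point of the sketch is the shape of the loop: the condition is fixed at authoring time, so the action can only ever fire after the threshold has already been crossed.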
But sometimes we want to solve longer-term problems: optimizing my cloud by looking at my data, or at the trends in that data over the long term; optimizing my network, or the quality of service my SDN control plane delivers; or improving quality of service for users based on their demographics. Those kinds of problems cannot be solved by writing static policies or by reacting to real-time analytics inputs. What we need to do is build AI and machine learning applications that learn from that data, so we can act on proactive and predictive inputs.

So let's review how we really build an AI application. The very first step is to think about the problem you want to solve and to identify whether it can really be solved with AI and machine learning; that is the critical step. The next step is gathering real, sizeable data around that problem. Say, for example, I pick the problem of optimizing my cloud for capacity or energy: I need real production data on cloud usage over a few months, maybe years, to look at the patterns and trends and how they evolve. Once that is done, the next important step in a data scientist's world is the preparation of data: cleansing it and extracting features to get ready for the training stage. Training is the most compute-intensive stage; you run your algorithm through a large data set in which the answers are pre-populated, so the algorithm learns the desired behavior. Once the algorithm has learned, the result is called a predictor, or a model. To find out how accurate your predictor or model is, you run your test data through it. If it has reached a certain level of accuracy, you are good. If not, you go back and rework your model: either you feel that all the complex work you have done so far has accomplished your goal, or you accept that some of it has to be redone, because getting to the right model at the right level of accuracy is generally not a one-time iteration.
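As a rough illustration of that gather / prepare / train / test cycle, here is a minimal scikit-learn sketch. The data is synthetic and the 0.90 accuracy bar is an arbitrary example I am assuming for illustration; a real project would use months of production cloud-usage telemetry and far more careful feature engineering.

```python
# Minimal sketch of the train/test cycle described above, using scikit-learn.
# Synthetic data stands in for labeled production telemetry; the 0.90
# accuracy target is an arbitrary example, not a recommendation.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Gather data (synthetic stand-in; answers are pre-populated as labels y).
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# 2. Prepare: hold out test data so accuracy is measured on unseen samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 3. Train: the compute-intensive stage; the fitted estimator is the "model".
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# 4. Test: if accuracy is below the bar, go back and rework and iterate.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {accuracy:.3f}")
if accuracy < 0.90:
    print("Below target: rework the features or the model and iterate.")
```

The loop at step 4 is the part that is rarely a single pass: most of the effort goes into going back to steps 2 and 3 until the model is accurate enough to onboard.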
Now, here is another somewhat complex diagram. Yesterday I was going through my slides and my daughter saw this picture, and all she got out of it was that there is a lake and the elephant does not want to drink water from it. That is not the message I wanted to convey, so I think I should spend some time explaining what this really means to me. This diagram brings together the worlds of orchestration, analytics, and AI and machine learning training.

On the orchestration side, you can see the orchestration layer, which works on your workflows, your templates, and your service definitions. Based on those, this layer can evaluate static or dynamic policies and take actions. Those actions can be mapped to services: create services, deploy services, optimize services. Or to your cloud: optimize your cloud resources, shift your workloads from here to there, or turn off some clusters to save energy. Or it can look at the network and take actions there. All of these actions come from the orchestration layer, driven by a written workflow or template, and it has the ability to run either static or dynamic policies.

Now let's look at the analytics side. There are a number of analytics sources across your organization: some at the service level, some at the cloud level. You really need to bring all of those analytics into a monitoring engine where you have a consolidated view of how these things work together, so you can produce a holistic result for the policy engine to act on. If you are doing static policies, you take the reactive input and follow that path. But if you have a use case or a problem that needs the long-term data sitting in your data lake, that is the stage where you run your AI and machine learning algorithms to gain something from your patterns and predictions.

You can do the training on site, which means the GPUs or TPUs have to be built into your local cloud. Or, as I have been learning, you can take your data off site, and some of the public clouds will provide the facility to train your algorithm there. Either way, what you really care about is the model. Once you have that model, you onboard it and hand it to the orchestration layer, and based on that model your dynamic policies can take actions on the predictive or proactive inputs coming from your machine learning algorithms. Did that make sense? I hope it made more sense than the elephant and the water.
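Here is a minimal sketch of what such a dynamic policy might look like once a model has been onboarded. The `TrendModel` here is a toy stand-in I am assuming in place of a real trained model, and the scale-out call is again illustrative: the point is only that the loop acts on a forecast rather than waiting for the 80% trigger.

```python
# Minimal sketch of a dynamic (predictive) policy: an onboarded model
# forecasts near-term utilization from recent samples, and the closed loop
# acts on the prediction instead of waiting for the reactive 80% trigger.
# All names here are illustrative assumptions.

from typing import Protocol, Sequence

class ForecastModel(Protocol):
    def predict(self, recent_utilization: Sequence[float]) -> float:
        """Return forecast utilization for the next interval (0.0 to 1.0)."""

class TrendModel:
    """Toy stand-in for an onboarded ML model: extrapolates the recent trend."""
    def predict(self, recent_utilization: Sequence[float]) -> float:
        slope = recent_utilization[-1] - recent_utilization[-2]
        return min(1.0, recent_utilization[-1] + slope)

def dynamic_policy(model: ForecastModel, recent: Sequence[float]) -> None:
    forecast = model.predict(recent)
    if forecast >= 0.80:
        # Proactive action: scale out before the threshold is actually hit.
        print(f"Forecast {forecast:.0%} >= 80%: scaling out now")
    else:
        print(f"Forecast {forecast:.0%}: no action needed")

dynamic_policy(TrendModel(), recent=[0.60, 0.70])  # forecast 80% -> scale out
```

Compare this with the static sketch earlier: the condition and action are the same shape, but the input is a model prediction, which is what makes the loop proactive rather than reactive.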
All right then. Another important message I want to get across today is that AI is becoming easier as we head into the future. A long time ago, when we started writing AI and machine learning applications, we really had to dig into the nitty-gritty mathematical details of developing neural nets or Bayesian algorithms. Gradually, the libraries became more mature, and you could use functions from them to do some machine learning, though it still was not that easy. Then more advanced projects like TensorFlow came into the picture, and AI and machine learning became more and more common. And now, as I am recently learning from the Linux Foundation, there are projects like Acumos where the community can share data and use cases, and you can really get models just by invoking an API call. The world of AI and machine learning is being drastically simplified for people who are not data scientists. That is a good shift for the industry right now, and it is why people are excited to learn more about the technology.

The other good thing happening right now is the wide availability of analytics. We have more sophisticated platforms today; we can look at data at a more granular level than in the past, and we can save as much data as we want because storage has become cheaper. So we have good, accessible data. The other side is compute power. Look at the story of mainframes: if you tried to train your algorithm on a mainframe, it would take months or years. Now look at the compute power of your handheld devices. Low-cost, high-powered GPUs and TPUs are widely available, which makes the machine learning training stage genuinely practical.

With that said, I want to comment on the open source landscape. The Linux Foundation has always played a significant role in the world of open source, and I am a huge fan. In the orchestration space, we have been really involved with a few of the open source projects and have done a number of POCs. ONAP is my favorite; we have done some POCs this year and look forward to more next year. I feel that project holds huge promise for delivering the orchestration solution we are looking for, for 5G and our future. Similarly, there are many open source AI and machine learning projects you can contribute to and participate in. The notable one here is Acumos, which itself leverages a lot of open source projects. Once you start contributing in the community and learning from it, you really feel that open source, with all the community members' participation, is going to drive this roadmap forward.

Let me take one last stab at summarizing what we have really learned today; this is the last slide before you get your coffee break. We learned three things. Number one is cloud native applications: we really need them to make automation possible and to make it faster. The next is that we cannot live on static policies alone; we will need to write dynamic policies that give us proactive and predictive solutions to our use cases, and that is where we will need to create and work on AI and machine learning algorithms. The last and most important thing is that open source adoption and participation will help us get there. Thank you so much for joining us this evening. I still have four more minutes, so if there are questions, I will be happy to take them.