Fantastic to be here at the OpenShift Commons Gathering Data Science. It's a very interesting era where we are starting to take a closer look at how data and AI are going to transform a lot of our experiences. I'm Ganesh Harina, and I'm with Verizon, Verizon Media. I've been doing data and AI for a very long time, over a decade. An interesting paradigm shift I started to see: we were building platforms that were very heavily AI-driven on the cloud, and we're starting to see application demand where we have to move these capabilities onto the edge. So throughout the presentation, I'll be citing our experiences in terms of how we look at these applications and how we solve them using frameworks, platforms, and so on. But most importantly, I feel very blessed to be part of this ecosystem, where I am experiencing how the world will be transformed through AI for better experiences, performance efficiencies, healthcare, and so on.

When you take a closer look five or ten years forward, robotic-arm surgery is going to be very normal. What that means is a doctor in New York can perform surgery on a patient in Los Angeles. To me, this is fascinating. And interestingly, when you take a closer look at what's required for all this to happen, robotics is important, virtual reality is very important, and artificial intelligence is the foundation for this capability. Most importantly, we being part of a telco, 5G will enable these technologies to converge and make this capability a reality in the years to come. But when we ground ourselves and take a closer look at where we are today with ML and AI, a lot of applications really required massive data on the cloud; applying AI to understand various aspects of the network was one of the areas I was very focused on.
But looking forward, industrial automation is a space where we are starting to understand big capabilities and solutions, shown on the right. On the left, autonomous cars; I'm fascinated. There's a long way to go, but the autonomous car today can look at the car in front. What needs to happen next is to connect to 5G capabilities and apply AI to plan the entire route, and that's in play as well. These are fascinating changes that we are all living through, and interestingly, the shift has been accelerating. The way I summarize my experience: any application that we touch, feel, or see will be powered by AI. But it's equally important that aspects like AI bias be taken into account when designing these applications.

Now, to summarize how the application shift is happening: when you take a closer look at any machine learning application, I'm sure we all know there is an aspect of model training, which is very compute intensive, and an aspect of inferencing. In today's world, we very easily deploy both training and inferencing on the cloud and get this ML and AI experience directly from the cloud. But there is one shift we are starting to see: the demand for near real-time inferencing. We are now talking about inferencing in milliseconds, at massive scale: hundreds of thousands of inferences that need to happen within a very short duration. In order to accommodate this, we are starting to see a paradigm shift, and that is moving the inference capability very intelligently and seamlessly from the cloud to the closest location where the need is. For some applications, if the inferencing budget is on the order of 10 to 25 milliseconds (it's just an estimate), then ideally you deploy that inferencing onto the CDN edge. At Verizon Media Group, we have CDN edges in 160 locations.
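The placement logic described above, choosing a deployment tier from an application's latency budget, can be sketched as follows. This is a minimal illustration of the idea; the tier names and thresholds are assumptions drawn from the 10 to 25 millisecond estimate in the talk, not actual routing logic from any Verizon platform.

```python
def placement_tier(latency_budget_ms: float) -> str:
    """Map an application's inference latency budget to a deployment tier.

    Thresholds are illustrative: sub-10 ms budgets need on-premises
    inference (the "2U box"), roughly 10-25 ms can be served from a
    nearby CDN edge, and anything more tolerant can stay on the cloud.
    """
    if latency_budget_ms < 10:
        return "on-prem edge"
    if latency_budget_ms <= 25:
        return "CDN edge"
    return "cloud"
```

For example, an application with a 15 ms inference budget would land on the CDN edge, while a batch scoring job with a multi-second budget stays on the centralized cloud.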
We are already in the process of enabling these CDN edges with intelligence through a platform called Leo, which I will describe in a few minutes. Most importantly, there are a lot of applications that really need inferencing near real time, at massive scale, and, most importantly, highly reliable. In order to accommodate the requirement of high reliability along with millisecond inferencing, we have to start moving inferencing to what I call a 2U box.

Now, an important paradigm shift. When we go back and look at the evolution of the internet, in the very beginning it used to take fairly long for pages to download when we accessed yahoo.com from Sydney. But magically, capabilities like CDNs were enabled to cache content geographically in different locations. This technology happened behind the scenes, and a sudden change in the human experience of using the internet followed: everybody started to have a consistent experience, and CDNs are magic. So today, when we look at how we want to deploy applications, enabling the CDN edge to deploy ML applications is very critical, and there's a transformation happening in this area as well.

Now, what are the applications being discussed right now, why do we need inferencing to happen so near real time, and what exactly is the big problem? There is another very important paradigm shift that I'm sure we have all started to notice. Until now, a lot of ML applications were primarily driven by signals from sensors. They're two-dimensional; they're records. And there are billions of records. In fact, on the platforms that our team operates and builds applications on, we ingest 100 billion records every day. But it's relatively easy to operationalize platforms that can ingest and process 100 billion records, because you have the luxury of being deployed on the cloud.
And most importantly, the inferencing aspect there is on a two-dimensional record. The shift is toward video content, from which we have to pick up intelligence, apply machine learning, surface insights, and solve the problem. That's another huge paradigm shift. And it's no exaggeration that, when I take a closer look at the applications coming our way, the majority are camera-driven, in the space of factory automation. What we are seeing right now is an example of factory automation where video cameras observe the assembly line. These feeds are fed to platforms like Leo, where applications can understand the video signals, run inference, and alert if there are issues, alongside other sensory signals like temperature, current, and so on. So factory automation is an area where we continue to invest a lot in building applications.

And I call it a 2U box: we have to deploy a 2U box, we need a platform like Leo, and we need applications staying close to the edge. That way we get reliability both in terms of high-volume inferencing and in ensuring that it is seamless and actually works in a factory environment. Private 5G is definitely going to play a big role in connecting all these different sensors, cameras, and so on, and routing signals and video streams to a centralized platform that can ingest, apply artificial intelligence, and start to surface insights to improve efficiencies and avoid errors near real time, without any material loss. This is an area we at Verizon are starting to invest in heavily. I'm sure many of you know Verizon has a company called Skyward, which was acquired a few years ago; their business is flying drones.
Now, given that Verizon has tens of thousands of cell towers, and that we have technologies like drones and computer vision, it's very timely that we build applications so that, instead of people climbing cell towers to diagnose issues with the towers and connections, we fly drones to understand those issues. One, it addresses a lot of safety concerns. Two, there are a lot of cost efficiencies attributed to it as well. And most importantly, with computer vision you surface a lot of insights on which you can take corrective action near real time, and we're continuing to invest here. This is a very vertical application: today you solve it for cell towers; you can retrain it to monitor oil pipelines, buildings, bridges, and so on. I personally am very fascinated by the mission we have embarked on. We are very early, and there's a lot of learning here, but I'm sure in the months to come we'll be able to operationalize products like the one we are discussing right now. And it really requires edge capability: the video streams coming in near real time, inferencing on the edge, and then surfacing insights to the person who is conducting the survey of the cell tower or antenna.

Now, how can we solve all these things efficiently? That's the term I would like to use. When we take a closer look at next-generation applications, pretty much every application will have an aspect of machine learning attached to it. But the very interesting difference between applications powered by machine learning and traditional applications is that machine learning applications are not static. I can't say the release is complete, this is an awesome application, you guys go ahead and use it. We really have to monitor the model and have a process in place to retrain it, to keep it meaningful, relevant, and accurate on the ground.
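The monitor-then-retrain loop described above can be sketched in a few lines. This is an illustrative sketch, not the Leo platform's actual mechanism: track a deployed model's rolling accuracy over recent predictions and flag when it drifts far enough below its baseline that retraining should be triggered. The class name, window size, and tolerance are all hypothetical.

```python
from collections import deque

class ModelMonitor:
    """Flag a deployed model for retraining when its rolling accuracy
    drops more than `tolerance` below the accuracy it had at release."""

    def __init__(self, baseline_accuracy: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, correct: bool) -> None:
        """Record whether one ground-truthed prediction was correct."""
        self.outcomes.append(1 if correct else 0)

    def needs_retraining(self) -> bool:
        """True when rolling accuracy has drifted below the threshold."""
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```

In practice the "correct/incorrect" signal comes from whatever ground truth the application can collect (for example, an operator confirming or dismissing a factory-line alert), and the retraining itself runs back on the cloud.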
And that's a non-trivial problem. That's where we need an ecosystem that supports the building and deployment of next-generation applications. ML-based applications can't be transactional: I can't deploy the application and walk away. I need to provide tools and capabilities that ensure these applications stay meaningful over a period of time. That's very important on one side. On the other hand, we need to be able to distribute the workloads: the training workloads on the cloud and the inferencing workloads on the edge. In simple terms, I say the pink boxes and the blue boxes were both deployed on the cloud; now, elegantly, we have to separate the pink boxes out to the closest edge, which could be a CDN edge or a 2U box. That empowers you to build applications like drone vertical inspection, factory automation, and so on. So we are very heavily invested in operationalizing a platform capability that empowers us to build edge applications seamlessly.

What you're seeing is a very high-level blueprint of the platform Leo, where the pink boxes are taken care of as part of model inferencing and application deployment. This application deployment has to be end to end: we should be able to run a UI, and it has to be secured. This, to me, is a paradigm shift. We all talk about distributed infrastructure; now we are talking about distributed applications, where the same drone inspection, the same factory automation, has to be deployed in multiple locations, and in many cases has to be integrated with the cloud to work very seamlessly. It's a fascinating time: the demand for infrastructure is changing, and the security posture is changing. We just can't say we have an awesome cloud infrastructure in multiple locations.
It's micro clouds, and these micro clouds have to be connected to the parent cloud, primarily because your application loads are distributed between the edge and the cloud with a seamless interconnect. What you're seeing is a reflection of our view from about a year and a half ago, and today it is real. So Leo is the glue between various technology infrastructures, platforms, and integrations with data sensors and so on, which enables and empowers very different applications such as drone inspection, factory automation, and digital twins, all operationalized within Verizon for Verizon's own use. I'm sure we all have our own strategies, but I'm very excited and encouraged to share the success we're starting to see in understanding the needs of an edge platform and ironing out the capabilities that are actually needed on the edge.

Now, in a nutshell, with Leo you can build an end-to-end application that can ingest data, apply inferencing at massive scale on the edge, and deploy any machine learning model, and, most importantly, it is container based. What that translates to is that it can be deployed on any edge platform. But as I was mentioning, it's very important to have a seamless interconnect to the cloud, because the edge is only a portion of your application: a lot of the training needs to happen on the cloud, and there could be compliance policies where you have to persist the data on the cloud, so this data has to be shipped to the cloud for various reasons. And most importantly, there is a fascinating approach to building models, called distributed model training, where training done in many places is consolidated on the cloud; this too can be approached through platforms like Leo.
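The distributed-training idea mentioned above can be sketched with a simple consolidation step: each edge site trains on its local data and ships only model weights, and the cloud combines them with a weighted average (the federated-averaging pattern). This is a minimal sketch under stated assumptions, not how Leo actually consolidates models; plain Python lists stand in for real model parameters, and the function name is hypothetical.

```python
def federated_average(site_weights, site_sample_counts):
    """Consolidate per-site model weights on the cloud.

    site_weights: list of weight vectors, one per edge site.
    site_sample_counts: number of local training samples at each site,
    used so that sites with more data contribute proportionally more.
    """
    total = sum(site_sample_counts)
    dim = len(site_weights[0])
    averaged = [0.0] * dim
    for weights, count in zip(site_weights, site_sample_counts):
        for i, w in enumerate(weights):
            averaged[i] += w * (count / total)
    return averaged
```

The appeal for edge deployments is that raw video or sensor data never has to leave the site; only the much smaller weight vectors travel over the interconnect to the parent cloud.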
Now, at a very high level, when you take a closer look at the capabilities we would need on the edge: data management is super important, being able to ingest all forms and kinds of data at high throughput, and so on. It should empower us to build end-to-end applications with a UI, very securely, and so on. Most importantly, the security posture has changed: because you have a 2U box sitting somewhere, physical security becomes important, and application security becomes important too. These things have to be factored in. Some of this is beyond Leo, but we need a strategy to address all aspects of security. Leo does address application security; we would also depend on edge enablement capabilities like OpenShift, in this case, to ensure it is seamless, that we can control and manage the containers seamlessly on the edge, and that we provide a very secure environment for deploying edge applications. Most importantly, we have a strategy in place with components where you can deploy models, seamlessly manage them, monitor them, and perform near real-time analytics too. Everything I have said is part of Leo; it's operationalized, and we have been very successfully using it within Verizon. Interestingly, though it's very early, Leo has become the north-star edge architecture for Verizon Media Group as we speak.

Now, to conclude, we are starting to see a new influx of applications, which I call next-generation applications, and each one of them will be powered by AI; there's no doubt. They're poised to enhance human experience, efficiency, health, safety, and so on. But the paradigm shift from the infrastructure perspective is that we have to understand and identify the components that have to be moved closer and closer to the edge. It could be a CDN edge or a 2U box.
Now, with that, the way I would like to summarize the stories and experiences I have shared: it's going to be very interesting as we move forward. Primarily, as you take a closer look at building ML- and AI-based applications, it's complex, and we have to find ways to simplify it through a platform strategy. We need strategies and partnerships in place where we have control on the edge, and technologies like OpenShift will definitely put us in a very good position to have a controlled and manageable environment, taking into account that it's very distributed too. And most importantly, how are we going to build, test, and deploy while keeping the environment agile so that it stays adaptive? Taking all of these things into account, we are very early on and have our own experiences, and I'm very happy to learn about your experiences by connecting offline. I'm also starting to look to consortiums like this one, and I'm really excited and happy to be part of it. I feel very blessed to be part of an ecosystem like this. We bring in what we know, primarily from the experience of solving problems on the edge and building ML and AI applications for Verizon, Verizon Media, and the other enterprise customers we are starting to work with; and we're here to learn as part of the ecosystem and become more and more efficient as we continue to build our next-generation applications, which we envision will change the human experience and improve efficiencies. And most importantly, I am excited about improving the security posture, and health and safety too. So with that, I sincerely thank you all very much for this opportunity and look forward to syncing up with you offline as part of these consortiums, and we can take it from there. Most importantly, stay safe. I'm sure you're all going to have a fantastic and terrific 2021. Thank you.