I'm your host, Arpit Joshipura, head of Networking, Edge and IoT here at the Linux Foundation, and welcome to ONEEF, as we call it. Before I get into the specifics of my presentation and my thoughts, there are three housekeeping notes that I want to highlight. Number one, this is live, so please ask questions. It's a rare opportunity when you get that in front of executives. The chat is on your top right, and we will answer as many questions as possible. So that's number one. Number two, this is closed captioned in 16 languages worldwide, and there's a widget below your session screen if you want to change the language. I can speak four, so I'll try and confuse the AI system. But hey, no, no, don't do that. And then the third thing is, if you get confused or have any issues, there are FAQs under the Events page, there's a lobby and chat, so just reach out to LF staff and they will get to you.

All right, so I want to quickly get into what ONEEF is and how it's different. It's three days in three separate time zones, optimized for EU, North America, and Asia. And it's with almost 30 executives and thought leaders from around the world, covering many, many topics across the enterprise, the edge, cloud, telcos, et cetera. We have also incorporated lessons learned from the past year of these virtual events. So first, I want to thank the LF Events team for opening up this forum to the global participating community. For those of you who have attended the Open Networking & Edge Summit (ONS or ONES), we used to have something called ONIF, the Open Innovation Forum, which was an invitation-only event with limited participation. With this opening up, there are now hundreds or thousands of people who can actually participate in a live session and hear the leaders. The information you're going to be provided is not overly visionary. It is really focused on what's going on and what you can benefit from, both personally and professionally.
And obviously I want to really thank all the executives participating today for taking the time. I know it's a busy time for all of us, and taking the time to talk to the ecosystem and communicate their priorities is extremely valuable to this Open Networking and Edge community. The format is going to be fairly straightforward: 10 to 15 minutes of presentation, then live Q&A, quick questions, quick answers, and if we can't get to everything, we will obviously follow up. The teams and the presentations will go into what is possible today and, in the case of Edge, some new applications, use cases, and demos, plus what the 2021 priorities are for the community. So with that said, let me get into a two-slide overview for people who are not familiar with the Linux Foundation. I doubt there are many, but it's a good refresher. We're more than Linux. We host multiple technology projects and multiple market-disruptive collaboration entities, whether it's security, networking, cloud, automotive, blockchain, Edge, web, et cetera. So we're the home of many, many projects that foster and build this open collaboration. Obviously, 2,000-plus members from 41-plus countries, and lots of developers. Now, no single vendor or community member could actually hire that many developers, but they're all working for you all, and some of them are on the call. This is where shared collaboration is really beneficial to everybody, especially on these critical open source projects. The stats may not be visible on your screen, but feel free to download them from the LF website: millions of pull requests, millions of changes, lines of code, security scans, et cetera. So that's the Linux Foundation, and we're very excited to host and participate in this Open Networking and Edge Executive Forum.
And so with that, what I want to quickly cover is a report card and a 2021 set of priorities for our technology and our projects, which leads into ecosystem and production deployment. And then I think the most interesting thing happening this year is that different vertical markets are pulling these open source projects and starting to use them in their own industries. We've not seen that in the past, other than maybe networking, telecom and the cloud providers, but all of a sudden we have different vertical markets relying on that open pipe and open plumbing underneath that has been created over the last five years. So I'm very excited about that, and we'll talk about it. And talking about which industries, continuing from the keynote I did last fall, we have a whole set of industries, whether it's telecom, automotive, motion pictures, banks, FinTech, public health or energy. These are all 100-plus-year-old industries, and they've been transformed through open source with very rapid innovation. You may want to download the white paper on the Linux Foundation site that shows the how, why and what of each of these industries. We're going to zoom in on networking and Edge, but you'll see flavors of the other industries covered in several of these presentations. What's happened is that in the last 10 years we got a whole set of technology disruptions, right? Whether it's virtualization, disaggregation, automation, cloud native, et cetera, that led to projects under the Linux Foundation, specifically in networking and Edge, utilizing these technologies, creating a community, and then providing code and open interoperability for most of these. Where we are heading in the next five-plus years is putting these end-to-end open source software solutions to use in vertical markets, right? Whether it's Edge, intent-based or policy-based, using the entire ecosystem and not just one or two projects.
So that's what is very exciting this year. And when you talk about these projects, this is a view of a set of sample projects that can create a full end-to-end open source ecosystem through collaboration, from mobile networks to residential broadband to small and medium offices to large enterprises, right? Which have their own data centers. You come in through the access, get into the Edge, come through some version of cloud, and then in the core you have a whole vertical stack that does data plane acceleration, network control, management and orchestration, analytics, and a whole set of new functions that build on this for various applications. This is a very powerful phenomenon where you can actually put it all in perspective. And I think you may be aware of two projects: Magma, an open mobile core which the Linux Foundation is hosting now, and Anuket, the combination of CNTT and OPNFV that was announced this quarter. Both are extremely powerful projects that make up the end-to-end stack. So if you look beyond these projects, what's more important? I talked about Anuket, but I want to point out this circle that really accelerates deployment. How do we take the projects we host, which create software, get that software into products from vendors, system integrators and end users, and push it into production? The back end of this cycle needs to be accelerated, and what you see in the back end are open compliance and verification, open interoperability and testing, and open training and certification. You cannot have every end user test and interoperate multiple things without a standard set of objectives, right? So this is where projects like Anuket come in, and projects like LF Edge's Akraino come in, where there are a set of blueprints that I'll talk about in a bit.
The one thing I also want to emphasize is that when you talk about end-to-end, all the way from RAN to Edge to Core, there's a set of umbrellas we're going to utilize to create what we're calling the 5G Super Blueprint. Things like the LFN end-to-end demos that have historically been shown at ONES will be augmented by the community to include the RAN, the Edge and the Core, bringing it all together in a very simple manner so that end users can deploy quickly and rapidly. What we have seen is six months of interop and production testing cut down to six weeks. So, significant changes in the deployment cycle because of the back-end interoperability and blueprints. So then, where are we heading and what's new? I think this is an extremely important slide, because what has happened is you take all these end-to-end open source projects, and for those of you who are not familiar, they look like a bunch of logos and acronyms, but I showed them in the architecture diagram, and these projects have matured and started being used in major segments. To give you some examples: enterprise networking, with a private 5G / private LTE kind of use case and blueprint. Enterprises are starting to use these projects to get workloads across clouds, right? In a multi-cloud, hybrid environment, and you'll hear from some of the industry leaders on how they do it. There's the end-to-end visibility and monitoring that an enterprise needs, because the last mile is no longer just a last mile when you have an edge and a centralized data center in a public cloud. How do you look at your own workload and get monitoring and visibility? Some of the largest of the large enterprises are helping the underlying projects move forward on that. And we've seen governments in the various regions, whether it's North America, APAC or the EU, building on open and secure 5G.
For example, we announced a project with DARPA and the DoD on utilizing LF Networking and LF Edge to create an open source framework for the government to use in 5G applications, IoT and Edge, all built off the open source trains we've seen. And then we have seen projects that drive global connectivity in developing countries, countries that just need basic connectivity at very low cost in a very simple, open manner. We've also seen some unique industries in the broadcast sector come in and demo spectrum sharing, building on ONAP and ATSC 3.0, and you'll hear later in the conference from Madeline, the president of ATSC. ATSC 3.0 is a huge undertaking, and it shows how the underlying pipe and platform that we as a community have created gets used. And then of course, Edge and IoT is the hottest vertical market being driven right now, and you will see many use cases presented at ONEEF this week. To point out a few industries that have really taken up the challenge: manufacturing, energy, oil and gas, retail, home, automotive, fleet transportation, et cetera. These are hot. In order to facilitate some of this, we're seeing a big definition and alignment of Edge resources in the community. So if you want to take a screenshot of this slide, please go ahead, because people have been using loose terminology around Edge for quite some time: thin edge and thick edge and far edge and near edge and so on, and they're all relative terms. What the community did as part of the State of the Edge research (by the way, the 2021 edition was announced today and is out for download, so please do that; it's a very detailed and exhaustive analysis) is show that there are only two types of edges from a terminology perspective: a user edge and a service provider edge.
The user edge is dedicated and operated by the user; the service provider edge is shared and offered as a service. And if you double-click on that, there's not one implementation. There's a constrained device edge, whether microcontroller or embedded, with very tight constraints; a gateway-type smart device edge; and an on-prem edge, which could span multiple secure locations. Then the last mile separates that from the service provider edge, which sits at the base station and/or smart central offices. Anything centralized beyond that is not really edge. It could be an IoT application, but it's not edge if it wakes up every month and dumps some sensor data. And then we have a whole set of projects to support that, of which I want to point out a couple: EdgeX Foundry, which is a framework, and you will hear from Fledge and EVE and some of the other projects later in the conference today as well. And then from a blueprint perspective, we have Akraino. This is very important because you can actually utilize its use cases and blueprints for things like connected cars, augmented reality classrooms, cameras, cloud gaming, and a whole bunch of other things that matter in the various parts of edge. You will hear these use cases as we go through the presentations today. So I want to wrap up with our 2021 priorities as a community. What we are hearing from the community is that we need to focus on complete solution stacks to integrate and fully operate the edge and networking stacks, which are very critical. As part of that, we're going to focus on speeding up the back-end interop and deployment process, and finally help these vertical markets start embracing, utilizing and building on these open source cloud stacks.
So with that, I'm going to wrap up right here and show you what a wonderful treat you are in for in terms of the number of speakers and the quality and depth of the top executives today, tomorrow, and the day after. They comprise edge and IoT end users, enterprises, service providers, cloud vendors, SIs and more. We could not ask for a better leadership roster to come in and present their ideas. So again, I want to remind everybody: if there are questions, put them in the Q&A at the top right; they'll get to me and I will ask the presenters. We'll try to answer as many as possible. We'll try not to take a break between presentations, so if you can send those questions while the presenter is talking, that'll be fantastic. With that, I'm at the end of my presentation, and what I'd like to do is answer one of the questions that just came through, which is: what are the low-hanging fruits you see in the vertical markets? The first low-hanging fruit, I would say, is that each vertical market should not focus on doing its own lifecycle management, plumbing, infrastructure management, abstractions of interfaces, et cetera. They can all focus on their differentiation, whether hardware or software, and leave the lifecycle management, the plumbing as we call it, the integrations and the APIs to a cloud or an enterprise, to the open source world. It's a great question, and that's where most of the LFN, LF Edge and LF projects are focused: non-differentiating code in the middle of the pipe. So thank you for the questions; I think we might be out of time. If we get more, we will answer them as we go on. But with that, I am going to introduce the first keynote presenters: Tom Arthur, who is the CEO of Dianomic, and Chris Bainter, who leads global business development at FLIR Systems.
They're going to talk about a unique application. Remember, I told you Edge is huge and IoT is huge, and there's a whole set of applications coming up that are being built on open source projects. These applications provide a unique sense of edge computing resources close to you. So one such use case, built in the community by the community, is fascinating to learn about. With that said, I'm going to hand it over to Tom and Chris.

My name is Tom Arthur, I'm with Dianomic, and maybe Chris, we could start with you introducing yourself and FLIR Systems.

Sure, my name is Chris Bainter, Global Business Development at FLIR Systems, and I represent the industrial side of our business.

And I'm Tom Arthur, co-founder and CEO of Dianomic. And actually this slide, at the lower right, almost says the whole thing. So Dianomic, OSIsoft and Google started a project and contributed it to LF Edge, called Fledge, and it is focused uniquely on the industrial IoT edge. For those that have been in networking a long time (I'm an old guy, actually), you might remember the 1980s and 90s, when there were multiple protocols and multiple physical layers like ARCNET and StarLAN; Ethernet was just coming on board. And then you had protocols like DECnet, AppleTalk, SNA; TCP/IP was just starting out. In the OT space, the industrial space, the exact same chaos exists. You have numerous protocols that are proprietary, and semantics and data mappings that don't make sense: this guy's pump is defined differently from that guy's pump. So we're building an open source project to enable industrials on the Edge to connect to any machine or any sensor, move that data, process that data, and integrate that data with any industrial system or any cloud. And the users of that are the data scientists and what I call the no-code engineers. This is a great mechanical or chemical engineer who doesn't know how to code; how do you enable them to write these applications?
Because tomorrow, there's this conversation they call the fourth industrial revolution, and the fourth industrial revolution is like the self-driving car, but it's becoming the self-driving factory. So what's happening in industrials that's making this occur, which is really what's happening to everybody on the Edge but specifically in industrials: sensors like FLIR's are getting way more powerful, way less expensive, way more accurate. At the same time, the cost of computing on the Edge is going way, way down with incredibly powerful computers. At the same time, you have these new workloads: you have machine learning, you can run DNNs out on the Edge, and tomorrow you're going to have predictive maintenance on machines. You'll have the entire factory with predictive maintenance, or even more importantly, you'll have OEE applications that are optimizing processes on a batch-by-batch basis. And then you combine that with the ubiquity of networking today. Whether it's 5G or private LTE networks or the newest Wi-Fi networks, the ability to connect all of this Edge, run these new workloads, take this new sensor data, and completely automate a factory is happening. So what exactly is Fledge, and what are we doing on the Edge, specifically in the industrial space? In the industrial space, like I said earlier, there are literally hundreds of protocols, some of which, frankly, I've never heard of: things like BACnet, things like DNP3, IEC 104 used by energy companies in Europe, CAN bus, which is how you connect to a car. All of these proprietary protocols lack common semantics, common data mappings, and common ways to add meta information and ingest the data. The other major problem you find in industrials is that the sources of the data are quite unique. For instance, if you stick a 40 kilohertz accelerometer on a piece of rotating equipment, you're producing probably up to six gigs of vibration data per hour.
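That six-gigs-per-hour figure holds up as a back-of-envelope calculation. The parameters below (three axes, 32-bit samples, a framing and metadata multiplier) are illustrative assumptions, not a published sensor spec:

```python
# Rough data-rate check for a high-frequency vibration sensor.
# All parameters here are illustrative assumptions, not a FLIR spec.
SAMPLE_RATE_HZ = 40_000   # 40 kHz accelerometer
AXES = 3                  # tri-axial sensing (assumed)
BYTES_PER_SAMPLE = 4      # 32-bit reading per axis (assumed)
OVERHEAD = 3.5            # timestamps, framing, metadata (assumed)
SECONDS_PER_HOUR = 3600

bytes_per_hour = (SAMPLE_RATE_HZ * AXES * BYTES_PER_SAMPLE
                  * OVERHEAD * SECONDS_PER_HOUR)
print(f"{bytes_per_hour / 1e9:.1f} GB per hour")  # → 6.0 GB per hour
```

Even with different assumptions for axes or overhead, the order of magnitude is gigabytes per hour per sensor, which is the point of the argument that follows.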
And you are not going to send six gigs of vibration data that says, hey, my pump is doing just fine, to the cloud. So this is a classic opportunity to do edge processing; you have to do edge processing, actually signal processing. So the first service is that Fledge can connect to any machine. It can actually do signal processing on the Edge; it can ingest computer vision, it can ingest a PLC and take that data. The second feature in Fledge is the ability to aggregate that data in context. Tomorrow you'll have a PLC connected to some machine, and then you might take a FLIR camera and point it at the same machine, and you might stick an accelerometer on that same machine. So now I'm getting data from a PLC, from a camera doing computer vision, and from an accelerometer, and all of that needs to be in the context of the health and welfare of that machine. Fledge can aggregate that data and put it together in context. The next major feature you need, of course, is buffering, because sometimes, as we all know, the network goes up and down, and you don't want to lose that data. And then Fledge can actually filter that data. You can filter the data coming into the Edge from the machine or the accelerometer, for example computing RMS (root mean square) or fast Fourier transforms, or applying high-pass and low-pass filters. And you can also filter that data as you send it to the cloud: you might have to compress it, you might conditionally forward it. So you need to be able to filter the data on the way in and on the way out. And then Fledge can also run events and rules on the Edge. You can look at data and do simple transformations, like converting Celsius to Fahrenheit, or you can do something as complex as running TensorFlow Lite and the latest DNNs, doing machine learning out on the Edge. So Fledge is naturally already connected to these machines and to the streaming data from the sensors and the actuators.
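The filter-and-rule idea described above (buffer raw samples, reduce each window to one RMS value, then apply a threshold rule) can be sketched in a few lines. This is a hypothetical illustration of the concept, not Fledge's actual plugin API; the class and parameter names are invented:

```python
import math
from collections import deque

def c_to_f(celsius):
    """The simple transformation mentioned above: Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def rms(window):
    """Root mean square over a window of raw samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

class VibrationRule:
    """Hypothetical edge filter: buffer raw accelerometer samples,
    reduce each full window to one RMS value, and flag an event
    when the level crosses a threshold. Not the Fledge API."""
    def __init__(self, window_size=8, threshold=5.0):
        self.buf = deque(maxlen=window_size)
        self.threshold = threshold

    def ingest(self, sample):
        self.buf.append(sample)
        if len(self.buf) < self.buf.maxlen:
            return None                      # still buffering
        level = rms(self.buf)
        return {"rms": level, "alert": level > self.threshold}
```

A window of steady low-amplitude readings yields `alert: False`, while a burst of high-amplitude samples trips the rule. The reduction from a whole window of raw samples to a single RMS number is exactly the kind of on-edge data reduction being described.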
So it's also the natural place to stream that data through a new machine learning model that might predict the wear and tear on an inner-race bearing and tell the repairman: not only should you go check the machine, you should actually take a new bearing with you and replace it when you show up. And then the last part of the integration, which Arpit was also talking about, is that you need to integrate this data. In the case of industrials, they have tens of millions, if not hundreds of millions, of dollars in CAPEX they've spent on their SCADA systems, their DCS systems, their database historians, their quality control and OEE systems; together we call this the ISA-95 stack. That stack needs to receive the data. But they are also looking at the cloud and putting new workloads in the cloud. So Fledge enables them to send the data they need to their historian, and at the same time send that data to their data scientists, who are using the newest, latest, coolest tools from Google to build machine learning models. So you probably already have this slide in mind: Fledge can connect to a single sensor, process that data, transform that data, and send it to an on-prem historian. But as you can see in the second panel here, it can also connect, as I said, to a PLC, a FLIR camera and an accelerometer, and aggregate that data. It can send that data once again to the historian and the OEE system, but also to the cloud. And then Fledge is also designed to talk to another Fledge, so you can deploy these in hierarchies or meshes and run the workloads at the appropriate level across the infrastructure. You can then forward that data to your ERP system sitting in Amazon, send the ML data to your data scientists using Google Cloud, send it to Kafka to feed your ERP system, and send it to the on-prem systems south of the firewall that your operators use every day.
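The fan-out just described (full-fidelity data always going to the on-prem historian, and only summaries or anomalies going north to the cloud) amounts to a small routing decision per batch of readings. The sketch below is illustrative only, with invented names; it is not how Fledge actually configures its north services:

```python
def route_batch(readings, alert_threshold=75.0):
    """Hypothetical north-bound routing: the historian always receives
    the full batch, while the cloud only receives a compact summary,
    and only when the batch contains an anomalous value."""
    values = [r["value"] for r in readings]
    summary = {"mean": sum(values) / len(values), "max": max(values)}
    destinations = {"historian": readings}    # full fidelity stays on-prem
    if summary["max"] > alert_threshold:      # conditional forward north
        destinations["cloud"] = summary
    return destinations
```

In a hierarchy or mesh of Fledge-to-Fledge deployments, the same decision can run at each tier, so each level forwards progressively less data upward.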
And then last, the reason this is a really important open source objective goes back to what I said. If you were around in 1990, when TCP/IP was just taking off, it was the standards committees that solved these problems. But when you look today, in 2021, at these really big problems in computing, where you have fragmented diversity right at the plumbing level like this, that is a perfect problem for open source projects to solve; let's not reinvent the wheel. So when Dianomic, OSIsoft and Google created the Fledge stack, we contributed it to the Linux Foundation, which is the best place to run governance and everything else about a serious open source project, and we started encouraging not just technology companies but industrials themselves, like FLIR, to get involved. One of my favorite ones here, which you can see in the middle, is RTE. RTE is the largest grid operator in the EU. There's another open source group under the Linux Foundation called LF Energy, a group of energy companies and technology suppliers building open source projects to solve new problems in green energy. RTE and LF Energy looked at the code and said, hey, Fledge is the right architecture for us. So they've taken it in; they call it FledgePOWER. And now they're adding southbound protocols to Fledge to connect to their RTU server, which is the SCADA system running in the substation. RTE is contributing IEC 104, and they're now writing IEC 61850 along with Swissgrid. Nexcom, a gateway provider, contributed J1939, which is how you connect to a Caterpillar tractor. So the community is taking off, and we're super excited to have FLIR using it as well. And with that, I'll give it to Chris to talk about how FLIR is using Fledge.

Yeah, thanks Tom for running through that, and I think you did a great job explaining the structure.
For FLIR, in the next few minutes I wanted to talk about our evolution in how we serve our customers, focusing specifically on the industrial space. When we look at those types of customers: FLIR is best known for thermal infrared cameras, and if you're not familiar with infrared, in a couple of seconds here, it's basically a way to get hundreds of thousands or millions of points of accurate, non-contact temperature measurement. It provides both qualitative context of where hotspots are, which may be indicative of problems, and actual temperature measurements if you want to do trending over time. While we're associated mostly with thermal infrared technologies, actually 49% of our business comes from non-thermal sensing types; we also offer vibration as well as acoustic emissions and other types of sensing technologies. In the past, we offered these technologies to our customers, and a lot of times they were fixed-mount or man-portable, and people used them to collect data, interpreted the results based on experience and training, and then decided how those drive their decision support. Where FLIR sees the biggest opportunity, based on market themes that are high level but resonate across utilities, oil and gas, and manufacturing, is the aging workforce. You've got workers that have been doing this for 30 years; how do you replace those decades of experience? Our answer, and what we're probably all here looking at, is to create more connected solutions that easily plug into the existing systems our end users are using, and help them move from data to decisions. And that gets into the second column here, interconnected sensors. Tom mentioned it, but it's being able to aggregate data across multiple sensing types and plug that in however the customer likes to work, instead of FLIR saying, here's a specific piece of software, it works, but you've got to use our siloed solution set.
We want to be able to integrate and work however the customer works, and then of course get to decision support. So when we take a closer look at that, here are just a couple of examples to get a little more specific. Common use cases for FLIR's thermal infrared cameras are machine condition monitoring; that top image, I believe, is a pump motor. The second one, maybe not as well known: you can filter on a specific spectral band and visualize gas leaks, so you can not only detect and alarm on them, you can also quantify them. And the last one is more of a process automation and control use case, where we're maybe less concerned about the absolute temperature measurement and more about being able to visualize it and look for potential problems where, in the visible spectrum, we wouldn't be able to see that contrast. So these are common use cases that, again, used to be handled by individuals carrying around cameras, and now we can automate them and aggregate that with the other types of data tied into these different use cases. Fledge empowers this, and working with Dianomic has really opened up some possibilities there, not only making our products more plug-and-play with how our customers work, on the far right, however they want to work, not tied to a specific software or historian or CMMS system, but really making it work in all those scenarios. And then of course, getting into the analytics and decision support and being able to push that to the edge. Tom talked about the network load and how it goes up and down. It's not feasible to take hundreds of thousands of points of 14- or 16-bit pixel depth and stream that in real time across networks; it just wouldn't make sense. So being able to do that on the edge becomes essential for a lot of our customers.
So as we look at the last slide here, the key benefits for our customers: I mentioned quite a few of these, but just to recap, the core is to help them lower integration costs, making it, again, plug-and-play with how they work; empowering no-code engineers (I think I heard that terminology for the first time from Tom recently, but it makes a lot of sense; in the past, if you wanted to do custom work, you had to really know how to program and use our SDK, and via Fledge we now make that easier and available to people who may not be deep software programmers); and then being able to move from data to decision support. There are a couple more benefits here, but really, our evolution and how we think about the solutions we offer to the market is driven by these customer needs, and Fledge is an essential part of that, as is our partnership with Dianomic. So I'll stop there.

Thanks, Chris. Arpit?

Thank you. Thank you, Chris and Tom. There are a few questions, if you want to address them right away.

Yeah, well, on the one about GPUs: absolutely, especially for running computer vision out on the edge. We've got applications where we're using Google Coral, which is the TPU, and we have other examples where we did digital twins for Honda's pro racing team using Xavier, which of course is the NVIDIA product. So for those kinds of workloads, that hardware is absolutely helpful.

Yeah, I think the question was, what do you see the role of xPUs (GPUs and TPUs) to be? So thank you for the answer. Also: do you see process control, implementing controls at the edge, any different from the old-school PID controls? And what's here with FLIR?

Yeah. Well, in Fledge's case, we're not real-time control. We run on any variant of Linux, so Linux is where we run, but it is bidirectional.
So if you have something that's roughly sub-second, you can definitely have bidirectional data and send commands in both directions, but it's not intended to be the next-generation PLC.

Got it. And then finally, is this more secure? The general question.

Yeah, the general answer is, well, it uses Linux, and we apply security on a per-data-stream basis, so you're using HTTPS and SSL, and you can use the root of trust that you want. Usually the north or south connection has its own security. For instance, a PI Server uses Kerberos, so you need to issue your token from the PI Server to connect to it securely. OPC UA, which is an emerging protocol, especially in factory PLCs, has its own security definition, so you would use the security of OPC UA, and you would use Kerberos to talk to PI, and you'd use your root of trust to send the data to Microsoft.

Excellent, very good. Well, thank you very much, Tom and Chris, this was amazing. We really appreciate you doing this with us. So thank you. All right.

Okay. One thing I want to emphasize here is that last year, people were presenting questions like: is edge real? How big is it? What kind of applications? And what a difference a year makes. Today we're talking about actual implementations and deployments, with very specific use cases and end users building off open source projects. This is the power and speed of innovation in open source. So we're really excited, and you will see lots of examples coming up throughout the Open Networking and Edge Executive Forum that highlight some of these use cases. So again, thank you. I do know there might be more questions coming in; we'll probably answer them offline. But let's move on to our next presentation, moving from all the way out at the edge and IoT and all the industrials, to cloud.
And we'll be toggling between IoT, edge, cloud, enterprise, and telecom throughout the presentations, just to keep you on your toes. But let me introduce the next presentation. We have George Nazi, who's the global VP for the telco, media and entertainment industry at Google Cloud. And then we have Amol Phadke, who is the managing director for global telecom industry solutions at Google Cloud. I think both of these folks are well known; no introduction is needed. They have not only influenced the telecom and cloud industries for the last, I don't know, 30 years, sorry to age you folks, but they have really made a big dent in the last couple of years by bringing telecom and cloud together. And instead of three years ago, when it was all competition, we are now in a spirit of collaboration, and that has really spun off all of these activities across the industry. So with that, I'm gonna hand it over to George to talk about how he sees the world, and 2021 specifically, in the multi-cloud era, obviously with a Google view of the world. And then of course we'll take questions; Amol and I will be chatting as well. Good morning, good afternoon, good evening. Hello, everyone. And thank you for joining me today at the Open Networking and Edge Executive Forum. I hope you are all keeping safe and well. It is a privilege to be here and present to you today. Let's go to the next slide. I wanna do some quick housekeeping before I start my presentation. I want to remind everyone that what I'm going to share with you is Google confidential and proprietary information. It is shared under NDA. Please do not take screenshots or post on social media. Let's go to the next slide. All right, I'm George Nazi. I am the vice president of telco, media and entertainment industry solutions at Google Cloud.
And I'm here today to talk about how Google is partnering with communications service providers to accelerate their growth and help them transform into digital services companies. Let's go to the next slide, where we cover the topics for today. As we all know, we're living in disruptive times, and the importance of service providers has really come home. They have truly become the information fabric of everything we are doing. So today I want to spend time with you to cast a spotlight on the communications industry, highlight the challenges and opportunities that CSPs face, and share with you how Google Cloud is accelerating the CSP transformation. Let's go to the next slide. Let us look at this from an industry perspective first, because we're seeing an epic disruption. So where are we today? Communications service providers around the world are facing similar challenges, which have only been amplified by the COVID-19 pandemic. Let's look at the following five challenges. The first one is stagnating top-line revenue growth. Even before the pandemic, CSPs faced stagnating revenue growth. In 2019, CSP revenue was projected to rise by 1%, which is actually a decrease from the 3% growth achieved the year before. And while CSPs responded extremely well to the pandemic, research shows that 92% of customers say that their CSP exceeded expectations in managing the crisis, and the demand for the CSPs' core product, which is connectivity, increased by 70% in data usage. Unfortunately, the CSPs have not been able to translate usage demand into revenue. And as we look ahead, we continue to see challenging economic conditions, which will place further pressure on the CSPs. Many operators are now expecting revenue underperformance of 2% to 5% compared to their pre-pandemic plans. The second challenge: increasing CAPEX investments to build out future networks without a clear business case for monetization, especially in the 5G world.
Now in a typical operator, a 1% hit in revenue requires a 4% to 8% reduction in CAPEX to offset it. Let's think about that. Against the backdrop of revenue stagnation are increases in capital expenditures, as operators continue to upgrade their 4G/LTE networks while also rolling out 5G networks. On 5G alone, network buildout costs are projected to reach $872 billion over the next 10 years. Third challenge: increased network demand. At the same time, amidst the pandemic, all of us have been working, studying, and socializing from home, and the result is increasing pressure on the network. In fact, fixed and mobile operators have experienced as much as 60% more internet traffic than before the outbreak, highlighting the ever-increasing need for resilience and scalability of the network. Fourth challenge: the customer experience is really at the breaking point. Now, although 92% of customers say their CSP exceeded expectations in managing COVID-19, CSPs have still historically performed poorly in customer experience. In fact, in the last five years, no leading telecommunications, cable, or satellite company in the Americas or Europe has sustained a Net Promoter Score, the measure of customer loyalty, above 50, a threshold that indicates a company is beloved. So none of them have been able to score above 50. The fifth challenge is the new ways of working and collaborating. According to BCG, 40% of employees will utilize remote working models in the future, which will also require CSPs to continue to reinvent how their employees engage and collaborate remotely. Let's go to the next slide. I just covered it from an industry perspective. Let us look at this from a CEO perspective, at the top challenges communications CEOs face.
So in the face of these disruptive headwinds in the industry, CEOs and their teams across every part of the CSP organization are searching for ways to overcome these challenges and are asking the following six questions. How do I create new network-enabled revenue streams? How do I expand, transform, and operate the network so that it profitably handles explosive usage? How do I cost-effectively delight customers, engaging with them how they want to engage with us? How do I continuously innovate the customer experience? How do I effectively manage my increasingly complex systems and processes? And how do I evolve my workforce to thrive with agility, with innovation, and with collaboration? And as you see, below each of these big questions there are a series of other questions. Actually, they are answers, but these answers could also be questions. For instance: how do I monetize 5G? How do I evolve to SDN and autonomous operations? How do I deliver personalized experiences at scale, or stronger customer loyalty? How do I utilize analytics and AI to enhance the customer experience? How do I build agility, monetize my IT estate, and make my workforce more productive and innovative? Let's go to the next slide. Now, with disruption and challenges come opportunities, and Google Cloud is hyper-focused on helping CSPs not only transform, but accelerate their growth, their competitiveness, and their digital journey. And I believe we truly bring a number of unique attributes to our partnerships. Let me list six of them. The first one is really about network and security leadership. Did you know that Google Cloud operates on one of the best-connected and fastest networks on the planet? We share the same infrastructure and global network that powers Google products like YouTube, Stadia, and Gmail. It's one of the largest privately owned networks, and our infrastructure is protected by experts in information, application, and network security.
Why is this important, you might ask? Because it helps our customers connect to their end users no matter where they are in the world. Second, solutions that support hybrid and multi-cloud strategies. Through Anthos, we empower CSPs and the developer community ecosystems to seamlessly build and scale new applications across any environment, whether it's modernizing core IT systems, running network management functions, or deploying 5G and edge applications. Our multi-cloud and hybrid cloud solutions let CSPs build and run their applications wherever it makes sense, with no vendor lock-in. Thirdly, open source. Why do we do this? Google Cloud is committed to an open environment, and we have used that ethos to build our CSP solutions. As you know, we invested in Kubernetes and open-sourced it. We are a platinum member of Linux Foundation Networking. Number four, data leadership. Google is also a leader at storing, processing, and analyzing data at scale. We are leaders in applying machine learning and AI in innovative ways in our consumer products. We have tailored our AI and machine learning capabilities to solve real business challenges, such as improving customer care while lowering operational costs. We've also utilized AI to help automate and optimize how we run the network in a lean way. Number five, an expansive ecosystem of partners. We embrace collaboration with partners, from large established partners to new innovative ones. We have a rich ecosystem of partners that just keeps growing. And number six, finally, we bring the best of Google. We leverage Alphabet's and Google's assets and capabilities to bring the very best of Google to help CSPs digitally transform. Let's go to the next slide. In addition to bringing the very best of Google to our customers, we are hyper-focused at an industry level and have developed a comprehensive portfolio of CSP-specific solutions to help service providers accelerate their growth.
Our solutions are focused on helping CSPs transform in four key business areas: monetizing the edge, engaging customers with data-driven digital experiences, service provider network evolution, and modernizing core systems and the workforce. Let's go to the next slide. I also want to focus on our ecosystem. At Google Cloud, we recognize that it takes a village to bring about tangible change, and we truly embrace collaboration. Our partners include OSS and BSS vendors, collaboration and technology vendors, network equipment providers, global system integrators, technology vendors, and edge ISV partners. And I'm excited about Google Cloud's commitment to continue to grow our partner ecosystem. In fact, at the end of last year, we announced that we are teaming up with 30-plus launch partners and popular industry application providers to deliver more than 200 partner applications at the edge, all on Google Cloud, and all working together to help our CSP customers harness the potential of cloud and 5G. Let's go to the final slide, please. I really want to thank you for joining me today, and I hope you enjoy the rest of the forum. If you would like to understand more about how Google is partnering with communications service providers and the broader communications ecosystem, please contact my colleague, Amol Phadke, who's also on the Linux Foundation Networking Governing Board, and he will be more than happy to answer any questions you may have. Thank you very much. Ah, speaking of Amol, welcome. And again, thank you, George. I do want to highlight your vision: laying out the challenges, what Google Cloud and Google have done to address them, and then how you're building an ecosystem to accelerate adoption was fantastically laid out.
So while I have Amol here, I do want to take it one level deeper and ask some of the questions that have been coming in. Specifically, what are your thoughts on the multi-cloud strategy, where some of the telcos are putting de facto open source projects like ONAP on top for orchestration and automation, and then those sit on top of something like Google Cloud with Anthos? So in general, maybe utilize some of those questions. Absolutely. So first of all, nice to be here. Nice to see you again and nice to see the team. Excellent conference, and I'm looking forward to hearing some great feedback on it. To answer your specific point, Arpit, as George said, we have really started looking at an Anthos-based multi-cloud strategy. And for us, Anthos is more than just software orchestration, right? It's about combining the whole journey to cloud native, Arpit, that we see happening over the next two to three years. That cloud native journey already started in the IT world and is now really getting accelerated in the network world. So as that happens, we fully expect to work very closely with members already on LFN and ONAP, but also others that are about to join, because ultimately they're all trying to do the same thing, which is: hey, how can we help the end clients, the CSP clients, radically transform the way they look at network and IT systems, right? So for us, the move to cloud native is really an accelerant. And that's where ONAP, other projects under LFN, the edge projects under LFN, and so on would actually give us tremendous value, because we are expecting pretty much all of those to get onboarded in a cloud native world. So that really is our main ethos, Arpit. Very good, very good. Again, for those of you who joined us late, I mentioned there's a Q&A button on the right-hand side of your chat, so please send your questions over.
There are a few questions on higher-level Google Cloud strategy. Is it becoming less of a hyperscale cloud platform and more of a network service provider, or is it an augmented strategy of doing both together? I see. Definitely not a network service provider. We basically want to collaborate with our CSP clients, partners, and friends, and they are the ones who are ultimately going to offer network services. Think of our cloud platform, as George described it, as something we built over the last two decades to support all of the Google first-party services, whether it's search or email and others. And as we built those, we actually solved a lot of the problems that arise when you run applications at scale in a cloud native world. So it was really about how all of that innovation Google has developed over the last two decades can be externalized: work with open source and see if we can put it in the hands of our CSP clients, for them to then monetize their infrastructure as well as transform their network. So it's all about providing the infrastructure, with our edge compute and our Anthos, and ultimately letting service providers decide what applications and services they want to put on top. I think there's a similar question with a slightly different nuance, which is: is your network connectivity private, meaning guaranteed, or does it use public internet protocols, or is it a hybrid? And I think, go ahead. Yes, so as far as our core network is concerned, we absolutely have that at a guaranteed level, because obviously the cloud applications that are running are fairly mission-critical in many cases. So we absolutely guarantee very stringent SLAs there. Now, when it comes to access to our cloud, whether it's through interconnects or through partner networks, obviously that uses whichever protocols are out there. We ourselves have our own products that allow people to connect securely to our cloud.
But then we also partner with CSPs, and there are already other technologies like SD-WAN and so on that are used as connectivity mechanisms to get into the cloud. But once you're on the cloud, we have the largest privately owned infrastructure planet-wide, and we basically let them ride on it. Okay, very good. I've got a couple more. What is your support for CNFs, or cloud native network functions, and multi-cloud connectivity? I think it's along the same line of ONAP over Anthos with CNF abstraction, but might as well repeat that. Absolutely, absolutely. As George described, our ecosystem of partners is growing. On the edge side, there are about 30 partners that have now been onboarded on Anthos. We have also worked with specific partners like Nokia, which we announced publicly a couple of months ago, and we are actually working with them to get their cloud native 5G core functions onboarded. And there is a lot more in the pipeline, Arpit. So there are some exciting announcements to follow in the next few months. Very good. All right, we'll look out for that. All right, we probably have time for one more question, which is: CSPs definitely have ARPU challenges. Does Google help increase their ARPU or provide some ROI, specifically given that this is built on open source? Now, George could answer it or you could answer it. I think you're on a roll, so just go ahead. Okay, I think it's basically three things. One, we definitely help them drive ARPU by offering them edge solutions on our edge compute platforms, where they can really build retail- and manufacturing-specific vertical solutions to improve ARPU. But then we also have our AI capabilities and analytics that help them drive customer experience, which then ultimately also drives ARPU.
And then finally, the whole open source, Anthos-driven set of capabilities is very important for them to radically transform the TCO of their networks and IT. So those are the three ways in which we would help. Gotcha, gotcha. Okay, so maybe one more. Yep, I think Jill's saying one more. All right, so: are you driving verticals to run on Google platforms? And in general, what's the assurance that cloud platform providers will not withdraw the vertical efforts that typically go into APIs and things like that? It's a longer question, but how much vertical market support, whether it's APIs or whatever, is envisioned in the bigger scheme of things? We absolutely remain committed to that. I don't think there's any strategic intent to move away from it. If anything, the whole notion behind Anthos multi-cloud was to open up an API platform for partners to come and innovate on top, right? So there's definitely no intent at all to move away. And then of course, we can take it offline as a longer conversation. Yep, exactly, very nice. Awesome, Amol, you answered them all. And thank you, by the way; we have a huge number of people online. So we are really excited about everybody joining and asking these questions. It's a rare opportunity. Trust me, I've been on several virtual conferences, and we don't get a chance to ask questions of the speakers. So don't miss this opportunity. And if they don't know, they don't know, right? They don't know everything. All right, so George and Amol, thank you very much for doing this. Really appreciate your support of the open source ecosystem in general, and of the open source projects in networking and edge. And thank you for doing this. Thank you for having us, it was a pleasure, thank you. All right, so we move from cloud back to the edge, and we have a very exciting presentation, or I would say a fireside chat, right here.
We're gonna discuss edge applications and IoT applications: how you take cloud networking concepts into an edge, or a distributed edge, moving all the way closer to the application. So we have the next speakers. We have Said Ouissal, who is the CEO and founder of a company called ZEDEDA. They are one of the contributors to Project EVE. And we have Steve Mullaney, who I think everybody should know. He's the CEO and president of Aviatrix, here as an end user. So without further ado, we're gonna have Said and Steve walk us through some of the priorities, challenges, and use cases, and how they managed to get this going in the open source world. So take it away, Said. All right, excited to be here today. My name is Said Ouissal, the CEO and founder of ZEDEDA. With me today is Steve Mullaney. I'll let Steve introduce himself in a second, but maybe a few words for the folks that don't know our company, ZEDEDA. We're an edge orchestration company that is addressing the distributed edge. Our product is partially based on open source, and we'll talk a little bit about that today, enabling customers to make edge computing effortless. The distributed edge, as we'll discuss in the fireside today, is basically any edge computing outside of your traditional data center environment, very close to IoT devices and sensors. We've seen our key verticals being industrial, energy, retail, and telco: a high number of remote distributed locations processing a lot of data before they send it off to the cloud. And as Arpit mentioned, we're a founding member of LF Edge; we donated Project EVE. For the ones who are not fully familiar with EVE: EVE is a lightweight, secure, open source edge operating system that's based on Linux. It's not a distro; it's probably the opposite. It's a very opinionated and modular stack.
We took out everything that we felt didn't fit at the edge, and we focus a lot on security, basically trying to build what people call the Android of the edge. So edge computing is an extension of the cloud. We see customers basically process data before they send it off to the cloud, and therefore networking to the cloud is very important. And that's why I'm here today with Steve Mullaney, who's the CEO and president of Aviatrix. Steve, why don't you introduce yourself? Yeah, so before we get going: Tom Arthur, who I knew from the very early days of client-server when he was at Novell, brought up that, yeah, you're a SynOptics guy. So I was an Ethernet engineer before 10BASE-T, so I know what CSMA/CD means. I've been in networking for 35 years, and I've seen the last transformation, mainframe to client-server. We rode that for the last 30 years, and I've had a great career. I was at Palo Alto Networks very early. I was the CEO of Nicira with Martin Casado. We created OVS, Open vSwitch, and we ended up becoming part of VMware. So I kind of rode that wave, and then, you know, Said, I retired. I was done. I was never gonna work again. For about five years I was living the dream, and then this thing called cloud brought me back at Aviatrix. About seven years ago, every enterprise said, okay, we're gonna move to cloud, and for five years they did nothing. And I noticed about two years ago, they all, at the same time, because they move like a herd, decided: okay, now we're actually moving to cloud. And I said, my God, the transformation is happening. This is gonna be 10 times bigger than client-server, and it's gonna happen a thousand times faster because it's all software. And the center of gravity two years ago completely flipped from being on-prem to in the cloud. And the first thing people do, that's just how networking works, is fix the core first, and then it moves out to the edge. That's how it works.
You don't fix the edge first; you fix the core first. The core is cloud, and it's gonna start moving out, and that's why I'm excited to be working with you as the world starts moving out to the edge. Right. Well, we've got a couple of topics to go through today. One is open source. The second is cloud; I'd love to talk a bit more about networking in between clouds. And then finally, the edge. Let's start with open source. Steve, what are your thoughts on open source? Obviously you guys contributed and created OVS in the early days, but what are your thoughts on open source and how it relates to standardization? You know, Martin was famous for this: insertion is nine-tenths of the law, right? And you've got to insert. When you're at the edge, you often can't come in with a proprietary thing. So that's why we created OVS, and it was a very successful way of networking out at the edge. And I think you're doing a very similar thing with EVE and so forth. And you look at the distributed edge that you're addressing: this is millions of sites. People don't want a proprietary stack that locks them into somebody's hardware. So open source gives you that choice and that insertion, particularly at the edge, where the numbers tend to get massive. That's at least what I've seen, and we saw it at Nicira. Yeah, I mean, one of the things that I find interesting, because I've got maybe not 35 years, a little bit less, but still a long time: when I started in networking, everything was about IETF RFCs and IEEE standards, and everything had to be very well specified, especially if you wanted interoperability. And to me, it feels like with open source, we basically now write the code while writing the spec; the code becomes the spec.
And in some ways, I think open source has replaced a lot of the traditional standardization efforts and probably enhanced interoperability, to your point, where it becomes easier. Now, if you use OVS, everywhere you use it, it's gonna work, right? It doesn't matter who leverages OVS in their specific products, for instance. I agree, I agree. So, in terms of Aviatrix and the work you guys have been doing, it has been a lot about cloud networking. Can you maybe spend a minute just talking about what problem you solve for networking in the cloud? Obviously the cloud providers also have networking capabilities built in. So what is it that you're seeing, as you talk to customers, that Aviatrix can bring to the table? So I think the thing that I see the most is that this is a business transformation. This is not a bottom-up transformation. This is where the business, the CEO, the board says: we are going to digitally transform because we have an existential threat to the survival of our company. Fear of death is a pretty big motivator for people. And it took them a while, but eventually they said, we have got to move to the cloud. And when that happened, you know the way IT people think: it's all about architecture, and it's all about where your center of architecture is. And the center of gravity of architecture two years ago flipped from being on-prem to in the cloud. Once that happens, cloud becomes an investment, and on-prem becomes an expense. Expenses go down by a factor of 10; investments go up by a factor of 10. And then once you define what your architecture is, what do you do? You push it everywhere. You don't want five different architectures. When we switched from mainframe to client-server and that became the strategic architecture, mainframes were quarantined off in the corner. The same thing is happening right now. It happened two years ago.
And then the first thing they fix is the core. And that's where we fit with Aviatrix. Everyone initially came at it almost from an SD-WAN conversation: the cloud service providers would tell enterprises, we do everything and anything you'll ever need for networking and network security. And the enterprise goes, oh my God, that's wonderful. So all I've got to do is get there, and then everything's taken care of. All-inclusive, baby. All you've got to do is get there. Oh, this is fantastic. And then they get there, and they realize, like all all-inclusive things, there's one sailboat, one surfboard, one mask to snorkel with. One buffet. You know, a buffet with shitty food. And they go, this is not at all like the brochure, right? We've all been there. That's what the cloud is like: basic-level services and a lot of complexity, nowhere near like the brochure. And the enterprise gets there and goes, oh my God. And it gets even worse. Now all of a sudden I'm in multiple clouds, because the business decides what clouds I'm gonna be in, not me as IT. So I've got all this complexity, all the clouds are different, and I've got half the staff, because my leadership said, oh, it's cloud, it's easy. And they go in and realize, and by the way, the boats were burned on-prem, so there's no going back. And I'm in here, and now what do I do? So I want the DevOps mentality, I want the automation, I want to Terraform everything. I want the simplicity and the agility of cloud, cloud native. But yet I need the visibility and the operational controls, security controls, performance controls, and day-two troubleshooting that I had on-prem, because I'm BNY Mellon. Like a government, for God's sake, not even a bank. So it's not just willy-nilly spin things up and run around naked in the cloud.
I have controls that I have to have. How do I get the combination of both of those? You can't get it from the cloud service providers, and you can't get it from the existing on-prem networking vendors. You have to go to somebody who's cloud native but also understands the visibility and control that you need. That's Aviatrix. Excellent, thank you. Awesome. Well, let's talk a bit about the rise of edge computing. From my perspective, and I'd love to get your thoughts too, what we've been seeing as we talk to customers, and I think Tom Arthur also mentioned this a little bit, is that in cloud-born and traditional workloads, the generation, processing, and storage of your data all happen in one location. You generate the data in the cloud, you process it in the cloud, and you store it in the cloud. What we've been seeing as part of edge is that the data generation part is moving to the edge of the network. Cameras, sensors, devices: all these things are getting there. And I think you and I talked about this in the early days of ZEDEDA too. It's almost like the network traffic is flipping. We used to have a very download-centric world, where we consume video and information on our mobile devices. But as soon as you start generating a tremendous amount of data at the edge, you can't upload it all to the cloud. So we've seen this desire to put processing out at the edge. I don't know what thoughts you have there and how you've seen these trends in the past. Yeah, I see the same thing. And again, enterprises run workloads where they run best, and they'll bring the infrastructure to where the data is. Exactly as you said: if I'm generating lots of data in a distributed area, guess what? I'm going to bring the compute, the storage, and the networking there. I'm not going to bring the data to where the other stuff is. Why? Because it's not going to have the latency. It's not going to have the performance.
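The traffic flip described above (data generated at the edge, reduced locally, with only summaries and decisions sent north) can be sketched as a toy aggregation. This is an illustration of the pattern only, not the actual API of Fledge or EVE; the threshold, record shape, and numbers are invented.

```python
from statistics import mean

def summarize(readings, threshold=75.0):
    """Aggregate a window of raw sensor readings into one compact record."""
    return {
        "count": len(readings),                               # raw samples seen
        "mean": round(mean(readings), 2),                     # reduced statistic
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > threshold),  # decisions made at the edge
    }

# 1,000 raw samples stay local; only one small dict would go to the cloud.
raw = [70.0 + (i % 10) for i in range(1000)]
summary = summarize(raw)
print(summary)
```

The point of the sketch is the ratio: a thousand readings collapse into one upload-sized record, which is why processing migrates to where the data is generated rather than the data migrating to the cloud.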
It's not going to have all the things that it needs to have. So I definitely see that. And I see a kind of cloud mentality moving out to these edge points and having that architecture. But absolutely, people are going to put infrastructure where it runs best. And they're not going to have a single data center; they're going to have multiple centers of data that are distributed, not just in the core of the cloud, but out at the edges as well. Excellent. You know, I think the other thing that we've seen, and obviously we've been working commercially with a few customers as well, is that the edge is so diverse, right? When you touch the IoT world, it is very fragmented. It's not like the old data center world anymore, where you had one common environment. I mean, every data center was different, but the applications in the data center were probably all the same: SAP, Oracle, Windows, SQL Server. The interesting thing with the edge is that every edge environment is hitting that IoT verticalization and fragmentation. So customers need different hardware. We've seen customers trying to deploy embedded computers, gateways, even sometimes servers. They run different types of applications, from Fledge to EdgeX to Azure IoT Edge and Greengrass. And we're also seeing different clouds being used from that point of view. Any thoughts on how we can connect the clouds and the edges better, for the audience, from a cloud networking point of view? I mean, we have customers that have 10,000 VPCs, and growing, in multiple clouds. And you'd think, okay, that's a scale problem, but compared to what you're doing, Said, with the data, it's actually an easier scale problem, because these are really simple VPCs. They're very homogeneous. They don't have humans trying to hack into them physically. They don't have RS-232 ports. There's no different hardware; we define the instance.
So it's actually very much a cloud problem. The issue you have is that you're at the interface between the physical world and the cloud world, and that's where all the complexity is. You have all the scale, but you also have, like you said, the different boxes, the different sizes, the different cameras, the OT interface to the IT world. That's always where things get complex. And at the same time, you're trying to scale down to Raspberry Pi devices, what I'd call super nano data centers. So that is a significant challenge. But from an enterprise or customer perspective, I want all of that integrated into one network. I don't want multiple architectures. I want one architecture, one network, and I want it to integrate seamlessly all the way down to that absolutely exploded nano data center you're enabling. But you're right, that's where a lot of the complexity is, and you need a very flexible platform that can handle all of it, because you don't want to have 15 different endpoints that you're trying to manage. What we've all learned from the Googles and Facebooks of the world is that you want consistency and commonality in everything you do. That's how you scale. It's got to look the same. Yeah, we've got to move from pets to cattle. I totally agree. So a couple of questions came in, Steve. One for you: how can the telecom operators support the data wherever it sits? What do you think the role of the telcos is in this broader scheme? I think 5G is going to be huge. So I think that's going to be a big play for them. Yeah, I can see you agree. I totally agree, and that's what we're seeing: private 5G, and also, finally, the ability to mix private wireless networks with public wireless networks. That coming together is going to be great. Another question for you.
So the question is: why do you suppose the center of architecture flipped from on-prem to the cloud? What did the cloud guys do? Did they under-deliver on their promises at this point, or over-promise earlier on? What happened to enable that transition, from your point of view? So if you went to IT 10 or 15 years ago in your business unit and you said, hey, IT, can I... they'd say no. Well, I haven't even finished my question. It doesn't matter, the answer is no. And if the answer isn't no, it's: what year do you want it in? So what do you think the business units did? People like me. Water goes around an obstacle, right? It hits an obstacle, it goes around it. It seeks the path of least resistance. So I have a credit card, and I'm not afraid to use it. I'm going to AWS. That's how this whole thing started, because that's what happens in organizations. When you keep getting told no before you even finish the question, people go around you. That's what happened. And so, agility. A customer told me about a year ago, it was a great phrase, he said: agility is now part of the definition of mission critical. It didn't used to be. Mission critical used to mean, oh, it's going to take years. No, no, no. It has to be now. That's what changed, and that's the cloud mentality. And in order to do that, you've got to go to the cloud. So it was a business decision, where the business people drove this. As it should be. IT is a service. When I say jump, you say how high. That's how it should be. The answer shouldn't just be no; it should be, no, but here's what we should do instead. And oh, by the way, I want it now. That's why it shifted. Okay. So I think we're almost at time here, so I'll take one more question that somebody asked me.
The question is: for previously local, non-cloud-connected endpoints, like in manufacturing, energy, et cetera, are you seeing those solutions change to cloud native, or do they basically only integrate using gateways and adapters? I think that's a great question. The way we've seen it, the future of applications at the edge has always been cloud native, right? The applications are going to be written in the cloud. They're going to use modern cloud technologies like Go, containerization, and everything else. And I think the real challenge at the edge has been: how do I deploy the software? How do I orchestrate, deploy, and run it? So the first-order problem is really how to make that as easy as it is in the cloud. In our conversations with customers, the way we explain EVE is that by leveraging EVE in your architecture, you get a common, consistent environment where you can deploy apps with the same ease, flexibility, and agility as in the cloud. Now, having said that, what people don't always realize, and Tom talked about this, is that in IoT there are often hundreds of millions of dollars already invested in specific verticals, in SCADA and local software that runs on-prem. It's analyzing and processing data today. So the ability to fuse that new software and old software together is one of the reasons we built EVE with virtualization as the starting point, containerization on top, and security built in. So it's actually not just gateways. We see customers deploy greenfield gateways, but we also see customers trying to mix old workloads and new workloads at the edge. The ability to support all that diversity is why we needed to come up with a new, quote unquote, operating system. Let's see. I think we have one more question we can take, and this one is for you, Steve. Any ideas on new or more tools to improve the agility you just mentioned?
I don't know if it's more tools, but I would say yes. It's probably us and everybody else delivering software in the cloud; it is all about agility, velocity, visibility, and control. Every release, we add more tools that help people make it simpler, more automated, and easier to get the visibility and the control, such that ultimately what you really want is an autonomous infrastructure that self-optimizes for the performance, security, and cost of your applications, which is what you care about. And you want that for the entire infrastructure, in the core as well as out at the distributed edge. That's where I think the world is going, and that's where all this software and software-defined technology will get to: APIs that are controlled and shared among all the different layers. It's going to take a while to get there, but ultimately that's what you're looking for: the infrastructure optimizes itself for the performance, security, and cost of the application. So we're making it simpler. Always the same things: visibility, control, and security. Always. Yeah, we make it so simple. And I think with software-defined and cloud, we can actually do that. We never could have done that on-prem. And again, this might take us 10 or 20 years, but that's where the industry is going. Okay, great. Well, we're at time here. So first of all, Steve, I really want to thank you for joining me on the session today. And with that, I want to thank everybody for dialing in, and back to you, Arpit. Oh, thank you very much, Stephen. Said, I do want to ask, or make a couple of comments, while you're both on.
First of all, between you and Martin, whom I go back a long way with over the last 10 years, you have keynoted several ONSes in the past, and the industry really thanks you for your vision and thought leadership. We need people like you to drive us toward where we should be going. And I love summarizing what I just heard, which is: open source is important as you get to scale. You need the simplicity of cloud with the operational control of on-prem. Edge is different, but consistency and openness are key. And tools and technologies like 5G and a whole set of agility tools are critical, because IT has to not just follow but lead; it's no longer about shadow IT and things like that. So I think the discussions and the points you covered, Said and Steve, are extremely relevant to the audience. Obviously we didn't go through slides, but if you want the details, Project EVE is on the website under LF Edge, ZEDEDA is on the website, and Steve's new company that brought him out of retirement, Aviatrix, is on the website. Anyway, feel free to follow up, and we really thank you and appreciate you doing this for us. Thank you. It was great to be here. All right. With that said, I think we're coming to a wrap on the first half of today. Let's take a 30-minute bio break. We'll be back around 11:05 Pacific. We have some amazing presenters, and now we're going to go into the end-user enterprises and telecom service providers. We'll have Jonathan Smith from DARPA actually tell us what's going on there for 5G. So come back in 30 minutes, take a bio break, a coffee break. We would serve you cookies, but they'll have to come from your own kitchen. Thank you.