Welcome to day three of keynotes. Good morning, good afternoon, good evening, good night, as I always say. I'm personally very impressed with the quality of the content and the speakers, so thank you to the community for making this fantastic event happen. We're getting very good feedback on social media, so we're good to go. Today I'll introduce speakers as they come along, but we have some very cool innovation leaders speaking and you absolutely want to hear what they have to say. Let me start by introducing our first speaker today, Farah Papaioannou. She is a leader; she's the co-founder and president of Edgeworx. Without any further ado, please welcome Farah.

Thank you for the introduction. I'm incredibly happy to be here today to talk about edge computing. So I'm going to get to my first slide here. Today I'm going to talk about the rise of edge computing, and really about accelerating real-world applications. We all care about edge computing, which is why we're at this conference. But what we've been asking ourselves, and what the market and our investors have been asking us, is: when is the rubber really going to meet the road? When are we actually going to see these real-world applications start to come out?

So a little bit about myself. Who is Farah P., and why are you listening to her at 9 o'clock on a Wednesday morning? As Arpit mentioned, I am the co-founder of an edge computing company called Edgeworx, and I've been building the edge through this startup since 2017, so I've been doing this for a little while now. We're the creators of the Eclipse ioFog project, an edge project that is open source under the Eclipse Foundation. I also sit on the board of directors of the Eclipse Foundation, and have for a couple of years. Prior to that I was an investor, after my years as an operator, and I spent the last ten years looking at the next technologies to invest in. I spent time investing in cloud computing, big data, storage, and AI, and that journey led me to edge computing, because it seemed natural to me that all the data being generated outside of the cloud and the data center wasn't going to be shuttled back to the cloud for processing. We were going to need a new way of doing this.

So the question I get asked most often, and anyone in the edge industry can probably attest to this, is: what is the edge? But the real question we should be asking ourselves is: where is the edge? And I'm going to take a contrarian position on this. I'm sure we've all heard, hey, this is the mobile edge, this is the telco edge, this is the enterprise edge, this is the IoT edge. I'm not even sure I've got those labels correct, but I know there are these distinct edges. And I'm here to say that I don't think that's the way we should be approaching the edge. In my view, the edge is anything that sits outside of a cloud and is part of a cloud-edge continuum. The reason is that if we think about edges in these siloed ways, the mobile edge, the IoT edge, the enterprise edge, then we're going to start building our stacks that way. Our architectures will be designed for those particular pieces of hardware, our software stacks will be designed that way, and we'll start building silos. When in fact, most companies with complex infrastructure have things running on a variety of different tiers. They don't just have things running on a telco edge and in the cloud.
They don't just have things running on an IoT edge and in the cloud. Take a car company we work with as an example. They're running things on their cars, which you might consider the endpoints. They have things running on roadside units, things they want to run on light structures, things running in micro data centers. They're aggregating at colos, they're working with cell towers and carriers to run things at the cell edge, and then they're putting all that data back in the cloud. For them, the edge is incredibly fluid. Today they may want to run something at the cell tower for a certain reason, and tomorrow they may want to orchestrate it closer to the endpoints because of latency or whatnot. If we build these edges differently, that becomes incredibly difficult to do. If we can make the edges look seamless, they can move these things around with no problem at all. And so that's the view I have of the edge.

So how do we take two things that are incredibly different and make them look seamless? When we think about the cloud, we think about deploying one 5,000-node cluster in a very homogeneous environment with the expectation of highly available networking. At the edge, we'll see 5,000 one-node clusters in an incredibly heterogeneous environment where the networking is incredibly varied: there are different network topologies to deal with, and the expectation is that edge nodes will disappear from time to time, for minutes, hours, even days, and that's not considered out of the ordinary.

So that leads us to the rise of what I like to think of as edge native computing. Bear with me here for a moment. What this is basically saying is that edge native should extend cloud native. We all know what cloud native is, at this conference anyway, and it's incredibly important to how we build cloud computing. Cloud native really focuses on the central question of how we deliver applications: the delivery of elastic scalability on the cloud, the development methodologies. What edge native does is embrace those principles but extend them to the edge. On the surface, if you use cloud native principles at the edge, you can make things look exactly the same for developers, while the complexities of the edge, the disparate hardware, the network challenges, the location, are abstracted away. You focus on how to bring those primitives forward in a way that lets you take advantage of the goodness they provide without worrying about the complexity. The development methodology and the tooling we use stay very much the same.

It may help to contextualize this with an example, and I'm going to use Kubernetes, because of course we're sitting here as part of the CNCF and everyone here, technical or not, has at least heard of it. I actually did a Google search and found this diagram from an edge computing company. I'm not here to pick on them, which is why I didn't put their name on the slide, but this is pretty common: they're trying to take Kubernetes and bring it down to the edge, and anyone who has spent time in the edge industry knows that all of the companies are rushing to figure out how to take Kubernetes to the edge.
Ultimately, Kubernetes was built for the cloud. Kubernetes was built really well for that one 5,000-node cluster where we don't much care where things are running and we expect high levels of network connectivity. So basically what this company has done is take those principles and bring them down to the edge: okay, I'm going to build a mini-cloud. I'm going to take an edge, which is typically thought of as maybe 5,000 one-node clusters, create a three-node environment, provide HA, which is not really common in edge environments, and try to use Kubernetes to orchestrate it. The thought is right, but the execution doesn't take advantage of edge native principles.

If you think about it from an edge native standpoint, you'd still like to leverage Kubernetes, but you'd like to orchestrate in a different way. At the edge, I care about what type of hardware I'm running on. If I'm deploying an AI microservice, I may care whether the hardware has an AI accelerator like a GPU, so maybe Kubernetes should go and look for that. I care about latency; perhaps I need to be within a certain number of microseconds of a certain endpoint, so maybe I can tell Kubernetes to orchestrate for that. I care about location; perhaps I need to run on a certain factory floor for security reasons or for GDPR requirements. What we want to do is allow Kubernetes to reason about these attributes, and create an edge-aware scheduler so that we can extend Kubernetes down to the edge. Cloud developers then don't have to learn an entirely new development paradigm, but they can take advantage of what is useful at the edge.

So why is everyone rushing to take Kubernetes to the edge? Is it just a fad, just the thing to do, or is there a real rhyme or reason behind it? The reason is this: we've spent the last 15-plus years investing in the cloud. This entire foundation, the Cloud Native Computing Foundation, is built around the cloud. We've got a lot of tooling there, a lot of developers there; everything has been focused there. But the data is now coming from the edge, and the data growth there is going to be exponential; most of the data growth over the next 30 years is going to come from there. So how do we bridge that gap? To do so, we really must unlock the power of cloud developers. Edge developers don't actually exist today; the only edge developers are the ones who've been working on these products for some time and now consider ourselves edge developers. But there are so many cloud developers, and we need to leverage them to build and deploy edge applications, which is why we want edge native to be a corollary to cloud native, so they can leverage existing tooling, existing principles, existing methodologies.

So why is there so much confusion in the market today, and why haven't developers gotten it yet? I can say this as a sitting board member of an open source foundation and a major contributor to an open source project: we are being incredibly stupid about this, ourselves included, and we're adding to the confusion. I still believe the future is going to be built through open source. We've seen this proven out in the cloud, in the internet, in big data. It will be the same with edge computing.
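To make the edge-aware scheduling idea above concrete, here is a minimal sketch using plain Kubernetes primitives, with ordinary node labels standing in for edge attributes such as accelerator type and site. It assumes the official `kubernetes` Python client and a cluster whose edge nodes carry the illustrative labels `edge.example.com/gpu` and `edge.example.com/site`; a purpose-built edge scheduler would go further, but the same constraints can already be expressed today like this:

```python
# Minimal sketch of "edge-aware" placement with vanilla Kubernetes primitives.
# The label keys, image, and namespace are illustrative, not from any specific product.
from kubernetes import client, config

def deploy_ai_microservice():
    config.load_kube_config()  # or config.load_incluster_config() when running on-cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="defect-detector"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="inference",
                image="registry.example.com/defect-detector:latest",  # illustrative image
            )],
            # Hard requirement: only schedule onto nodes that advertise a GPU
            # and belong to the required site (location / GDPR constraint).
            node_selector={
                "edge.example.com/gpu": "true",
                "edge.example.com/site": "factory-floor-7",
            },
            # Edge nodes drop off the network for minutes or hours; keep the pod
            # bound for an hour instead of evicting it immediately.
            tolerations=[client.V1Toleration(
                key="node.kubernetes.io/unreachable",
                operator="Exists",
                effect="NoExecute",
                toleration_seconds=3600,
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="edge", body=pod)

if __name__ == "__main__":
    deploy_ai_microservice()
```

The toleration at the end reflects the earlier point that edge nodes disappear for minutes, hours, or days: the pod stays bound rather than being evicted the moment a node goes unreachable.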
And how are we contributing to this confusion? We're asking ourselves: which is best? I get this question all the time. I really like ioFog, but I need to go out and see which one is best. I really like EdgeX Foundry, but I need to see which is best. I can't be part of the Eclipse Edge Native working group because my company decided to take a bet on LF Edge; I can't be part of LF Edge because my company decided to take a bet on Eclipse Edge Native. All of these projects sit across all of these different foundations, and they all operate and solve different things. Some focus on the far edges, and they all sit somewhere on the continuum all the way up to the cloud. Some focus on development and deployment, some go all the way up to operations; they sit at different places solving different problems.

The question being asked, which is best, is akin to asking: which is better, Kubernetes or Terraform? Which is better, Kafka or Hadoop? Now we know enough about big data to know that's a silly question, right? Often we'll use both Kafka and Hadoop, because they solve different things. Neither one can solve everything, and it's the same with all of these projects: no single project, and no single foundation, is going to solve everything. But what we've been doing up until now is focusing on positioning. Okay, how do I position ioFog, how do I promote ioFog? And I pick on ioFog because that's a project we've contributed to. How do I push that one and say this one is the best? That's really not what we need to be doing. What we need to be doing is showing developers how to build with easy-to-use, interoperable building bricks. Cloud developers today know, if they need to build an application, which pieces of open source software to use: okay, I need Hadoop, I need Kafka, I need Kubernetes to orchestrate, I need Terraform to automate my infrastructure. They know how to put all these things together. And we as foundations have not made this easy for developers. If I were to ask my kids today which Lego brick is better, the long one or the short one, the big wheels or the small wheels, they'd look at me as if I were crazy, because to build the things they want to build, they're going to use all of these pieces. Yet that's pretty much the question we've been posing to developers, and that's what they've been going through today.

So I've provided a sample stack from the cloud-edge continuum to show how these pieces can work together. For the cloud, obviously, we have cloud VMs, and Kubernetes is a great orchestrator for the cloud; we as a company love it. We've actually taken ioFog, and I'm not here to promote any one piece of technology, but we've taken ioFog and created an edge-aware scheduler. I really believe that open source technologies are stronger together, so we teamed up with Skupper, which wasn't originally designed for the edge but does great with multi-cloud and building a mesh, and we combined the two to build an edge mesh and have become a major contributor to that project. On top of that, we have Eclipse Ditto and Hono, again edge projects. On the edge stack, we could take any hardware node, and Project EVE would be a great layer to sit on top of those edge nodes and take advantage of what the nodes have to offer. We like to think of it as kind of like VMware for the edge.
Maybe they would position themselves that way, but they have some powerful tooling that we can leverage. On top of that, we use ioFog and, again, Skupper to orchestrate the edge and figure out where to deploy these microservices. EdgeX Foundry has a number of powerful microservices that allow us to talk to different types of machines and tooling; ioFog can't do that, and neither can EVE. Mosquitto does great work for translation. hawkBit allows us to deploy firmware and do over-the-air upgrades. Putting these pieces together lets us focus on the real question: what are the applications we can build? AI and ML applications, IoT applications. These are things developers know how to do. If we can just give them the pieces for the edge stack, they can go and do what they know how to do best.

Bear with me here while I move to the next slide. Again, I'm not here to push any one thing, but I want to give an example of how this comes together in the real world. Obviously COVID has hit everybody; this pandemic came at us hard in March and is still with us today. At the time, our company thought, hey, my kids are actually going back to school, and the school was taking temperatures with a manual thermometer. I thought to myself, there has to be a better way to do this for my kid, who really hates having his temperature taken. Thinking about some of the edge stacks we've built for other customers, we thought, hey, we can do this very easily. Leveraging open source technology, we were able to take this from concept to MVP in six weeks. If it had taken us months or years, this opportunity would have passed us by. If we had had to build all of this software from scratch, or sit around trying to figure out which piece is best, it never would have been done. What we did, and of course our team has familiarity with all of these different pieces of edge software, was take these open source technologies that exist today and combine these Lego bricks to create the solution in, like I said, six weeks. Do I have a team of edge developers? Nope, they're all cloud developers. Within six weeks they were able to create this: the AI models to detect people already live in the cloud, so we were able to bring them onto the edge. We were able to zero in on the forehead, detect masks, detect symptoms. None of these AI models are new. If we can teach other developers to do this, then they can do it as well.

Now we've had some developers come to us and say, well, I don't really care about, well, obviously they care about COVID, but the use case I care about is being able to check people in. I want to use facial recognition to see when people come to work and when they leave, because in other parts of the world people will sign up for a job and other people will show up to do it. They asked: can we use this developer stack to deploy these different models? It's just a matter of swapping out the pieces. Sure, why not? And in the last two months, fires have been a big issue here in California.
So I thought, hey, if we can deploy things at the edge, keeping these edge building bricks the same and just swapping out the models, couldn't we take new inputs and detect fire: run AI over video to detect when a fire breaks out, take inputs such as dryness and humidity, and start to flag potential fire hotspots. I say I care about this now because my kids haven't breathed fresh air in the last couple of months. But the point is that we want to be able to combine these building blocks so that everyone, all developers, can do this, and it's at the point at which we can do this that we'll see these real-world applications really emerge and the rubber hit the road.

To summarize the point I'm really trying to make: we really want to unlock the power of cloud developers. Without the developers, the promise of the edge is going to suffer, and we're going to be stuck in this hype cycle of death. I know we hear this a lot, and I know we've tried to pretend that people aren't skeptical about the edge. They are, and I think that skepticism is premature. VCs have kind of written it off and said, hey, I'm skeptical about the edge. I think they're going to be proven wrong at some point. Once the developers come, the edge will come. We saw this with big data too. I was an investor in big data; we all rushed into big data, and then all of a sudden things weren't materializing. It took a number of years, until people really made it easy for developers to develop big data applications, for big data to take off, and now we're seeing it explode. The same thing will happen with the edge. It's just a matter of getting the developers there.

To do so, we must make it as simple to develop for the edge as it is for the cloud. By leveraging edge native techniques, we can enable developers to use the same tools, the same techniques, and be the same developers. There are tons of cloud developers today, and they're the ones trying to get more data up into their cloud environments, so we want to leverage them. We want to work as a community to detangle the confusion about edge development and open source, not add to it. I know we want to promote our projects and our foundations, but a better promotion would be figuring out how these pieces work together so people can actually use them. There's no better reward, no better accolade, than seeing developers actually use the technologies we built. And no one little brick can solve the problem; if I gave my kid one brick and said, here you go, here are your Legos, I think he'd be very disappointed with what I brought home. Having these different Lego building blocks allows us to build different sorts of applications and solve unique problems and challenges. Some challenges will have broad appeal; some will be unique to certain organizations. We want to give those Lego bricks to people and let them create these applications. And we need to do all of this to drive long-term adoption, revenue, and market growth for the edge. Ultimately, people are waiting for the edge market to appear, and it's the developers who are going to drive us there. So I'm going to leave you with this final departing thought: the edge really depends on developers, and it's incumbent on us to help them get there.
That, and what we're doing today at these foundations, is what I hope for the future. Thank you.

Thank you, Farah. That was amazing, and I liked your two calls to action and the statements behind them. Obviously developers are great; we are at a developer-centric conference here, so this is the perfect venue. I also agree that edge is a horizontal construct across the different markets, whatever you want to call them: telco, cloud, enterprise, IoT. And I think the call to action on collaboration across both foundations and projects is extremely critical, but it's even more critical to look at it through the lens of use cases and actual deployment scenarios. So, really appreciate your thoughts here.

Let's move on to a lightning keynote from one of the well-known names in open source software. I think everybody knows Thomas Nadeau from Red Hat. He's a technical director there, and he has been participating in pretty much everything open. So let's hear what he has to say about the 2020 focus, priorities, and vision. Tom?

Well, good afternoon, everybody. Welcome to ONES 2020. My name is Tom Nadeau, I'm a technical director at Red Hat, and I wanted to welcome everybody to the conference. We've got some exciting things going on for you all to check out over the next couple of days. A little background on me: I run a couple of teams at Red Hat around telco partner engineering and, most relevantly for this conference, networking, and we've been working on a variety of the projects here at the LF, LF Edge, LF Networking, so I'm really excited about the various things going on at the conference today.

What I'd like to do is highlight a couple of key areas that I've been thinking about, and that perhaps you can keep in mind as you go through the different sessions over the next couple of days. These are areas that have come out of myself and my teams working on very specific projects and on various customer deployments.

The first area of interest, where there's a lot of really interesting work going on today, is the evolution of Linux kernel networking and Linux networking in general. We've seen Linux networking evolve over the last couple of years to support a variety of technological changes, but specifically to support virtualization. We've seen a lot of hardware offload additions and various other things to support virtual machines, and most recently a lot of work in container networking, which is particularly where my teams have been spending their time. There are a variety of really interesting presentations on these topics, for example on Multus and service mesh, which gets me to my next topic: multi-cluster federation. Closely related to container networking, multi-cluster federation and multi-cluster connectivity in general is a very interesting topic these days, because generally speaking folks don't use a single data center or a single geographic location for compute, storage, and so on, and being able to federate and mesh those things together in a way that is appropriate for microservices is an ongoing challenge. You'll see a lot of presentations on that, including one that I'm doing with a colleague of mine.
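Multus, mentioned above, is the CNI plugin that lets a pod attach additional network interfaces beyond the default one, which is what many telco and edge workloads need. Below is a rough sketch of registering a secondary macvlan network as a Multus NetworkAttachmentDefinition through the Kubernetes custom-objects API; the host interface, subnet, and namespace are illustrative and assume Multus is already installed in the cluster:

```python
# Sketch: register a secondary macvlan network for Multus-enabled pods.
# Interface name, subnet, and namespace are illustrative placeholders.
import json
from kubernetes import client, config

NAD = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "macvlan-data-plane"},
    "spec": {
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "eth1",  # illustrative host NIC carrying the data-plane VLAN
            "ipam": {"type": "host-local", "subnet": "192.168.50.0/24"},
        })
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="default",
    plural="network-attachment-definitions",
    body=NAD,
)
# A pod then requests the extra interface with the annotation:
#   k8s.v1.cni.cncf.io/networks: macvlan-data-plane
```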
Further down the road, another area to focus on is operations, and specifically operators. I think we've done a lot of good work at the LF over the last few years on projects like OPNFV and even ONAP that have tried to bring in the operator perspective in a perhaps non-developer-targeted way, and that's been very good, but I think we need to do more work here. For example, I'm moderating a panel during the conference with a variety of network operators to get their perspective, and that should be interesting for everybody, because ultimately the folks who have to use the technologies we build should have a really good say in how they're constructed. Another interesting area is the evolution of container networking. Again, I mentioned Multus and mesh earlier; there are a variety of other interesting things in this space too that you'll see during the next few days, and container networking is pretty much the hot topic these days, especially as it pertains to the new 5G, vRAN, edge, and MEC types of use cases and deployments we're heading towards. Creating solutions with fewer Lego blocks is another area I think we need to look at. I'm sitting on another panel this week about the various edge projects in Linux Foundation Edge, and while there's been a lot of really good work in this space, one of the concerns is having too many solutions, if that's possible. So look there, and hopefully participate to help us narrow that down. A couple of other really important areas for me are better project governance and really getting involved: we do have certain projects with a lot of heavy developer involvement and governance, but in a few others there are a lot of folks who are not as closely involved in the day-to-day development as we could use, and that involvement would really improve the projects. And finally, I really look forward to supporting developers even more than we do today. The Linux Foundation and its projects have traditionally been very developer-oriented, and I hope we can continue in that vein. You'll see a number of presentations this week from the actual developers building the various projects, and I encourage you to talk to them, meet with them, help support them, and maybe also jump into the projects yourselves. So with that in mind, again, I welcome you all to this year's ONES 2020 and I hope you all have a fantastic time. Thank you very much.

Thank you, Tom. I like your call to participate, specifically for developers and projects; that's absolutely an important point. I would like to add one thing on the kernel networking side. For those of you who are not familiar with it, a project called DENT was launched earlier this year. It's a NOS, a network operating system, that builds on kernel-native networking: it uses switchdev, and it's aimed at retail and data-center-type switches. Think of it as requiring no extra abstractions; you keep adding abstractions and you lose performance and increase cost, so this is the native way of building a NOS. I just wanted to add that.

But let's move on. The next speaker is another well-known thought leader, from Intel: Rajesh Gadiyar. He's VP and CTO in the Data Platforms Group. So without any further ado, let's have Rajesh. He's going to talk about the edge as well.
My name is Rajesh Gadiyar and I'm the vice president and CTO for Intel's networking business. It's a real pleasure to be talking to you at this ONES event today. 2020 has been a very eventful year so far. I hope all of you are staying safe and doing well in the middle of this pandemic. Things seem to be improving recently, so I'm hopeful that we will return to normal operations soon, and I hope to see you all in person at the next ONES event.

2020 has also been the year of 5G. The foundation we have laid as a community over the last few years in virtualizing the network infrastructure, and the 5G field trials we have led over the last couple of years, have prepared us really well for the rollout of 5G services. Now a key inflection point for us is the transition to cloud-native approaches to build and deploy network applications. The disaggregation of hardware and software with NFV positions us really well for cloud-native, an approach that was pioneered by cloud service providers and is now being increasingly adopted in the network infrastructure. In the cloud-native model, network and edge services are built as microservices to deliver better scalability, agility, and faster innovation.

Today I'm going to focus on the edge as a key business opportunity fueled by 5G. This picture shows the key drivers of edge computing. The next generation of services, such as industrial and factory automation, particularly AI-driven automation, video and video-analytics services, and private wireless in the enterprise, all require low latency, determinism, end-to-end security, and quality of service. Now this can only be achieved if the computing is done closer to the application or service, and this is exactly what the edge delivers. One way to think about it is bringing cloud computing to where the application is and where the data resides. The virtualization of network infrastructure, and the standard server-like infrastructure in the wireless radio access networks and telco central offices, also open up the possibility of doing edge computing at these locations. Some examples are content delivery networks, or CDNs, and cloud gaming, where you could host a GPU cloud closer to the consumers. Another option is to deploy edge microservices in telco regional clouds or even public clouds; applications that are not highly latency-sensitive can run in these clouds. In fact, the cloud-native deployment architecture with microservices gives us the possibility of using any of these locations based on the application requirements, cost, and other considerations. Gartner predicts that by 2025, 75% of data will be generated outside the traditional data center or cloud. The key takeaway, therefore, is that edge computing is really as much about the flexibility and scalability with which you can schedule workloads at various locations in the infrastructure based on cost, latency, quality of service, and a host of other requirements. Location is a consideration for sure, but in the longer run flexibility and scale play a much bigger role in edge deployments.

Now let's look at the challenges of deploying edge applications. First, the diverse nature of applications and services at the edge requires heterogeneous computing resources such as AI accelerators, GPUs, and SmartNICs. Second, the edge needs to be dynamic and built with cloud-native practices. Why? So we can respond to the dynamic nature of today's applications and services.
Third, the edge needs to support the key performance, latency, and quality-of-service needs of these new edge applications and services, like I said before. Many of the applications that run at the edge are real-time applications and have stringent requirements for latency and determinism.

So now let's look at what I believe are the three key areas, shown on this page, that help us deliver edge services with a cloud-native approach. These are also the areas that Intel is putting a lot more emphasis on. First and foremost, the infrastructure and the hardware for the edge. We have driven significant innovations in our latest-generation Intel Xeon platforms for both network and edge applications. Our vision, quite simply, is to make a standard server a best-in-class platform for network applications and to deliver a scalable, programmable, and intelligent infrastructure for edge services. In particular, for the edge we have developed a modular, plug-and-play system architecture called the Converged Edge Reference Architecture, or CERA. Second, the application platform: how do we provide that easy button? What Intel is doing here is focusing on the software and tools that make it easy for application developers to build and deploy applications with a cloud-native approach. To facilitate this, we have built an edge software stack called OpenNESS, Open Network Edge Services Software, that provides a number of platform optimizations in the form of microservices with REST APIs. Third, orchestration and automation, and the whole CI/CD, continuous integration and continuous deployment, approach to create, deploy, and manage edge applications at scale. In this area, we have been working extensively with Kubernetes and the Kubernetes community, improving the networking and other capabilities in Kubernetes for edge applications. One of our key efforts in Kubernetes is what we call Enhanced Platform Awareness, or EPA. This allows us to expose key capabilities and new features in our hardware platforms, including real-time telemetry data, that can be used by Kubernetes controllers, such as placement controllers, to place workloads intelligently for the best performance.

Next, I want to zoom in and spend a couple of minutes on OpenNESS. I made a reference to OpenNESS earlier, the Open Network Edge Services Software toolkit created by Intel in collaboration with many of our industry partners. OpenNESS has a modular architecture and is intended to provide a software platform with pre-built services to make your job easier as a developer of edge applications. So what does OpenNESS give you? First, it abstracts network complexity, so you can choose across many different data planes and container network interface technologies. Second, it provides a number of services for cloud-native deployments; in particular, it has support for cloud-native ingredients for resource orchestration, telemetry, and service mesh technologies. OpenNESS has built-in microservices for data processing, like I said, multi-access networking, telemetry, various kinds of platform accelerators, especially for media and video processing applications, and security, and so OpenNESS provides many of these optimizations for hardware features for the best performance and return on investment. In many ways, this is the easy button for building your edge application. Now if you're a developer looking to build edge services, I invite you to download OpenNESS at openness.org and play with it.
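One hedged illustration of the platform-awareness idea Rajesh describes: before placing an accelerated or latency-sensitive workload, an operator or controller can simply ask which nodes advertise a given hardware feature as a label. The sketch below assumes the official `kubernetes` Python client; the label key shown follows the convention used by the Node Feature Discovery add-on and is only an example of the kind of EPA-style label a placement controller might key on.

```python
# Sketch: list the nodes that advertise a given hardware-feature label.
# The label key is illustrative of Node Feature Discovery / EPA-style labels.
from kubernetes import client, config

FEATURE_LABEL = "feature.node.kubernetes.io/cpu-pstate.turbo"  # example label key

def nodes_with_feature(label_key: str, value: str = "true"):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    matches = []
    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        if labels.get(label_key) == value:
            matches.append(node.metadata.name)
    return matches

if __name__ == "__main__":
    print(nodes_with_feature(FEATURE_LABEL))
```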
As you begin to use OpenNESS, please reach out to us and provide your feedback. We'd love to work with you and continue to enhance the capabilities in OpenNESS for your needs. We are ushering in a new era of distributed computing. We will see many innovations and new services in the coming months, especially as the 5G ramp happens. So I really look forward to working with the community to drive this next phase of innovation. Stay safe and see you soon. Thank you so much.

Alright, thank you Rajesh. That was insightful, and I agree that 75% of the data created is going to be outside the data centers; Farah said the same thing you're saying. I think a lot of analysts are saying edge is roughly four times the market. So I'm really excited about this whole edge phenomenon. Let's shift gears into something deeper and more detailed on the infrastructure side of things. I want to introduce a couple of key speakers here, but before I do that, let me introduce what they're going to talk about. I think I mentioned it in my keynote: it's a new project called ODIM, and these are very exciting times, so we're going to spend a little more time understanding what it's going to bring to the Linux Foundation and Linux Foundation Networking from a telecom perspective. The next keynote speakers are Jonas Arndt, who is a chief technologist, and Martin Halstead, who is a distinguished technologist, both at HPE. So without any delay, please welcome Jonas and Martin.

Hello there, hi, I'm Martin Halstead, and as was previously mentioned, I'm a distinguished technologist within Hewlett Packard Enterprise, and my primary focus is on technology within the telco domain. Alongside Jonas, I've been working on this ODIM project, and an associated HPE product that falls out of it, for well over a year now. In this presentation we're going to talk about the business problem, in terms of what ODIM addresses; the project's approach, in terms of the ODIM community; the status of the ODIM project; and then HPE's product line, the Resource Aggregator for ODIM: what that is and how we would use it in terms of go-to-market.

So, ODIM stands for Open Distributed Infrastructure Management. What do we mean by open distributed infrastructure management? We feel it addresses a number of issues that affect primarily telco-like use cases, but could equally apply to enterprises that have very distributed data centers with infrastructure that is pretty heterogeneous. There are a number of telco infrastructure management challenges that we've looked to address with ODIM, and these consist of at least four different areas, starting with the increasing number of distributed data centers. What do we mean by that? As network functions are virtualized within telco environments, that has typically started in the operators' core data centers, but we're now moving, and this is a primary topic for this forum, outside of those core data centers, further towards the edge of the telecom operators' networks: metro central offices, radio access networks, and so on.
x86-based workloads are being placed out on enterprise IT infrastructure, and effectively each point of presence for an operator becomes a miniature data center. Each of them will have its own set of infrastructure, meaning that you exponentially scale out how you have to manage all of those infrastructure points of presence, and that has become more and more of a major headache for the operators. In terms of the infrastructure deployed inside those data centers, those points of presence, it's typically very heterogeneous. What do we mean by that? Well, the infrastructure you deploy in an edge deployment, in areas like a central office or the radio access network, is typically fundamentally different from the infrastructure you deploy inside a core data center. That's based on a number of factors, cost being a principal one, but also things like environmental constraints: those points of presence further out at the edge can be fairly small and narrow-depth. The component sets deployed in that infrastructure are typically heterogeneous as well; they come from a number of vendors, and when we talk about vendors, obviously we as HPE are pretty prevalent in those environments in terms of IT infrastructure, but our competitors are there as well. So the environment is extremely heterogeneous, but from an operator perspective they want to manage that compute, that storage, those Ethernet switch fabrics using the same toolsets if they possibly can. The thing that we find, though, when we look at those toolsets, is that there's typically minimal alignment across them. The way you manage compute infrastructure is typically very different from how you manage storage and networking, based on the different protocols involved, the data models for managing that infrastructure, and then vendor-specific implementations as well. So you have this mix of protocol sets, vendor implementations, and data models that makes it extremely difficult, from an orchestration and service management perspective, to manage that infrastructure. The types of solutions that exist today to try to manage those infrastructures, again, we find are typically closed, i.e.
single-vendor solutions that may manage maybe one or two different vendors' compute, maybe one or two Ethernet switch fabrics, and the same with storage nodes. The management lifecycle of those closed, per-vendor infrastructure management solutions tends to be entirely down to the individual vendors, so you end up with brittle architectures that become extremely difficult to scale out and to extend to different vendors, different protocol sets, and so on. This leads to a lack of consistency in physical infrastructure management that's particularly prevalent in the telco space. As you can see in this diagram, you have a number of upstream COTS OSS and service and resource management stacks, ranging from, say, OpenStack, Kubernetes, ONAP, and so on, to the operating systems themselves. They each need bespoke integrations to various physical infrastructure management solutions, typically coming from single vendors, and those in turn have integrations into the physical infrastructure itself based on protocol sets like Redfish, gNMI, NETCONF, and YANG, plus vendor-proprietary data models and protocols. So you have this mesh of bespoke integrations.

What we have looked at, from an open source and open perspective, is how we can simplify infrastructure management and truly make it open. We didn't want, as a vendor, as HPE, to go and build our own infrastructure management stack; we wanted to work with the open source community, which is why we approached the Linux Foundation about toolsets that allow for truly open infrastructure management. In terms of our approach and the project for that, I'll now hand over to my colleague Jonas Arndt, who will take you through the ODIM project.

Hi everyone, first a little introduction. I'm an architect working for HPE in the telco segment, doing very similar work to Martin. Martin presented a bunch of problems, and we have been looking at those. Our interest is obviously to sell HPE gear and HPE products, but at the same time we realized that if we could level the playing field and make sure operators could use equipment from different vendors, it would be a good thing for the whole industry. So we set out a few objectives when we formed this project. One objective was that we wanted a standard, aligned way of talking to the stack, and we picked DMTF Redfish for that, because Redfish has been around for quite some time, it's adopted in the industry, and it has evolved to deal with a lot of what we want to do. Not only the API on top, but also the model should be pure Redfish; that's one of the principles of the project. Another thing we wanted is that northbound clients shouldn't have to worry about vendor-specific implementations. We also wanted to be able to hit this stack and know about everything going on, including where things are, and we very early on started talking to the DMTF Redfish standards body about introducing extra things that would allow support for that, so you can hit the northbound interfaces and understand exactly what's in the data center, where it sits, in what rack, in what aisle, what it is, and so on. Once we started looking at this a little more inside HPE, we started coding a bit before it became open source, but we always did that with the intention that we would open source it, and that we would do it together with partners.
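For a sense of what "pure Redfish on the northbound side" means in practice, here is a small sketch of a client walking the standard DMTF Redfish tree through an aggregator without knowing which vendor's hardware sits behind it. The endpoint, credentials, and TLS handling are placeholders; the `/redfish/v1/Systems` collection and its `Members` layout are part of the Redfish standard.

```python
# Sketch: enumerate compute systems behind a Redfish-compliant aggregator.
# Endpoint and credentials are illustrative placeholders.
import requests

ODIM_URL = "https://odim.example.com:45000"   # illustrative aggregator endpoint
AUTH = ("admin", "changeme")                  # illustrative credentials

def list_systems():
    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # lab setting only; use proper CA certificates in production
    collection = session.get(f"{ODIM_URL}/redfish/v1/Systems").json()
    for member in collection.get("Members", []):
        system = session.get(f"{ODIM_URL}{member['@odata.id']}").json()
        print(system.get("Id"), system.get("Manufacturer"), system.get("PowerState"))

if __name__ == "__main__":
    list_systems()
```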
So looking at the bigger picture: at the top you could have a bunch of different types of clients that can communicate with the stack using standard Redfish API calls and expect that the Redfish model presented is aligned with the standard. On the southbound side, as Martin was pointing out, we have a lot of different protocols. There are BMCs that speak IPMI or Redfish, different versions of Redfish depending on which vendor it comes from, perhaps even different versions of Redfish between two server models from the same vendor. All of these things have made it very difficult to standardize and automate, and we wanted to remove that from the equation, which is why on the southbound side you see different types of equipment and different types of protocols. If we take that a step further and look at how the ODIM stack looks today, you can see that on the northbound side we have these Redfish APIs. There is also a box here labeled integration, because if the northbound client does not speak Redfish there is obviously a need to do translations and things of that nature. On the southbound side you can see something called the ODIM resource abstraction layer, which is what we refer to internally at HPE as the plugin layer. It allows a vendor or a contributor to develop a plugin that translates from their server, switch, or what have you into this Redfish model. For instance, if you are a vendor and you have a server that speaks an older version of Redfish, your plugin can do the translation from that older version to the newer one. All plugins also have a leg on an event bus, so they can forward events up to the stack, because in the stack we have the event service that northbound clients can subscribe to. Looking at the stack, you see there are a lot of different services, and they are all Redfish services: the account service, the event service, the aggregation service, which is a late contribution in the Redfish 2020.2 spec; part of that is something we have worked on together with partners in DMTF, like Intel and others. Next slide: how does this look in the ecosystem? Obviously in the project we will have hardware vendors, operators, orchestration vendors, and others, and we will all work together to weave in the features we are looking for. While doing that we will also run into shortcomings in things like the spec, so we have a relationship with Redfish and try to influence them to implement things we feel the industry needs in the future; sometimes that is easy, sometimes it takes a while. There are also other projects that could consume this, like CNTT, ONAP, and the CNCF, that will have some requirements of their own, so it's important for us to understand those and see if we can satisfy them by making changes to the stack. Then a little bit on where we are in the project. It's a fairly new project; it was formed in June, I think, and it's an unfunded project. What that means is that we hang straight off the LF; we don't have an umbrella community that we're part of right now. That has some disadvantages and some advantages; for instance, you don't have to pay any membership to join as long as you're part of the Linux Foundation. At some point in the future we will decide whether we fall into some umbrella organization or what we will do. We have a TSC formed already.
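The event service described above is also plain Redfish: a northbound client registers a webhook destination and the aggregator forwards the events that the plugins push onto the bus. A minimal, hedged sketch follows; the destination URL, event types, and credentials are illustrative, and a real deployment would typically authenticate with a Redfish session token rather than basic auth.

```python
# Sketch: subscribe a webhook to the aggregator's Redfish event service.
# Endpoint, destination, and credentials are illustrative placeholders.
import requests

ODIM_URL = "https://odim.example.com:45000"   # illustrative aggregator endpoint
SUBSCRIPTION = {
    "Name": "edge-infra-alerts",
    "Destination": "https://listener.example.com/redfish-events",  # our webhook receiver
    "EventTypes": ["Alert", "ResourceRemoved"],
    "Protocol": "Redfish",
}

resp = requests.post(
    f"{ODIM_URL}/redfish/v1/EventService/Subscriptions",
    json=SUBSCRIPTION,
    auth=("admin", "changeme"),  # illustrative credentials
    verify=False,                # lab setting only
)
print(resp.status_code, resp.headers.get("Location"))
```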
We have meetings every Wednesday at 9 a.m. Mountain Time; anybody can dial in and participate, it's open to the public. Looking a little at the contributions we made when we formed this project: obviously there was some seed code that went in, and the dark blue here represents what HPE contributed. We set up a number of the services you see here, and the model, and we also contributed a generic Redfish plugin that anybody can use for Redfish equipment, or as a template to develop their own plugin for their specific equipment. There are also pending contributions: AMI is looking at contributing composition, and we know Intel is working on a plugin for unmanaged racks. So where are we right now? The project is formed, the TSC is up and running, we have the contributed code like I mentioned, and we're waiting for some other contributions as well. We have a website you can hit, odim.io; from there you can get to the wiki and see the meetings going on, and we have a mailing list as well, and GitHub, obviously, where you can download the code. Anything HPE and others are doing to enhance this further is all in the open, so as features are created you will see feature branches and pull requests and all of that. With that, I'll hand back to Martin to go over the offerings HPE has in this space. Martin?

Thanks, Jonas. This next section covers a product line we have which is basically our distribution of the ODIM project, what we call HPE's Resource Aggregator for ODIM. This is part of an overall HPE telco value proposition. We as HPE, as I think most in the industry would appreciate, have been in the telecom space for an awfully long time, and have been in areas like NFV, network function virtualization, since its formation. As part of deploying industry-standard IT infrastructure in the telecoms environment, we do a whole lot of work on things like telco compliance for OS and driver tuning, we produce blueprints of compute, storage, and Ethernet switch fabrics for various parts of a telecom operator's environment, and we have infrastructure toolkits and go-to-market propositions with offerings like HPE GreenLake, but also, as part of that, our own distribution of ODIM. That distribution of ODIM becomes, essentially, part of our telco blueprint evolution. We have a number of blueprints, made up of HPE infrastructure, that serve as example models of how you would build fully validated NFVI stacks for a telecoms environment, and as part of automating that infrastructure we want to ensure it's fully open. Those blueprints cover core and edge deployments, and in future core and edge models each of them would have an ODIM resource aggregator deployed as part of it. What we end up with, then, is a framework for HPE's open core-to-edge 5G infrastructure, which consists of the infrastructure we have as HPE as well as the supporting management toolsets, which are completely industry standard and fully open. Sitting above that, we form partnerships in areas like composition management and monitoring, a couple of which are on this slide, both open source and with companies like AMI, American Megatrends, and then, layering on top of that, various partnerships
for virtual infrastructure management, SDN solutions, and then obviously the virtualized network functions themselves, which come from the major network equipment providers as well as the ISVs that exist within the telecoms environment. So hopefully that gives you a good overview of the ODIM project and HPE's offerings in that space as well. I'll hand back now to the moderator. Thank you.

Thank you. So we are at the final keynote and on the home stretch, or, okay, the stretch at home, everybody, I probably shouldn't have used that word, but that's where we are. So let's go ahead, and I will walk you through my thoughts on where the next three years are going. Alright, where are the slides? Sorry, there you go. We are in California; I don't see any earthquakes here, but let's see, that should come up. Well, at least this shows that we're live, so that's good. Let me start by saying that in the next ten minutes I won't bore you with the detail of what's already happened; you've seen a whole bunch of good solutions, good open projects, and good ecosystems coming up, and I want to emphasize where we are going to head in the next three years. Let me open it here locally and see where we should be going; the slides will come up, and you will get a copy of these slides afterwards.

So the first thing I want to emphasize, in terms of 2020 priorities, is that we have three, and here you go, perfect, thank you. The first one is that seamless telecom and cloud integration is going on and it's at your fingertips, so we need to realize this; it's a very important thing. Let me show you how it's going to happen. With Google joining, with Microsoft, with major participation in LF Networking, how does this vision get realized? If you look at the simple diagram I showed, it becomes even simpler: with the O-RAN, the RAN, on the left-hand side, and in the core, all of a sudden, the carrier core, the public cloud, the private cloud all become a very simple stack. If you start from the top of that stack, you have your applications, the VNFs and CNFs, the cloud-native applications; a sort of standardized OSS, BSS, and MANO-type layer with analytics; coming down to a controller; with VM-based OpenStack running on Kubernetes, which becomes the infrastructure layer; choose the data plane acceleration of your choice, FD.io, DPDK, and so on, there are a lot of options there; and then integration with public cloud players like Azure and Google. Again, it's not exclusive to these folks, I'm just giving you one example here; there are plenty of public clouds, including vendors and operators, that have offered this. So this is a very simple cloud-plus-telecom vision enabled by open source collaboration.

If I double-click on this a little, the integration points become even simpler. You have the northbound, which is all standards-based: these are the OSS and BSS services, where MEF and TM Forum have done years of work, and all those APIs have already been coded up in ONAP in the External API project. So keep that existing infrastructure, and keep in mind these are brownfield deployments. I always joke that there are very few greenfield deployments, and the moment you deploy your first greenfield it becomes brownfield; for all the hype around greenfield, the reality is that 99% of deployments will be brownfield.
On the left-hand side, in terms of designing the network: standardized ETSI MANO descriptors, and packaging and onboarding with Heat, TOSCA, and YANG templates, and so on. On the southbound side, obviously, a huge focus on the ETSI SOL specifications, GSMA, 3GPP, and so on, and then those things running on Anthos, if you take Google as an example, with that collaboration happening under Linux Foundation Networking, and end-to-end testing by the merged OPNFV and CNTT entities. So clearly a great set of solutions now exists, and what we need to understand is how these solutions go and deploy themselves in more of a use-case manner. You heard our keynote speakers today, and you saw them talking about use cases, so let me talk about some of the most important use cases driving open networking. In networking, you can see the projects here. Take O-RAN, for example: while it's an alliance, the software portion, the O-RAN Software Community hosted by the Linux Foundation, is focusing on use cases in 5G, end-to-end network slicing, quality-of-experience optimization, white boxes, and so on. If you look at the core and cloud, which is the blue color here, ONAP's use cases focus on slicing, 5G obviously, cross-cloud VPNs, closed-loop automation, VoLTE, vIMS, vCPE, and so on, and nomadic broadband. We have talked a lot about the merged OPNFV and CNTT project and all of the standardization there, and then obviously Kubernetes as that underlying multi-cloud, hybrid layer. So there are very exciting use cases being deployed as part of these projects, and as I said, the Lego blocks are coming together in these use cases. If you look at the edge, the edge is getting even more harmonized and more unified. You start with the device edge: things like anomaly detection, surveillance, and on-prem DevOps at scale if you're talking about Project EVE; or, if you're talking IIoT, things like predictive maintenance and condition-based monitoring for turbines, transformers, pumps, and so on; and then of course integration with AI/ML, things like TensorFlow coming in. Or look at the IoT frameworks, abstracting the IoT from lifecycle management through microservices: building automation, industrial process control, smart cities, water, retail, and so on. And then if you're talking about the service provider edge, which is in purple here, you're talking blueprints and use cases like the radio edge cloud, the telco clouds, connected vehicles, AR and VR classrooms, enterprise automation, even private LTE services, which are very critical this year and next, and then the public cloud edge interfaces. So lots of good, exciting use cases are being driven, and we want to make sure the teams focus on them and that we accelerate the projects-to-production cycle. And the final thing that brings everything together: networking and edge are enabling a whole set of vertical industries. I opened ONES with this keynote and the white paper we have published, but this is not the ending, it's the beginning. Telecom and automotive are working together to bring you the best of low-latency connectivity with 5G, with autonomous driving and vehicle-to-X. Motion pictures, with all these visual effects that are software-based, are not only running through the network but now running at the edge of the network, with virtual shoots and virtual effects processing. The energy sector is really where telecom was five years ago; they are at the
beginning of this open source revolution, where the exact same orchestration and containerization needs to happen at the grids, at the edges of the grids, at the core distribution sites, and so on, and they are working on cybersecurity from an open source perspective as well, so that's very powerful for energy distribution. Financial services I don't even need to talk about; we all know that networks are the underlying framework for finance. And of course the latest is public health and all the apps that are running on top. The high-level story here is that edge and networking are the building blocks of vertical industries, and we need to keep promoting them.

So to wrap up today's keynote sessions, I have my observations, and I'm going to keep it to five, five observations that I'm sure you'll all agree with, having listened to the keynotes and the sessions. Number one: the network is even more important in the new world. You heard Andre, you heard Alex, you heard Justin; it's not just the AT&Ts, the Equinixes, and the Deutsche Telekoms, you've heard so many operators talk at this conference. The network is even more important in the new world, so please participate in these projects, because it's all being built on open source. Second: 5G, cloud, AI, edge, and IoT are key, and they are technologies, but as I've always said, technologies come and go. If you remember, in the last 30 years we fought protocol wars and technology wars, but that's no longer the case; you build solutions, build things that deliver value, and these technologies are key in building the next generation of solutions. What I'm most excited about is that markets are collaborating rather than competing. You've heard it from so many of our ecosystem players, and it really brings out many options for the enterprises, because now, if you look at the end users, whether they're enterprises, governments, or countries, they have a vast array of options using what I call the plumbing layer, or the infrastructure layer, which is all open-source-based, and all of a sudden they have cloud-like connectivity, latency, and storage right at their fingertips. So this is very important for end users; we've heard from so many of the vertical end users at this conference, and we'll hear more in the coming months as well. And last but not least, collaboration is the way to go, open source collaboration is the way to go. We can't emphasize it enough: one company, one vendor, one operator, one system integrator cannot do it themselves, in terms of dollars, in terms of time to market, as well as in terms of making it completely secure and making money.

And I'll leave you with a secret, which is: participate. It looks like we lost the audio there, but that's fine; I think I was talking about a secret here. If you can hear me again, what I was saying is, alright, I was talking about a secret, so for all the people who have stayed with me, I'm going to repeat it. The secret here is that the vendors, system integrators, members, and operators who have participated in the open source phenomenon have received an unfair share of the deals in 5G and beyond. So keep participating, keep supporting. And with that, it's a wrap for me.

A couple of quick key updates. One: this afternoon, please join the sessions, but around 3 to 4 o'clock, Heather Kirksey, our VP of Ecosystem, will be hosting a grab-a-beverage-of-your-choice community happy hour, no judgment
there. And then there is a co-located event by LF Edge tomorrow, so do sign up, because there is a limit and our capacity has almost been reached; even though we're in a virtual world and there's no room capacity, there's a limit to the number of bridges and things like that. So if you're not signed up, there's an LF Edge event tomorrow. But with that, I'm going to sign off, wishing you all to stay safe and enjoy the rest of the event. And don't forget to fill out the survey, this is very important: this is our first virtual Open Networking & Edge Summit, with a global audience from 75 countries, and we want to make sure your experience is as pleasant as possible, because we're going to be in a similar mode for at least the next three to six months. So let's stay safe. Thank you very much, signing off now.