Welcome back. Our next speaker needs no introduction. He's one of the most respected and sought-after keynote speakers. President of AT&T Labs and CTO of AT&T, please welcome Andre Fuetsch.

Well, hello, everyone. I'm excited to be here with all of you again. I remember this time last year, as we continued to reel from a global pandemic and convened virtually for the first time. As technology leaders, it is our job to be forward-thinking and innovative. It's been said that in technology, things are always five to ten years away. And as we introduce new capabilities, businesses and customers adopt them when they're ready. For many, however, timelines have been shortened. Digital transformation and automation that would have taken companies years to execute are now here, in part due to the pandemic. And we are now managing more data consumption than ever before, as millions of people, devices and machines are connected. At AT&T, our global network experienced well over a 40% year-over-year increase in data traffic on an average day during the pandemic. Demand for fixed and mobile data continues to grow at a rapid pace. What may be a surprise is that uplink is growing faster than downlink. As the number of sensors and devices grows, the pool of data will only get larger and more difficult to manage.

Creating connection is our mission at AT&T. We take an integrated approach to fixed and wireless broadband connectivity to serve our diverse customer segments. Over the past five years, from 2016 to 2020, we've invested over $110 billion in our wireless and wireline networks. AT&T 5G+, over our millimeter wave spectrum, is now available in parts of 39 cities across the U.S. 5G over sub-6 GHz low-band spectrum now covers more than 250 million people in nearly 500 markets across the country. We will begin deployment of C-band spectrum later this year, with plans to cover 70 to 75 million people by the end of next year and more than 200 million by 2023. AT&T's 5G non-standalone core is fully virtualized and running on commercial off-the-shelf hardware. We are currently developing and testing our standalone 5G core to enable cloud-native network functions and network slicing.

While both mobile and fixed broadband usage is growing, we're seeing an increased dependence on the fixed network, as it provides the performance and capacity customer applications are seeking. While these trends have been influenced by the pandemic, as employees work from home and students learn from home, we expect them to continue. Consider these new trends we're seeing. We see the average household today having 13 connected devices, and we predict this will more than double within the next five years. Similarly, we see video streaming consumption, which is about three hours on average today, also doubling in the near future. We're also seeing a significant shift from HD to 4K video content, which is no surprise as most new devices, cameras and gaming consoles come standard with 4K support. We expect to see data consumption increase 5X in the next five years. For these reasons, we have a bold fiber expansion plan. We currently offer broadband internet via AT&T Fiber to nearly 15 million customer locations in 90-plus metro areas in the US. We plan to reach 30 million customer locations by the end of 2025. This integrated broadband strategy allows us to extend our combined wireline and wireless footprints to meet our customers' needs.
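As an editorial aside, a quick back-of-the-envelope check of those growth figures: a 5X increase in data consumption over five years, and a doubling of connected devices over the same period, correspond to compound annual growth rates of roughly

```latex
g_{\text{data}} = 5^{1/5} - 1 \approx 0.38 \quad (\approx 38\%\ \text{per year}), \qquad
g_{\text{devices}} = 2^{1/5} - 1 \approx 0.15 \quad (\approx 15\%\ \text{per year}).
```

In other words, the 5X projection is essentially a continuation of the roughly 40% annual traffic growth he cites from the pandemic period.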
So, an open architecture is the path forward to meet this growing demand on our network and the emergence of new technologies. Openness, disaggregation and interoperability are crucial to realizing the full potential of a 5G world. Openness will enable new innovation and improve our ability to provide the best network to our customers. The O-RAN Alliance and the O-RAN Software Community are making significant progress in aligning the global industry, with 29 service providers and over 270 additional contributing companies. At AT&T, we have continued to emphasize Open RAN as we enhance our 5G network and look beyond. Empowered by principles of intelligence and openness, the Open RAN architecture is the foundation for building the virtualized RAN on open hardware and cloud, with embedded AI-powered radio control. Multi-vendor deployments enable a more competitive marketplace and give network operators greater flexibility to draw on the innovations of multiple suppliers to upgrade their infrastructure with the latest technology. Our strategy is to adopt and implement Open RAN in our network as the technology becomes available. As one of the early pioneers in adopting an open architecture and software-defined networking in our transport, routing and wireless core networks, we have seen the tremendous benefits that this can bring. However, it's harder to realize an open architecture in the RAN because of its highly complex and distributed nature.

So where are we in our open architecture deployments? AT&T is conducting live trials leveraging our O-RAN work that include open interfaces, APIs, and vRAN technologies. We are seeing significant progress in the areas of RAN intelligent controllers and service management and orchestration. We're also seeing tremendous progress with our open hardware platforms through our open disaggregated routing program. We have deployed these platforms at scale, with over 19,000 cell site routers and 10,000 enterprise customer routers. I'm very pleased to say that we are finding these platforms to be more reliable and flexible, providing new capabilities faster than ever before. I'm also proud to announce that we have begun upgrading our core MPLS-based backbone with our OCP-based hardware design. This new open hardware is based on Broadcom's Jericho2 chipset and uses a highly scalable design we call the distributed disaggregated chassis, which enables massive horizontal scale-out. We use DriveNets' network operating system software for our core use case, and we're on track to migrate more than 20% of our core backbone traffic to this platform by the end of this year. We have another program to refresh all of our edge platforms supporting our broadband peering, Ethernet, and mobility services as well. This program uses the same Broadcom Jericho2-based open hardware that we are using for the core backbone, giving us tremendous flexibility in how we manage and scale our many deployments. The edge program uses Cisco's IOS XR network operating system software. These programs collectively demonstrate the real power of our open disaggregated design. Who would have imagined five years ago that we could have two different software stacks running on a common hardware platform? We're also making great progress on our Open ROADM efforts. We already have 100-gig traffic links in production today, and later this year we will begin deploying 400-gig as well. In addition, we have created a programmable platform for service introduction and management. For example, we developed a software-defined multi-layer controller that optimizes the utilization and reliability of our core network, allowing us to rapidly configure layer 3 through layer 0 capacity in a just-in-time model.
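As an editorial illustration of what that just-in-time, multi-layer model implies (this is invented toy logic, not AT&T's controller), a sketch: watch layer 3 link utilization and, when a link runs hot, turn up more underlying layer 0 optical capacity on demand instead of overprovisioning in advance.

```python
# Toy just-in-time multi-layer capacity loop. The threshold, link model,
# and the provisioning stub are all assumptions for illustration only.

UTILIZATION_TRIGGER = 0.8  # act when a layer 3 link reaches 80% utilization

def provision_optical_wave(link_id: str) -> float:
    """Stand-in for a layer 0 action, e.g. lighting another 100G wavelength."""
    print(f"turning up one more wavelength under {link_id}")
    return 100.0  # Gbps added

def rebalance(links: dict) -> None:
    # The multi-layer controller's core decision: compare IP-layer demand
    # against installed capacity and grow the optical layer just in time.
    for link_id, stats in links.items():
        if stats["used_gbps"] / stats["capacity_gbps"] >= UTILIZATION_TRIGGER:
            stats["capacity_gbps"] += provision_optical_wave(link_id)

links = {"dallas-atlanta": {"capacity_gbps": 400.0, "used_gbps": 350.0}}
rebalance(links)  # 87% utilized, so one more wave is added just in time
print(links)
```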
Now let me talk a little bit about some other areas we are working on at AT&T. Of course, no surprise, we have a lot going on in the cloud space. We've been on a cloud migration journey for quite some time now. But most recently, we announced an industry-leading strategic alliance with Microsoft Azure. This allows AT&T to leverage commodity hardware for our mobility solutions. It also enables a programmable platform for service introduction and management. There is a need to define a virtual cloud architecture to support edge workloads from any hyperscaler on a common edge platform. Along with this, there is a need for a common set of APIs to provide visibility and control over workload placement and real-time operating performance and cost. Such APIs would enable enterprises to automate the selection and optimization of cloud and hyperscaler workload placement and resource consumption.

We are improving the accessibility for our customers and the developer community to leverage these new network capabilities. On the API front, we are now extending that platform through open APIs to our customers to make it easier for them to meet their changing network demands. These new 5G network APIs are in trials now, and we expect them to be in controlled introduction by the end of the year. We also have numerous wireline network APIs in the design phase. The goal in providing customer access to these APIs, and to data about their consumption, is to enable innovation and new solutions powered by AT&T's network, both wireless and wireline. Here are a few examples of APIs we're building for 5G: network slicing, security, monitoring, KPI prediction, device management, location for IoT, fault and recovery, and data that can be used for performance optimization, cost management, and of course power management [a hypothetical sketch of such an API call follows at the end of this talk]. And on the wireline front, we are excited about the potential for segment routing capabilities to enable more direct control of flows for our fiber-based customers as well.

All of these capabilities are helping drive our network to become a much more customer-centric network. The future network will no longer be something you set up in advance to simply provide access to your application. It will become part of the application. The artifacts you use to design and deploy your application with compute and storage will now include wide area wireline and wireless networking. The tools and templates you use to run your business will have just as much flexibility for your virtual wide area network as they do for your virtual compute and virtual storage. Your intent will be mapped directly into wireless and wireline functions that will allow you to flexibly handle the wide set of devices and customers that use your services, all with the security level that you need, from private secure intranet for remote workers to entertainment over the global internet. The network is virtualized and API-enabled to the point that it can be customized for you, by you, and on your time schedule. This will become the network for you. I want to thank you for the opportunity to speak with you today, and I hope you enjoy the many great sessions of ONES this year. Thank you very much.
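To make the network-API idea from this talk concrete, here is a minimal, hypothetical sketch of what requesting a network slice through such an API might look like. The base URL, payload fields, and token handling are all invented for illustration; AT&T's actual 5G APIs were in trials at the time of this talk, and no public schema is assumed here.

```python
import requests  # third-party HTTP client: pip install requests

API_BASE = "https://api.example-carrier.com/v1"  # hypothetical endpoint, not AT&T's
TOKEN = "..."  # bearer token, obtained out of band

def request_slice(name: str, max_latency_ms: int, min_downlink_mbps: int) -> dict:
    """Ask the network to provision a slice with a given service profile.

    Every field name below is an illustrative guess at what a slicing API
    could expose (latency and throughput targets), not a real schema.
    """
    payload = {
        "sliceName": name,
        "qos": {
            "maxLatencyMs": max_latency_ms,
            "minDownlinkMbps": min_downlink_mbps,
        },
    }
    resp = requests.post(
        f"{API_BASE}/network-slices",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"sliceId": "...", "status": "provisioning"}

if __name__ == "__main__":
    # A low-latency slice for a factory-floor vision workload.
    print(request_slice("factory-floor-vision", max_latency_ms=20, min_downlink_mbps=50))
```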
Thank you. That was very insightful. Our next speaker is Rob High. He's a VP and CTO of the Edge Computing portfolio at IBM. He's an IBM Fellow, and Edge Computing has always been one of the most important verticals, markets, whatever you want to call it. Let's see what innovation is expected in the world of Edge Computing. Please welcome Rob High.

Hi guys. Thank you for having me here today. I want to talk a little bit about what's going on in the world of Edge Computing, and really the advances that we're seeing here that we should all take stock in. We've been talking about Edge Computing for some time, and I think we all have aspirations about what we believe it's going to be and how it's going to evolve, but already today we're beginning to see the use of Edge Computing in a variety of different scenarios, including things like production quality, production optimization, and customer experience, all really very profound scenarios. We're seeing enterprise organizations, over and over again, begin to use Edge Computing to deliver on their own needs to deliver value to customers.

Another thing I really want to highlight is the advances that are occurring now in open source and around standardization. Of course, we've seen in the past the work that the Linux Foundation has been doing on things like Open Horizon, SDO (which is now also referred to as FDO), EdgeX Foundry, etc. But ORRA is the one I want to highlight. This is where the open source organizations are beginning to focus on industry-specific capabilities, and ORRA in particular, the Open Retail Reference Architecture, is an effort that was put together jointly by IBM, HP Inc. and Intel; we have other vendors now participating in this as well, TIBCO, for example. Its purpose is really to create a foundation on which retail applications can be delivered more efficiently into retail scenarios. Whether that's point of sale, whether that's inventory management, whether that's, again, customer experience, there's a whole emerging space around digital signage, etc., all of which can gain an advantage from Edge Computing. But what we really need to do is make it possible for retailers to take advantage of the vendor applications coming to them and get as much efficiency out of the underlying infrastructure as possible, in order to lower their own total cost of ownership. So that's what ORRA is about, and I encourage everybody to take a look at that project as well.

But I'm also going to highlight one very specific customer that we've been working with, and this is around the Mayflower Autonomous Ship project, which was begun by ProMare and Marine AI, largely in response to a call that went out to the public for how best to celebrate the 400th anniversary of the Mayflower voyage. And Brett Phaneuf, who's the CEO and founder of Marine AI, was in attendance, and he responded and said, look, rather than memorializing the past, we ought to do something that will commemorate the future, and will in its own right be an event that 400 years from now we might look back on and say, well, that was a momentous point in history.
And so he embarked on the idea of creating an autonomous ship, a ship that would sail literally from Plymouth, England to Plymouth, Massachusetts in the United States, entirely on its own, fully autonomous, making its own decisions about how to navigate around obstacles and how to fulfill the obligations of what we call the COLREGs, the collision regulations, the maritime rules for how ships should interact with each other in busy sea lanes and so forth. To be able to do that, of course, it has to have the ability to recognize other ships, recognize shoreline, recognize hazards that may be in the water, including debris, but also marine life, mammals, whales and so forth, that populate the oceans and that it's important to protect and navigate around. So that was the endeavor. The ship now has been built, and it's actually doing sea trials now. We expect it to take its voyage this fall, sometime right after the hurricane season ends. It actually made an earlier attempt to cross the Atlantic in the spring and encountered some physical problems with the exhaust manifold in the engine compartment that caused it to have to turn around. But it turns out the original Mayflower journey 400 years ago had a couple of false starts too, so it was kind of maintaining that tradition. Nonetheless, the team is very confident about what the ship is going to be able to do.

But to be able to do that, obviously it has to have compute. It has to have compute on board that allows it to not only recognize all these obstacles but also make decisions about how to navigate around them. And that can't be done by presuming that all of that is going to be shipped back to shore over a satellite network, which, by the way, is very low bandwidth and can also be somewhat intermittent. It had to be able to do that locally, and to be able to do that we created an edge computer. The ship itself is in fact an edge computer; it's an edge device. And we worked with them to not only enable them to deploy the navigation workloads but also other payloads that are necessary to support further scientific activities around marine research, collecting data that might be useful for monitoring global climate change, etc. So, it's a great project. I encourage you to go out to the MAS400.com website and learn more about the Mayflower Autonomous Ship project. There's information about the technology, information about the mission, and you can actually go to a portal to see where the ship is and how it's progressing. So, with that, I want to thank you and turn it back to our leaders here.

Thank you very much. Our next speaker is another CTO, and I'm very pleased to welcome Vanessa Little. She has been one of the open source evangelists in the past, and now she's the global CTO at a company called Interdynamix, and they use open source, and open networking in general, to build a business model. So, let's hear what Vanessa has to say. Please welcome Vanessa.

Hey folks, Vanessa Little here, coming at you from my very messy home office outside of Toronto. For those of you who haven't met me yet, I'm the global CTO over here at Interdynamix Systems. We're a global solutions provider and systems integrator based out of the Great White North, otherwise known as Canada. Today I'm not going to go into one of my tech deep dives or go on one of my usual rants about the state of the tech industry.
Today I'm going to talk about how roles and responsibilities are evolving in our now cloud-native world, particularly how this impacts service assurance, SLAs and uptime in a hosted infrastructure scenario. With the audience at this conference, I shouldn't have to explain in much detail what cloud native is, but before we dig into things, I just want to talk about how I think about cloud native for the purposes of this discussion. According to the CNCF definition, cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure and declarative APIs exemplify this approach. Now, I believe this to be true, but that definition addresses it at a high level only. What does it actually mean for an application to be cloud native? When I discuss cloud-native applications, I'm referring to apps that are stateless, are split up into logical pieces or microservices, leverage service buses where applicable, are horizontally scalable, and of course, run well in a container.

Now that we have that out of the way, let's talk about what I'm actually after today. How does this new application paradigm, and the infrastructure it runs in, impact availability, and more specifically, who is responsible for which components of that availability? In order to understand that, we first need to look at the way things used to be. We all remember the traditional hosting facilities: the whoosh of the raised floor, the man-trap security doors, the rows and rows of locked cabinets and cable trays. Up until a few years ago, an enterprise could host its infrastructure in one of these facilities and be guaranteed redundant power, HVAC, and a network loop with five-nines availability. When you needed to expand to more capacity, you simply purchased and racked up more hardware, or bought bigger, faster servers.

So let's talk for a minute about these traditional hosting facilities and how things worked. We have our three key players: the application vendor, the hosting provider, and the customer. Each had specific roles and responsibilities that had to be fulfilled in order for a service to be available. These players also played specific and different roles in application scalability, disaster recovery and fault tolerance. The responsibility on application vendors was somewhat minimal. Their app could be coded in any way they saw fit, and so long as you put it in a big enough server, it would run just fine. The customer was responsible for purchasing another server or load balancer if they wanted to scale out the application, or to have it recover from a disaster or an app-level failure. The customer also had the responsibility for overall availability. With the application vendors getting off scot-free in this older model, the majority of the heavy-lifting responsibility rested on the hosting provider, having to provide the necessary redundant network loops and power connections. The hosting providers were partly responsible for disaster recovery and fault tolerance by providing multiple facilities in which hardware could be hosted. But at least the hosting provider didn't have to worry about app scalability in this model. As you can see, in this traditional model the majority of the responsibilities sat with the customer. But now it's a whole new ballgame.
Today we have hyperscaler clouds that offer all kinds of wonderful multi-region options, with data synchronization and retention and a few automation tools to boot. It even goes so far as to offer the Kubernetes frameworks and runtimes to push your apps into. However, unlike the infrastructure hosting providers of yesteryear, while the hyperscaler is now taking on the responsibility for the hardware completely, the five-nines availability SLA is no longer offered. That responsibility has shifted. Application vendors no longer have it easy. In order for their apps to be available, scalable, able to recover from disasters and fault tolerant, they must be coded to be cloud native on these infrastructures. The holy grail of this is to have an application that is truly stateless and broken up into logical components that can each sit in their own container cluster and scale independently. This is no easy feat for most traditional applications; it means a complete rewrite of the whole thing. This is especially true in the telecoms industry. Most mobile core and supporting apps were traditionally written to run on big iron as monolithic chunks of code. But now, with the rise of 5G and edge computing, applications need to be cloud native, or as close as they can get, in order to compete.

The responsibilities of the hosting provider have relaxed somewhat. Cloud providers need to have stable data centers that provide the facility for virtual machines and containers. But the application must play nicely inside Kubernetes to take advantage of the horizontal and automatic scaling features. Hosting providers have no skin in the game with regard to scalability now. If an application requires more resources, it can simply consume more cloud resources, either in an existing footprint or scaling horizontally across additional worker nodes, if that's supported by the app. And this can be done automatically. Similarly, the customer simply has to have a stable architecture that accounts for multiple zones in order to provide a reliable service. Most of the heavy lifting is already done. The customer has to write the checks to the hosting provider and the app vendors. Do we notice a theme here? It's all about the apps in the cloud-native world. Sure, the customer now has to take on the task of building automated CI/CD pipelines and refactoring their operational processes to take advantage of cloud-native upgradability, but that stuff has no bearing on availability.

It's important to note that traditional hosting facilities still exist today, for those customers that have apps that require them or that aren't yet ready to embrace a cloud-native future. Most of these facilities are going through a bit of a pivot, where they now offer fully managed infrastructure or provide cloud hosting in order to compete with the rapidly expanding hyperscalers. Traditional hosting isn't going to completely disappear overnight, but the industry is demanding more flexibility in hosting options, and cloud native is not a fad or a trend that is coming: it is very much here, alive and well. It's only going to get bigger and better going forward.

Okay, so this wouldn't be a proper Vanessa Little lecture without a rant, so maybe I will go on a bit of one here. The most important action is for application vendors: it is now your responsibility to build apps that play nicely in a cloud-native environment. Stateless apps that use service buses and intelligent data manipulation are the new norm, and we all have to get on board if we expect to gain anything from the promise of cloud-native computing.
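To ground that definition, here is a minimal editorial sketch of what "stateless" means in the sense Vanessa describes: the service holds no session data in process, so any replica behind a load balancer, or any pod in a Kubernetes deployment, can answer any request, which is exactly what makes horizontal scaling safe. The store and ports are assumptions for illustration.

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """A stateless handler: nothing that must survive a request is kept in
    process memory. Anything durable (sessions, counters, carts) would live
    in an external store such as Redis or a database, so that any replica
    behind a load balancer can answer any request."""

    def do_GET(self):
        # The reply is computed purely from the request itself, which is what
        # makes it safe for Kubernetes to add or remove replicas at will.
        body = json.dumps({"path": self.path, "served_by_pid": os.getpid()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each replica runs this same process; a load balancer in front can
    # route any request to any of them, and autoscaling just adds more.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```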
Customers are responsible for stepping up and undergoing the internal operational and cultural change that is required before they can expect all of the wonderful automation, ease of use and cost savings that the cloud-native revolution is promising. So that's my rant for today. Feel free to reach out if you agree or disagree, or would like to discuss anything that I mentioned here today. I'm always open to having a healthy debate with anyone. And until then, stay safe, be kind to yourself and others, and enjoy the rest of the conference.

Thanks a lot. Learned a lot too, actually. All right, our next speaker is Yves Bellégo. He's a network strategy director at Orange. Orange is one of the top contributors to LF Networking, with large-scale deployments of projects like ONAP. And he has been associated with the 2G, 3G, 4G, 5G, 6G journey. And, you know, we always like to listen to our end users on what they bring to the table and what their challenges are. So please welcome Yves.

Hello, ladies and gentlemen. I am pleased to share with you today some convictions we have, and also the learnings we have made, in using open source in Orange networks. In fact, Orange is a carrier service provider; Orange is a telecom operator that is active in the residential market, both mobile and broadband, in Europe and in Africa and the Middle East. In addition to that, Orange is active across the entire globe in the wholesale and enterprise markets. When it comes to the use of open source in networks, what I would like to share with you today is first our convictions, what we believe about open source, but also some lessons that we have learned in recent tests and deployments of open source in our networks, and also some requests we may have, some pleas, some outcomes and feedback from that usage, for the open source and LFN community.

First, to develop a bit our convictions about the usage of open source in telecom networks. One of the questions is: what is specific to telecom, what is specific to our networks, that in the end means we need specific entities like Linux Foundation Networking, specific streams, to address these needs? And for that I would like to come back to the roots of our business. What we do is deliver connectivity; we deliver global reach to our customers. And that means that we deliver simple services over complex technologies. It means that we deliver global reach and global interconnectivity. And it also means that what we deliver is resilience, trust and security. All these different requirements have led to what our business is today. Our business is based on standardized solutions, because this is the way we achieve that global reach and that interconnectivity. And it also leads to the way we build complex networks out of different blocks. These blocks come from different suppliers; they have standardized interfaces, and this is the way we build networks that can evolve, that can provide resilience, that can provide 24-7 service delivery to our customers, and at the same time are able to evolve and to increase their capabilities. Having said that, what is the relation of all that with open source?
Open source is just the new way within the industry to develop these network functions, which are mainly software, and which still must fulfill our basic needs of resilience, trust, security, open APIs and interoperability. So, in fact, what are the functions that we believe can be developed in open source? That does not cover all of the functions that we use in our networks, but some are much better suited to being developed in open source. And this leads to the second part of what I would like to share with you: what do we at Orange believe is really suitable for open source development, and what have we done so far? What can we share of Orange's experience using open source in our networks?

So, what are the network domains, the network functions, that are the most suitable for open source? Today I would like to develop one of them, which is automation. Not that the others are not of interest; typically, what is happening around the telco cloud is of great interest, but today I would like to focus mainly on automation. Why? Because automation is one area where we believe there is a lot to be done, and from which we will have great benefits, and by benefits I mean economic benefits. There is high value in automating our networks. Our networks have for decades been mainly human-centric; all our processes have been based on human-centric operations, and we are in the process of moving to automated processes. And this is where open source can bring real value. And when we talk about automating our networks, it's not only about the new networks; typically, it's not just about 5G, but about all elements of our networks, including the ones that are considered more traditional, which really benefit from automated processes.

And typically this is where the usage of ONAP comes in. We have been investing in ONAP for quite some time, and we believe there is high value in using ONAP to automate our networks, and again, not just for 5G. One thing, for example, that we have been working on and have developed is the usage of ONAP at the transport layer. And just to share with you some outcomes that we have reached in the past few weeks and months: we have run some tests in our commercial network, and that was done in our network in Egypt, using ONAP to automate the deployment of IP networks. These have proved to be successful, successful not so much in the sense that we gained additional services or additional capabilities, but in that it proved an easy way to handle our IP network, and an economically efficient one. What we have been doing in this Egyptian network is not using ONAP as the full automation capability; rather, we have coupled ONAP with the existing vendors' systems, and with that combination we have been able to save a lot of time in our processes, and this is of great value. The next step, after that initial test, will be to deploy this in our different networks, and this is what we plan to do over the course of 2022. If you are interested in that test, in its outcome, in the lessons we have learned on automating our IP networks, this is something we will share with you; there is a webinar planned by the LFN in the coming weeks, so please join that one and we will share more information, and we expect to continue to develop that use case within the LFN, in ONAP.
This is in fact not rocket science; it's not so fancy, it's not 5G-related, but this is typically the type of case that provides value to our operational teams. In addition to that, we have another project based on ONAP, which is that we use ONAP in our experimental networks. We have a network that is not a commercial one, but a full 5G, full software, cloud-native, automated network, and we build on ONAP, we bet on ONAP, to automate that network. With that network we have different learnings. It's not at all at the same level of maturity; this is not planned to be in our commercial networks in the coming months, but it really prepares more for the future, and we really believe there is a need for an overall orchestrator like ONAP to be able to operate such a network. We have been deploying that network over the past months; now that network is up and running. We have used ONAP to manage and deploy the different network functions, the Open RAN, the 5G core network, and we have already learned that there is still a bit of maturing needed within the industry to be able to use this type of solution at large scale, in large networks, and we are ready, and we are contributing to that maturation of these solutions.

We also have some other learnings. One I would like to insist on a bit today is security. We have learned during that deployment of a full-software, cloud-native network that the security aspect is pretty complex, and that we need specific developments on security, which need to be embedded in the different functions and also in tools like ONAP. I share these results from our experimental network with you not only because we believe this is one of the first cases of real usage of these capabilities at large scale, but also because we want to share with the industry the lessons we have learned, and we want them to be used in the future development of ONAP and other projects in open source communities.

And this leads to my last message, in fact to the pleas or requests I have for the community. The first one, which echoes what I said initially, is: please keep in mind the basic fundamentals of our industry. We need solutions that are reliable, that are secured, and that are operable. We operate large, very large networks, very complex networks; all the new functions we introduce need to answer these types of needs, and that needs to be taken into account in the initial design of these open source deliveries. One basic example: we need to be able to monitor all elements of our networks, including the open source elements, and for that we need what is being developed to be really manageable. We need monitoring solutions, we need logs that we can really work with and operate on. That may seem really basic, but these are things we absolutely need in order to be able to deploy this in our commercial networks. And my second message to the communities, my second plea, is: keep in mind our needs of resilience, trust and security. This is what leads to the way we work, which is basically test and learn. We go step by step; we want to ensure that anything we put in our commercial networks is something we will be able to operate in any conditions. And what I mean by that is that, at the same time as we are very ambitious in all the different open source communities when it comes to future networks, we need to develop use cases that are much more short-term, that are
limited in scope. Typically this is the case with what we did on the transport network, which we can deploy in our networks a bit more easily, or at least in a shorter time. So what I believe is that through this approach of very ambitious long-term solutions and step-by-step short-term gains, deploying the functions step by step in our networks, we will be able to make the best use of open source in all our networks. Thank you.

Oh wow, that was fascinating. I really appreciate the journey and the focus on open source. Thank you very much. All right. Our next speaker is Said Ouissal. He's the founder and CEO of a company called ZEDEDA, a startup here, and I know on the West Coast we are getting close to lunch, but what better title to use, just to make you all hungry, than saying, you know, the edge is eating the cloud. So that's the topic, and please welcome Said to the stage. Thank you.

Hello everyone. My name is Said Ouissal and I'm the CEO and founder of ZEDEDA. I'm excited to be part of ONE Summit 2021, and in this keynote presentation I want to talk about why the edge is eating the cloud. Edge computing is arguably one of the hottest topics in our industry. While hyped extensively in its early years, it's now becoming clear that edge computing is real, inevitable, and happening as we speak. Why is that? Well, the center of data gravity has recently been in the data center, and subsequently in the cloud, but as we connect billions of devices at the edge of the network to collect data from the real physical world, the amount of data these devices generate is surpassing anything we've seen before. In many cases, especially in industrial and other verticals, the data has already been collected, but now it's getting connected, which is why the shift is going so fast. This inevitability, when combined with the impossibility of sending all this data in its raw format to the cloud, means one simple fact: if we can't get all the data to the cloud, then we need to move the analytics and processing part of the application to the edge of the network. Analysts are predicting that by 2025, 75% of all data will be processed at the edge of the network, and that by 2024, nearly a quarter of a trillion dollars will be spent on hardware, software and services at the edge. These are staggering numbers, and they make edge computing one of the largest and fastest shifts we have ever seen in our industry.

As edge computing takes off, the architectures needed to enable it are also becoming clearer. We have learned that the edge is not one place or layer; as a matter of fact, we have learned it's a continuum that spans from field devices, assets and users all the way back to the cloud and centralized data centers. In particular, we've seen the rise of the distributed edge. What is the distributed edge?
Well, the distributed edge is where edge computing is performed very close to the devices, assets and users. The edge compute runs in environments that are not typical data centers, and is instead deployed near, or embedded in, vehicles, well sites, machines, production lines, wind turbines and other OT assets. The hardware itself often does not look like a typical server, and a whole set of new challenges is introduced in management, security and support for many different applications and devices. While the promise and need of edge computing is better understood than ever, there are some clouds on the horizon that come with deploying edge computing. According to one analyst, the lack of standards or a broadly accepted architecture for edge computing will ensure that 85% of enterprises will deploy multiple incompatible technology stacks through 2022. This is not unusual for a new emerging technology, but this is where I think, just like in the cloud, the open source community can play a big role in creating common standards and architectures for the edge. Common standards and architectures help make edge computing effortless and intrinsically secure. They make it possible to run legacy apps alongside new, modern cloud-native apps, enable zero-trust security, deal with the scale of the edge, and provide an open architecture with no vendor lock-in.

This is why, two years ago, we at ZEDEDA donated Project EVE to the Linux Foundation and became a founding member of LF Edge. Project EVE aims to standardize the edge, or in other words, if the edge is the last cloud that we need to build, then Project EVE intends to be the Android-equivalent operating system of the edge. Project EVE is a secure, universal, lightweight, Linux-based operating system purpose-built for the edge. It has been engineered to run on any type of hardware, from a Raspberry Pi all the way to a high-end server. It is also engineered to run any application at the edge, from an old-school Windows-based industrial application to lightweight Kubernetes microservices. Anyone can get started online: download the code, compile it and install it on their preferred edge hardware. We at ZEDEDA can help you try it out, or, if you're interested, provide you an enterprise-class software-as-a-service orchestration solution, along with an incredible ecosystem of edge application and hardware partners, to deploy in production environments and at scale.

So is the edge eating the cloud, or will the cloud eat the edge? In some ways it's a bit of both. In the end, as we need to deploy apps close to the source of data, we want to do this the cloud way, and therefore the edge is an extension of the cloud paradigm. At the same time, the scale and diversity of the edge is something we have never seen before. Imagine every machine or device in the world connected to the network and to the cloud and generating a tremendous amount of data. The business model at the edge is all about data, and the monetization of infrastructure will be very different from what we've seen in the cloud. The edge is driving new ecosystems and use cases, and the new environments will require new architectures and technologies that take them further from where the cloud is today. What is clear, no matter who's eating what, is that this is an incredible market opportunity and an exciting time for anyone in our industry. Thank you for your time.

All right, great. Thank you very much. All right, next we have Brent Schroeder, who heads the Office of the CTO and is a technologist at SUSE. And he's going to talk about everything open.
So without any delay, Brent, take it away. Thank you.

Hello, I'd like to welcome you to the Open Networking and Edge Summit. I'm Brent Schroeder, head of the Office of the CTO at SUSE. I thought it would be interesting to unpack a few of the main findings about Edge and open source from a research report SUSE recently had done about why IT leaders are choosing open. The purpose of the survey was to better understand how IT leaders are using technology to fuel innovation and to identify the must-have technologies, all at a time when COVID has disrupted economies, industries and organizations. We surveyed 800 IT leaders in businesses with 250 or more employees and published our findings in a report called Why IT Leaders Are Choosing Open.

The pandemic has really forced the acceleration of change in the way companies in all sectors and regions do business, from years to just a few months. This applies to many Edge scenarios as well, in nearly every industry, including retail, manufacturing, healthcare and many others. 72% of IT leaders expect Edge computing solutions to be broadly adopted within the next two to three years. Edge computing is being used to deliver automation, improve efficiency and enhance quality, all of which help to improve an enterprise's bottom line. Additionally, through the creation of net new business interactions and agile use of information, Edge computing can also be used to improve customer experience and personalization, or to create entirely new business opportunities. Knowing that the Edge revolution is coming, 67% of IT leaders are concerned that they may fall behind competitors in taking advantage of Edge opportunities, and they know they can't rely on the traditional way of doing things to deliver more, faster. IT leaders I've talked to are more willing than ever to change the way IT works, including both the technologies and the processes used to manage and deliver infrastructure and applications.

Most companies are turning to open source to enable the innovation necessary to respond to the growing pressure. There has been a pretty dramatic shift the past few years from healthy skepticism to viewing open source as a leading enabler. In fact, 85% of IT leaders see open source as the enabler for delivering innovative Edge solutions, and I couldn't agree more. If you think about the complexity of Edge scenarios, Linux and Kubernetes enable companies to build highly optimized solutions tailored to their needs, whether it's about small-footprint devices, unique communication requirements or extreme scale. And using open source technologies, one can build and manage solutions consistently from the Edge to the data center to the cloud. This flexibility would not be possible using proprietary legacy solutions. So for the IT or business leader, it all comes down to supporting corporate objectives, usually rooted in business metrics. These include driving new revenue opportunities, optimizing resource efficiency or improving the customer experience. Open source is now recognized as enabling these business outcomes, and it's up to us, as the collaborative communities, to continue to keep pace with the demands of business and IT organizations in achieving these objectives. And with that challenge, I look forward to engaging with both vendors and customers alike throughout this summit. Thank you for joining us.

Our final keynote is one of the most important end users in the industry. And to give us insights into the use of open source by the Department of Defense and US government agencies, we have one of the
top architects and project leads for the Operate Through portion of the DOD 5G program, Dr. Dan Massey. Dr. Massey is very well known: 100-plus publications, 25-plus years of experience, a major cybersecurity expert. We're really honored to have Dr. Massey here. His goal is very simple: he wants to securely operate through 5G networks. And, you know, the Linux Foundation and the US government have collaborated on making sure that this does happen, and that it benefits the world globally. So please welcome Dr. Dan Massey.

I'm Dr. Dan Massey and I'm here to talk about securely operating through 5G networks. I'm part of the Department of Defense 5G and xG initiative, and what we're trying to do is reinvigorate US telecom dominance. That's been critical to industry, to national security, and to the US in general, and so we're hoping to get us back to being major players in that. We've broken our overall initiative into three pieces. We have an Accelerate piece, which you may have heard about, which is putting a lot of interesting 5G applications on bases around the country. We've got a few highlighted here in the slides. We're doing smart warehouses that utilize 5G connectivity in Albany, Georgia and Coronado, California. We're looking at spectrum sharing at Hill Air Force Base. We're looking at augmented reality and virtual reality. And I'm proud to say that we even have COLTs, or cells on light trucks, which we have deployed and in use right now in support of Operation Allies Welcome. So, a lot of interesting stuff trying to get DOD to better use 5G.

We also have the piece called Innovate; I'm skipping to the third one there. Innovate is saying 5G is clearly not the end. We're going to have 6G, we're going to have 7G, we have lots of stuff still to come. And so we're not focused on just getting DOD into 5G; we're focused on getting DOD into that whole next-G concept. And finally, the middle piece here is what we call Operate Through. That's the piece I run, and the piece I'm mainly here to talk about. So I'll tell you a little bit about what we mean by Operate Through.

Okay. For every other kind of infrastructure, we know how to operate through what exists. If I were to talk to anybody about DOD deployments, U.S. military deployments, we don't start off by going in and saying, you know what, first we're going to build a road system and a bridge system and a rail system, and then we can start to operate in a region. We're going to make use of the existing bridges, the existing railroads, the existing road infrastructure, the existing power infrastructure, water infrastructure, etc., when it's there, right? We're going to be able to operate through, as you can see in the image of our armored vehicles here crossing an existing bridge. Now, of course, we have the ability to span a river anywhere in the world, and we're always going to have that ability, but we don't start our operations by saying, let's first go in and build all the bridges we need. We start by saying, what infrastructure do we have that we can operate through? And that's true for power, that's true for water, that's true for logistics, that's true for all sorts of stuff. So what about comms?
Right now, we are able to bring our own communications anywhere in the world, and we will always have that capability. One of the most common misperceptions about my program is that we are in here saying we're going to replace that. We're not; we're going to continue that and we're going to make it stronger. But we also have a ton of interesting 5G infrastructure, and other infrastructure, emerging around the world. So could we operate through that infrastructure, just as we use a bridge, or host-country power, or the existing water system, or something like that? That's what we're trying to do with Operate Through.

So, with Operate Through, what are some of the key things we're trying to achieve here? Well, we're thinking about the world in terms of different types of 5G networks, some of which we bring, some of which we use when they're in place. We're talking about a lack of security assurances; we can argue about whether that's a lack of security or just a lack of assurance. But we need to improve the security so that I can operate with them in a trusted environment. And in the end, I want to get to the point where I can provide guidance, and the ability for mission commanders, to go operate through 5G infrastructure wherever it might be, either bringing their own or relying on other infrastructure that's already there.

So why can't I just do that? It sounds like a great idea: we should all be operating on, name your favorite, commercial 5G infrastructure. Well, here's a quick picture of commercial 5G infrastructure. We've got our 5G RAN, the Radio Access Network part. We've got this cool 5G core, the back end connecting everything together. One really nice thing about 5G, which I think is incredibly important, is that we have this 5G MEC, this compute we push to the edge, which I think has a lot of potential. And then we connect into a larger world, and so there we go, we could run that in many places. And industry is putting literally billions of dollars into this work, so we should leverage it.

Only, what we have right now is a lack of commercial assurances. I'm not saying the commercial infrastructure lacks the security we need. I am saying, definitively, that we don't have the assurance, we don't have the process, we don't have a way to validate whether that's true or not. And we think there are places where it clearly needs to be improved. But instead of saying, oh, we lack the appropriate security, I'm going to say we lack the appropriate assurance. So can we get the security assurances that would tell me whether it makes sense to operate on, name your favorite, commercial 5G provider?

We also have risks from untrusted supply chain components. Whether we're building our own network or relying on somebody else's commercial network, we're not only using the network itself; the network consists of many, many components, as everyone at this event knows very well. And those components come with varying degrees of supply chain risk and varying degrees of trust. It'd be great if we could say every commercial provider in the world works from a DOD trusted foundry, but clearly that's not the case, and it will never be the case. And so in the end, what I have is an inability to leverage these indigenous 5G capabilities wherever I go. Maybe because they actually aren't appropriate to use; maybe they are appropriate to use and I just don't know it; maybe it's somewhere in between, and I can make them appropriate to use. So that's what we're trying to do in our
program. Sorry, let me get this slide to advance here... there we go. So, we worry about a bunch of threats. I'll just talk about these briefly; I'm not going to spend a lot of time on them. But obviously we worry about confidentiality, integrity and availability, for everybody who's a security person here, our good old CIA triad. Confidentiality: I want to make sure my secret data stays secret. Integrity: I want to make sure I know who a message is coming from and that it hasn't been modified, so avoiding replay attacks, that sort of thing. Availability: I want to make sure the infrastructure is actually there. It does me no good if there is no infrastructure. So I want all those pieces to come together. And I'm throwing in one that's particularly important, which is observability. It fits under the other pieces, but I'm going to call it out specifically because, as I'm using this untrusted 5G network somewhere in the world, I'd like to know how much information someone can gain about me just based on the fact that I'm using it. So even if I have perfect confidentiality, all of my data is encrypted from point A to point B, and I have perfect integrity, nobody's modifying my messages, just the volume of traffic that I'm sending to my colleague Dr. Roy could reveal something about my operational tempo. So I worry about that observability. We're looking at all these sorts of things, and that's one of the threats I'm looking at.

Alright, so where do I want to operate? Well, I want to operate in three different environments. I want to operate in a black box world, where the network is, as the name says, just a black box: the network, sometimes unreliably, delivers messages from point A to point B. I'm an average user on the network; I've got my phone, my sensor, my device; I'm just a user with a UE. Maybe the network will deliver what it needs to, maybe it won't. So what can I do in that kind of environment? That's one extreme: I'm just a user on the network. The other extreme, at the bottom here, is a tailored environment. In that case, I'm bringing my own network, I'm bringing my own COLTs, like I'm doing for Operation Allies Welcome. I'm bringing my own equipment, my cell on light truck; I'm actually driving in the infrastructure I need and relying on that infrastructure. And there I have full control over the code and the components; I can do what I want in the RAN, the MEC, the core and so forth. So that's one approach. In between sits a cooperative commercial network. So if I work with, again, name your favorite, commercial 5G provider, it's certainly not a DOD-provided network; it's not something I own and operate. But it's also not entirely a black box. We could work with that provider to say, hey, could we add a little bit into the MEC, could we enable a few additional features from the 5G spec in terms of security? So that's our cooperative network, and we're looking at these in different regions. The black box I might be using anywhere in the world; the tailored environment I might bring with me anywhere in the world. The cooperative environment is something that might work in the U.S.
or in allied regions, and possibly in a few gray or unknown regions, where I work with the 5G provider to add that security assurance, to add those features I need to make it workable for critical infrastructure, DOD operations, and other national security related things.

All right, so those are my operating environments; I want to operate in all three. And the last thing I'm going to say, and then I'm going to give you just a few of what I think are fun, interesting examples: I want to do this in a zero trust environment. Again, a little bit buzzword-heavy, but in this case I think the buzzword very much makes sense. If I'm operating on that black box, or that cooperative commercial environment, I can't really rely on our good old-fashioned, largely obsolete, castle-and-moat kind of defense. With the old castle-and-moat defense, the way I approach security is: I have the inside of the network, that's where the good guys are; I have the outside of the network, that's where the bad guys are; the firewalls, the IDSs, everything keeps the bad guys on the outside of the network. Whether that's ever been true or not is an open question, but it certainly is not true in the 5G case. On that commercial provider network, where's the outside and the inside?

So instead, what I want to do is move to this zero trust approach. And there are a lot of definitions of what zero trust is, some great documents by NIST and other folks there, but there are a few key principles I'm thinking of when I say zero trust. I want continuous monitoring to detect misbehavior or malfunction, because I'm on who knows what network, and who knows what's going wrong there. I want dynamic authentication and authorization, so that I'm not just saying, yep, Dan was allowed on the network once and now we trust him to do whatever; we want to keep that dynamic and appropriate to what's going on. I want segmentation: I definitely don't want a big outside-the-moat, inside-the-moat, but I do like this idea of microperimeters, right? Around a device, around a small set of resources; I definitely can have these microperimeters. And finally, I want to push that security, whether it be encryption, access control, whatever it might be, as far as I can toward the end devices, because that helps me build my zero trust model.

Last thing on zero trust, and I certainly know this, and many of you who've worked on secure systems probably share this belief: we have a challenge that sometimes our systems are very secure and very hard to use. What we actually believe, if we do this right, is that zero trust should make the system more available, not less. So I'm looking for solutions that do all this but also make it more user-friendly, make it more useful, so I'm not stuck. You know, I've occasionally had some days like this, and I'm sure some of you have as well, where the security system has prevented me from getting in to do my work. If we do it right, we can actually make sure that the security system makes it easier to do the work, and makes the work more available, rather than the other way around.
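As an editorial sketch of the "dynamic authentication and authorization" principle Dr. Massey describes (the token lifetime, posture signal, and perimeter table are all invented inputs, and this is in no way DoD code): access is re-evaluated on every request rather than granted once, which is the core of the zero-trust posture.

```python
import time
from dataclasses import dataclass

@dataclass
class RequestContext:
    user: str
    token_issued_at: float   # when the user last authenticated
    device_posture_ok: bool  # e.g. patched OS, attested boot (an assumed signal)

# Illustrative policy knobs, not from any real deployment.
TOKEN_MAX_AGE_S = 300  # force re-authentication every five minutes

# Microperimeters: each resource lists the groups allowed inside it,
# regardless of what network the request arrives from.
RESOURCE_PERIMETERS = {"ops-tools": {"ops"}, "mission-data": {"ops", "intel"}}

def authorize(ctx: RequestContext, resource: str, user_groups: set) -> bool:
    """Evaluate trust on every request; nothing is trusted merely because
    it was trusted before (no castle-and-moat 'inside')."""
    if time.time() - ctx.token_issued_at > TOKEN_MAX_AGE_S:
        return False  # stale authentication, require a fresh login
    if not ctx.device_posture_ok:
        return False  # a continuous-monitoring signal failed
    return bool(RESOURCE_PERIMETERS.get(resource, set()) & user_groups)

ctx = RequestContext("dan", token_issued_at=time.time(), device_posture_ok=True)
print(authorize(ctx, "mission-data", {"ops"}))  # True now; False once the token ages out
```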
Okay, so that's the big picture of the program. I've got some slides that you can see online that talk a little bit more about what a black box environment is. But I've only got a few minutes left, and I want to use my last few minutes on one really interesting challenge that I think is particularly relevant to this group. So here we go; this is my last slide. This is what I want to end on. In all these environments, whether I'm operating in a cooperative environment or bringing my own, I'm particularly interested in what I can do in the radio access network. Remember, this 5G network consists of a RAN, that radio access part; a core; and the MEC. There's lots of cool stuff I can do in the core and the MEC, but let's talk about the RAN for a minute. One cool thing with 5G is that a lot is going on at the physical layer, the PHY layer. There are a lot of new ways I can make this work, a lot of potential new components to integrate. I also have a desire for supply chain flexibility, so that I'm not dependent on some device that I may not rely on or would prefer not to use. To make that work, I would love to have a RAN that I can customize, where I can come in and say, you know what, I would like to mix and match piece A from Alice's company with piece B from Bob's company, and piece C from Eve's company might also fit in there too. Can I build my own RAN? Or do I simply have to buy a monolithic RAN from a really good provider? And the challenge here is that this gets into these issues of Open RAN, of how much I open up, which I think are a very good fit for this conference to talk about: the trade-off between having a monolithically defined system, a cool RAN that I can buy from a lot of really good vendors, or something that I can build myself.

And I'll leave you with a question I don't have an answer to. My end question is: what I'd love to have with that Radio Access Network is high performance, high security, and a high level of flexibility and customization. In the classic computer science framing, I'd love to have all three. Can I get all three? Or is this a pick-two, or even a pick-one, scenario? I can have a lot of flexibility, but maybe I can't get that high performance. Maybe I can get the flexibility and the security, but, you know, we'll see. So that's an open and interesting question, and I think a challenge for this group, as well as the community as a whole: how much are we standardizing, how much are we allowing to be open so that any player can come in, versus how much are we locking down because, frankly, it's a very complex system, and maybe the better way to do it is just to say we need one vendor. You can pick from many of them, but one vendor to do the RAN. Or do we have a lot of pieces that make up the RAN? So with that, that's an unresolved open question. I'll leave that for the team, and thank you for your time.

Wow, that is fascinating. I'm so excited to have you on board, and we are honored to have helped the cause of securing 5G networks. Well, that's a wrap for day one keynotes. Please visit the partner booths. There are plenty of awesome demos; you can actually see some of these blueprints in action, as well as some of our partner booths that have real demos that, you know, would solve major problems. We'll be back at 2 p.m. Pacific with tracks, and then for keynotes tomorrow morning, again, Tuesday at 9 a.m. Pacific. Have a great day. Thank you.