in the afternoon session and really humbled by the support that we got. Tremendous: I think over a thousand people have joined. We weren't expecting such a high turnout, but this is really good. It just shows that the community is vibrant and people want to hear from some of the most senior leaders and technical execs. This afternoon or evening, depending on where you are, we have a very powerful lineup. We have some of the top end users of open source, from companies like AT&T and Walmart, who are going to explain what they are thinking, and Dr. Jonathan Smith from DARPA. And then we also have some of our community players, startups, innovators, and the Cloud Native Computing Foundation. So, a very packed afternoon. Without spending too much time, I want to introduce our first presenter, a company that has taken its roots from the storage world, moved to containerization and cloud native, and then into telecom as a virtual network function company: Robin.io, a startup here in the Bay Area. And from Robin we have an executive, Mehran Haldipur, VP of Tech Alliances, to talk about how startups have been able to build on this vision of open source and what they have been doing to help some of the end customers out there. So with that, take it away, Mehran. Thank you. It's great to be part of quite an interesting event and to share the stage with some of the very important players in the space, so I appreciate the opportunity. I thought maybe I'll start by talking a little bit about what Robin is and does, if you can put the first slide on the screen. Okay. So, a little bit about Robin in terms of background, how we approach the market, and what our mission and strategy have been.
We started in the enterprise space, and our primary focus has been to enable an open infrastructure for running complex applications on Kubernetes. As Arpit mentioned, we began by providing a storage layer for Kubernetes and extended that to a complete platform, with capabilities to deploy the entire cloud native infrastructure for the entire 5G stack, in production today. We believe in the ecosystem, in partnerships and alliances, and in being open. We have tried to extend our ecosystem as much as possible, and I would say Robin has been quite successful in that space; we've been able to prove that open cloud native infrastructures can be a vehicle to provide significant value to the deployment of 5G and edge. Our stated mission and objective is to enable onboarding and lifecycle management of 5G and edge applications in a way that is seamless and easy to manage. If I may get the second slide, please. I wanted to start the conversation by talking about the three-pronged approach in Robin's vision for this space. We feel that cloud native, containerized infrastructure is a very important strategy that should be adopted for edge use cases. We see an infrastructure that can accommodate both CNFs and VNFs on a common managed infrastructure. We think openness is critical: having an open platform on which a variety of network functions can be deployed and onboarded, in a common managed environment running on Kubernetes, is a critical strategy that will drive softwarization and disaggregation in the core space in a more cost-effective manner. And we also think that a key value driver of open infrastructures is being able to deliver an automation and orchestration layer that is hyper-automated.
It starts from the bare metal infrastructure, goes all the way to network functions, and creates a delivery model that is seamless and adaptive and can accommodate a variety of applications using open, well-understood interfaces. So this is our global view of the mission and objectives we have pursued at Robin. If I may have the next slide, please. We think that edge computing brings significant promise not only to operators but to the enterprise space as well. We feel this kind of edge deployment model brings continuous availability and distributed data models that can provide much more resiliency in the service delivery infrastructure for operators and enterprise customers alike. And we feel that having close, low-latency connections and data available at the edge enables a set of new use cases that can deliver value to enterprise customers, including faster AI that is location-aware, and automated reaction or delivery of services around changes in the infrastructure in a much more seamless way. We also think this delivery model creates an infrastructure that gives you optimized data placement. On one hand, you have high-throughput, low-latency networks connecting the edge to central data centers, which allows you to keep more of the data centralized; at the same time, because of those network interfaces, much more data can be ingested with low latency at the edge and analyzed at the edge. So you can place the data where it makes sense and deliver value from the data where it makes sense. Next slide, please. One of the questions was to come up with some views on what makes a key success factor for the edge. I think having a thriving ecosystem, which is exactly what open networking organizations are enabling, is key. As they say, it takes a village; it is not about a unified model that can be bought from a single vendor.
There needs to be a lot of disaggregation, open standards, and the ability to pick the right set of functionality from the right providers and accommodate them on a common infrastructure. We think hybrid cloud and multi-cloud are important; however, they need to be married with a unified automation platform. It is essential to be able to place your infrastructure in the cloud or on-prem and move workloads between them. But you can't do that at the cost of complexity, of doing a lot of manual work, or of deep dependency on the infrastructure. Containerization and open standards give you a way to build an application stack that can run well on-prem or in the cloud, and hyper-automation can be put in place to orchestrate workloads across data centers no matter where they are located. This would open some doors toward a completely orchestrated, well-managed infrastructure that extends from on-prem to the cloud, enables the appropriate placement of workloads, and reduces the cost of service delivery across the board. We are also true believers in containerization, and we see it gaining a lot of momentum in the market today. Many of the RAN providers, for example, are moving to a containerized foundation. It gives you a better ecosystem, much better efficiency, and improved availability; there are a number of benefits to be derived from building on a containerized foundation. The only caveat is that it needs to be able to run traditional, VM-based applications as well as containerized applications on a common infrastructure, so that you can take advantage of container deployment, with automated scaling, healing, orchestration and so on, without having to completely remap your application portfolio and restructure how you deliver services. I think that would enable a new set of edge applications and much faster adoption of edge applications.
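The idea of running VM-based and containerized applications on one common Kubernetes infrastructure, as described above, is what projects such as KubeVirt enable. As a rough sketch only (the VM name, image, and sizing below are illustrative placeholders, not taken from the talk or from Robin's product), a virtual machine becomes just another object the same control plane schedules alongside ordinary pods:

```yaml
# Illustrative only: a KubeVirt VirtualMachine managed by the same
# Kubernetes control plane that runs containerized workloads (CNFs).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-vnf            # hypothetical name for a VM-based network function
spec:
  running: true               # ask the controller to keep the VM powered on
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
            cpu: "2"
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example boot image
```

Because the VM is expressed as a Kubernetes resource, the same scheduling, healing, and orchestration machinery applies to it as to containers, which is the point being made.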
There are a lot of best practices and IT policies and procedures in place that enterprise customers are used to; there is a whole different set of operational aspects that telcos and operators are used to; and IoT brings a third set of needs into the space. So we need to think, especially around operations, orchestration, and management, about a blend of all these practices to achieve the kind of efficiency needed to deploy critical apps and manage them at the edge, at a scale of thousands of nodes, in a seamless fashion. Next slide, please. I talked a little bit about this already, and that picture seems to be a little blurry, but there are a number of infrastructure decisions that need to be made: where the data should be placed, where the analysis of that data should take place, what is best to run on the edge, what is best to run in the core, and how much cloud participation there should be in your infrastructure. But with containerization, and with 5G and low-latency networking bringing higher-throughput, lower-latency interconnects between the edge nodes and the data center, you now have the capability to right-place the application, if you will, and its data, and to do the analysis in the correct location with optimal results, without having to worry about bandwidth and latency considerations. That opens doors to a whole new set of application deployment policies that I think will expand the use of edge and edge applications in the market. Next slide, please. Enterprises especially have seen the benefit of CI/CD and how it can accelerate time to value in application deployment. We think a challenge we should all take on is to enable CI/CD processes and procedures to apply to the edge as well, to build a model in which fast application delivery can exist in a complex network like this.
The same processes that enterprise customers have been benefiting from, for onboarding, acceleration of deployment, customization, and additional service delivery models, done in the fast manner that CI/CD has been delivering as a promise, should apply to more edge use cases. If you look at it from an infrastructure point of view and think about smart cities, smart factories, smart farming, all the way to things like content distribution, there is a large set of opportunities for incremental revenue generation and new use cases. To monetize all of these and get to a point where we see faster 5G deployment and, at the same time, more edge applications deployed, CI/CD processes and infrastructure will be an important contributor to this journey. Next slide, please. I wanted to talk a little bit about Robin's belief in an open strategy and what our approach has been. We started by ensuring that we have a completely Kubernetes-based strategy: we don't touch a single line of Kubernetes code, and anything we do that depends on Kubernetes uses open APIs. The way our storage talks to Kubernetes is standard, the way our CNI layer talks to Kubernetes is standard, and any application that can run on Kubernetes will run on Robin. We have published our APIs and are going to continue to do so, with additional capabilities around MDCAP, our orchestration platform, later this summer. We have tried to build a model in which different elements of Robin itself can be adopted for different use cases through different partnerships, and we have built a number of those partnerships to expand the openness of the platform. Our storage layer is completely open; we can run on any Kubernetes platform, on any cloud. We are making more and more strides to make our cloud native platform adaptable to any Kubernetes distribution, and we are planning to publish our APIs.
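Saying the storage layer talks to Kubernetes through standard interfaces means, in practice, that applications consume it the way they consume any CSI driver: via a StorageClass and a PersistentVolumeClaim. A hedged sketch of that pattern (the provisioner string and class name below are hypothetical placeholders, not taken from Robin's documentation):

```yaml
# Illustrative: consuming a CSI-backed storage layer through
# standard Kubernetes objects. The "robin" names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: robin-block              # hypothetical class name
provisioner: robin.csi.example   # hypothetical CSI driver name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                  # claim an app mounts like any other volume
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: robin-block
  resources:
    requests:
      storage: 50Gi
```

Because only standard Kubernetes objects appear here, the same claim would work unchanged against any other CSI driver, which is the portability argument being made.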
So, on openness: we do understand the need to enhance certain capabilities of open source community tools and to provide production-level support on them, and there is a business to be had in doing so, but keeping the environment open is essential, and it's an essential strategy for Robin. I do have a question coming in; should I just answer it, or do you want to wait till the end? I think we may run out of time, so if you could wrap it up, yeah. Okay, sure. Can we go to the next slide, please? Okay, so these are some of our business priorities in terms of where we're taking the platform. Enhancing our ecosystem partnerships is key for us. We want to get to a point where we have more and more partnerships, from the application side to the infrastructure side. We're working with OEMs on hardware and on different edge applications for CDN and other use cases, and we are enhancing our hyper-automation platform to be as open as possible and to extend its use cases around orchestration and lifecycle automation across a variety of workloads and a variety of infrastructure, including cloud. Next slide. I'll keep going, yeah, sorry. Okay, so these are some of the technology priorities we've taken on, and I can make this the last slide. Enhancing our ability to run stateful apps on Kubernetes has been one key priority for Robin. We have made a number of enhancements to create the highest-performance storage platform supporting very complex applications, including things like Oracle RAC and SAP, to give you an example. Enhancing our ability to run apps on the edge is another priority: extending the networking capabilities so we can run applications that require network acceleration, for example, or that require things like SR-IOV, to extend what can run on the edge. And we are expanding our orchestration layer to be more hybrid-cloud focused.
We think the answer will always be a combination of on-prem and cloud, and we have built mobility functions into the platform so we can take an application from on-prem and run it in any cloud without having to fully account for the infrastructure and the peculiarities of where you run your application; I think that's an important part of the strategy. And obviously, being able to run both VMs and containers on a common infrastructure efficiently, and to orchestrate them together, would open a lot of doors to bringing new applications onto containerized infrastructure platforms. We also think that, beyond instantiation and management, building a platform that does data operations is quite essential in making these deployments successful. So with that, I'll pass it back to you. No, I think that's fantastic. I know there were a couple of questions; unfortunately, we're out of time, so we'll probably take those offline. I just want to make sure we stay on track for the rest of the schedule. So really, thank you, Mehran, for doing this. Sorry, we're running a little late. No, thank you. All right, with that we move to our next speaker, who is one of the well-known leaders in the open source, cloud native industry: our own Priyanka Sharma, general manager of CNCF. So with that, Priyanka. I am delighted to be here today; thanks for having me. I hope everyone is doing extremely well. We're all very close to the end of this pandemic, and I hope you're feeling the excitement that I am right now. Speaking of the pandemic, that's what my talk today is going to be about. So I'd like to start my slides, please. Awesome, great. Today's agenda: we're going to talk a little bit about cloud native in the pre-pandemic era and how cloud computing and cloud native rose up. You've obviously been hearing a ton about cloud in the context of the edge in the previous talk, and I'm sure before that as well.
We'll talk about how cloud native accelerated during the pandemic, and finally, what we can look forward to in the post-pandemic era and what that means for you folks who are focused on telecommunications, computing, and the surrounding industry. Next slide, please. So: the pre-pandemic era, and the rise of cloud and cloud native. Next slide. When we think about the biggest trends in our industry, the big one, in my opinion, was going from virtualization to cloud native. 2001 is when VMs became a thing, and that kept growing and getting bigger with infrastructure as a service and PaaS; then, in 2010, open source really started making moves in the world of infrastructure. 2013 was momentous because, with the rise of Docker, we had containers. Suddenly people could break their large monolithic applications into microservices and put them in their own containers. And then in 2015, with the open sourcing of Kubernetes from Google, we had an open source container orchestrator to dynamically orchestrate those containers and optimize our resource utilization. This was the beginning of cloud native and the beginning of a really exciting journey. Next slide. The Cloud Native Computing Foundation built a global community around the momentum of the Kubernetes project and added many more projects surrounding it. We built critical mass by hosting events and activities and engaging with all parts of the globe. Our strategy paid off: if you look at these numbers, these are pre-pandemic figures for people who attended our flagship KubeCon + CloudNativeCon in person, 23,000 people in 2019. Next slide. Overall, it was a very frothy, exciting time. We were seeing growth all around as the fastest-growing open source community in the world. All major cloud providers became Platinum members, and we hosted the largest open source conferences. This was unprecedented momentum for any open source community. Next slide.
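The shift described above, from hand-managed VMs to dynamically orchestrated containers, is easiest to see in a minimal Kubernetes Deployment: you declare the desired number of microservice replicas and the orchestrator continuously reconciles the cluster toward that state. A generic sketch (the service name and image are illustrative stand-ins, not from the talk):

```yaml
# Illustrative: declarative container orchestration in a nutshell.
# Kubernetes keeps three replicas of this microservice running,
# restarting and rescheduling containers onto healthy nodes as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout          # hypothetical microservice split out of a monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: nginx:1.25     # stand-in image for the example
          resources:
            requests:
              cpu: 100m         # resource requests let the scheduler
              memory: 128Mi     # bin-pack pods, optimizing utilization
```

The declarative spec, rather than imperative provisioning scripts, is what made the resource-utilization gains Priyanka mentions possible.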
As we grew and were then suddenly hit by a pandemic, there were questions and concerns. What does this mean for cloud native? What does this mean for this ecosystem? Next slide. To share with you how things have gone, I'd like to talk about COVID in general, our least favorite topic, I would say: COVID-19. What did we learn? With COVID-19, a lot of trends that were slowly percolating really accelerated. We learned to work from home. Events became virtual. Last year in November, we hosted KubeCon + CloudNativeCon North America with over 23,000 attendees in just one event, compared to 23,000 in 2019 across all our events put together. And now we have one more coming in May, KubeCon + CloudNativeCon EU, which will also be virtual. As we look forward to events, we are always going to do hybrid; virtual will always be part of our strategy. Similarly, restaurants had to learn to take online orders, and at the very other end of the technology complexity spectrum, we needed privacy-respecting contact tracing within three months because of COVID, and mRNA technology needed to produce a vaccine stat. So things really sped up in terms of what was needed from technology in this pandemic era. Thinking about all that, do you remember when digital transformation was a buzzword? With COVID-19, digital transformation has become the mainstay of most companies. Before, it was something to aim for, to think about; with the challenges of COVID and the world going virtual, every company has been jumping in. A lot of companies I speak to day to day tell me it is within COVID-19 that they hired the majority of their cloud native staff, and within this timeframe that they modernized. The pressures are totally different now. Next slide. So COVID-19, because of these pressures to go virtual, has accelerated cloud native. There has been a marked shift in innovation generally.
Open source has become the mainstay for a lot of infrastructure technologies; people are using open source much more than legacy IT vendors. Even before the pandemic, it was expected that two-thirds of enterprises would be prolific software producers by 2025, with over 90% of new apps cloud native. All of this has accelerated. With the coronavirus outbreak, 60% of the respondents to the State of the Cloud report said that cloud usage would exceed prior plans because of the pandemic. But we don't need stats, numbers, and reports to tell us that; think of our own lives. As I said, we're ordering food online, we're ordering groceries, we're doing online banking. The more we can do from our computer, from our home, the more we prefer it. So what does this all mean for the CNCF? Next slide. I'm proud to tell you that COVID truly accelerated the CNCF. In membership, we have 20% growth, with 612 members today compared to 2019. In contributors, the people producing code for the many projects in CNCF, we have 46% growth, with over 118,000 people from around the world, from 173-plus countries, contributing to our projects. And our projects have grown by 93%, to 85 today. We started this conversation talking about Kubernetes and how it brought on the cloud native revolution; there are 84 other projects supporting Kubernetes in that ecosystem today. Next slide. Telcos and other edge providers understand this value. I'm proud to tell you that AT&T joined us as a Platinum member recently, as did Cox Communications, another telecommunications provider in the US. Both are making waves with cloud native in telco and cloud native on the edge. This momentum, next slide. Why are all these people interested? Why is so much growth happening? At the end of the day, I'm so proud to tell you that it's because of the resilience that the diversity of team cloud native brings.
We are today 6.5 million cloud native developers, up by 1.8 million from Q2 2019; 2.7 million use Kubernetes, and 60% of backend developers now use containers. The more our numbers increase and the more people contribute, the better and more secure our software becomes. It also means there is enough talent to go around to help end users, telco providers, and edge companies adopt cloud native technologies in their practice. Our numbers across geographies, technologies, genders, and other demographics are our biggest strength. Even when it comes to people self-learning, getting educated, and getting bigger and better at cloud native, the numbers are increasing. Today, 70,000 developers out there are certified as Kubernetes administrators, Kubernetes application developers, or Kubernetes security specialists. These 70,000 people are ready to help anyone making their cloud native digital transformation journey. Our diversity-powered resilience is what makes us ubiquitous in today's modern technology world. Next slide. There are a lot of projects that support this, as I said, and feel free to take a look at all of our logos here; the slides will be available to you. Next slide. You see a bunch of projects in our sandbox as well, which is a new project stage that we introduced in 2020. Next slide. As I mentioned, the world's largest cloud and software companies are part of our ecosystem, and you can take a look at them there. There's also a link to check out our end users; we enjoy the largest end user community of any foundation ecosystem out there, with 145-plus members. Next slide. And finally, the cloud native ecosystem is as frothy as ever. There are so many acquisitions, so many new startups joining us. Check out our landscape and the links provided to get more insight. Next slide. All of this is to tell you that COVID-19 was not a blip in our existence. COVID-19 was a game changer.
If you think about World War II, people refer to the periods before and after as pre-war and post-war, respectively, because society fundamentally changed because of that war. I expect the generations to come will think similarly about the pre-COVID-19 and post-COVID-19 eras. So with that in mind, next slide. What does the post-pandemic era look like? I'd venture that cloud native is going to be the building block of the coming times. Next slide. When we think about what is coming: as I said, everybody has hastened to be online, and everyone is going through digital transformation as fast as possible. Alongside that, the amount of data generated from our devices, the amount of information available, and modern compute at the edge are creating information; with ever-smarter neural networks and decentralization, we are going to be able to build experiences for human beings, technologies nuanced to their specific needs and tastes, so that it's a much richer experience. All this available information will be harnessed to create the next generation of our technological movement. Putting together big data, compute at the edge, and all of this needs to happen while delighting customers in the way they're used to, and that's something cloud native knows best. So the modern era of technology can only be ushered in when backed by cloud native principles and cloud native people. Let's talk a little bit about how telcos and edge companies are thinking about this. Next slide. Telcos are actively adopting cloud native today. They want better resource efficiency, better resiliency and availability, and, most importantly, higher development velocity. Next slide. There are a lot of tools available. There's a telecom user group where folks come together, discuss thought leadership, and learn from each other.
And then we have created the CNF Working Group, the cloud-native network function working group, which helps define what cloud native looks like in a network function. It relies upon the CNF test suite and testbed, and as I told you the last time we spoke, watch this space for more information. Next slide. I do have progress to report. The CNF Working Group is defining what makes a telco application cloud native. We have best practices that can be adopted by CNF developers and operators, and these write-ups of best practices, use cases, requirements, gap analyses, and similar documents are going to be available. And this is the beauty here: individuals and organizations can choose to incorporate cloud native best practices along with other standards. A great example would be a company choosing to use both cloud native best practices and ETSI NFV specs. We are here to make you succeed, to help you win, and that's what the CNF group is working on. Next slide. When it comes to edge computing, there are lots of challenges we face: millions of locations, billions of devices, and really small margins for error and profit. We have reduced control, constrained resources, risky devices and locations, limited connectivity, and delays and disconnections. Well, next slide. Cloud native is here to help you. Kubernetes was, as you all know, born for massive data centers, but it has now been extended, just as Linux was extended for embedded, with projects and support available for this new world of edge computing. There's a bunch of projects I've listed here, already in CNCF, that are ready and able to help you as you go to the edge and bring cloud native with you. I encourage you to have your developers check these out, and I encourage you to get them participating.
There's a lot of value here that is just waiting to be utilized by you, as it is already being utilized by others. So, next slide. With that said, it's obvious that a lot is going on, and I highly encourage you to get involved with team cloud native today. We have a Kubernetes IoT Edge Working Group that meets bi-weekly; there's a link here to join the list. I already mentioned the CNF Working Group; it meets every Monday at 1600 UTC, and there's a link. Finally, let's get educated and teach each other. I urge you to fill out our Edge Micro Survey. Micro means no more than eight questions, and you don't have to write the answers; it's all multiple choice. I encourage you to come fill this survey out so that we can release results on what's happening at the edge with cloud native. Next slide. I truly believe the future is bright. Team cloud native is a community of doers. You've seen that in the momentum, the excitement, the passion, and the technical brilliance we bring to everything we do. That's why we're expanding into various technology and industry verticals. The upcoming KubeCon + CloudNativeCon has so many co-located events that show off the brilliance of this community; we have everything from Cloud Native Rust Day to Kubernetes AI Day. All these things are happening, people are working on things, and end users are benefiting. They are thriving, growing, and contributing back. This is an ecosystem that is going to last. Next slide. If you want to get more involved and understand more of cloud native for telco and the edge, I have two specific co-located events at KubeCon that I would recommend: Kubernetes on Edge Day, on May 4th, and Magma Day, about a project that Arpit has probably told you about already, which has just entered the Linux Foundation, on May 3rd. Next slide.
To get more deeply connected and more involved, you've got to be at KubeCon + CloudNativeCon. Come hear as I expand on my thoughts on the post-pandemic era and the role the cloud native community is playing, and going to play, in it. We are going live May 4th through 7th, 2021. And I was able to finagle a code for you from the events team to get a discounted price of $50, so feel free to use it and to share it with your teams so they can benefit from it too. Use the registration link and join us there. I really hope to see you again very soon. Stay safe. Next slide. Thank you very much. Feel free to reach out to me anytime; I would love to welcome you into team cloud native and work with you for innovation tomorrow, the day after, and evermore to come. Thanks. All right, thank you, Priyanka, that was good. I think I picked up three very important messages from your talk for the networking and edge users. Number one, as we move from VMs to containers over the long term, there's a hybrid model running right now in both the infrastructure and the applications, but you want to think cloud native first, so check there; I think the telecom and edge users absolutely love that. The second one is that we have tools within CNCF, the VNF/CNF testbed as well as the working group, and the collaboration with Anuket, the LF Networking project, is very tight; the integration is being worked on so that it's seamless across. Absolutely, yeah. And then finally, on the edge, several of the LF Edge blueprints, Akraino et cetera, build on Kubernetes, right? So I really welcome and appreciate the collaboration between our two organizations in the community; thank you for supporting that. Absolutely, it's been amazing. We work together, folks, pretty much every week, and the result is that your lives get easier. Exactly. All right. Thank you very much.
And again, a reminder to everybody: these are live sessions, and if you need to ask questions, there's a Q&A button on your session. Please type them in and we will take them right after the session. So thank you very much, Priyanka. All right. Okay. Now we have three end users who are going to talk about what they see in open source and how they see evolution in their particular domains, if you will; I'm using the word generically. We're going to start off with one of the smartest and most intelligent people I have met in the past several decades, ever since the Bell research days: Dr. Jonathan Smith. He has been at DARPA since 2017, I think, and he's also a professor at the University of Pennsylvania. He leads many projects across universities, DARPA, and the DOD, many, many things. But more importantly, he's a technologist and a strategist. We recently announced a collaboration between DARPA, the DOD, and the Linux Foundation to allow open source to be utilized for applications and innovation in the government space. And keep in mind, this is a very interesting use case. I won't call it an industry or a vertical like a manufacturer, but it is a real end user building on what you all have done. So without any further delay, I would love to welcome Dr. Jonathan Smith. What I'd like to emphasize is that the primary locus for collaboration with the Linux Foundation has been a project that we call OPS-5G: Open, Programmable, Secure 5G. The name tells you everything. It's open source, the focus is security, and the particular problem we're trying to wrestle with is the advances, some of them being the programmability of the edge and some of them being the programmability of the infrastructure, for example the virtual functions. So what we did was look at how to deal with the problem of securing 5G.
So our analysis showed that there were going to be hardware and software elements, but the hardware was going to be fairly generic. So the loci of, for example, security threats at the edge or security threats in the core were all going to be in the software. And that's not surprising; there's a lot of complexity there, and there are a lot of people with expertise. Now, I think that the fact that we're driving so much functionality out to phones and edge devices tells us that we have to protect them. So with OPS-5G, we're delighted to be working with the Linux Foundation and the larger open source community, because there is a very strong case to be made for transparency as perhaps the most fundamental advantage in security. For example, one issue that we can get traction on is that we can get the source code, we can analyze it with the various analytic tools that researchers and practitioners have, and we can be confident that it's what is actually running by using integrity checks. So our model was to try to transform, as DARPA does, the ecosystem for 5G. We had done some work, and we identified the Linux Foundation as an ideal way to do what we call transition. And transition means that it reaches mobile network operators, vendors, et cetera. So in my slides, I illustrate a flow from left to right as we go along the timeline: from, basically, a box shipped with customized, opaque software systems where we don't know what's going on, to community-built software systems, perhaps with advanced elements, that anybody can look at to see whether they address other threats that might exist. So the OPS-5G program is structured into four technical areas. I'll tell you about the last three first, because they're very 5G focused, while the first one is somewhat more open source focused. For the 5G-focused pieces, we're focused on IoT devices, and what we're trying to do is use the somewhat imprecise zero trust model. 
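The integrity-check idea mentioned here, analyze the source you can see and then confirm the deployed artifact matches it, can be sketched in a few lines. This is purely illustrative, not OPS-5G tooling; the function names are assumptions:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    # SHA-256 digest of an artifact's bytes (a binary, a config, a container layer).
    return hashlib.sha256(data).hexdigest()

def verify_integrity(deployed: bytes, expected_digest: str) -> bool:
    # True only when the deployed bytes match the digest recorded when the
    # audited source was built; any tampering changes the digest.
    return artifact_digest(deployed) == expected_digest
```

In practice the expected digest would come from a reproducible build of the audited source, which is what ties "we read the code" to "we know that code is what's running."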
When we looked at that, our interpretation of it was that it was basically a way to talk about least privilege, and systems people are familiar with that. And the idea is that you only open the interfaces that you need for the tasks that you're doing at the time. So we've got work that we've chartered, paid for, and gotten started on: IoT software systems that are intended to be zero trust. In some cases they have very cool edge properties. For example, one of the projects has the notion that you can do some of the cryptographic work on the MEC to offload work from the actual edge device. The concern with programmability is that, if you look at many examples we've seen throughout history, programmability is something that is two-faced. It offers opportunity in creating exciting new systems and applications, but because of the complexity introduced by the software machine plus the hardware machine, it can also introduce errors. So one of our focal points in TA3 is secure slices, and that focuses on our view of how to isolate slices from each other. TA4 is focused on programmability again. The real objective of TA4 is to try to shift the model of programmability to one where the programs are constrained by the infrastructure to do good stuff and not do bad stuff. So to give you an example of an approach that you might use there, you might use formal methods to ensure that code has been verified, and you might mark it with a signature. What we've done in TA4 is we've focused on a challenge, which is to try to defend against a trillion-node Mirai botnet, because there's been a lot of speculation that we could actually get to a trillion IoT devices. So if that occurs and we have something of that scale, it will not be a pretty picture. So we've set a stretch goal like that, and we'll be testing aggressively to see if people can meet it. And I'm going to finish up quickly because I understand that we're over soon. 
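The "zero trust as least privilege" reading above, interfaces are open only for the tasks that need them, at the time they need them, can be sketched as a toy gate. This is a hypothetical illustration, not project code; the class and capability names are made up:

```python
from contextlib import contextmanager

class DeviceInterfaces:
    """Toy least-privilege gate: an interface is callable only while the
    current task explicitly holds the matching capability."""

    def __init__(self):
        self._open = set()  # capabilities granted for the current task only

    @contextmanager
    def granted(self, capability: str):
        # Open one interface for the duration of a task, then close it again.
        self._open.add(capability)
        try:
            yield
        finally:
            self._open.discard(capability)

    def call(self, capability: str) -> str:
        if capability not in self._open:
            raise PermissionError(f"interface {capability!r} is closed")
        return f"{capability}: ok"
```

The point of the design is the `finally` clause: the interface closes even if the task fails, so nothing stays promiscuously open between tasks.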
But one of the craziest ideas, and we do crazy ideas, that we're trying comes from realizing that, given the portability, the generality, and the code quality of open source software, it sometimes takes the open source community longer to get to the polished product that you want to ship. And one of the main problems with telecom software is that, for interoperability, it has to be built according to standards. So we took a look at the process of moving from a standard to working code, and we're betting in TA1, Technical Area 1, that the 3GPP and other standards documents are not Finnegans Wake. And what we mean by that statement is that the standards documents use a specialized vocabulary, and they're formatted and structured in a way that should allow natural language processing techniques to get some traction. So we're taking a run at building software systems that will essentially read standards documents and produce either intermediate forms of code or, perhaps, if we are as successful as I hope we are, big chunks of working C, C++, or other code. So with that, I think I'm done, if I'm paying attention to the time. No, no, no. That is fine. I mean, JMS, we were just so intensely listening to your thoughts and research, but if you can hang on, we may have a couple of questions coming in. Oh, absolutely. I could basically talk forever on this. It's a pleasure working with the Linux Foundation and the Linux community. I started at Bell Labs in '81, as you mentioned, so I've been with Unix and networks for a long time, and it's great to see where we are. No, I think that's fair. Thank you. 
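The TA1 bet, that standards prose is regular enough to parse into code, can be illustrated with a deliberately tiny sketch. The spec fragment and field grammar below are invented for the example; real 3GPP documents are far messier, which is exactly why this is a research program:

```python
import re

# Hypothetical spec fragment with the kind of regular structure NLP/parsing
# techniques could exploit (not actual 3GPP text).
SPEC = """
Message: RegistrationRequest
Field: ue_id, type uint32
Field: slice_id, type uint16
"""

def spec_to_c_struct(spec: str) -> str:
    """Tiny sketch of 'read the standard, emit intermediate code':
    extract a message name and its fields, emit a C struct."""
    name = re.search(r"Message:\s*(\w+)", spec).group(1)
    fields = re.findall(r"Field:\s*(\w+),\s*type\s+(\w+)", spec)
    body = "\n".join(f"    {ctype}_t {fname};" for fname, ctype in fields)
    return f"typedef struct {{\n{body}\n}} {name};"
```

A pattern-matcher like this only works on text that is already structured; the research question is how far NLP can stretch the same move across looser standards prose.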
No, I think it is very refreshing to hear the research, like true research versus development and deployment. True research was always a pillar of DARPA and DOD, but you're collaborating with universities, some of the top-notch computer science departments, collaborating with all the innovation that has happened in open source at the underlying layer, whether it's networking, 5G, edge, IoT, et cetera, and then contributing back into the community from a security perspective. So the intention that you communicated, we really appreciate, because at the end of the day, everybody benefits, right? The government benefits as a user, obviously, but the open source community benefits because it's more secure. Telcos benefit, right? And that cycle is, I think, what we really appreciate. So there are a couple of questions that have come in. And again, for those of you who are not yet familiar with this platform, there's a Q&A button on the right of your session, or I think it's on the top; just send the questions over. But the question is: if you implement standards on the edge and core, how secure will it be, so that people cannot infiltrate the 5G networks through any backdoors introduced in manufacturing, at the scale of what, I think, DOD warned about five years ago? So from a security perspective, how do you see the next couple of years using this strategy? Okay, here we go. So my view of security is that it is the right stuff happening at the right place at the right time, and the wrong stuff not happening. So you could imagine that consumer devices might be optimized for ease of setup, because it's very costly to have personnel on the phone. So you make the device very promiscuous. It has lots of ports open. It reaches out over Bluetooth and everything else, trying to auto-setup. 
Now, that turns out to be a great model for not taking any service calls, et cetera, but it is not a good model because of the giant attack surface: if there's one bug in any of those ports that have been opened, you've got a problem. The way things are done today, and it's probably the way that you have your home set up: I have a gateway, and the gateway is the boundary between my personal policy and the policy of the network infrastructure that I connect to. So I think that what we're talking about with 5G and the edge devices is, in some sense, moving some functionality and some of the role of securing things into the public network. And it's not quite the same thing, excuse me, as the model where we're kind of hierarchical and there's a federated structure. I mean, I think the mobile network operators are offering some extremely exciting capabilities, but the reason that we're investing in edge devices is that what we would like is for there to be no cost difference, so that people will not buy cheap and insecure and then later discover that they didn't want to buy cheap and insecure. Rather, the idea is that there will be secure infrastructure, and there will be vendors who will be able to sell it at a price competitive with junky, insecure stuff. That's cool, and I think there are a couple of questions related to that, which is: how do we expect global players to contribute to the effort, in terms of the projects or, more importantly, the upstream and downstream? And the one thing I would highlight personally, having kind of worked through this, is that we can view this project as utilizing the upstream open source projects within LF Networking, say ONAP, et cetera, LF Edge, like Akraino, et cetera, or even O-RAN and Kubernetes, or the many projects that are underneath them. 
What we are going to see is an end user like DARPA and the government contributing back any research or any findings from an open source perspective, and upstreaming it with the help of the community. So there is actual collaboration, and this is open source 101: always push stuff upstream; don't fork, don't splinter it off. And I think that's how we are setting this up, which is clean, and it's always done, in the open community, open source world, through the commercial ecosystem that exists as part of the Linux Foundation. So I'm just trying to paraphrase the question and kind of answer with the setup that we have at the Linux Foundation as well. So any other thoughts, JMS, if you want to add on that? Well, I think that what you're going to get is a lot of evolution, and this will affect the global community. You're going to get evolution in the applications that people want to use these devices for. You know, if we roll back the clock many, many years, the vision was, say, to interconnect supercomputers with the Internet, but in reality, most of the bytes that are moving on the Internet today are email, social networking data, et cetera. And so I don't think we'll be able to fully envision what will happen, but I see, for example, that you could imagine the following. Let's say you're on a curvy road in the Alps. You could conceivably have cameras placed ahead of you on the curve, and have your automobile reach out and give you a view around the curve, which today people might hack up with mirrors. But you could do that with an IoT device in 5G, because you have the capacity with the 5G offerings to do that. 
So, you know, this is just to take one crisp example of a safety application that might be considered useful public infrastructure, but it has to be secure, because what you don't want is somebody monkeying with this and, for example, showing nothing there when you have a double semi-tractor-trailer coming around the curve. So that's just an example of how we think security helps everyone. There seem to be people who like to monkey around with systems that are trying to help people, and we're trying to stop that. I think that's a great statement to wrap this up; very, very insightful, and I know we could go on for days, but thank you, JMS. Sure, absolutely. Pleasure to work with you guys. Thank you, thank you. All right. Okay, with that insight, I think we move to another enterprise end user. And for those of you who may not know Walmart — well, then I don't think you should be on the call. No, sorry. But we have, from the Office of the CTO, the Senior Director of Technology and Commercialization, Subhadra Tatavarti, and she is actually the brains behind figuring out how to lead with technology, how to lead with open technology, in a specific enterprise vertical, right? In this case, retail. But also how that can be utilized across a lot more, and what they are doing to kind of step up the game here. So with that, let me welcome Subhadra to the call. Thank you so much, Arpit. And thank you so much for the warm introduction, as well as the opportunity to make our first, I think, presentation here in this absolutely fantastic organization and forum. As Arpit said, almost everyone knows Walmart. We are the largest retailer. Traditionally, our businesses have operated from stores and clubs, and more recently in e-commerce, or online. 
And when Arpit and team asked me to do a presentation here on what open source's position is within Walmart, as well as how Walmart can contribute back to the community, viewed through the lens of what has happened in the last year due to the pandemic and everything else, I thought this was a fantastic opportunity to illustrate some of the challenges Walmart has had to face in the last year, how we were able to use the open source technology available in the community to scale our business, some of the not-so-well-known contributions back to the community in the recent past, and what we plan to do in the next couple of years in contributing back to the community and forming an extremely symbiotic, fruitful relationship. Next slide, please. So this is just a quick illustration of the business impact. We all have felt a tremendous impact; I absolutely resonated when Priyanka was talking about the changes that the pandemic brought about, not just in our daily lives but also in how businesses work. Just to give you an illustration of scale: we service 222 million customers omnichannel, across 24 countries. It's an interesting tidbit that when everybody else was running out of toilet paper, we were able to actually provide that toilet paper. That was quite interesting, and I thought I should just put it in here. Traditionally, the percentage of Walmart transactions was primarily in stores and in clubs, and not necessarily as much, in percentage terms, online. What COVID has done is completely flip that for us. We've seen up to a 76% increase in e-commerce traffic. We also observed a significant change in customer behavior patterns. First, more and more customers were coming to Walmart because of our EDLP, everyday-low-price, strategy. Second, we observed that more and more customers were buying online but picking up in store. 
So technologies like BOPIS, buy online, pick up in store, and check-in not only had to be developed fast, but also had to scale really fast. We also understood that hyper-personalization, although it was already happening, accelerated during the COVID times. Given the changing customer patterns, what we observed was that stores, instead of being product discovery and transaction centers, were morphing into fulfillment centers. So the advent of new technology, ensuring the latency requirements to support our associates who are fulfilling these orders, how the supply chain works, what kind of technology needs to be hosted where: all of those became more and more important problems for us to solve during the last year. And most importantly, rising volumes were fantastic for our business, but given the retail business and our operating margins, operational efficiencies became paramount for us. So how did this impact technology? We had to absolutely modernize our tech stack. We were already in that process; 2018-19 was a big year for us in modernizing our tech stack. We are increasingly moving our workloads to cloud. We now have our analytics workload completely on cloud, with 1.7 petabytes of data residing there. But we also still have a significantly large on-prem presence. So we operate in a hybrid cloud environment: we have a presence on Azure and Google, as well as on-prem in our own data centers. Not only that, we also had to figure out how to scale our fleet up and down seamlessly in a way that had minimal impact on availability, reliability, and security. As part of this whole process of modernizing the tech stack and making sure that our fleet was available for serving traffic, et cetera: the very first tenet, as we all know, of a highly available system is observability. And observability became an even bigger principle in how we design our systems, how we restructure and re-architect our fleet, and how we make them cloud native. 
Observability at every layer of our stack. And as I said, stores were morphing into fulfillment centers from a business perspective. In the past, as an example, we had three nodes in store to deploy business continuity applications for very critical applications like cloud-powered checkout. Checkout in store, as an example, was absolutely critical for us. However, as more and more technology was being used by us to address the changing customer patterns, our edge presence in store, and how we think about an edge platform for Walmart, started to change as well. And just to take a step back: over the last three to four years, and I think Walmart is very proud of this, it's a little-known fact, we have been one of the extremely early adopters of AI and ML to drive operational efficiencies within the retail sector, and we have had a tremendous amount of return in terms of efficiencies due to these measures. We will continue to invest in AI and ML for the foreseeable future, and COVID has definitely accelerated that investment. Next slide, please. So we just talked about the business impact of COVID. How did that impact our technology stack? What did we have to do? Again, I joined Walmart a couple of months ago, and it always amazes me that it would be almost impossible to run such an extremely large, efficient operation without the latest and greatest technology. We don't talk very much about it outside Walmart, but we are huge consumers of open source technology, everything from Node.js to OpenStack. On the left-hand side, what you're seeing is just a few of the open source technologies that we currently use, which have helped us do exactly what we talked about in the previous slide: ensuring our tech stack is able to scale seamlessly and work for future use cases. 
On the right side, which again Walmart is not very well known for, and hopefully with this collaboration with the Linux Foundation we will be a bit more out there, is what we have actively open sourced. These are the top three projects from the last year and a half. Electrode is our Node.js framework. OneOps is essentially our cloud deployment and workload management platform, primarily designed earlier for VMs; but again, as I mentioned, we operate in a multi-cloud environment, and our store nodes are also managed through OneOps. The next iteration of OneOps is actually Kubernetes-based, what we call WCNP, but essentially it's another cloud and workload management and deployment platform that manages workloads regardless of where they sit, whether on-prem, off-prem, edge, et cetera. And then Concord is our CI/CD platform. So what I want to illustrate here is that, although Walmart is better known as a retail giant, what has powered Walmart's business and operations has been, one, technology; two, investments in infrastructure and innovations that have helped scale up our technology stack; and three, I wouldn't say collaboration as much as usage right now, but in the future, collaboration with the open source community to help us attain that scale. Next slide, please. So bringing it down to the topic of today, right, the edge, cloud, and networking area: some of the open source technology that we've used to solve the problems we talked about is Envoy and Istio. Istio was opt-in last year, and now we're moving to opt-out, so it's across all applications. 
So it is becoming even more important for us to be able to understand what services we operate, to have a fantastic topology of everything that we run, regardless of where we run it, and to be able to then effectively manage it and get the observability we need. eBPF is another significant player in what we have been able to do on the networking side. In fact, we are in the early phase of figuring out how to give back what we have done with eBPF to the developer community. One example of what we were able to achieve through our eBPF solution is that we were able to control the connection limit as a feature set, that is, the number of concurrent connections on the Envoy proxy node. During Christmas time, the shopping season, we had an event, and we were able to control the number of connections that were coming in, manage them very effectively, and still maintain our customer experience without taking down the entire site. So we will hopefully be talking a little bit more in the next few weeks about what the eBPF solution is and how we are planning to open source it through the Linux Foundation. And of course, we are active members of CNCF, and we've had other open source technology that we've used in the past and will continue to use in the future as well. Again, next slide, please. So my talk is fairly short; I was given 15 minutes, so I thought 10 minutes for speaking, but very quickly: Arpit asked me, what would be Walmart's call-out to the open source community? Again, this is the stance we are taking. In the last three to four years, we have made tremendous investments, not just in using open source technology but also in giving back to the community. We hope that we are able to form collaborative partnerships across this community with the Linux Foundation's help. There are many; again, WCNP, our OneOps workload management and deployment software. 
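The connection-limiting idea described here, cap concurrent connections per proxy node and shed the overflow while the site stays up, can be sketched in userspace. This is an illustrative model of the behavior, not Walmart's eBPF code; in their system the equivalent admit/drop decision would run in the kernel, and the class and limit here are assumptions:

```python
class ConnectionLimiter:
    """Toy model of a per-node concurrent-connection cap."""

    def __init__(self, max_concurrent: int):
        self.max_concurrent = max_concurrent
        self.active = 0  # connections currently admitted

    def on_connect(self) -> bool:
        # Admit the connection if under the cap, otherwise shed it.
        # (An in-kernel eBPF program would drop or reset the new flow here,
        # before it ever reaches the proxy.)
        if self.active >= self.max_concurrent:
            return False
        self.active += 1
        return True

    def on_close(self) -> None:
        self.active = max(0, self.active - 1)
```

The design point is that excess connections are rejected at admission rather than queued, so the connections already being served keep their customer experience intact during a surge.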
We plan to open source it as well. We plan to make sure that others can benefit from a multi-cloud deployment tool that we have greatly benefited from. It has increased developer productivity; deployment doesn't happen as painfully as it used to, especially since we are one of the very few companies that also have extremely important workloads running at the edge, in our stores. So this is a call-out to the community to effectively partner with us. Early adoption, again, is another call-out: with Walmart's scale and complexity, there are problems very few other companies will want to solve and then provide back to the community. So adopting some of our software early on will hopefully give you the benefits that we have reaped, and we can form a symbiotic relationship. And then, from a prioritization perspective, I hope that we are able to prioritize some of our technologies for all of us collectively to reap the benefits. So all in all, there are three call-outs. One is that we are making a statement here that we want to become even more active partners and collaborators in the open source community. The second is the many open source technologies that we've used in the past, whether it's Envoy and Istio for service mesh and other use cases, or eBPF-based technologies for network functions, deployment of network functions, or observability within our stack; as part of that whole ecosystem, we hope that we can become bigger partners and play a bigger role in the community going forward. So thank you. All right, excellent. I know we are a couple of minutes late, but I do want to take one question and give you a summary of five similar questions that have come in. Most of the questions relate to the decades-old question: is centralization good, or is distributed good, right? And specifically in the context of edge. 
So the question really surrounds the Walmart strategy: whether it's because of COVID or just online or whatever, what do you have to do to stay relevant with edge data centers, slash stores, slash distributed retail outlets, versus centralization, right? Yeah, so, that's an excellent question, by the way. In fact, that's exactly what my team and I are right now thinking through: what is our edge strategy? What kind of workloads should be on our edge? What should be our edge platform as a service, as an example, whether it's internal or external? What should we do there? I'll first talk about overall edge use cases, and then I can talk a little more specifically about Walmart's edge use cases. The workloads that overall feel like they can be deployed on the edge are primarily the ones that require low latency and high compute, right? Extremely data-driven. We have seen workloads like AI and AR/VR workloads for which we need a fairly significant and robust edge platform. We are investing heavily in security that is driven by cameras, so processing images extremely fast and being able to do AI- and ML-driven inferencing. Anything where you have to inference very quickly, and where latency is affected, is going to be a challenge for us, so those workloads will be on the edge. We already have GPU nodes in every store to be able to process those use cases: security-related use cases, checkout and payment systems, payment gateways. We are introducing self-checkout; we are introducing a seamless shopping experience within stores as well. This is again one of our COVID learnings. So a combination of security and checkout will be very critical, where latency is critical for us to reduce fraud and other things. 
We are also experimenting with drones, and with anything autonomous an application can be chatty if the architecture is not right; but most importantly, what we see is that a significant amount of data offloading is required for autonomous devices, whether it's drones, vehicles within Walmart's fleet, or even the robotics used within Walmart's infrastructure and space. We have seen those use cases where a significant amount of data offloading and computation is required. So those are some use cases where we feel edge will become extremely critical. Oh, my God. Yeah. This is very much in line with what I think the community is thinking. So very good. Thank you very much for giving your insights here at the summit. So thanks a lot. Thank you. All right. And then, you know, I would say the next presentation requires no introduction, right? Andre Fuetsch and AT&T are the poster child of open source thought leadership. They have been leading the global ecosystem on how to think open, how to change CapEx and OpEx, radicalizing SDN, NFV, automation, and kind of showing the rest of us the way. If I go on introducing Andre, I think I'll get some negative points, because, you know, that's not a good thing, and everybody knows Andre. So take it on. Thank you. It's great to be here, and thanks, everyone, for tuning in. 
This is hopefully a much brighter, more uplifting year we're heading into compared to last year. Certainly, with the pandemic and everything we experienced last year, it undoubtedly changed the world. And I will say the one thing that hasn't changed here at AT&T is our commitment to keeping our customers connected. The work we do at AT&T is really critical to millions of people, businesses, and first responders, not just throughout the United States but throughout the world. And I'm really proud to say that our network has withstood this test, and I can't say enough about not just the amazing technology that delivered that, but also the amazing people behind that technology. While we didn't see COVID coming, we've done a tremendous amount of building and investing in our network over the past several years that has really paid dividends during this crisis, and I'm going to share a bit about that shortly. If you look at the investment we've made, we've put well over 135 billion dollars over the past five years into our United States infrastructure to build a very robust network, with self-healing architectures and open standards, that really helped us get ready for this arguably unforeseen moment in history that we have all been experiencing. So if we go to the next slide, let me show you some data behind what I'm talking about. When it comes to our network, even without COVID we were carrying more data than ever before. In fact, our global network carries more than 450 petabytes of data traffic on an average day; that's quite a bit of traffic. 
You can see on this chart on the left, and coincidentally it was exactly one year ago to this day, March 10th, 2020, when we began to see the first major traffic surge that you could argue was due to the COVID stay-at-home mandates being issued across the country, not to mention across the world. And you can quickly see the surge, the jump we saw across our backbone network, as depicted in this graph. We saw over a 20 percent increase in traffic in just three weeks compared to pre-pandemic figures, which is pretty astounding, and as you can see, the growth really has not slowed. If you pay attention to the right-hand side of that graph, you can also see a late surge towards the end of the year, and I get a lot of questions like, what happened there? Well, it may be a distant memory, or maybe not, but we had a lot going on in the last few months of the year. Of course we had the elections going on, so we saw a lot of traffic going across the network in terms of the social media networks, a lot of uploading and downloading of videos. We also saw a tremendous surge in 4K traffic, that's 4K video streaming, and this really corresponds to the strong 4K TV sales that were reported in the last quarter. As you probably know, with everyone being home, a lot of folks out there decided to upgrade their display sets, and of course, if you're going to upgrade, you want to go with the latest, so 4K has certainly shown up in the statistics. We also saw a great increase in iCloud traffic. We all know Apple launched their latest and greatest iPhone, and that has been a very popular product, and as people have been syncing up their pictures and videos, we saw a lot of upstream traffic hit the network as well. So that accounts for some of the components in that big surge you see towards the end of the year. 
I want to share a couple of other interesting data points that we saw throughout the year. Early on in the pandemic, customers twice set the record for text messaging: during the March spring break last year, and on Easter weekend, when people were at one point sending more than 23,000 text messages per second across the AT&T network. If you look at the previous peak, pre-pandemic, that was about 15,000 texts per second, so that's about a 53 percent increase compared to pre-pandemic levels. Also, believe it or not, voice became popular again. Wireless voice usage soared; we saw over a 40 percent increase as people began to work from home and used their mobile devices to attend meetings and conference calls, to connect in and stay productive. What's interesting on mobile data, and maybe this is surprising to some, is that mobile data volumes actually slightly decreased during the early part of COVID last year, since many people were connecting their smartphones to their Wi-Fi broadband networks at home, and that accounts for why we would see a decrease on the data side of the cellular network. Now, I also want to talk a bit about our first responders, and this is a really important part of AT&T's network. We built a whole separate network dedicated to first responders; we call it FirstNet. And thanks to unique benefits like dedicated connectivity when it's necessary, always-on priority and preemption, and a very high-quality dedicated spectrum, what we call Band 14, FirstNet is one of the fastest commercial networks out there. We have over two million connections on the FirstNet network, and we're seeing some interesting behaviors and statistics there, where first responders consume more than twice as much data as our general consumers, reinforcing the need for and importance of having a network purpose-built for public safety. 
Our AT&T network was able to withstand this unprecedented new normal of demand, as you can see in that chart, and that was really thanks to the investments that were made and a lot of behind-the-scenes work that we put into these open initiatives over the past decade, so I'm going to talk a little bit more on the next slide about what we've done there. A lot of this started with a program about seven years ago, when we set out to virtualize and software-enable the majority of our network, and this is really the foundation of everything we do. Part of this important foundation is open source technology, and the advances we've made in cloud native virtualization, containerization, and hardware and software disaggregation have really allowed us to reach our software-based networking goal of 75 percent. This was a goal we pledged many years ago to hit by the end of 2020, and I'm proud to say that we not only hit that goal but surpassed it; right now we're well over 77 percent. So a lot of great work, and kudos to the many, many great people who were behind making that happen, and especially to the many open source software projects and communities that have helped us get there. 
Having a much more software-centric network really enables us to respond rapidly to any new demand on the network, even demand caused by pandemics. Just a couple of quick examples here. I mentioned voice becoming popular: voice over Wi-Fi was 100 percent virtualized prior to COVID-19, and that's really thanks to the great work we did to roll out and deploy our virtualized evolved packet data gateways, a network component that we use internally. These functions run on OpenStack on our internal Network Cloud, and we use ONAP for orchestration and data analytics. In addition, our engineering and operations teams use quite a large suite of open source tools to help monitor and manage what's running in production. During the first few months of the pandemic, Wi-Fi calling averaged 90 percent higher than pre-COVID levels, and not surprisingly, the average call duration was over 75 percent longer than pre-COVID, and that network worked flawlessly. Again, that was because of this great foundation built on a lot of good work, open standards, and open source. Another great example is what we've done in our SD-WAN service space. This is a global service we offer to our enterprise customers; think of it as enterprise VPNs for employees to access their private corporate networks from anywhere they might be, on any type of ordinary broadband access. As most of us saw early on in COVID, and even still today, home access to these private corporate networks is absolutely essential. There was a dramatic increase in usage of these services as COVID-19 spread across the world; in fact, we saw a 16-times demand surge compared to the average usage in 2019. The virtualization of the service allowed us to dynamically increase our capacity to meet the surging demand in weeks, so again, a great testament to a lot of the software behind this great service. 
Also, mobile messaging: as I mentioned earlier, here at AT&T we experienced three years of text messaging growth in just one week, and that was something we'd never seen before. We fortunately had moved this messaging traffic from our legacy infrastructure to our cloud-based infrastructure, again our internal Network Cloud, and that helped us serve this demand surge: our software-enabled network allowed us to add more than 60 percent capacity to our messaging platform in just one day. So as you can see from these examples, this is really the power of having an open, automated, software-centric network: it enables a much more agile, dynamic network that can elastically scale to meet whatever demands and needs we throw at it. If we go on to the next slide, my last slide, I want to share just a few of the many open source projects we've been participating in with the community, and highlight a couple of areas that have really helped position our network to deliver some of the examples I just walked you through. First, the Open Compute Project: we previously submitted designs for several open disaggregated routing platforms to OCP, and since then we've been deploying our next-gen IP/MPLS core routing platform into our production network based on this open hardware reference design spec. We also brought in some very new, innovative, disruptive suppliers such as DriveNets, who are providing the network operating system software for this core use case, and it's in production today and working quite well. 
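The "elastically scale" idea above can be sketched with the proportional scaling rule that horizontal autoscalers use. This is a toy illustration, not AT&T's actual scaling logic, and the load and capacity numbers are made up:

```python
import math

def desired_replicas(total_load: float, capacity_per_replica: float) -> int:
    """Proportional scaling rule, similar in spirit to Kubernetes'
    Horizontal Pod Autoscaler: run just enough replicas to cover load."""
    return max(1, math.ceil(total_load / capacity_per_replica))

# Hypothetical numbers: a messaging platform sized at 10,000 msg/sec per
# replica sees offered load jump from 100k to 160k msg/sec overnight.
print(desired_replicas(100_000, 10_000))  # -> 10 replicas before the surge
print(desired_replicas(160_000, 10_000))  # -> 16 replicas after
```

With a software-defined platform, going from 10 to 16 replicas is a control-plane decision rather than a hardware procurement cycle, which is what makes same-day capacity additions possible.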
We also followed up with our first AT&T IP edge use case, a peering use case, which is also running in our production network today. It's actually running on the exact same hardware as our core, however we chose a different software layer, a different NOS; in this case we chose Cisco and their IOS XR network operating system to provide the management and control functions for AT&T's IP edge network. This is really the start of a journey to converge the disparate edge implementations we have today onto a more common software and hardware platform that delivers uniformity, simplification, and agility, and this is what's really going to power our network for the next decade as we see continued growth, what I call the tsunami of demand that hits our network each and every day. Another really important project we've launched is Airship. AT&T's 5G cloud infrastructure is really enabled by Airship and the Cloud Native Computing Foundation (CNCF). AT&T contributed Airship, a purpose-built, high-performance network cloud infrastructure that integrates about 14 different CNCF projects, and AT&T's Network Cloud uses Airship in production today to enable faster deployments, much greater scale, and 100 percent consistency, ensuring the network operates as we expect and is secured as we need. I'm proud to say that Airship is now a certified Kubernetes distribution under the CNCF conformance program. This provides lots of key benefits, to name a few: consistency, to simplify interactions for users; timely updates, so we always have the latest features the community has been working on; and conformability, so that any user can confirm that their distribution or platform remains conformant by running the identical open source conformance applications that were used to certify it in the first place. So we're 
really happy with how that's working. Another project, not really a new project but a convergence of two projects, just renamed Anuket: I'm very excited about the merging of CNTT and OPNFV, which have come together into this single entity. The merger builds on the rich development history we've had with OPNFV and the substantial specification management from CNTT, and we hope this move will empower the global communications community to better bring together the various reference cloud infrastructure models and architectures, allowing us to deploy much faster, be much more reliable, and of course be much more secure. A really great collaboration; we're excited to be part of it and looking forward to lots of new advances to come. Of course, I can't talk about open source without talking about the RAN, the radio access network, and in the RAN we're all about O-RAN. The O-RAN Alliance, and specifically the O-RAN Alliance together with the O-RAN Software Community, are two important and very complementary open initiatives. The goal of O-RAN is to enable new participants to come into the RAN space by disaggregating the RAN with open interfaces. The O-RAN Software Community continues to grow; we just completed our Cherry release late last year, and we're going to be moving the O-RAN ecosystem closer to commercial deployments around the globe. The number of individual contributors and software commits has grown significantly over the past year, and we're really proud of that. The latest release also creates a new software project, service management and orchestration (SMO), and continues to build an open interface to drive development of auto-configuration and management of O-RAN elements. Now, to show how far O-RAN has progressed 
towards practical implementation, AT&T along with Nokia recently completed a proof of concept demonstrating a full end-to-end layer 3 call over a fully virtualized cloud RAN, and this successful trial represents a key milestone for us in proving out the maturity and capability of a truly open architecture, so we're really excited about this. And then last but not least, I have to talk a bit about ONAP. There's a lot going on there, but specifically I want to mention the alignment of ONAP with O-RAN. ONAP does many different things, but one particular area is how it is helping us with our 5G and O-RAN initiatives: ONAP is positioned to serve as an open source implementation supporting, as I mentioned, the O-RAN service management and orchestration functionality and also the non-real-time RAN intelligent controller that we've been working on. As we all know, data is the lifeblood of a network; if you want to drive a really intelligent, agile network, you've got to have lots of data and a lot of capabilities. Of course we're looking at AI and machine learning implementations to drive better orchestration and control, and ONAP is a great platform for us to pull that together and start supporting it. There are many other open source projects we're engaged in, and I don't have time to mention them all, but I do want to thank all of the open source communities. I want to give shout-outs to Acumos, Akraino, and DANOS, and thank everyone for all your contributions, and I definitely want to thank all of you for your help and support in building these projects and making them what they are today. So with that, Arpit, I'll turn it back to you. 
Yeah, no, this is awesome, Andre; every time you give us insights it's just out of this world. And the very fact that exactly a year ago, March 10th, is when you got the spike; it's just been a year. Fantastic. Did you plan that or something? The one-year anniversary of the AT&T spike. No, not at all. There are so many questions; I'm going to have time for a couple that are very relevant. The first one: how is AT&T aligning org changes, in terms of skill sets and people, given the software-defined world? The second question is, do we expect network traffic patterns caused by COVID to go back to quote-unquote normal, or be any different? Those are two important questions for people to understand. Yeah, let me take the second question first, and I'll give you kind of a sneak peek. You saw the chart for 2020, and you're probably wondering, well, gee, what's happened in the last couple of months? I'll just say that the curve just keeps going up, and that's probably not a surprise, right? If you think about how connected our lives are, there are recent surveys where, when you interview people, they'll actually say that having internet connectivity ranks right up there with electricity and indoor plumbing, and in some cases, when you ask the younger demographic, they would rather give up indoor plumbing than give up their internet. So I think we can count on demand: look at everything in your house, what you wear, wearables, appliances; I've even seen connected vacuum cleaners now. Demand is just going up and up. What's also interesting, a new phenomenon we're seeing, is uplink. Typically the asymmetric nature of traffic, downlink to uplink, has traditionally been about 10 
to 1. We see that changing more and more, where uplink is more and more relevant, and that shouldn't be surprising, right? As our lives become more connected, it's not just us uploading more TikTok videos or YouTube videos, though that's certainly growing; it's also all the connected machines in our lives that are uploading their data, so we will see continued growth there. As far as the first question goes, organizationally, what we've done is we've taken a lot of the knowledge, people, tools, and systems within AT&T that were incubated in this more central area within the network, and we've distributed that out more and more into the business units. With these tools, we've also enabled our business clients within the company to take advantage of them much more within their own communities, as opposed to having to contract, if you will, with a very technical person and try to describe their business problems and work out solutions. What it's all about is enabling and making that data available, but also making available the capabilities and tool sets that utilize that data. So at a high level, you see a distribution of that talent going into all areas of the business. Very good, very good, excellent. I know we are a couple of minutes over the hour; there are a few other questions we'll take offline. There are a couple of questions on the same thread, on the geopolitical nature of open source collaboration, so let me handle that. For open source and the Linux Foundation in general, we have put out several advisory notices, but we are a global community, and if you don't believe that, join us tomorrow; we have several of our leaders from Asia talking about innovation and how open source builds on each other. There are no country, government, or people boundaries when it comes to open 
source and collaboration; that's what makes it really thrive. I know some of the questions were in that domain, but we're over time. Andre, as usual, very insightful; thank you very much for doing this. And for everybody on the call, we are at the end of today; we'll see you tomorrow afternoon. Please log on for even more keynotes. Thank you, Andre. Thank you.