So we're going to have some lightning talks here with updates on a bunch of our projects: blockchain technology, networking, JavaScript, you name it. We have Brian Behlendorf from our Hyperledger project, Bill Snow from the Open Networking Foundation, Chris Aniszczyk from the Open Container Initiative, Mikeal Rogers from Node.js, Heather Kirksey from OPNFV, and Kris Borchers from the JS Foundation. They'll be coming up and giving a rapid set of lightning talks and updates on each of these projects. So I'd like to first introduce Brian Behlendorf: one of the lead developers of the Apache web server, one of the founders of the ASF, and former CTO of the World Economic Forum. Just check his Wikipedia page. Sorry. Mic drop on my behalf.

Cool. Well, y'all heard a bunch yesterday about Hyperledger and what we're building, especially if you came to my talk. So I thought it'd be a little redundant if I just gave you another update. Most of you are still probably confused, or at least asking a really honest and genuine question: why would you use this when you could just scale up with a single big database? So what I thought I'd do, in the brief time that I have, is try to get through five real-world apps in five minutes that hopefully give you a flavor of why this is different. Many of you have probably heard about the diamond industry example, right? The diamond industry is centralized in some ways: there's a big player called De Beers that runs a lot of it. But it's decentralized in others: there are a lot of retail shops, a lot of suppliers, and an awful lot of mines. One of the problems is that some of those mines are in countries that have very poor human rights records, so the industry wants to keep those stones, the conflict diamonds, from entering the market. And a lot of times diamonds are used as a way to funnel money for real estate activity, all of that.
That industry has decided to get together and take what is today an existing, centralized process for tracking the flow of diamonds, which is not very transparent and is centralized in a nonprofit in Brussels, and move it into a shared database. Now, why can't that one nonprofit in Brussels just scale up with a single big database? Because people don't trust each other in this market. People want to be able to record transactions, they want to be able to see the data authentically, and they don't trust that one org will be able to keep the truth clear for everyone else. This is already in pilot today, running on Hyperledger Fabric, and they've already caught millions of dollars in fraud in the form of diamonds that don't quite match up: diamonds out of this node do not equal diamonds in, right? And when they move it into production, this will essentially be the system of record for who owns what diamonds out in the world.

The second one, shifting immediately from that but kind of the same thing: pork in China. Walmart, a couple of other companies, and a lot of suppliers are getting together to implement a tracing system for the supply chain around pork and other meat products going from farms in China to retail networks around the world. The rationale here is obviously things like looking for food safety issues and potential fraud. But importantly, even Walmart isn't big enough to tell the entire supply chain, "Just come onto our database. We'll set up one big database where we can track all of you and keep everything straight." No, even they don't have that market pressure, because all those suppliers sell to other parties. Instead they walk in and say, let's collectively get together on one big chain. And so Walmart, the other companies that are part of that chain, and other retail outlets will now have, across the industry, a clean system to do this.
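As an aside, the "diamonds out must equal diamonds in" check above is easy to picture in code. Here is a minimal sketch of the two ideas a shared ledger combines: a hash-linked log that every participant can re-verify independently, and a conservation check over that log. All names and data shapes here are hypothetical illustrations, not Hyperledger Fabric APIs.

```typescript
import { createHash } from "crypto";

interface Transfer {
  diamondId: string;
  from: string; // "MINE" marks a newly registered stone
  to: string;
}

interface Block {
  transfers: Transfer[];
  prevHash: string;
  hash: string;
}

function hashBlock(transfers: Transfer[], prevHash: string): string {
  return createHash("sha256")
    .update(JSON.stringify(transfers) + prevHash)
    .digest("hex");
}

// Append-only: each block's hash covers its contents plus the previous hash.
function appendBlock(chain: Block[], transfers: Transfer[]): Block[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  return [...chain, { transfers, prevHash, hash: hashBlock(transfers, prevHash) }];
}

// Any participant can re-derive every hash; a tampered block breaks the links.
function verifyChain(chain: Block[]): boolean {
  let prev = "genesis";
  for (const b of chain) {
    if (b.prevHash !== prev || b.hash !== hashBlock(b.transfers, b.prevHash)) return false;
    prev = b.hash;
  }
  return true;
}

// Conservation check: a party can only send diamonds it has actually received.
function findFraud(chain: Block[]): string[] {
  const held = new Set<string>(); // "owner:diamondId" pairs
  const suspicious: string[] = [];
  for (const b of chain)
    for (const t of b.transfers) {
      const key = `${t.from}:${t.diamondId}`;
      if (t.from !== "MINE" && !held.has(key)) suspicious.push(t.diamondId);
      held.delete(key);
      held.add(`${t.to}:${t.diamondId}`);
    }
  return suspicious;
}
```

The point of the hash chain is that no single operator, not even the nonprofit in Brussels, can quietly rewrite a past transfer: any edit changes the hashes, and every other participant's copy of the log catches it.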
It has to start with some minimum viable threshold of participants, but once you get it going, people want it to be decentralized. Another one, and this is a bit more nascent: there's a lot of interest now in the health care industry. One of the places where this really could have an impact is in public data sets such as provider directories. Who's your doctor? What are their certifications? What is the evidence that they've recently been recertified in their domain of expertise? How do you trust, when they have a certificate on their wall, that they actually went to a certain medical school? So a group of companies are getting together to build provider directories on top of a blockchain that will be publicly visible and to which certain entities will be able to write: the entities that issue these certifications, as well as the government agencies that track a lot of this. This will be a public database. It's being built on Hyperledger as well and is really interesting to track.

All three of those have been non-financial use cases, but certainly the financial world is interested in this, because all over the place they have these transaction networks. The best quote, if you want a pithy one-liner: "What the Internet was to information, blockchain technology will be to transactions." That was Ginni Rometty from IBM. This is a good example of that: CLS Bank is one of those secret world-government kinds of organizations you never hear about. They handle almost all of the foreign exchange transactions between the major national banks of the world. So when the Bank of China wants to change some of the euros that it holds into dollars, it routes a lot of those transactions through CLS Bank. CLS Bank processes $5 trillion a day of nominal value of these assets, right?
And so they have a system today that is one big central database that's getting harder for people to trust. They are turning that into what they're calling a settlement netting process, where each of those different banks will now be a node on a consortium chain to make this work. And in the last 10 seconds: there's a company in China that is now setting up what's called an energy blockchain, basically for all sorts of distributed energy generation activities, renewable energy, that sort of thing, as a way to flatten that out so that not everybody has to trust the central power utility to be clean and consistent about who gets compensated for what when it comes to energy generation. All these use cases are still emergent, and it's still early days in this space, but they're all being built on Hyperledger. And if you want to join: hyperledger.org. Thanks.

Hi, I'm Bill Snow. I'm very thankful to Chris Rice, who this morning set up the presentation for me and gave you the background on networking. Telecom and networking in general are in a big sea change. The industry is going through commoditization, much as the personal computer did many years ago. And networks do not support the workloads that we have now, and the workloads that are predicted in the future, with mobility, with the amount of video, just won't be supported. These networks have been built from vendor-proprietary, vertically integrated boxes, and now they're moving toward commoditization: the hardware is commoditized, and you can go to the Open Compute Project and see many of the hardware efforts there. The software is being replaced through open source projects and platforms. So there's a lot happening in this industry. It started 10 years ago with a concept called software-defined networking, which has really matured a lot now. You bring software-defined networking together with network functions virtualization and the ability to scale through the cloud.
Then you can really start building some very interesting networks that will support these future capabilities. The Open Networking Foundation is one of the nonprofit organizations that was started out of the team that invented SDN. The Open Networking Lab, which I work for, is another one, and what we've been responsible for is the open source platforms. We have two Linux Foundation collaborative projects here: one is ONOS, one is CORD. What I'm going to talk about today is the new ONF Open Innovation Pipeline. Sorry to use the word innovation, Linus. But it's how we have worked with this disaggregation to bring solutions forward. First, just a little more background. The Open Networking Foundation was founded to standardize a protocol called OpenFlow, a very important part of this, and to evangelize SDN and get support for it in the industry. And they've done a great job with that: 110 member company organizations and a lot of accomplishments. But there are still challenges. In the open source era, it's really not enough to have standards driving innovation, and in fact there's a lot of very interesting change going on there now. So there's also been limited success on the open source side of the Open Networking Foundation. Also, we need solutions. Once we disaggregate all these pieces, it's hard to put them back together again; there needs to be some way of doing that. There's been great progress in separating the forwarding and control parts of these systems, in disaggregating these networking devices, in bringing out many different types of virtual network functions, and in many open source projects. And again, we have two of them in the Linux Foundation. But there are of course a lot of challenges. Broad adoption has been slow. There's a lot of complexity now with all of the different platforms. There's a lot of uncertainty and confusion with so much change happening.
And there's a lack of expertise and a limited talent pool, especially when you look at operators. And so the Open Networking Foundation is merging with the Open Networking Lab so that we can bring together the best of the open source projects and the best of the working-group standards activities, to create a new value chain in which everyone in the ecosystem can participate. This value chain we call the Open Innovation Pipeline. It takes the pieces from the data plane, up through the control plane, up through the programmable platforms, out to integration and service creation. You need all of these pieces to be able to bring a solution to an operator. And where these software-defined standards come into play is in the interfaces of these components, allowing the components to be swapped in and out. So that's the concept. But it's also the way that we've been working inside ON.Lab, and I'm going to give you one example of what we're doing for mobility, because 5G is just around the corner. Hopefully we're all going to have very, very interesting applications on our phones soon with 5G. It starts with a programmable data plane, adds a control plane with the ONOS project, and adds a solution platform with mobile CORD. Mobile CORD brings in a software-defined radio network and a disaggregated set of boxes that used to form the core of the mobile network; many of those are now virtual functions which run on commodity Intel servers. All the way from the radio into the core, it's open source software building these mobile networks. And you can see many of the operators here are part of our efforts. So mobile CORD is just one example. As a company, you can come in and innovate at any part of this pipeline; there are other solutions around enterprise as well as residential. Vendors are welcome. I'm going to skip ahead to the last slide. We're 200 members strong.
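To make the earlier point about separating the forwarding and control parts concrete, here is a toy sketch of SDN-style match-action forwarding: a controller computes prioritized rules and installs them into a switch's flow table, and the data plane then just does table lookups per packet. This is an illustration of the concept only, not the OpenFlow wire protocol or ONOS APIs; all names and shapes are assumptions.

```typescript
interface FlowRule {
  priority: number;
  match: { dstIp?: string; tcpPort?: number }; // absent field = wildcard
  action: "forward:1" | "forward:2" | "drop";
}

class SwitchDataPlane {
  private table: FlowRule[] = [];

  // Called by the controller (control plane) over a southbound interface.
  installRule(rule: FlowRule): void {
    this.table.push(rule);
    this.table.sort((a, b) => b.priority - a.priority); // highest priority first
  }

  // Pure fast-path lookup: the first matching rule wins.
  handlePacket(pkt: { dstIp: string; tcpPort: number }): string {
    for (const r of this.table) {
      const ipOk = r.match.dstIp === undefined || r.match.dstIp === pkt.dstIp;
      const portOk = r.match.tcpPort === undefined || r.match.tcpPort === pkt.tcpPort;
      if (ipOk && portOk) return r.action;
    }
    return "drop"; // table-miss default (a real switch might punt to the controller)
  }
}
```

The value of the split is that all the policy lives in the controller's rule computation, while the device itself stays a dumb, fast, commoditized lookup engine.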
We are very deeply involved with the operators; we work very closely with more than 20 of them. And our whole goal is to bring better networks for the good of the public by working with them. So thank you very much.

All right. This is live. So I apologize in advance for my voice; I had a few days of skiing on the mountain, and my voice has given out. But let's chat a little bit about OCI. I'm sure many of you are aware that containers are a hot topic in industry. People are starting to use containers in earnest, in production, in all sorts of environments. Some of our sister foundations and the LF have done reports on this: Cloud Foundry has a container report from last June that showed more and more companies using containers in production, and the CNCF recently did a survey within its community that found that people are using containers even at large scale, which is great. But as those in this industry are aware, there's been a bit of fragmentation in the container space. There's a funny little tweet I like to show that's one of my favorites when it comes to describing the situation. The OCI was really formed to help alleviate this. If you ask for a quick description of what the OCI is: it's just a simple open source community building a vendor-neutral set of specifications around container runtimes and formats. That's it. That's all the OCI is. And if you step back, OCI is really delivering on the promise of what containers are supposed to be: portable across different stacks, different orchestration platforms, different clouds, and so on. So that's really all OCI is trying to accomplish. Who's involved in this effort? We have a great mix of companies. We have all the major cloud providers: AWS, Google, and so on. We have a good mix of startups that are doing really cool, innovative things in the container space.
And of course, companies like Facebook and Twitter, who have been using containers in production for a very long time. In terms of technical leadership, we have a good mix of folks from a variety of companies that have been doing really innovative things in the container space: folks from CoreOS, Docker, Google, Red Hat, and even the famed Greg K-H from the Linux kernel community are participating in the technical oversight for OCI. So what have we shipped? I was here last year giving a brief description when OCI was started. In a year, we've had over 3,000 commits across the different OCI specifications and projects, from over 100 people across over 30 organizations. So we've done a lot of great work in the last year. In terms of actual releases, just a couple of weeks ago we shipped 1.0 RC4, which is very, very close to being the final 1.0 release for OCI. Hopefully in the next couple of months I'll be happy to announce that we're finally there: we hit 1.0 and OCI is ready to go. Docker recently donated more code to the project, a wonderful digest library that we've added. So a lot of work has been happening in the last year in terms of getting closer to that 1.0 final for OCI and really delivering on the original promise of what containers were supposed to do. In terms of early adoption, basically any place you could actually run a container has already been adopting OCI technology in some fashion: Docker, with containerd, embeds OCI technology; Kubernetes has been doing work; even the Mesos community is picking things up. And just a couple of weeks ago, AWS announced that ECR is supporting OCI there. So it's been great to see that essentially any place that runs a container has been adopting OCI technology. To cap things off, since I have about a minute left.
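Since the digest library just came up: a core trick the OCI formats lean on is content addressing, where blobs (layers, configs) are referred to by a digest of their bytes, e.g. `sha256:<hex>`, so any runtime on any cloud can verify it pulled exactly the content the manifest named. Here is a rough sketch of the idea, with shapes loosely modeled on, but not copied from, the OCI image spec:

```typescript
import { createHash } from "crypto";

// OCI-style digest string: algorithm prefix plus hex of the blob's bytes.
function ociDigest(blob: Buffer): string {
  return "sha256:" + createHash("sha256").update(blob).digest("hex");
}

// A descriptor, loosely modeled on the shape the OCI image spec uses
// to point at a blob (illustrative, not the spec's full definition).
interface Descriptor {
  mediaType: string;
  digest: string;
  size: number;
}

function makeDescriptor(blob: Buffer, mediaType: string): Descriptor {
  return { mediaType, digest: ociDigest(blob), size: blob.length };
}

// Verification on pull: recompute the digest and compare.
function verifyBlob(blob: Buffer, desc: Descriptor): boolean {
  return blob.length === desc.size && ociDigest(blob) === desc.digest;
}
```

Because the name of a blob *is* its hash, a registry, a cache, or a mirror can never silently substitute different content without the consumer noticing.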
If you're interested in container standardization and want to get involved, and really want to help make this happen on your various cloud providers or platforms, join the community. All the meetings are public and open, and we'd love to have you involved. So thank you for your time, and look forward to the 1.0 release coming soon.

All right. Node in five minutes, because that's possible. So, okay. Oh, whoops, that is the wrong transition that I wanted. Sorry. Can we go back? Never mind. We're just going to move on. I don't have time for that. All right. Node is everywhere, literally everywhere. It's on your phone, on your laptop, and definitely in your browser toolchain. If you're a front-end developer, you're using a ton of Node tools to build out all of your front-end resources. If you're on the back end, Node is supported in every single cloud environment. Node is really driving a lot of the serverless stuff that's happening; every serverless offering has Node in it. Your desktop applications: if you're using Slack like everybody else here that wants to get rid of their battery, that's a Node app. Sorry. Visual Studio Code. It literally is everywhere. One of the reasons why it continues to take off is that Node and JavaScript are this universal platform and this universal skill set. We're continuing to see 100% year-on-year growth. We're now at about 7 million users, and we have about 400,000 packages that you can use and install from npm; that will be half a million shortly. It's really just a very impressive ecosystem. That said, I really want to get into what makes the Node project a little bit unique, especially in a room like this with so many great projects and so much great growth going on. Node.js is really one of the first big post-GitHub platforms. What I mean by that is that the first commits that ever went into Node happened on GitHub. The first releases happened on GitHub.
As this community grew and everything blew up, this community happened on GitHub. Over 99% of the modules in npm are on GitHub. All of our users are on GitHub. It's kind of one and the same. So as GitHub has really changed open source and made the open source landscape look so different, a lot of the new problems that it's caused are really our problems to deal with. There are now millions of active contributors every month on GitHub. That's great; that's a huge pool to be able to pull from. And the majority of those commits are from casual contributors. Not a majority of the people: the majority of the total commits on GitHub are from people that just casually contribute every once in a while to a couple of projects. So if you can't harness those people and bring that kind of effort into your project, you're probably not going to survive in this kind of environment. And we certainly experienced that at one point. Just to get into it real quick: the way that we look at sustainability is that we have our users, a subset of them are contributors, and a subset of those are committers. In our project, and in most large projects actually (Apache projects look like this too), you have users, contributors that send in patches, committers, and also a smaller group of people that make high-level decisions, so that you can grow out the committer pool for reviewing and whatnot. And you start to have sustainability problems when the gaps between these groups get too big: too many users putting too much demand on too few contributors, or too many pull requests coming in to the one or two committers on a project, or bottlenecking on high-order decisions because the one person responsible is unable to make them. This is certainly what happened with us. The traditional solution to this problem is to say, oh, well, we need more committers. Let's find people that look like a committer on our project.
People who fit that profile, and convince them to come into the project. But if you're a big enough project to have this problem, you're probably a fairly hard project to contribute to, just technically; you have a technical barrier to entry. And so the natural thing to do is to ask, how do I find more people that could possibly become committers and move them into the project? That's what Node.js tried to do for a very long time before the foundation, and it didn't work. It just didn't work in this new GitHub environment. The landscape was too competitive, and it was very, very hard to attract that many high-level people. So we abandoned that entire approach and went in the opposite direction. Instead, we look at every one of our users as a potential contributor. How do we convert them into a contributor at any level in the project, whether it's a one-line doc change or a test change or whatnot? How do we retain them as contributors over time? And then how do we eventually level them up, keep them around, and turn them into maintainers? So instead of treating this as a recruitment problem, we're not looking for high-level contributors; what we're trying to do is create high-level contributors. We've created a huge support system and a huge educational system, essentially, to level up really valuable people. We've seen people start with a one-line change to our website, go on to learn C++ to hack on the C++ guts of Node.js, and become a decision maker within about a year and a half. This approach has been tremendously good for us and really fits what GitHub has done to open source generally. You can really start to see this in our most recent contributor graphs. This is the number of unique contributors every month: as you can see, when we started the foundation, we started to see a natural incline as we moved to these policies.
We really started to see more people showing up in the project, and more companies getting familiar with the project as well and dedicating resources. We were making it fun and easier to contribute, so we saw a steady incline. But in the last three or four months we've seen a dramatic increase, and this is really the whole project internalizing the idea of focusing on the most vulnerable contributor: the contributor that is the most difficult and least likely to contribute to the project. How do we get them? Because if we can get them, the whole spectrum between them and a really hardcore contributor is easy; if we've done it for them, everybody else can contribute. And we've attracted people at every level of the project and every stage of the project. It's worked tremendously well. And I'm out of time. Thank you very much.

So good morning, everybody. I'm Heather Kirksey. I head up OPNFV, and I'm not using slides. So anyway, I'll just talk for a couple of minutes about the project and what we've been doing over the past couple of years. We had Chris Rice this morning, and Bill Snow also talked a little bit about what's been going on with networking. The traffic usage patterns are going up and the types of traffic are changing. Think about what you do these days: there's almost no time that you're not connected somehow to a network, whether that means you're getting your TV over the Internet, whether that's Facebooking, tweeting, posting your selfies, using the Snapchats. Everything you do is connected to the Internet, and the service providers are having to reimagine their entire networks from the bottom, where their hardware is, all the way into their back office: how they provision services, how they roll out services (sorry, I also have a little bit of a voice issue), and how they're going to approach services in general.
So that's a really large thing to try to tackle: reimagining the networks that everyone uses from scratch, and thinking about how to make that experience better for everyone who uses network-based services. Fortunately, there's a lot of stuff out there that already exists. OpenStack is out there. OpenDaylight has been out there for a while. We have new projects coming in, like ECOMP, which you heard Chris Rice talk about this morning. But that's still a lot of pieces to then put together and assemble into an end-to-end network. And what happens when you actually try to do something like instantiate a Layer 2 or Layer 3 VPN that's made up of various piece parts? Will that actually work, or have things been lost between the unit testing that all the individual projects are doing? So what OPNFV does is systems integration as an open source effort, which is really interesting in that you have an open source project that is mostly using code owned by other organizations. What we do is pull together what we call scenarios. We compose scenarios made up of particular individual components from particular upstream projects, and we deploy them and see how they work. If you deploy things in an IPv6-only environment, do they actually work? If you want to pull together various SDN controllers, various forwarding-plane options, and try various back ends for orchestration and management, can you actually get a functioning system that does what it's supposed to do, so the operators can actually do this network transformation? So we first compose scenarios, then we deploy them. One of the things you want as you move into this new paradigm is for all of this to be automated; you want to take advantage of modern DevOps techniques in your network operations. So we do deployments, and we do those into 16 labs right now around the world. We call those our Pharos labs.
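A hypothetical sketch of what "composing a scenario" might look like as data: pick one component per role from the upstream menu, then gate it on the automated test results from a nightly deploy. The component names below are real upstream projects, but the data shapes, the scenario name, and the gate function are illustrative assumptions, not OPNFV's actual scenario format.

```typescript
interface Scenario {
  name: string;
  installer: "fuel" | "apex" | "compass" | "joid"; // deployment tool choices
  sdnController: "odl" | "onos" | "none";          // one SDN controller per scenario
  features: string[];                              // e.g. "ipv6", "l3vpn"
}

// A scenario only "passes" when every feature it claims has a green
// functional-test result from the nightly automated deploy.
function gate(s: Scenario, results: Map<string, boolean>): boolean {
  return s.features.every((f) => results.get(`${s.name}:${f}`) === true);
}

// Illustrative scenario; the name loosely imitates OPNFV naming conventions.
const scenario: Scenario = {
  name: "os-odl-ipv6-ha",
  installer: "fuel",
  sdnController: "odl",
  features: ["ipv6"],
};
```

The useful property of treating scenarios as data like this is that the same gate can run identically across every lab and hardware architecture, which is what makes the nightly integration loop feasible.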
We have multiple hardware architectures supported. We have hardware from the OCP project, we have white boxes, we have traditional OEMs. We've got ARM, we've got x86. So we've got these labs around the world with great environments. We auto-deploy every night, and then we do testing. As I said, if you're actually trying to meet the use cases of the service providers, like enabling IPv6, enabling VPNs, enabling various network services, you actually have to validate that that works across the entire platform. So we do a lot of automated tests, we get that information back, and then we feed it back to those communities. And we work really closely with those communities, going back to those projects and saying, look, when we deployed this in the end-to-end system, it either worked or it didn't. We also do a lot of performance testing. If you think about what NFV is trying to do, it's trying to mimic these proprietary integrated boxes; there's crazy code on those things so that you get the throughput you need for something like a core router. So being able to get that throughput matters, and we've got a number of performance test suites that we've created that people can use to validate that they're getting the sort of performance needed to enable things like streaming video, which has a lot of really hardcore network requirements. So we do all of those things: we do all that testing and we feed that feedback upstream. And then we also create, right? There are a number of features out there that operators know they want to use, and we work hand in hand with upstream communities to build them. So for example, with OpenStack, we've had a huge focus on fault detection and fault resolution. In Barcelona this year, we had a great demo where we actually had a commercial piece of telecom hardware from a mobile network.
And Mark Collier from the OpenStack Foundation actually went in with a giant pair of scissors and started cutting wires out of the back of this commercial piece of telecom hardware while we were on a live mobile call. This was a live demo, and the call stayed up, which was really exciting. So it worked: we were able to create, and work upstream to land, the features that service providers need. So we compose, we deploy, we test, we iterate, and we work upstream to get all of that in, creating CI/CD that goes across all of the things. And it's really exciting. So get involved in OPNFV: one, because the network is very important, and two, because it gives you a great deal of broad exposure to a lot of technologies and a lot of upstream communities. It's a great place to start in open source because you will get that broad exposure. And also, quite frankly, we are a lot of fun.

Okay. So I'm going to talk a little bit more about JavaScript. The JS Foundation is one of the newest projects here at the LF. We launched back in October, so we've been around for about four months. We are focused on bringing together this idea of the quick innovation cycle within JavaScript and creating a focal point for it. And so we have this idea around these three Cs: by creating that center of gravity and having a unified community, we can then provide these mechanisms and a place for collaboration among those projects in a more sustainable way, to provide continuity and long-term support for some of these JavaScript projects. These are the organizations that stood us up and that we're working with to grow this community. Right now we have 23 open source projects; if you're familiar with the JavaScript ecosystem, you probably recognize a number of these logos. So it's obviously a huge group of projects, and this is really just scratching the surface of the extended JavaScript ecosystem.
We're approaching about 6,000 unique contributors across 20 orgs and about 200 repos. What I want to do is take a look back at history: we still support jQuery, which is over 10 years old now, and it illustrates this idea of long-term sustainability. jQuery itself is maybe not the cool new project on the block, right? But it still needs to be supported; 18% of the Internet still relies on it. To illustrate that a little more: the jQuery CDN gets about 15,000 requests a second, which translates to over 36 billion downloads a month. That's kind of a crazy number if you think about it, and that's just one of our projects. If we take a look at some of the others: this group of projects are the top projects based on npm downloads. You would think, based on those numbers, that jQuery would probably lead that list, but instead it's actually number seven. And Lodash is downloaded 39 million times a month just on its own; it's a utility library that practically every JavaScript project depends on. So we support basically the building blocks of the JavaScript ecosystem, and in aggregate all of our projects are at about 80 million downloads a month. But it's not just about the numbers, right? We have a number of initiatives that we work towards. We work on standards processes: we participate in the W3C in a number of different working groups around web standards, and we have representation on Ecma TC39, where the JavaScript language is defined. And we have a number of other initiatives that we are just kicking off, around IoT for one. We have two projects, Node-RED and JerryScript, which have a lot of momentum around them in the IoT space.
And so we're standing up this IoT segment within the foundation, where we will be doing a lot of focused effort on brand association, governance, and funding for IoT and JavaScript. We also recently kicked off a working group around TypeScript, in partnership with the TypeScript team, to improve the TypeScript developer experience for projects not written in TypeScript. For those that aren't aware, TypeScript is a superset of JavaScript that adds typing and some other features and compiles down to JavaScript, and it's a little bit difficult right now for non-TypeScript projects to support applications written in TypeScript. So those are two big things that we've just started kicking off this month, and there's a lot more down the road. So with that, definitely get involved in our projects. And as far as membership goes, you can reach out that way as well. I'll be around, and I'm definitely interested in talking about projects and supporting the foundation in general. And yeah, thank you.
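As a footnote to the TypeScript description above, here is a minimal illustration of the typing TypeScript layers on top of JavaScript; the annotations are checked at compile time and erased in the emitted JS. The function and values are hypothetical examples, not code from any JS Foundation project.

```typescript
// TypeScript is JavaScript plus static types: the annotations below are
// checked by the compiler and erased when the code compiles down to plain JS.
function greet(name: string, times: number): string[] {
  return Array.from({ length: times }, () => `Hello, ${name}!`);
}

// The compiler rejects a call like greet(42, "twice") before the code runs.
// A plain-JavaScript consumer only gets that safety if the library ships
// type declarations (.d.ts files) alongside its JS, which is exactly the
// developer-experience gap for projects not written in TypeScript.
const greetings: string[] = greet("JS Foundation", 2);
```

This is why the working group matters: without declaration files, a TypeScript application importing a plain-JavaScript library loses the compile-time checks on that boundary.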