Good morning, everyone. Welcome to day four of the OpenStack Summit. My name is Jeff Borek, and I'm here with a couple of esteemed colleagues to talk to you about the landscape around the Open Container Initiative, what's going on with containers and the Cloud Native Computing Foundation, and also how it relates, most importantly, to both the Magnum and Kuryr projects. What I want you to do at this point is pull out your smartphone, whether you're here, streaming over the web, or catching the replay, because Val, Dan, and I all have an objective to get our social media numbers up. So follow me on Twitter and get the other links out there. I hope we'll have a fun and interactive session with you today. What we're going to cover are the benefits and trade-offs of this standardization approach around containers, what's going on both from a high-level technical perspective and with some of the working groups and emerging standards in this space, how it relates to projects in OpenStack, again specifically the Magnum and Kuryr projects, and some more details about what's going on with the OCI and CNCF. You just can't have enough acronyms in our industry, so take close notes. Just briefly, again, Jeff Borek: I've been working in and around open source for over a decade now, and I most recently took a role in IBM's open technologies and partnerships space. In addition to working with foundations and communities, we are also re-energizing the activity around open source projects at IBM, so I'll mention a little bit about that at the close.
I'm here with my IBM colleague, Daniel Krook. Dan is a senior software engineer and a customer advocate working in open source and open technologies with many IBM clients. And last but not least, I'm really pleased that Val has joined us. Val was the cloud czar at NetApp and is transitioning now to become the CTO of their most recent acquisition, SolidFire, which is obviously very relevant to the whole OpenStack community. So I'm very pleased to have the three of us up here, and without further ado, let me go ahead and turn it over to Dan to get us into the heart of the matter. Thanks.

Okay, so let's take a quick look at container technology, the benefits and features that really drive why it's become so popular, why so many companies are adopting it and developing new technologies for it, and therefore why there's a reason to have foundations to govern it, like the Open Container Initiative and the Cloud Native Computing Foundation. Containers provide isolation for processes on a machine: sharing resources, yet having dedicated, isolated resources to run applications on top of a host system. You can consider them a kind of lightweight compute resource. The analogy isn't perfect, but they basically fill the same compute role as a virtual machine. The real benefit, the thing that has driven them to be so popular, is the order-of-magnitude improvement in the resources they require to run. They share the kernel with the host operating system, and there's roughly a tenfold decrease in image size, which makes them portable. Container technology has hit that magic 10x moment: it avoids all of this duplication and makes things more efficient. I think it was at one of the keynotes, or in an earlier session, that I heard a pretty good analogy. There are a ton of analogies around containers, but buying a house versus renting an apartment is the comparison I like to use.
If you want your own property, if you're going to be there for a while, if you want to make modifications, you might buy a house. There's duplication in terms of the resources needed for that, but you have your own electricity, you have your own plumbing. Renting an apartment, by contrast, tends to be transient in nature. You don't really want to modify it; it's not your property anyway. You get the benefit of signing a lease right away instead of going through a closing, and you're able to reuse the existing resources of the building. So you get greater density, but you still hear your neighbors. So there are some great things about container technology, but there are trade-offs, and a whole ecosystem of other projects has come up to strengthen the promise of what containers can do. Even though containers are revolutionary, they aren't new. It's been about 16 years of innovation, if you don't count the introduction of chroot back in the late '70s. Within a single UNIX server, BSD was able to build on chroot and create what are called jails, providing file system isolation, but still with some shared resources, so not complete, but still pretty revolutionary. A year later that idea was brought over to the Linux kernel, but unfortunately it required recompiling the kernel to enable the feature. That was basically a non-starter for many distro providers and system administrators who didn't want to deviate too much from the defaults. Solaris introduced its own take with zones, but there were some requirements: they could only run on Solaris, and they required the ZFS file system. There was some great innovation that came out of Google too, isolating groups of processes in what are called process containers, giving them a dedicated set of resources that they could share, independent of other containers.
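The kernel primitives that grew out of this history, namespaces for isolation and cgroups for resource accounting, are visible on any modern Linux box. Here's a minimal sketch (assuming Linux and Python 3; this inspects the primitives rather than reimplementing a container runtime):

```python
# A minimal sketch (Linux-only, Python 3) of the kernel primitives that
# container runtimes build on. Every process belongs to a set of
# namespaces (pid, net, mnt, uts, ipc, user, ...); a container is, at
# heart, a process placed into fresh namespaces with cgroup limits.
import os

def current_namespaces(proc_ns_dir="/proc/self/ns"):
    """Return a mapping of namespace type -> namespace identifier."""
    namespaces = {}
    for entry in sorted(os.listdir(proc_ns_dir)):
        # Each entry is a symlink such as 'pid' -> 'pid:[4026531836]'.
        # Two processes in the same namespace see the same identifier.
        namespaces[entry] = os.readlink(os.path.join(proc_ns_dir, entry))
    return namespaces

if __name__ == "__main__":
    for ns_type, ns_id in sorted(current_namespaces().items()):
        print(f"{ns_type}: {ns_id}")
```

Two processes that print the same `pid:[...]` identifier share a PID namespace; a containerized process would show different identifiers from its host.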
And Red Hat took that a bit further, allowing the idea of root access within a container by segmenting the users and their rights, creating a new hierarchy below the basic root user on the host operating system. IBM, back in 2008, started providing userland tools for system administrators to better use the namespace and cgroups features. And Docker really brought this to the application developer and made it super easy to use. From there it really took off, and then competing implementations started to appear, which really started to drive the need for governance, the need to collaborate on this technology that was so promising. And the thing you'll notice here is that this was organic innovation, and it came from lots of different companies. That's where we want to go with what's going on in the OCI and CNCF as well.

Okay, so OpenStack: it's an integration engine above all else. It adopts brand new technology as it comes out. In another keynote you saw it described as kind of the LAMP stack of the cloud, meaning that it's one of many components, something that provides a common ecosystem, a common framework for things to work together. So it's no surprise that you'll see containers throughout OpenStack, and it can get confusing what exactly people mean when they say containers on OpenStack. Over the last couple of years you've seen Docker, and people looked at that analogy and saw a lightweight VM, so it started out as kind of a compute resource for Nova. And just like other compute resources, you want to orchestrate those resources on OpenStack, so Heat support was provided. One of the newer tools, Kolla, has a particular focus: actually deploying OpenStack itself, the control plane, as containers, with some Ansible tooling around that. So that's for OpenStack itself, not really something for the end user to concern themselves with.
On the end user side, Murano is an application catalog to run containerized applications on top of OpenStack. But the two most relevant projects for what we're talking about, for standardization, are Magnum, which allows you to provide containers as a service to your OpenStack end users, and Kuryr, which unifies the networking between containers, virtual machines, and bare metal, providing, again, that integration engine that makes different implementations almost transparent using a common language. Okay, so let's take a look at Magnum and its relationship, in particular, to what are called container orchestration engines (COEs). The most well-known is Kubernetes, which is one of the projects you've probably heard mentioned many, many times already this week. One of the drawbacks of Kubernetes is that it's not designed to be multi-tenant. What OpenStack brings to the table is its strength in providing a project- or tenant-based division of workloads. You can isolate things that are running in different namespaces, different projects, and in each of those you can run a container orchestration engine like Kubernetes without needing to worry about multi-tenancy at the Kubernetes level itself. Kubernetes is just one of the options; there's also Mesos support, and Docker Swarm as well. Magnum focuses on what OpenStack does very well and still allows you the power of using the native APIs. And the native APIs are really what will be governed by a lot of the standards organizations and the implementations of container technology. As I mentioned before, even though Magnum is new, even though Kuryr is new, both of them leverage mature OpenStack components and plug in fairly well. On the right side here, you'll see the core Magnum components: the common OpenStack design of an API process, a client package, and a conductor that works with the state and the database.
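To make the tenant-scoping idea concrete, here is a deliberately simplified sketch. This is not the real Magnum API or data model; the function names and fields are hypothetical, illustrating only the pattern of keeping multi-tenancy at the OpenStack project level rather than inside the COE:

```python
# Hypothetical sketch of Magnum-style tenancy: each Keystone project
# owns its own COE clusters, so multi-tenancy lives in OpenStack
# rather than inside Kubernetes/Mesos/Swarm. Names and fields are
# illustrative, not the actual Magnum data model.
def create_cluster(registry, project_id, name, coe="kubernetes", node_count=1):
    """Record a new COE cluster under the given Keystone project."""
    cluster = {"name": name, "coe": coe, "node_count": node_count}
    registry.setdefault(project_id, []).append(cluster)
    return cluster

def clusters_for(registry, project_id):
    """Project-scoped view: a tenant only ever sees its own clusters."""
    return list(registry.get(project_id, []))

registry = {}
create_cluster(registry, "team-a", "web-k8s", coe="kubernetes", node_count=3)
create_cluster(registry, "team-b", "batch-mesos", coe="mesos")
```

Because each project's clusters are entirely separate, the COE inside a project never has to enforce tenant boundaries itself.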
Heat is used to actually lay down the images and provide any of the agents that are required for the COE. Magnum builds on Docker, Nova, Neutron, Glance, and Cinder, and Keystone, of course, is what I already mentioned for the tenant isolation.

Okay, so Kuryr is a fairly new project. It was announced in Tokyo, where my colleague Phil Estes did a presentation along with Mohamed, another colleague from IBM. If you haven't already seen the presentations, there was an intro yesterday, there was a deep dive, and several demos; the recordings are already up, so I won't go into too much detail. But what Kuryr is trying to do is unify how all of these compute resources, these pieces of the integration engine in OpenStack, can come together and use a similar API, which reduces the overhead of too much translation, too much nesting. It also provides a common programming model so that you can use the pieces you need: if you like containers, if you like VMs, if you like hardware resources, you can leverage what's already built into the OpenStack cloud that you have. So, a bit of the architecture: both the Docker engine and Kubernetes have their own native clients. libnetwork here implements a standard called the Container Network Model (CNM), which is the model for handling network resources that is native to Docker. Kuryr translates those commands into what Neutron expects, and the same applies to Kubernetes, and in the future, other orchestration engines. Okay, so having given you the basis of the need for governance and how it fits into OpenStack, I'll hand it back over to Jeff to take us through the Open Container Initiative. Thanks, Dan.

So I'm just going to take a moment.
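To make Dan's description of the Kuryr translation concrete, here is a hedged sketch of the kind of mapping it performs. The input fields follow the shape of a libnetwork remote-driver network-create request, and the output follows Neutron's network and subnet create bodies; the real Kuryr driver handles far more (ports, drivers, tags), so treat this purely as an illustration of the direction of translation:

```python
# Hedged sketch of the CNM -> Neutron translation Kuryr performs.
# Input resembles a libnetwork remote-driver CreateNetwork request;
# output resembles Neutron network/subnet create bodies. Greatly
# simplified relative to the real Kuryr driver.
def cnm_to_neutron(cnm_request):
    """Map a CNM network-create request onto Neutron API bodies."""
    network_body = {
        "network": {
            "name": cnm_request["NetworkID"],
            "admin_state_up": True,
        }
    }
    subnet_bodies = []
    for ip_data in cnm_request.get("IPv4Data", []):
        subnet_bodies.append({
            "subnet": {
                "cidr": ip_data["Pool"],                     # e.g. "10.10.0.0/24"
                "gateway_ip": ip_data["Gateway"].split("/")[0],
                "ip_version": 4,
            }
        })
    return network_body, subnet_bodies
```

Because the translation lives entirely on the Kuryr side, Docker and Kubernetes keep speaking their native networking models while Neutron does the actual work.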
With the kind of crowd we have here, I don't need to preach to you about the values of open source per se, but it's an interesting time to stop and reflect for a minute when it comes to container technology. First, let me see: how many out there have heard of the OCI or know a good bit about it? Okay, about half, maybe not quite half of the group. So, who should get to define how containers are built? Who should control how they're verified? Would it be appropriate to have a single entity decide how they're signed, or even how they're named? I've been coming to OpenStack conferences since San Diego, and it was at the Atlanta event that there was a lot of initial buzz around Docker. Docker is a great little company; Solomon really latched on to a really important innovation. Just briefly, without a lot of history: they were a startup initially focused on end-to-end cloud-native application development. They were working diligently, but they were starting to run out of funding, so they had to do a quick pivot. It was in late 2013 that they reemerged as an open source project, because they realized they had created this fundamental enabling technology around containers. Up until that point, as you heard earlier in the history from Dan, containers were highly technical, even a bit arcane, the kind of technology that companies like IBM and Google and others had been using for years, but not the kind that was approachable to the average developer or sysadmin. When Docker did their pivot, they reemerged focused on that technology, putting it out in open source and also renaming themselves as Docker, Incorporated. And they made a pledge to be radically open.
We and other companies worked with them over the course of 2014 and into 2015, when, ultimately, about a year ago now, they announced that they would contribute code to help found the OCI. The Open Container Initiative is a working group. How many are familiar with the Collaborative Projects that the Linux Foundation runs? So that's even fewer. I'll just mention briefly: everyone thinks of the Linux Foundation and you think of the Linux project, and yet for some time now the Linux Foundation has emerged as an honest broker to help other significant projects find their way to not just an open source license, but also open governance. A great recent example of that is the Node project. I think some, if not many, are aware that there was controversy in that community. There was also rapid growth, and there was actually fragmentation. So IBM and a lot of other companies worked together to help the community reunite around a project that's now under the stewardship of the Linux Foundation as a Collaborative Project. How many have heard of the Swagger project, the Open API standardization effort? I was fortunate enough, about a year and a half ago, to meet with Tony Tam back when that project had gotten significant momentum around the concept of providing a lightweight, simple framework for standardizing the API creation process and documenting it. A lot of other companies have tried to solve that problem; IBM has tried to solve that problem in the past. It's very challenging and very complex, and it's also very difficult if just a single commercial entity is putting something forth and saying, hey, this is the way to do it. One more great and recent example of Collaborative Projects is something that IBM actually drove. How many have heard of Bitcoin? Familiar with Bitcoin? How many are familiar with, keep your hands up if you're also familiar with, Open Blockchain?
So only about half of that. Bitcoin, as you know, emerged as a significant electronic cash initiative several years ago, but it was always clouded in some level of murkiness. Who actually founded it? Who is behind the initial technology? IBM saw a lot of promise in the blockchain, the core technology behind Bitcoin. So IBM Research, a little over a year ago now, started working on a version of that technology that was hardened and provided additional functionality that would be important to enterprise end users. Then, late last year, IBM came to the conclusion that we could try to productize that, but it really wouldn't be in the best service of the technology. So IBM approached the Linux Foundation and other leaders in that segment of the industry and said, we'd like to make this an open, level-playing-field project. That was announced in December, and they're working now to consolidate and move that project forward. So, back now specifically to the Open Container Initiative. It is a working group of the Linux Foundation. You can see the mission: it's around creating a single standard format and specification for container technology. Certainly I mentioned Docker, but a lot of credit goes to Alex and the team at CoreOS as well. They've had a long track record of both working with Docker and having respectful disagreements with Docker as to how the technology should evolve. And Docker, again to their credit, has been very open about finding a way to make this something that everyone can take advantage of and everyone can leverage. You can see the whole list of other companies that have signed up to participate and be involved in this important initiative. Lastly, I'll comment briefly on the latest news.
The initial effort was focused on the runtime specification and the runC code that Docker donated to the project when it was announced at DockerCon in the middle of last year. CoreOS put forward appc, the App Container spec, which is their view of how to solve this challenge. The effort has been underway in earnest since the OCI formed the technical committee and started working on this toward the end of last year and early this year. The latest news, announced here in late April, is that moving beyond just the runtime specification, we're now working toward the development of a common specification for the image format. That's important because it provides an added element of portability and a shared common foundation, and it assures that the OCI is still heading in the right direction. An analogy I've heard used for this is the web browser space: you've got Firefox, you've got Chrome, but they're all based on HTML5. I can't imagine a productive web that didn't have a shared common core technology, one not controlled by a single entity. The challenge, I think, that the OCI is going to face moving forward is maintaining simplicity. That's part of the secret sauce that Solomon and the Docker team brought to this space. So there's really going to be a concerted effort to expand this work just enough to get that common core, but not let scope creep take hold to the point where it starts to take on other elements that would undermine that focus on simplicity. And that sets us up for the third topic of today's session, the CNCF. So I'm going to hand it over now to Val from SolidFire to talk about the CNCF. Thanks.

Awesome. Thanks, Jeff. Thanks, Daniel.
I was at the Mesosphere Open DC/OS launch last week, and Peter Levine from Andreessen Horowitz actually came up with one of these really quotable things that I tweeted out: containers are the MP3s, and orchestrators like Kubernetes are the iTunes. It's a really good way to think about how we start to manage all these things. Why did we create the CNCF? Again, it's under the Linux Foundation umbrella, and much like with the container formats, the image formats, and the APIs themselves, we want to create an environment that democratizes the power of Google for as many people as possible, the power of Borg in particular, if you've read about Borg at Google. That means operating a massive amount of very heterogeneous services at Google scale, but more importantly with affordability, so that what you do here is viable at Google levels of operational and resource efficiency. And what that means, how we define cloud native, and we're open to feedback if you think it should be modified, is first and foremost container-packaged, and we've covered that already today. I'll talk a little bit about being dynamically managed and dynamically scheduled a bit later on, and obviously the goal here is to get to the current nirvana of software development, which is a highly, if not fully, microservices-oriented environment. So, the ever-important logo slide. The things to note here are that we do have Mesos involved, we do have Docker involved, and very importantly, we have Google actively contributing their intellectual property, the knowledge they've gained in building and operating Borg over the years, and of course, as you'll see later on, the key open source technology being Kubernetes itself. And I love this next slide, because for me it really articulates what we're trying to help foster in the industry, and more importantly, what we're not trying to foster.
We don't want to be all things to all people. We definitely know that an environment such as the OpenStack community has already done a lot of heavy lifting and built out some really cool services, be they networking, authentication, image management and so forth, not the least of which, for me, is storage. In the dark blue are some of the areas that we are focusing on. More importantly, we're not trying to be the bare-metal OS, we're not trying to do a lot of the configuration management and so forth, and we're not trying to do the OS kernels. We are trying to create an environment for interoperable, container-packaged, dynamically scheduled microservices applications. Which brings us to some of the major accomplishments. In fact, the last time we were here as a group, not here but in Tokyo if you were there, across the Pacific, the CNCF really wasn't formed yet, so this might be news to a lot of you: we did form since the last OpenStack Summit. We've had a series of activities, what I call sausage-factory activities, building out the innards of the CNCF. Most importantly, we have a very, very high-profile technical oversight committee with a lot of familiar names: from Joyent, obviously from Google, from Cisco, from Weaveworks, and so forth. So a really rock-solid technical steering committee, if you will. And because of the credibility we're bringing to the table, I think the major milestone to announce is that Google actually donated their IP to the CNCF for Kubernetes. So, for lack of a better term, if you're looking for where the home, and particularly the open governance structure, for Kubernetes is, that is with the CNCF.
Now, the technical oversight committee is relatively new, a couple of months old or so, and what I'm really asking of you here, particularly as you've seen Daniel talk about Magnum and Kuryr and all the container-related services in OpenStack today, is this: it's an open process. Our meeting notes are open. Our discussions and debates as to which projects we want to take on, help incubate, and help govern are open. So clearly we're encouraging everyone here to join us and to contribute ideas. We're looking not just for project ideas, or perhaps mature projects that would like to benefit from a governance structure that is cloud native, that is focused on container-packaged, dynamically scheduled apps, but also perhaps for new services that might not exist yet in the OpenStack environment. We're there to offer expert advice, to offer curation services, and, not least, a very tangible asset that we want to share with everyone. Courtesy of Intel, and I don't know if Jonathan Donaldson is in the room, Intel has donated a thousand-node cluster to the initiative for contributing projects to test out and flesh out their code, test at scale, and do some things without having to purchase hardware or spend on Amazon, Azure, or Google credits. It's freely available to interested parties that satisfy these criteria. We also want to encourage not just technical contributors and projects, but specifically end users. Clearly there are a lot of vendors and a lot of open source developers excited about this initiative, but what we still lack is a good quorum of actual end users, in brick-and-mortar enterprises, in web companies, in SaaS companies and so forth, that want to figure out how to actually scale to Google scale using these container-packaged, dynamically orchestrated environments.
The one last thing to make clear, for those of you with a heavy Nova heritage in OpenStack, is that Nova is very prescriptive: it's an imperative rather than a declarative model, it's not opinionated, and it's not a cost-optimized runtime environment. The one big thing that Google decided we had to get developers, humans, out of the way of is scheduling resources. So probably the most important thing to think about, and one of the biggest distinguishing benefits of a technology such as Kubernetes or Mesos, is the fact that you don't have to runtime-optimize anything. If you code your containers cleanly to the proper APIs, and you give resource hints if you need to, or state or persistence hints from a storage perspective, you're out of the ongoing management business, and so your operations become much more scalable and much more affordable. Your capacity planning goes from a month-long exercise to a minute- or hour-long exercise. There are a lot of benefits in leveraging OpenStack services while having opinionated, declarative, dynamically scheduled runtime environments. So, just a little plug for Kubernetes there. I think, Jeff, with that, we're going to have you wrap it up, right?

Great. Thanks, Val. And we're running pretty much right on time, which is great. Please start to think about the questions you'd like to ask us; we're going to have some time for Q&A. I don't want to read these slides to you, but I do want to reinforce a couple of key points. One: with container technology, we stand on the shoulders of giants, right? This is a key enabling technology of hyperscale computing, and it's going to be a fundamental part of the OpenStack future as well as the future of these other frameworks and orchestration tools that are emerging.
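Val's point about resource hints can be made concrete with a Kubernetes pod manifest, written here as the equivalent Python dict rather than YAML. The `resources` requests/limits fields follow the Kubernetes pod spec; the names and image are just examples:

```python
# The "declarative resource hints" idea from the talk, expressed as a
# Kubernetes pod manifest (shown as a Python dict; normally YAML).
# You declare what the container needs and the scheduler decides
# where it runs; no human placement decisions are involved.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx:stable",
            "resources": {
                "requests": {"cpu": "250m", "memory": "64Mi"},  # scheduling hint
                "limits": {"cpu": "500m", "memory": "128Mi"},   # hard ceiling
            },
        }],
    },
}
```

The scheduler bin-packs pods onto nodes using the `requests` values, which is exactly the "get humans out of the scheduling business" benefit described above.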
There's also going to be a significant next step in all of this, and I'll give a strong shout-out again to my colleagues Phil and Doug and others who are working in the community across the container space. Specifically, I know that people talk about the portability of containers, the capability for a workload to migrate to different platforms. For many people today, when you're so cloud-centric, platform means different cloud technologies, but for IBM, platform also means different architectures. Just like you want choice with respect to different tools, different platforms, and different solution mixes, we have a phrase at IBM: choice with consistency. One of the things that Phil, Doug, and the team at IBM have been doing is helping to lay the foundation for multi-architecture support, from a hardware architecture perspective, and you're going to see some very interesting developments in that space in the coming year. So with that, I'd like to welcome back up Dan and Val, and we'll open it up for some questions. And please use the microphones.

Very good. Good morning, gentlemen. Scott Fulton from The New Stack. Jeff, you described Kubernetes and the role of Kubernetes within an OpenStack environment, and you mentioned that Kubernetes, at least for the time being, is restricted to a single-tenant environment. Correct me if I step over the line here, but I would think that by restricting Kubernetes to single tenancy, you are effectively saying that orchestration of applications takes place only among those applications belonging to a single tenant. I'm wondering whether the CNCF is satisfied with that definition of orchestration, limiting the scope of application orchestration to just what that single tenant owns. And if so, how does SDN take place, where you have to have some type of visibility into the network?

Yeah, thanks.
No, if I said that, that certainly wasn't my intent, because I don't view this as a single-tenant type of architecture. The whole idea of the CNCF is to really build out a federated ecosystem where, while certainly Kubernetes is the foundational element of it, I know for a fact that my colleague Craig McLuckie, who's really helped drive this from Google, is very committed to the idea that he doesn't want the CNCF to be all about Kubernetes, because he views that as essentially a failure of the CNCF. If that were going to happen, they could just have kept it as a project under Google's umbrella. The whole idea of this is to create an ecosystem. In fact, I'll take it a step further and tell you that Craig has stated in public that customers have come to him and said, yes, I'm interested in the Borg that Val talked about, and I'm interested in leveraging that key technology, but there's no way I'm going to adopt it if it's effectively an open source project of one, which is what it would have been had it remained under simply Google's stewardship.

And maybe an additional point of emphasis: one of the cool things that drew me, as a developer, to the CNCF, and to Kubernetes as an example technology, is that the community is inherently and natively thinking in a multi-cloud context. So not just single tenant, but the opposite of single tenant: a very heterogeneous environment where abstracting the underlying bits of an AWS, a GCE, an Azure, and so forth is a core part of the technology and the architecture.

Thank you, gentlemen. So my name is Tim, and I work on the Kubernetes project. I wanted to add, the idea of multi-tenancy is... Can you bring his mic up? Yeah. You forgot to add a "yet" at the end of the statement about supporting multi-tenancy. Kubernetes is, you know, just a year old, a year into 1.0, right? So multi-tenancy is very much on the agenda.
It just hasn't come to the top of the list yet, because it's a really hard topic and we don't want to botch it. So we're trying to take it slowly and listen to what people are trying to do with it, what they're trying to achieve, before we really jump into it. But it is definitely on the agenda. Thanks.

I have a question about Kuryr and the multi-cloud story that you presented. Kuryr right now is an OpenStack project, and if these container technologies start depending too much on Kuryr, then to keep them independent of OpenStack, would something like Kuryr be proposed or developed for public clouds or other clouds?

I think the direction of the dependency is what solves this, right? Kuryr provides the shim that supports the translation from the Docker or Kubernetes networking model and uses the Neutron client to then translate that to Neutron. So the dependency goes that way. However things change on the container side, the support for the translation will be done in the Kuryr project. That's how they can evolve separately.

So the OCI spec was announced, or rather the initiative was, in the middle of last year, and at the announcement it was: in a few weeks we'll have an initial spec out, and there'll be a lot of movement here. Nine months on, we've actually got an initial public spec. Was that an initial hurdle, in your view, that we're now over, so we should see a lot more out of the OCI, or has the momentum really moved to the CNCF?

No, I think that's a fair question, and there actually was interim progress; it just wasn't formalized. When you saw the number of corporations that were all involved with the OCI and its founding, it just takes a while for everyone to process the paperwork and get the i's dotted and t's crossed. But there was already collaboration happening between the various elements in the community, right?
It just took a little bit longer to formalize. But again, the biggest news, and I would point you to a couple of things: there's a great write-up on The New Stack about the recent announcement I mentioned. I didn't go into it in great detail, but the evolution from the focus on simply the runtime spec to now a shared image format is really going to kick this into high gear. I definitely started collaborating around the OCI with a level of optimism, but I knew it wasn't going to be easy, and this breakthrough that was announced on April 22nd is a huge additional step toward making this not only relevant but faster-moving as we go forward. And Docker 1.11, recently announced, is already built on the OCI runtime. In addition to that, there are also a couple of great write-ups, including a blog on the CoreOS site, on this topic as well.

Yep, so we're just about out of time. In fact, I think we're right on time, so I'll just thank, once again, Val from NetApp SolidFire and my colleague Daniel. Thank all of you for your interest. It's been a great session, and we hope you enjoy the rest of the summit. Thanks. Appreciate it.