Good morning. Thanks, Jim, for the introduction. I'm Cameron Bahar, and I'm presenting with Steven Tan. It's a pleasure to be here. This is, believe it or not, my first LinuxCon, but I worked on Unix some 25 years back, on AIX 1.0, so I'm a big proponent of open source. I've been at Huawei for almost four years now, and during those four years we've had an open source governance committee, and we've contributed effectively to two major projects, Hadoop and OpenStack; we're in the top 10 contributors to both. And in our embedded systems and in storage, where I'm the storage CTO, we contribute a lot to Linux, and all our systems are based on Linux. So we're big supporters of Linux, and we look for collaboration and participation from the community to help us move this forward. Today I want to talk about OpenSDS. It's a dream we've had over the last 10 or 15 years, watching the industry grow with virtualization and now containers, so I want to give you a sneak peek at a project we want to launch called OpenSDS. For some background: I started a company called Parascale. We built one of the first software-defined storage platforms, a parallel software platform, and the idea was that you could scale from a few nodes to large numbers of nodes and provision and treat storage effectively as a service. You could discover storage, dynamically provision it, and attach it to various volumes and virtual machines, all unified through a single pane of management. The issue we ran into was that, from the customer's point of view, every customer we visit has a two-vendor-minimum strategy. There's no rip-and-replace strategy for enterprises and telcos, and they have a lot of legacy equipment, so it's very hard to bring in something fresh and brand new in the storage side of the world, right?
It's always an evolution, not a revolution, because while VMs and containers come and go and are very dynamic, your data is sticky, and we cannot lose one byte. It's a really tough problem, a one-in-a-billion problem: if you lose one byte, you're out as a vendor in most customer situations, right? And as you probably know, in Germany we're proud to have launched a public cloud with Deutsche Telekom and T-Systems to serve all of Europe. We're putting a lot of infrastructure there, the problems are real, and as you use those services you will effectively be using these kinds of architectures. So if you look at the state of storage management, you typically have lots of frameworks, and storage tends to be statically bound: you create a connection to a particular volume, and it's there for a long time. That's changing. As the Docker presentation earlier showed, things are very dynamic now. Containers come and go; they're ephemeral. But the connection to storage is persistent, and that has to stay. Virtual machines created a big storm in the storage industry because they came and went, and things moved from purely static physical cable connections to dynamic, virtualized drivers. What you see here is effectively a hodgepodge of connections and drivers to tie various systems together. Just in my lab, with 50 people in Santa Clara in Silicon Valley, we have Docker, OpenStack, and VMware ESX environments, and it's not possible for me to connect a volume from one to the other. If I finish using a volume, I can't move it and reuse it dynamically, with a click, in another environment. It's still locked in to either VMware, OpenStack, or a Docker instance. So again, it's static. Is there a better option?
And what we're proposing is effectively a virtualization layer that does discovery, provisioning, management, and orchestration of advanced storage services, and allows open source projects to plug in their OpenSDS adapters to manage storage. So Ceph, Gluster, ZFS, and what have you can plug in through that stack, and it allows vendors like EMC, Huawei, Intel, and HP to plug in their adapters. And as the previous speaker mentioned, strict adherence to clean interfaces: we define the interfaces once, we make them general enough, and then we're able to update the adapter implementations, both the open source ones and the vendor ones. The plugins allow us to connect multiple frameworks to multiple storage targets dynamically, whether you have a legacy investment in infrastructure or you go open source. So you can have a hybrid environment where customers move to these new infrastructures while leveraging their existing investment in legacy and staying connected. Common management APIs and simple orchestration. The high-level value proposition is to solve real storage management problems for our customers collectively, to integrate seamlessly through these adapters with Kubernetes, OpenStack, and various frameworks, and to be open and clean so that as new frameworks come along, you can easily adapt. The idea, again, is not to rebuild anything from scratch, but to reuse Cinder, Manila, Swift, and anything else that's already in open source, build around that, and make it more feature-rich, more reliable, and more scalable. We're talking about millions and millions of virtual machines, 100 million virtual machines, billions of containers running in public clouds and private clouds, and connecting enterprises to public clouds.
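The "define the interface once, let anyone plug in" idea can be sketched as a small adapter registry. This is purely illustrative: the class and method names (`StorageAdapter`, `discover`, `provision`, the `ADAPTERS` registry) are assumptions for the sake of the sketch, not the actual OpenSDS API.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the clean-interface adapter model: one contract
# that both open source backends (Ceph, Gluster, ZFS) and vendor arrays
# (EMC, Huawei, HP) could implement. All names here are illustrative.

class StorageAdapter(ABC):
    @abstractmethod
    def discover(self):
        """Return the capabilities this backend exposes."""

    @abstractmethod
    def provision(self, name, size_gb):
        """Create a volume and return an identifier for attachment."""

class CephAdapter(StorageAdapter):
    """Example plug-in for one open source backend."""

    def discover(self):
        return {"backend": "ceph", "protocols": ["rbd"]}

    def provision(self, name, size_gb):
        # In a real adapter this would call the backend; here we just
        # return a volume identifier string.
        return f"rbd:{name}:{size_gb}G"

# A registry lets the controller route requests to any plugged-in
# backend through the same two calls, regardless of vendor.
ADAPTERS = {}

def register(key, adapter):
    ADAPTERS[key] = adapter

register("ceph", CephAdapter())
volume_id = ADAPTERS["ceph"].provision("db-vol", 100)
```

Because every backend satisfies the same interface, a framework (Kubernetes, OpenStack, or anything new) only has to target the registry, which is the "define once, adapt easily" point above.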
And we'd like to collaborate in an open source community and invite vendors, customers, and open source collaborators to work with us to build this thing and make it a reality. So the high-level mission is to develop an open source SDS controller platform that allows us to manage virtualized, containerized, and bare metal environments, and to facilitate collaboration: adherence to standards, leveraging existing open source, and a customer and vendor community that comes together to solve the problem. At this point, I'd like to hand off the presentation to Steve for the second part. Hi everyone. Good morning. So Cameron just introduced our vision of what the OpenSDS controller is about. What I'm going to cover this morning is two solutions that OpenSDS can offer to the open source world, and then a little about the project proposal. So this is Kubernetes; I'm sure everyone here is familiar with it. We have the Kubernetes master and then the two storage nodes at the bottom; that's the architecture for Kubernetes. And Kubernetes is great, everybody knows that. Docker Swarm is also great, by the way. What Kubernetes does really well is container orchestration in a highly available fashion, and everything is automated: if something fails, it just spins pods up again. And the Kubernetes community is a large community, and the ecosystem is growing. But what Kubernetes lacks is storage management, which it leaves to third-party storage controllers. So what you have is all these various storage controllers providing the storage management for Kubernetes.
And the problem with this is that each controller is solving the same problem of trying to provide persistent storage for Kubernetes, and each one is trying to take care of things like failover and HA in a different manner. Some of them are mature, and some of them are still in their infancy at this stage. The other thing is that each controller is either solving one specific problem or a limited set of problems, providing only a limited set of storage solutions. So with OpenSDS, what's possible is an OpenSDS agent that's able to support any kind of storage. You just need to install one OpenSDS agent, and with that, with the OpenSDS orchestration and the OpenSDS adapter, what we can provide is a single solution for end-to-end storage management for any Kubernetes deployment. Next, let's take a look at OpenStack. If you haven't heard of it before, as Jim mentioned this morning, OpenStack is the most popular cloud OS available right now, and it has a large community and a large ecosystem. What's great about OpenStack is that it has two projects called Manila and Cinder. Manila provides the file services and Cinder provides the block volume services, and through these two projects it has built out a huge ecosystem of storage support for OpenStack. But what it lacks is the basic storage management functionality, the three points that Cameron mentioned just now: discovery, configuration, and management. And again, with OpenSDS management, what we're going to try to do is leverage what has already been invested in Cinder and Manila, and then provide the connection to that existing ecosystem of storage.
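The "leverage Cinder rather than rebuild it" approach might look like the following sketch, where a single controller entry point delegates block provisioning to an existing Cinder-style service. `FakeCinderClient` is a stand-in written for this example; the method names (`create_volume`, `provision_block`) are assumptions, not the real OpenSDS or Cinder API.

```python
# Illustrative sketch only: an OpenSDS-style controller reusing an
# existing Cinder-like block service instead of reimplementing
# provisioning. All class and method names are hypothetical.

class FakeCinderClient:
    """In-memory stand-in for a Cinder block-storage service."""

    def __init__(self):
        self._volumes = {}
        self._next_id = 1

    def create_volume(self, size_gb, name):
        vol_id = f"vol-{self._next_id}"
        self._next_id += 1
        self._volumes[vol_id] = {
            "name": name,
            "size": size_gb,
            "status": "available",
        }
        return vol_id

class OpenSDSController:
    """Single entry point: frameworks (Kubernetes, OpenStack, ...) call
    this, and it delegates to whichever backend service already exists."""

    def __init__(self, block_backend):
        self.block = block_backend

    def provision_block(self, name, size_gb):
        # Delegate rather than rebuild: reuse the Cinder investment.
        return self.block.create_volume(size_gb, name)

controller = OpenSDSController(FakeCinderClient())
vid = controller.provision_block("pg-data", 50)
```

The point of the shape is that every framework sees one agent and one provisioning call, while the heavy lifting stays in the services the community has already built.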
And on top of that, we're going to try to enhance the storage management by providing standardized discovery, configuration, and monitoring for OpenStack. So to recap, the OpenSDS project tries to bring three key benefits to SDS management: first, to provide a seamless plug-in for any framework; second, to provide end-to-end storage management with a single solution; and third, to support a broad set of storage, including cloud storage. This is the working model we're going for with the OpenSDS project. First, we're going to kick this off as a technical project under the Linux Foundation with lightweight governance. Next, we're going to have a technical steering committee for the technical oversight of the project. The source code is going to be on GitHub, we're going to do our code reviews in Gerrit, and we'll have design blueprints on Launchpad. And we're going to hold regular IRC meetings as well as meetups. We're going to work based on four pillars. First, we're going to be open in terms of licensing, software design, specs, and so on. Second, we're going to focus on the needs of our users. Third, we're going to collaborate with different communities: we're not just going to work within the OpenSDS community, we're going to work with CNCF, Docker, OpenStack, and so on, including standards organizations like SNIA. And lastly, we're going to try to get as many storage vendors as possible to work together with us on this, and besides storage vendors, we'll try to get other vendors to work with us too. So that's my introduction. The high-level point is: join the project, discuss with us, we're here all week. And at this point I'd like to invite our surprise guest, Reddy Chagam. Please come on stage, Reddy. Reddy is the chief architect of SDS at Intel. He's been battling this problem for the last three years.
And so we thought he should also chime in and give his perspective on this. Yeah. So as Cameron talked about, we did not just come up with this idea yesterday and propose it today. We spent almost three years figuring out how to address the storage management problem in the industry. I initiated the open source SDS effort at Intel, starting with an open SDS controller project as a prototype effort, and then realized that without storage vendors participating, it was going to be really difficult to make the project successful. So part of the focus has been: how do we bring in the storage vendors? How do we make this project more of an open source, industry-wide collaborative effort? I had numerous discussions with EMC and ended up collaborating with EMC on the open SDS controller project, which open-sourced the ViPR controller software under the code name CoprHD. This is really the third phase of what we're trying to do in the industry. The goal is: let's make sure the storage vendors actually come together, do this in an open way as an open source project, solve the real end-user pain points, and at the same time really look at evolving the storage interfaces in a way that gives us a path toward the cleaner integration that Steven was talking about in the previous slides. So this is the third phase of the project. I'm really excited to see this progression, really looking forward to collaborating, and we're looking for you to start contributing to this effort as well. If you look at the networking ecosystem, they've made significant progress: as you saw in the previous talk, there's a Linux Foundation project called OpenDaylight, and we're trying to do a very similar thing on the storage side of the world. We're really looking forward to this effort. And we have a panel discussion tomorrow at noon in Hugo South.
So Dell EMC, Intel, and Huawei will be there, so if you have questions, come by and get them answered. All right, thank you very much. Thank you.