Good afternoon, everyone. My name is Mario Blandini with HGST. Happy to be here on day three of the OpenStack Summit. What we're going to talk about is a demonstration, first of its kind actually, of an open Ethernet drive architecture. Thanks for stopping by to check out this exciting technology, something we just announced here for OpenStack.

As I go through my presentation, you may not even know who HGST is. How many of you know who HGST is? Anyone? All right, more than I expected. Hopefully you've stopped by the booth. And how about you, HGST team? Do you know who you are? That's right. So we're here to engage with you in some discussions around developing on new technology.

Now, we're a publicly traded company, so you'll see all my forward-looking statements here. Hurry up and read; I'm going to switch the slides quickly. Close your E-Trade account, don't go trading on this. And we can't guarantee that we can follow up with you in detail about everything we talk about today. But I would encourage you to follow us on social media, or register for one of our future webinars, so you can get more information on this technology. You are among the first to see it here at OpenStack.

So what is an open Ethernet drive architecture? It's quite a mouthful. If you look at the structure of the term, architecture is the noun: it is an architecture for deploying infrastructure, giving applications, especially scale-out applications, a new option to run services that can really benefit from being as close to the storage media as possible. There are a couple of other terms in there. Open, in the sense that there is a Linux environment in which developers can deploy code, and I'll talk a little more about that. Ethernet drive is a big part of what's inside the system: the drive interface is Ethernet. And you likely know why the world is moving toward object storage and toward Ethernet.
The reality is these things create a lot of data, and a lot of the applications we're working on in the OpenStack community are about getting to that massively concurrent, scale-out architecture for these types of applications. So this is not a product per se, but rather a building block for these architectures. In our view, and I think most folks at OpenStack would agree, having something that's open and flexible gives developers the most creative environment in which to deploy new applications. So we don't want to confine scale-out in any way; we want to really enable it. And we do that in a couple of ways, which I'll describe here.

Naturally, the infrastructure has to live somewhere. Actually, let's do a show of hands: how many of you are developer-oriented, coding types? Thanks for coming, because this is actually more for you than for the infrastructure folks. From an infrastructure perspective, though, people do love storage as an enabler alongside compute and networking. So we want, as I mentioned before, to enable application developers to distribute their storage services throughout the infrastructure in a scale-out way, much the way it's done today in a multi-tier infrastructure architecture with applications like Ceph, Swift, or Gluster. I'll go into those in a little more detail, but part of distributing those services is having some resources available as close to the storage media as possible. So let's go into what those resources are and what the architecture is all about. At its core, this architecture is not about a drive so much as it is about having a Linux environment in which to run software services.
And our belief here at HGST is that making that environment available to developers is going to open up a lot of creativity, both in optimizing today's applications and in creating new applications that can make use of the resources out there at the drive. So the resource there is a drive running Linux. Also at the drive are CPU and memory resources that allow code to run in that Linux environment rather than at a higher level on a server.

The example applications we have today, which I'll go through in detail, are all open source: we just grabbed the code, compiled it for that CPU environment, and were able to run it intermixed with existing nodes from those same applications running on Intel servers. So that's kind of cool. You don't want to have to change out your infrastructure to take advantage of a new architecture; in this case, the open Ethernet drive architecture fits right into the stuff you may already have running.

And what most people find the most exciting part is that this is a drive with Ethernet. Heck, Ethernet is a very cool thing from a search term perspective; it's what really catches people's eyes. We have a drive with Ethernet that allows the architecture to work end to end, but it's not just an Ethernet drive. Think of it as an environment that has Linux, where the resources are available to run storage services, to simplify the solution, and to allow a switched fabric to integrate into your existing architecture. The connectivity is Ethernet inside and outside the box.

Now, naturally, people want to know what's under the hood. And for my colleagues over there at HGST, shout out again: they want to talk to you. I'll show you a lot of stuff here on the big screen, but take a closer look over at Booth T5 to get more of a look under the hood. What it is, is a standard hard drive.
We happen to use a four terabyte drive as an example, but this technology can be applied to most any type of drive: a hybrid drive, SSD technology. A lot of the folks we've talked to so far believe there's a really compelling option here for large-scale storage, so a four terabyte drive is a good middle point to start from for development purposes. It has Ethernet connectivity, and it has an expansion of the ARM processor footprint that's already on a hard drive. Does everybody know that there's already an ARM processor on every hard drive? It's there to run the drive firmware. So we've expanded that with another ARM core, added RAM, and integrated the Ethernet complex onto the drive, which allows you to treat it as a node in a scale-out architecture, not just a drive inside of a node.

So what does the enclosure look like? To make it easier for our application developers to test this and really play around with it at scale, we have an enclosure that holds 60 of these drives in 4U. As you can see on the back, it has 10 gigabit Ethernet out the back; that's more of a rendering than an actual photo. You can see we have eight 10 Gig ports on the reference design enclosure there at our booth. It would connect up into your network, probably to your aggregation layer or even back to your core, because the top-of-rack style of connectivity is already built into the box, and the enclosure does appear as a Linux server. That gives you the ability to connect it into your environment and intermix it with the existing Intel-based nodes you might have today.

So what's running today? This is really more about software, and this is a great conference with a lot of open source collaboration. We have a lot of examples of the types of applications that are running today, so we'll go in and look at some of those. We have worked with Mirantis.
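For a sense of scale, the figures quoted above (60 drives of 4 TB each in a 4U enclosure, with eight 10 GbE uplinks) work out as follows. This is just back-of-the-envelope arithmetic from the numbers in the talk, not a vendor specification; the three-way replication factor is the one used in the Swift demo described later.

```python
# Back-of-the-envelope math for the reference enclosure described above.
# Figures come from the talk: 60 x 4 TB drives in 4U, eight 10 GbE uplinks.

DRIVES_PER_ENCLOSURE = 60
DRIVE_CAPACITY_TB = 4
UPLINKS = 8
UPLINK_GBPS = 10

raw_capacity_tb = DRIVES_PER_ENCLOSURE * DRIVE_CAPACITY_TB  # raw, before replication
uplink_bandwidth_gbps = UPLINKS * UPLINK_GBPS

# With 3-way replication (as in the Swift demo), usable capacity is a third of raw.
usable_capacity_tb = raw_capacity_tb / 3

print(f"raw {raw_capacity_tb} TB, uplink {uplink_bandwidth_gbps} Gb/s, "
      f"usable {usable_capacity_tb:.0f} TB")
```

So a single 4U enclosure lands at 240 TB raw, which is why the talk frames each drive, rather than the enclosure, as the unit of scale-out.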
We're using some of their Fuel technology to create a visualization, just so you can see what you're looking at in the demo. It was something we could put together really quickly, and you'll see in our booth that we have Swift, Ceph, and Gluster all running off of the same enclosure, which is really cool.

Going a little deeper into Swift itself, this is some CLI output from the demo showing that we have multiple nodes. The proxy resolves to three nodes for replicas: one is an Intel server, and two are open Ethernet drives inside our architecture. And if we look at which services are actually running, we didn't have to talk to anybody; we went to the internet, which is totally cool, downloaded the code, compiled it for that environment, and started running it. What you see here is essentially the storage node in a Swift cluster, and the node is not the entire enclosure, but rather the individual drive. Going forward, with things like erasure codes and having to plan for lots of different failure scenarios, having that granularity is something we've gotten good feedback on. But really, we're here today to open up the concept and get folks thinking of great ways we can either optimize existing applications, or develop new or future versions of these applications that are even more optimized for those resources running at the drive.

A quick look at Ceph: we have it visualized here. You can think of the Ceph OSD running on the drive itself, so that each drive becomes its own OSD, connected to your network through that switched fabric. From a Gluster perspective, we also have that: you can think of the brick services that would be in a Gluster cluster running on the drive. So in this case, we're not advocating that you eliminate servers per se, but rather that you optimize the deployment of the application where you can take advantage of those resources closer to the drive.
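The Swift demo above hinges on the idea that the proxy deterministically maps each object to three replica "nodes", where a node can be an individual Ethernet drive rather than a whole server. As a rough illustration of that style of placement (this is a toy sketch, not Swift's actual ring implementation, and the device names are hypothetical):

```python
import hashlib

# Toy sketch of hash-based replica placement, in the spirit of Swift's ring:
# hash the object path, then walk the device list to pick three distinct
# devices. NOT Swift's real ring code; device names below are made up to
# mirror the demo's mix of one Intel server and several Ethernet drives.

devices = ["intel-server-1", "eth-drive-07", "eth-drive-23",
           "eth-drive-41", "eth-drive-59"]
REPLICAS = 3

def place(object_path, devices, replicas=REPLICAS):
    """Return `replicas` distinct devices for an object, chosen by hash."""
    digest = int(hashlib.md5(object_path.encode()).hexdigest(), 16)
    start = digest % len(devices)
    # Walk the device list from the hashed starting point (a toy "ring").
    return [devices[(start + i) % len(devices)] for i in range(replicas)]

targets = place("/AUTH_demo/photos/cat.jpg", devices)
print(targets)  # three distinct devices, same answer every time
```

The point of the sketch is the granularity the talk emphasizes: because placement is per device, a single failed Ethernet drive only affects the replicas it held, not a whole enclosure's worth of data.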
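Erasure codes come up a couple of times in this talk, and per-drive granularity matters for them because each drive holds one independent fragment. As a minimal sketch of the underlying idea, here is simple XOR parity with two data fragments and one parity fragment; real deployments (e.g. erasure-coded Swift or Ceph pools) use Reed-Solomon codes with many more fragments, which is exactly the extra processing work the talk says won't be getting any smaller:

```python
# Minimal erasure-coding sketch: 2 data fragments + 1 XOR parity fragment.
# Real systems use Reed-Solomon codes over many fragments; this just shows
# why the loss of any one drive's fragment is recoverable.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = b"hello world!"              # pretend this is an object
half = len(data) // 2
frag1, frag2 = data[:half], data[half:]   # stored on two different drives
parity = xor_bytes(frag1, frag2)          # stored on a third drive

# Simulate losing the drive holding frag2: rebuild it from frag1 and parity.
recovered = xor_bytes(frag1, parity)
print(frag1 + recovered)  # b'hello world!'
```

Rebuilds like this run wherever the surviving fragments are, which is one concrete reason to have CPU and memory sitting next to the media.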
Naturally, you'd still need controllers or proxies and the other parts of the solution in the environment. And with more talk in the open source community about erasure code options for these open platforms, there's certainly going to be no less processing power required, and that's where we see an exciting opportunity to take advantage of those resources.

It's also more than just software-defined storage; those happen to be easy ones to demonstrate because they're open source. We see it as an opportunity for data-centric applications, whether around search or analytics or other workloads where you can ship processing to where the data is and leverage those resources as part of a scale-out architecture.

People often ask, hey, when can I buy one of these things? And the truth is, once we as an ecosystem come up with a great horizontal application that combines software with this hardware, that's when you'd see it productized. The reason for us being at the show, really, as I say here in the goal, is that we want to work with developers and operators of OpenStack and other distributed scale-out environments on ways to really exploit and leverage the architecture. So if you're a developer yourself and you want to get in, hackathon style, we will be putting one of our systems into the OpenStack Swift community test cluster in the near future. Not sure if you saw the press release today, but HGST and SwiftStack have partnered to put a large public test cluster in place so you can test your applications running on top of Swift. We're using our industry-first helium-sealed drives, our Ultrastar He6, in those enclosures.
Intel has also donated equipment to put that infrastructure together, so we'll be able to test with it, and we'll expand that test cluster with more equipment from our open Ethernet drive architecture for those folks who are interested. We'll also have a lab we can run online and work with folks; I'll be your point of contact if anybody's interested in learning more about how we can work together in that regard.

Before I go ahead and close up, I'll say that while you couldn't see the live demonstration here, it's live over there. I encourage you to go over, look at the equipment, open up the door, see exactly how it's deployed, and touch and feel our hard drives with the Ethernet interface. But don't rub your feet on the carpet before you do that: ESD and all that sort of stuff. Do go ahead and check it out, because it's really exciting.

And I want you to visit some of the other companies that are working with us. Inktank over there isn't going to give a shout out because there's nobody at the booth right now, but they are excited about the opportunities of this type of architecture, and Red Hat as well. Just on the other side of where HGST is, the folks at SwiftStack can tell you more about Swift running on this architecture, as well as more about that community test cluster you can take advantage of. Because oftentimes when we're doing development, we're developing on VMs, which isn't characteristic of big equipment, and it's hard to carve off some of that big physical iron to do your testing; that's why we've made the investment in that community test cluster.

So with that, I want to thank you for your time and encourage you to go see my friends over at HGST. They're ready to talk to you and show you more about the technology. Thanks again for your time, folks.