Live from Austin, Texas, it's theCUBE. Covering DockerCon 2017, brought to you by Docker and support from its ecosystem partners.

Okay, welcome back. I'm Stu Miniman with my co-host, Jim Kobielus. We're here at DockerCon, and happy to bring back to the program Giorgio Regni, who is the CTO of Scality. So good to see you again.

Hey, hi Jim, hi Stu, very nice to see you again.

Giorgio, I interviewed you at AWS re:Invent, so we talked about where you fit in the cloud environment. Here at DockerCon, bring us up to speed. You're a software-defined storage company. Where do containers and Docker fit into the offering that you have?

Absolutely. So we build storage for the enterprise. One of our goals is to simplify storage operations, because it's hard to actually build a petabyte-scale system. How can we make it easier for our customers to use? And one of the things that containers gave us is the ability to easily package our software and deploy it anywhere, right? For example, we have options: what do you want your interface to be for storage? Should it be on the client side? Should it be on the server side? Should it be somewhere else? With containers, it's very easy to automate, and one container can do a lot of things, right? So it's pretty easy.

Yeah, talk about how Scality fits into your customers' environments. My understanding is you work with Docker Swarm. Do you also work with Kubernetes?

Yeah, so I will talk about an announcement we made today. Just before I do that, one quick note: we follow the immutable container design. So when you have a container, you can kill it at any point in time, right? And another container will take over. There's nothing in our architecture that's a single point of failure. With Docker, it's very easy to do. We did that before, but Docker simplifies all this operational aspect for us.
Right. And so the announcement is... do you also do Kubernetes then, or is it just Docker Swarm right now?

Yeah, so there's a container automation war going on. We haven't picked a side yet.

Okay. Yeah, absolutely. Talk to us about your customers. How much of it is a pull from them, asking you about containers? How much is it just something you're building into your architecture because it makes sense going forward?

Yeah, so we work with very large enterprises. They don't know what the other department is doing. So sometimes you talk to the storage team and they tell you, we never deploy containers. But then if you look inside that company, you'll see that another group has deployed containers for the last two years in production, and they actually have a support contract with Docker, they have an enterprise agreement. So you have to find out: is there Docker experience? And 99% of the time there is Docker experience.

Yeah, it reminds me of Linux a lot. You know, 10 to 15 years ago, you'd talk to a big group: are you doing Linux? No. And then they're like, wait, Bob's been doing Linux a bunch, and we're running it on everything.

Absolutely, same thing, yeah.

And there's been such a huge explosion of what's been happening. I've talked to some of the vendors here that have been working with containers for eight, ten years almost, but Docker has really helped bring it to the masses. So can you maybe speak to how it's changing your environment as CTO, how it influences your vision of the future?

Yeah, so as a CTO, it allows us to go from the development platform on the laptops of our developers, to the simple one-server deployment of our open source version that can start on any VM or any one machine, down to distributed systems with thousands of servers and hundreds of petabytes. It's all the same container. So this flexibility is huge.
And for the continuous delivery and continuous integration platforms that we have, being able to use the exact same code from the laptop workstation to the actual deployment improves quality a lot.

All right, Giorgio, the keynotes today talked about a lot of open source things. There's the Moby project, there's LinuxKit. Are you guys involved in any of the open source? How are your customers embracing open source these days?

So Docker is releasing a lot of software. We cannot take everything and bring it to the enterprise. We're a software company that sells products, so we don't actually own the platforms; our customers do. So we need to go a little bit slower. Docker is faster than us in releasing new features. But that means the feature that was released last year, like Swarm, is now ready to be used in production by our customers.

And that brings me back to the announcement from today. The last time we talked, in Las Vegas, our open source was new and we had 50,000 downloads. Now we have 250,000 downloads. So in less than six months, I think it's four and a half months, we added 200,000 downloads. One of the reasons for that is it's so easy to use it with Docker. And then people in the community were telling us that they need to deploy it in a fault-tolerant fashion, so being able to lose a machine and continue having the storage working. Which makes sense, but not at the scale of a RING, not at the scale of our multi-petabyte systems; something in the middle. So we started to look at developing our own automation, our own fault tolerance. And we said, wait a minute, Docker is doing that. They built Docker Swarm, and that's exactly what we wanted to do. So can we use that? Our release from today is that you can actually deploy our storage system using Docker Swarm, with a few command lines. It will automatically be fault tolerant: if you lose a machine, it will restart on another machine.
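The fault-tolerant behavior Giorgio describes, where Swarm reschedules a lost machine's tasks onto surviving nodes, can be sketched as a small simulation. This is purely illustrative of the idea, not Scality's or Docker's actual code:

```python
# Minimal sketch of Swarm-style rescheduling: a scheduler keeps a
# service's replicas running, moving them when a node is lost.

class Scheduler:
    def __init__(self, nodes, replicas):
        self.nodes = set(nodes)
        self.replicas = replicas
        self.placement = {}  # replica index -> node name
        self.reconcile()

    def reconcile(self):
        """Place every replica on a live node, round-robin."""
        live = sorted(self.nodes)
        for r in range(self.replicas):
            if self.placement.get(r) not in self.nodes:
                self.placement[r] = live[r % len(live)]

    def lose_node(self, node):
        """Simulate a machine failure: its tasks restart elsewhere."""
        self.nodes.discard(node)
        self.reconcile()

sched = Scheduler(["node1", "node2", "node3"], replicas=3)
sched.lose_node("node1")
# every replica now runs on a surviving node
assert all(n in {"node2", "node3"} for n in sched.placement.values())
```

In real Swarm terms, this is what `docker service` does for you when a node stops reporting in: the desired replica count is reconciled against the nodes that remain.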
And it's all load balanced automatically, and with security as well, because communication can be encrypted. So you get all of these benefits just by using Swarm; we don't have to code anything.

So we'll follow up on that. Solomon talked this morning about how Docker will be where you want it to be: on-premises, in the public cloud, all around. Talk a little bit about your software, the breadth of support you have. We talked to you at AWS; I think you guys support Azure. What's driving you to certain environments? What are your customers doing, and what is that breadth that you guys offer?

So a lot of the things Solomon said resonate with our customers. One thing is that you don't want to be stuck with one platform. You want the liberty to be able to pick and choose and change. And storage is very sticky: if you have a petabyte somewhere, it's going to be hard to move. But what you can be sure of is that next year it's going to be two petabytes. So when the extension comes in, you want to be able to select your hardware vendor for private, but also for public. What about if you could decide the next four petabytes go on Google Cloud and the next five petabytes go on Azure, so that you're not stuck with any of them?

And so what we're releasing, what the first release delivers, is the ability to deploy your S3 service, so our object storage service, and target multiple storage backends with the same instance. They can be local, so local volumes, drives on your machine, very simple stuff. Even an NFS or ZFS mount point works as well. It can be public, using AWS, and we're adding Azure and Google Cloud. So the same S3 code base can actually give you different locations, and the locations can be hybrid: local, private, public, you name it.

Another key focus that Docker talked about, especially in the open source community, is security. Can you speak to how security fits into your environment? Anything in your announcement that enhances the security pieces?
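The multi-backend idea Giorgio describes, one S3 endpoint routing buckets to different storage locations, could be sketched like this. The names and structure are hypothetical, not Scality's actual configuration format:

```python
# Illustrative sketch: one S3 service, many backends, chosen per
# bucket "location". Backend names and fields are made up.

BACKENDS = {
    "local-disk":  {"type": "file",   "path": "/var/data"},
    "nfs-archive": {"type": "file",   "path": "/mnt/nfs"},
    "aws-useast":  {"type": "aws_s3", "region": "us-east-1"},
    "azure-west":  {"type": "azure",  "account": "example"},
}

BUCKET_LOCATIONS = {
    "hot-data":  "local-disk",
    "cold-data": "aws-useast",
}

def backend_for(bucket, default="local-disk"):
    """Route a request to the backend its bucket was placed in."""
    return BACKENDS[BUCKET_LOCATIONS.get(bucket, default)]

assert backend_for("cold-data")["type"] == "aws_s3"
assert backend_for("new-bucket")["path"] == "/var/data"
```

The point of the design is that the client speaks plain S3 either way; only the routing table knows whether the bytes land on a local drive, an NFS mount, or a public cloud.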
Yeah, so there's a lot of key management to be done: access keys, authentication keys, SSL keys. And each vendor is trying to build their own; they're trying to think about their own way to store that sensitive information. With Docker, we haven't done it yet, but what Solomon talked about is: what about if you use Docker as your security authentication provider, so that it's one shop for everything else? This is something I'm going to look at. We haven't implemented it yet, but I'm going to look at it.

The other thing that was said, I think in the keynote as well, is portability. We developed our own authentication engine called Vault, which actually implements the Amazon IAM interface, so identity and access management; it's pretty standard. But if you use Vault, the same authentication token works locally, works on AWS, works on Azure, and also works on Google Cloud. So as an IT admin, I can just use my Active Directory, connect it to the Scality Vault, and if a user leaves the company, I can just delete them from Active Directory and they will disappear from all the clouds in one big, portable, transparent way. So this is the kind of thing we look at as well.

With multi-level access controls and role-based... so groups, role-based delegations and so forth?

Delegation is in there as well. So it was a big bet: last year we decided to implement IAM, which nobody else has done, and it pays off a lot, because a lot of our customers are banks, insurance companies, and they need that level of security. So it's a big advantage, right?

Giorgio, one of the big things that's been talked about for about the last six months or so is how things like IoT are really going to drive edge computing. I think back to the early days of object storage; I'm curious how that whole development fits into what you're doing and how you think about storage.

So we're looking at IoT very closely.
There's a lot of volume, but the volume arrives after the data has been crunched. There's some kind of consolidation, right? And the object store is perfect for that layer. So let's say the data starts at the edge, with very precise granularity, then it gets compressed into some kind of time-series data, and this fits very well in the object store. For the edge storage itself, I don't think there's a solution today, and there's no standard either. So I'm looking at this and seeing what's going to happen. I think object stores are great for storing all the archives, but not good for the real-time IoT data. But I'm still watching to see what standard is going to emerge.

Yeah, federated object storage for the fog, you know? The IoT, yeah.

And it's both a database-type workload and object storage. So it's fascinating, but there's no answer yet. I don't think so. And if you guys have seen one, tell me; I'm not aware of it.

Okay, Giorgio, so you've got the announcement. Can you tell us what else is going on with Scality this week? Have you had any customer conversations this week that have stood out to you?

Yeah, so we have a few partners at DockerCon, so it's great to be able to meet them here. I'm also looking at automation. Docker Swarm is one, SwarmKit, but there's also Kubernetes and Mesosphere. They're all here this week, so I'm going to talk to them. And HPE, which is one of our partners, is here too, so we're going to talk about this as well. And I need to find some time to understand the security model we talked about.

All right, well, Giorgio, really appreciate all the updates here. Want to give you the final word on what's exciting you? You talked about some of the partner things, but anything else you'd want people to take away from this show?

Yeah, so I think the hybrid model for storage makes a lot of sense, because you don't want to be stuck with one provider.
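The edge-to-object-store flow Giorgio describes earlier, fine-grained readings at the edge getting compressed into time-series data before landing in the object store, can be sketched as a simple consolidation step. This is a minimal illustration of the idea, not any product's actual pipeline:

```python
# Sketch: average per-second edge readings into coarser time buckets,
# the kind of compact time-series object that archives well.
from collections import defaultdict

def consolidate(readings, bucket_seconds=60):
    """Group (timestamp, value) pairs into fixed windows and average them."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

raw = [(0, 1.0), (10, 3.0), (65, 5.0)]  # raw edge data, second granularity
archived = consolidate(raw)
assert archived == {0: 2.0, 60: 5.0}
```

The raw stream is the part with no settled home yet, as Giorgio notes; the consolidated output is what fits an object store naturally.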
And I was just going to say that in a few months, in June, we're going to make a big announcement that will show that with Scality, you can leverage any cloud and automatically manage your data across multiple providers. And we're going to give a hint of that next week at NAB, where we'll be presenting, with a large customer, some of the prototypes we've been working on.

Giorgio Regni, really appreciate getting to talk to you again. We'll be back wrapping up day one of DockerCon 2017. You're watching theCUBE.

Thanks for watching theCUBE.