Hello, can everyone hear me? Why don't you guys come on in here? Well, we're going to talk about containers. This is a pretty cool OpenStack Summit. This is Diamanti's first OpenStack Summit, and we were kind of hoping that when we came with a container message, people would be interested. It's been really cool to speak to so many people who are either already doing containers or who plan to be very soon. If you'd like to stop by our booth, A15 in the front there, we'd love to spend some more time with you, talk about your container requirements, show you a demo, and enter you to win a Ferrari drive for a day, if that sounds interesting to you. So I'm going to talk a little bit about why people are doing containers. This may be something you've already seen, but there are some real benefits driving developers to rapidly adopt containers. There's the promise of application portability: no one wants to be locked in to a particular cloud provider or vendor; you want to be able to deploy your application wherever you want it to be. There's the developer productivity that comes from being able to decompose a problem into smaller, bite-sized chunks. And when you take those microservices, you can very easily scale out your application. So this is the promise of containers, and it's really very exciting. I encourage you to go to some meetups and check it out. Like a lot of new technologies, though, there are a few rough areas. Look at how easy it really is to get application portability and to move your containers around. Google did a great job at GCP Next talking about some of this complexity. You don't just do your docker run. You have to be able to deploy all the different components of the application, including all your infrastructure, your network, your storage, which are still thorny issues for a lot of people. And that takes a lot of time to actually get done.
Then you think about improving productivity. Well, now you start stepping through all the individual steps you need to actually deploy that container and make it operate at scale with the performance you require. And unfortunately, when people get through this process, it's typically a highly manual one. People end up hard-wiring their systems, dedicating infrastructure to their containerized applications. They deploy individual databases, individual parts of their applications, onto dedicated hardware. At the end of the day it does work, but it's very expensive, and you end up with that fixed configuration. And to me, that's the antithesis of cloud and containers and agility. So we think there is a better approach. At Diamanti, we took a fresh look at what it means to run containers at scale, and we purpose-built what I believe is the only purpose-built converged infrastructure for containers, one that allows you to rapidly deploy your containers into production. I'll give you a little demo to show you exactly how it works. The first thing we offer is virtualized I/O, network and storage, for every container in your environment, so you can guarantee the performance and scale you get for each of your containerized workloads. We provide isolation for network and storage in the same way your container runtime, Docker or rkt, for example, provides a degree of isolation for CPU and memory at the process level. So we make I/O a first-class citizen, so you can deploy your containers and know you're going to get the resources you require. We package our product as a converged infrastructure appliance that you cluster together for high availability and linearly scalable performance. So come by our booth and we'll show you exactly how it works. The benefit of our approach is easy container migration with no vendor lock-in. There are no code changes required and no special drivers at all.
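To make the runtime analogy concrete: Docker already lets you cap a container's CPU, memory, and even block device I/O at the process level, which is the kind of isolation being extended here to guaranteed network and storage performance. These are standard docker run flags, but note that Docker's I/O flags are throttles (caps), not guarantees; the device path, image, and limit values below are illustrative only:

```shell
# Process-level CPU and memory isolation from the container runtime:
docker run -d --cpu-shares=512 --memory=512m nginx

# Docker can also throttle block device I/O per container
# (a cap, not a guaranteed reservation); values are illustrative:
docker run -d \
  --device-read-iops /dev/sda:1000 \
  --device-write-iops /dev/sda:1000 \
  nginx
```

The distinction the talk is drawing is between throttling (a ceiling another workload can still push you below) and reservation (a floor the infrastructure enforces for you).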
You basically take your containerized applications and deploy them with your open source orchestration tools; we work with Kubernetes, Docker, Mesos, and others as well. We make it very, very easy to set up your network and storage for your containers. It's done automatically, in fact. You don't have to manually tweak and tune your infrastructure as many people are accustomed to doing. And we work with your choice of Linux and your choice of application. So it's very easy to get started and very quick to deploy your containerized applications. This is a shot of our graphical interface, and we'll talk more about this. We provide a very easy way to deploy network interfaces and storage volumes for each of your containers. Think about how you're going to introduce containers into your current environment. How are you going to make sure your existing services, DNS and DHCP, load balancers and firewalls, continue to work? It's actually a significant challenge. There's a lot of work going on, and you may see others here talking about that as well. Well, we allow you to define in software however many interfaces you want for each of your containers. We allow you to define in software the storage volumes you want for your containers. But we do it all with guaranteed performance, so you know what you're going to get. The promise of containers is enabling you to deploy a very high density of workloads, lots of containers on the same infrastructure. Well, that brings with it the problem of noisy neighbors. Anyone who's ever deployed applications in a public cloud knows that one minute things are fast and the next minute they slow down because of all those workloads beating against each other. So we allow you to deploy these workloads, with arbitrary networking and arbitrary storage, but you know you're going to get the performance your workloads actually require.
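The guaranteed-performance idea above can be modeled very simply: each node has a fixed I/O budget, and a container is only admitted if its tier's guaranteed IOPS still fit in that budget, so no workload can become a noisy neighbor. This is a deliberately simplified, hypothetical sketch, not Diamanti's actual scheduler; the tier names and IOPS numbers are made up for illustration:

```python
# Simplified model of QoS-tier admission control for container I/O.
# Tier names and IOPS guarantees are hypothetical, for illustration only.

TIER_IOPS = {"high": 20_000, "medium": 5_000, "low": 1_000}

class Node:
    def __init__(self, iops_capacity):
        self.iops_capacity = iops_capacity
        self.reserved = 0  # IOPS already guaranteed to placed containers

    def can_admit(self, tier):
        # A container is only admitted if its guarantee still fits.
        return self.reserved + TIER_IOPS[tier] <= self.iops_capacity

    def admit(self, tier):
        if not self.can_admit(tier):
            raise RuntimeError("insufficient I/O capacity for tier %r" % tier)
        self.reserved += TIER_IOPS[tier]

node = Node(iops_capacity=25_000)
node.admit("high")            # 20,000 IOPS reserved
node.admit("medium")          # 25,000 IOPS reserved
print(node.can_admit("low"))  # False: no headroom left to guarantee another tier
```

Contrast this with best-effort placement, where everything is admitted and performance degrades unpredictably under contention; reservation-based admission is what turns an SLA tier into an actual guarantee.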
And if you look at the kind of utilization you can get: typically what we've seen is people running their infrastructure at 10% utilization, because they want that performance guarantee, and the only way they can get it is by over-provisioning systems and manually dedicating hardware. We allow you to run at 80-90% utilization, because every workload, every container, has guaranteed access to its network and storage I/O resources. And again, this is all with standard Linux; no special coding, no special drivers required. We make it very easy for you to inspect performance and see what the configuration is, if you ever need to troubleshoot, or if you want to create a new application or scale your application. In most environments it's not easy to get visibility into what's going on, to know where my performance bottlenecks are as I'm trying to scale out my application. Well, first of all, we make it automatic. As I'll show you in a second as we go through a little demo, we allow you to use open source, vendor-agnostic APIs to define your network and storage requirements. And then, if you want to close the loop and actually see how those are being delivered, see how your application is performing, you can do that through our graphical interface, and you can export that information to third-party systems like ELK or any other monitoring system you may have. So let's take a look at how we do this. How do we do this in a way that's truly vendor agnostic, with no lock-in and no customization? Well, the first thing is we allow you to define your networks so they work with your existing data center environment. We have a very simple command line where you can define your interfaces. We then allow you to create storage volumes, a very, very simple approach to get a block resource that you know is going to scale. And this is an example of open source orchestration working with Kubernetes.
We've contributed code to the Kubernetes 1.2 release that came out last month, which allows you to take your existing pod definition and simply add vendor-agnostic, open source options to define the performance requirements, the quality-of-service performance tiers, that you have for network and storage in that container. And now what do you do? You use your open source tools to deploy the application. There is no special command required to get those applications deployed. Now, in a real environment, you're going to have multi-tenancy. You're going to have many different applications operating at the same time. Maybe I deploy my application now and it's working properly, but then you show up two hours from now and decide you're going to deploy your application. Well, what happens? Are we going to conflict with each other? Is your application going to run slower? Is mine? Well, every application in the Diamanti environment operates with guaranteed performance. And what you can see on the screen is that each container, here we have a multi-tenant environment using standard Docker images, MongoDB, MySQL, Nginx, whatever you want to run, has an assigned SLA or performance tier. So they all coexist, and they're all sharing very high-throughput I/O. We're delivering literally hundreds of thousands of IOPS, all in a guaranteed environment for each individual container. And the way we can do this is because of our appliance model. We create these virtualized resources, network and storage, connect them to your choice of Linux operating system, and then map them into the container namespace, so each container has direct access to resources you know are going to perform at the level you require. What does this all really mean at the end of the day? There's no more manual tuning.
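The pod-definition change described above looks roughly like the following. This is a hedged sketch: the annotation keys, tier names, driver name, and option values are hypothetical placeholders, not the exact schema that was contributed; the point is that performance tiers ride along in the standard pod spec rather than in a proprietary deployment tool. (FlexVolume, the vendor volume plugin mechanism shown here, did land in Kubernetes 1.2.)

```yaml
# Hypothetical example: a standard Kubernetes pod spec with added
# vendor-agnostic QoS annotations (keys and values are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: mongo
  annotations:
    qos.network/tier: "high"   # guaranteed network performance tier
    qos.storage/tier: "high"   # guaranteed storage performance tier
spec:
  containers:
  - name: mongo
    image: mongo:3.2
    volumeMounts:
    - name: data
      mountPath: /data/db
  volumes:
  - name: data
    # FlexVolume (added in Kubernetes 1.2) lets vendors plug in volume
    # drivers; the driver name and options here are illustrative.
    flexVolume:
      driver: "example.com/blockvol"
      options:
        size: "100Gi"
```

Deployment then stays entirely standard: `kubectl create -f pod.yaml`, with the QoS options interpreted by the infrastructure underneath.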
A lot of us, from an IT background, are used to months-long efforts of customization and special deployment procedures. With Diamanti, you use your standard open source tools, there's nothing special or proprietary you're doing, and you're not spending months tuning performance. You simply request what you want in a simple declarative model and it's delivered to you literally within seconds. We work with some users who spend a lot of time custom-engineering IOPS and latency and network bandwidth and all these other complex topologies. We break that down to a few minutes of setup and literally zero time spent trying to extract the performance level you're looking for. All of this with a standard runtime and your existing applications. What we've also seen, unfortunately, is that when people do this level of customization, they lock themselves into a particular architecture, and so six, 12, 18 months later, when things change, they have to start from scratch. They do a forklift upgrade, bring in whole new infrastructure, and start again. With Diamanti, you don't do that. You need more performance? You ask for it. You say, I need more storage, I need more network, and you get it, because it's all controlled in software and enforced in our appliance. We also looked at the performance of data. In a traditional shared storage model that many of us are familiar with, iSCSI, Fibre Channel, NFS, whatever you may be using to get access to your data from your application, best case you're looking at a millisecond of access time, and you're looking at 90% I/O wait, which is slowing down your application. That's why people over-provision; that's why people are running at 10% utilization. We take a zero off that; we divide it by 10. We give you 100 microseconds of shared storage access.
So if you want to be able to deploy an application literally within seconds and get higher performance than you have today, you can take our approach; stop by and we'll show you exactly how it all works. And you get that with full mobility: you're able to deploy your application anywhere in the Diamanti environment and get access to your data, protected, with low latency and guaranteed performance. Because you're not over-provisioning, because you're able to take advantage of what you're actually buying, right? You're not buying something and only using 10% of it. We're delivering 6x the utilization. So it's fewer things you have to manage, less overall cost, less power, less overhead in your data center. So this is how we talk about a standards-based approach, where you can migrate your containers and keep them as is, you don't have to change anything, but you have the ability to get much greater performance and much faster deployment without ever being locked in. You can put your containers on Diamanti. If you like it, you can stay; if you don't, you can leave. You haven't lost anything, because you haven't made any changes or customizations to your application. If you prefer a particular open source orchestration, or you build your own, we work with that as well. We have open APIs, we are committed to the open source ecosystem, and we contribute code on a regular basis to make sure we can extend the vendor-agnostic notion of network and storage requirements in this container landscape. And think about the time to market: it typically takes people weeks and months of planning to actually deploy their application at the level of scale and performance they require. We make it very easy to do within seconds. So it's much easier to get your application deployed and operating as you need it to be.
We're working with users in the media space, in the financial services industry, in the service provider industry, and in the web infrastructure industry. These are all users who care about deterministic results: getting their applications deployed and making them successful without having to re-architect and constantly retune their infrastructure and their applications. We'd love to talk to you further. We'd love to learn more about your container requirements. We'd love to enter you for a chance to win a Ferrari for a day. So please come by, yeah, you like that, there we go. So come on by, A15, right there by the entrance. Please stop by; we'd love to hear what you have to say. And I'll be around here if you want to ask any questions. Happy to take those questions. Thank you very much.