Without any further ado, I'd like to hand it over to our associate director of the DC office, Mr. David Barbell, as he's going to take us to a world beyond Docker. So I'd like to start off by just polling the room. Who here has used Docker? Who here has heard of Docker? Okay. So before I talk about Docker, I'd like to start by just talking about computers. This is a basic computer. You've got hardware: your CPU and your memory and your USB and all of your stuff. And then you've got an operating system that controls all of those things. It provides this really nice abstraction layer above your hardware. And then above that, you've got your applications. So your application talks to your operating system, and the operating system talks to the hardware. And what's really interesting about this is that nothing cares about anything except the layer below it. The application only cares about the operating system. The operating system only cares about the hardware. So now let's talk about virtualization. Back in the 60s, somebody at IBM figured out that you could take that basic computer and run it all in software. You could simulate that hardware in software, and so you could spin up these virtual computers. It looks kind of like this. You have the actual hardware, and then you have the hypervisor, which is a layer that can spin up these virtual machines, these virtual computers. And then you have the virtual machines themselves, which have something that looks a lot like hardware to the operating system but is actually being run in software. And then you've got the same operating system, with the application and all the application stuff running on top of it. So from the perspective of the application and the OS, everything is the same. But what you can do is run multiple virtual computers on one actual computer.
What's also cool is you can take each one of these virtual computers and write it out as a file. You can save it down to a single file, which you can email around, FTP around, or put on a website. You can do whatever you want with files. So this is virtualization. It really started to come to the forefront back in the 80s when it hit the PC market. And then when computers got big enough and cheap enough to effectively run a lot of computers on one piece of hardware, probably the early 2000s, companies like VMware made a lot of money selling virtualization technology that allowed you to consolidate all sorts of things onto a single server. But each one of these is isolated. Each one is completely isolated from the others. This one thinks it's alone in the world and can't directly touch anything in any of the others. So it's really good for security reasons too. So what is Docker? Docker is containerization. And this is important: let me talk about what Docker is before I talk about what it isn't, and about what's beyond Docker. Docker and containerization is very similar to virtualization, except you don't have the hypervisor and virtual hardware. Instead you have the same underlying computer, and you only need one of those. And then you have an operating system that can run containers above that, and it creates little virtual operating system spaces. It can simulate, for the application, your own file system, your own user namespace, all the things the operating system provides. And so from the application's perspective, it has complete control of the computer. It provides the same kind of isolation, the same kind of boundaries, as virtualization. But each one of these is really, really fast to spin up. It's really cheap, because you don't have that virtual hardware. You don't even have an additional OS; with virtualization, you have a full OS for every single virtual machine.
This is just a basic OS process. So it's very, very lightweight, very, very fast to spin up. You can spin up one of these things in under a second. And what's cool is you can also save these off, so that you have your application, with its own view of the world and all the supporting libraries and things like that, saved into a file, same as you would a virtual machine. You can share it around just like you would a virtual machine. Now, one important note here: if you look at it this way, containerization in Docker is really just cheap virtualization. It allows you to save off your application, all of its state, and all of the things that support that application, and share it around as a file. Virtualization provides exactly the same functionality. The difference is that this is dramatically cheaper and dramatically faster, with a lot less overhead. And that's one of the reasons it's so popular right now: with virtualization you can only run three or four of these things, and they take forever to start up, because you're starting a full computer every time you start a virtual machine. But these, you can spin up incredibly fast, and they're really, really lightweight. So let's look at the procedure for Dockerizing an application and then putting it onto another computer. You write your application code, because your app is the stuff you really care about. What you're generally trying to achieve is: you have an application, you want to get it onto a computer, and it can do all the things it needs to do. So you write your code. You put your code into a Docker container, or any other kind of container using a different container technology. You configure it. You configure all the things inside that container to run your application. You probably need to add a web server. You need to add supporting OS libraries, language libraries, things like that. You might need to configure the web server.
You might need to set whatever permissions you need to make sure the security is okay. And once you feel you're done, you save it. You save off an image; that's the file that is your container. And you push it up to some remote repository, somewhere, hopefully, on the Internet, where you can then pull it down to another computer. And then it has everything it needs. It just spins up everything that's in that file and you can run it. So it makes deployment of that Dockerized application very, very easy. It's just pulling down a file, opening it up, and running it. Now, this is really, really cool and incredibly useful. However, it doesn't solve all of your problems. One of the first problems it doesn't solve is scheduling. So what is scheduling? Let's imagine a world where we have three servers; this is a made-up example. You have 12 instances of application A, and it's pretty easy to divide that among three servers. You say four per server, okay. Four, four, four. All right, great. And this server has some capacity left. This one has some capacity left. This one's full; it's used up all the memory and disk space and all the things, so it can't really fit any more applications. So say now we want to run a couple of instances of application B on this. Well, where do you put them? Well, you can put one here and one here. You've got capacity. Okay, now say you want to spin down a couple of instances of application A. From 12 you want to drop to nine. Where do you start taking those away? Do you take two here? It starts to get very confusing, especially as you start adding and removing applications and scaling up and scaling down. Keeping track of all those things is a very complex problem. And one of the things you'll probably want is a piece of software called a scheduler that will oversee all of this: who puts what where.
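The Dockerize-and-deploy procedure described above, configure the container, save an image, push it up, can be sketched with a Dockerfile. This is a minimal, hypothetical example; the base image, file names, and registry address are all assumptions, not something from the talk:

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
FROM python:3.11-slim              # base OS layer plus language runtime
RUN pip install flask gunicorn     # supporting libraries and a web server
COPY app.py /srv/app.py            # the application code you actually care about
RUN useradd --system appuser       # set permissions: don't run as root
USER appuser
EXPOSE 8080
CMD ["gunicorn", "--chdir", "/srv", "--bind", "0.0.0.0:8080", "app:app"]
```

Saving and sharing the image then looks roughly like `docker build -t registry.example.com/myapp:1.0 .` followed by `docker push registry.example.com/myapp:1.0`, and on the other computer, `docker run -p 8080:8080 registry.example.com/myapp:1.0` pulls it down and runs it (the registry address here is made up).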
This is one of the first problems that you tend to run into when you're running Dockerized infrastructure, which is why there are a lot of task schedulers out there. The big ones are Mesos; Diego, which is Cloud Foundry's task scheduler; and Kubernetes. So what's another problem that Docker doesn't solve by itself? External services. So this is a web app. Say you've got your web app here. This could be a Ruby web app, a Spring web app. You've got all these web apps running in these containers on your web server, and they need to connect to a database in order to store some application state. So you could hard code the address of your database here. But then what happens if the database moves? What happens if you need to change something? Well, then you'd have to find out what the new address is, rebuild your Docker images, and redeploy them. What you really want is something that can dynamically tell all of these things in these containers where this external resource, this database, is. And this is true for all sorts of things. This could be logging services. This could be all sorts of external services that your application might need to find. And Docker doesn't solve this for you. You have to have something that points all of your containerized applications to that database, to that external resource. Another problem: routing and load balancing. Say we've got three applications, A, B, and C. They're all at example.com: A.example.com, B.example.com, C.example.com. Say you want to go to B.example.com. The web request comes in, and then you need to have something that routes those web requests to the containers that have application B, so that B.example.com will get B rather than A or C. It's pretty easy to describe. It's actually really hard to solve.
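One common way to avoid hard coding the database address, as described above, is to inject it from outside the container, for instance as an environment variable. A hypothetical docker-compose fragment sketching the idea (every name and address here is made up for illustration):

```yaml
# docker-compose.yml (hypothetical names throughout)
services:
  webapp:
    image: registry.example.com/myapp:1.0
    environment:
      # The app reads DATABASE_URL at startup instead of baking the
      # address into the image. If the database moves, you change this
      # one value and restart; no image rebuild, no redeploy.
      DATABASE_URL: "postgres://db.internal.example.com:5432/appdb"
```

Fuller platforms take this further with dynamic service discovery, but even this simple pattern keeps the external resource's location out of the container image.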
Because each one of these things needs to continually register; this thing needs to know somehow which applications are where, and be able to route requests directly, not just to a server, but to specific containers on a server. So this is actually a fairly sophisticated problem to solve. Another problem is logging. You're running your applications on a server. Hopefully your applications are doing things. They're taking web requests. They're throwing off error messages. They're streaming access logs, things like that. You want to be able to capture that in some way, and you can't just have the logs inside the container. You need to be able to get at them in some consolidated way. Also, these containers are bouncing up and down. If the logs are in the container, what happens when the scheduler moves it, when a container gets destroyed? You really want some kind of mechanism for automatically streaming the things that are happening in here out to some centralized logging service, and then aggregating all of those logs together. Now, it gets even more sophisticated when you're running different applications streaming into this log service, because you want to be able to see: what is my entire cluster doing? What is application A doing? I want all of the events for application A. So it needs to know that log events from the instances of application A should go together somehow, so that you can get a unified view of what's happening with application A, rather than having all of your applications mixed together. So your logging service needs to be container aware. It needs to know exactly what is streaming into it and be able to parse all that stuff apart. There's another problem Docker doesn't solve: packaging. So what do I mean by packaging? Let's look at our web app again. Web requests are coming in to, say, application B. Let's dive in on this container for application B.
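The log-streaming idea described above, get stdout and stderr out of the container and off the host, tagged per application, can be sketched with Docker's built-in logging drivers. A hypothetical compose fragment, assuming a syslog collector at a made-up internal address:

```yaml
# docker-compose.yml (hypothetical names and addresses)
services:
  webapp:
    image: registry.example.com/myapp:1.0
    logging:
      driver: syslog            # ship the container's stdout/stderr off-box
      options:
        syslog-address: "tcp://logs.internal.example.com:514"
        # The tag lets the central log service group events per
        # application, even as containers move between hosts.
        tag: "application-a"
```

This only handles the transport; the aggregation side, stitching together all the "application-a" streams into one unified view, is what the container-aware logging services in the platforms discussed here provide.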
And inside this container, inside each one of these containers, there are going to be several pieces. There's going to be a web server so that you can actually handle web requests. It's going to have libraries: OS libraries, programming libraries, Ruby gems, Java jars, whatever. And then you have your application, the code that you've written that you really care about. What happens if there's a security vulnerability in one of your libraries? Or what happens if you want to tweak the configuration on your web server, say, increase the amount of RAM available to your web server? The way you do that is to rebuild your Docker image: make the change in the image, push that image up, and redeploy it using Docker in the standard fashion. So that means, in this case, any time you want to make any change to, say, the web server that application C uses, you would have to rebuild and redeploy all of application C. And if it's a web server that all of them use, like say you're using Tomcat for everything, you're going to have to rebuild all of the images for all of your applications and then redeploy all of those images. What you really want is something that allows you to worry about just this, the application code, and a service that can package all this stuff, handle the web server and the libraries, deal with all the stuff you don't care about on a daily basis, and have that change independently. So you want to be able to say: hey, I've changed my application, build a new image for all the containers that have that application in them. Hey, we need to patch the Tomcat web server, rebuild all of the images for the applications that use Tomcat and redeploy those. So packaging is actually one of the big pain points in managing a large, Dockerized infrastructure.
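To make the packaging pain concrete: in a plain Docker setup, each application image typically layers its code on top of a base image that carries the web server and libraries. A hypothetical sketch (image names and paths made up):

```dockerfile
# Hypothetical Dockerfile for one Java application among many.
# The base image carries Tomcat plus its OS libraries.
FROM tomcat:9.0
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war
```

The coupling is in that FROM line: when the Tomcat base needs a security patch, every application built on it must be rebuilt against the new base and redeployed, one image per application. Docker itself does not automate that fan-out; that is the packaging problem the platforms discussed here (Cloud Foundry's buildpack approach, for example) try to solve.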
So, all of these needs, all of these things Docker doesn't do by itself: what's beyond? What's the next step? What we're seeing now is a lot of platforms, a lot of software projects, that solve these problems using containers. There are a ton of schedulers, there are routing services, there's logging, and a lot of these things are being consolidated into unified platforms. Mesos and DC/OS is one that's very popular. Kubernetes is another. Cloud Foundry is a really big one. And all of these solve these things at various levels of abstraction. Most of them do all of them except packaging; Cloud Foundry solves the packaging piece as well. What's interesting about these platforms is that they are container management platforms, but as they get more sophisticated, as they get better at what they do and at dealing with all the unsolved problems with containers, you find yourself worrying about the actual containers less and less. Because you have all of this automated logic that is rebuilding your container images, scheduling your container images, pushing things up and pulling things down and managing it all for you. And so the future of ops and the data center and containerization is not having to worry about containers, because you have platforms that will solve that for you. And that really is what I see as the next frontier. So with that, I'll open it up to questions. As somebody who owns a business that has a technology product, if I am a non-technical person, why should I care about this? Why does this impact my bottom line? Is this just an engineering thing, or is there a big product or business angle? That's a good question. The big pain point that any business that writes software has is not writing the software but running the software. And if you look at a lot of companies that run a lot of software, there's a lot of overhead to running that software.
If you're running your own data center, you have to rack hardware, you have to wire cables, you have to come up with network configurations, operating system deployments, all of the pieces that this largely does for you. You have to do all of these things anyway. You have to figure out: where are my applications going to run, which computers are they going to run on, how do they connect to their web server, how do they connect to their logging service, how do they connect to their database, how do they do all of the things that we're talking about. And it's largely been a manual process. It's been very labor-intensive in the past. Containerization is a huge step forward for deployments, for actually putting something on a computer. It makes it very, very easy and very cheap. What all these other things do, the projects like Cloud Foundry and Kubernetes, is make it really cheap to do all the other stuff. So we're removing all of the manual labor of actually running your software. And it frees up your people. Rather than worrying about, I'm going to write my application, and then I'm going to write all the configurations for the web server that will run my application, and I'll worry about how to configure the operating system, and I'll write a run book so the ops team knows how to manage my application, it takes all that stuff away. It makes it so you can just worry about writing your application and focus on the things that are really important and value-generating for your company. So, does your company design a custom solution for each and every client? Or are you offering certain pre-packaged systems for people to sort of pick A, B, or C? How does that work? So, our company has a product called Cloud Foundry, or Pivotal Cloud Foundry. And Pivotal Cloud Foundry is a platform that solves all of these problems. It actually goes a step further and spins up the hardware.
It generates virtual machines, then runs containers on those virtual machines, manages those virtual machines and containers, and provides all the routing and the logging and the external service binding and scheduling and all of those things that I talked about. So at the end of the day, you just push up your application code and it just runs. It's pretty cool. Cloud Foundry is pretty distinctive in that it is one of the few platforms that allows you to just push up your application code and not have to worry about configuring your container and pushing that up. There are some hosted solutions, like Amazon's or Google App Engine, that are proprietary, where you write it for a particular cloud; Heroku is another one. But Cloud Foundry is one you can install on virtually any data center, virtually any cloud hosting provider, and get to just worry about your application functionality. You mentioned Amazon. For an application that's hosted on AWS, does this replace that? Let me go back to this. AWS is really good at providing servers. If you look at EC2, it provides you with this: the hardware and the operating system. But you still need to manage all of the application state and all of the configurations on top of those servers. So you need to configure your web servers. You need to manage system libraries. You need to figure out what deploying your application on that server looks like. From a functional perspective, it's really not very different from running your own physical data center. The advantage is, rather than racking hardware, you just push a button. That's really what EC2 provides. What a platform like Cloud Foundry or Kubernetes provides is the automatic scheduling and logging and routing, solutions to all of these other problems that you still have if you're just running raw EC2.
So as you have these more sophisticated platforms, not only do you get containers, but you get automatic container management. And you'll want to layer that on top of EC2 or Azure or Google Compute Engine or OpenShift or things like that. And I think that's really where the future is. We're moving to these cloud-based infrastructures, but we're also finding people layering things on top of those cloud-based infrastructures in order to help them manage all of their applications. Containerization is a great raw component, but you also need these intelligent systems to help manage all of your containers on top of that infrastructure. So it's potentially cheaper, and it can automate a lot of things that I'd otherwise do manually. There seem to be a lot of pros to it, but it feels like, to me, if I am running a technology organization, there's a lot of magic happening. What are the possible drawbacks or other considerations to going with this type of approach? I'm used to having all of the control, doing everything myself, and now I'm trusting this other thing. What other considerations are there for why I might not use one of these platforms? It's pretty hard to argue against something like this at this point. You could say that manually tweaking everything can give you slight savings on memory. Maybe you could manually configure all of your machines and all of your applications and get a little bit more consolidation, a little bit more efficiency out of your servers. But it's a lot like the old days of assembly language programming, where people said, well, why would I use a higher level language like C or Java? I can manually write each opcode and optimize the heck out of things. What we found was that not only was that laborious and way too slow, but it wasn't worth it, especially with computers and memory and everything getting cheaper.
A little bit of overhead is effectively free, especially in comparison to the human cost. What you're really doing is removing the manual human component from all of this. For almost all infrastructure and computing, anything you can automate is a big win from a cost perspective. You also get portability. You get the ability to arbitrage between various providers. If you're using containerization and these platforms, my containers will run the same on Google Compute Engine or Azure or EC2. If Amazon is offering a Super Tuesday deal on compute, then I can take my application and move it over to Amazon running one of these platforms. I don't have to change my application. I don't have to reconfigure anything. I just move it over to the low cost provider. So while these huge cloud compute providers are in a race to the bottom on pricing, it allows you to arbitrage between them. Thank you very much.