Okay, let's start on time. Welcome to this session about Cloud Foundry, containers, and Docker. There was a pretty good session on this before, so I think some of you are warmed up on the topic. Pleased to be here. My name is Alexandre Vasseur. I work for Pivotal, based in Paris, in the field architect group. I've been working in development, Java, enterprise architecture, and cloud platforms for about 20 years, and with Cloud Foundry for about five or six years; I was part of the VMware team when it started there as an R&D project. I launched the Cloud Foundry meetup in Paris, so if you happen to visit Paris, send an email and we'll get you to give a talk over there as well. It would be fun.

I've been discussing with many organizations looking at cloud-native platforms and containers over the years, and obviously the landscape is changing very quickly. We've all heard about Diego, and the fact that Diego will be part of the certified Cloud Foundry for 2017, although we've had it production-ready at Pivotal for a while. I thought it was a good time to step back and look at what you can do with it, because pushing apps is not the only thing you can do around Cloud Foundry and containers: there have been a number of initiatives around containers for services, not just for apps, and for pipelines too. So I wanted to aggregate three use cases and discuss some of the architecture, the building blocks and moving parts, which imply challenges if you're the platform architect or the platform operations team, and pretty good use cases if you're more on the user side. I've tried to articulate that with some demos. We have very little time, so my demos are recorded; I don't want to download stuff from the internet live. Hopefully it will give you a feel for what this can look like.

So, use case one. You heard yesterday about "here's my app, run it in the cloud for me, I don't care how." The first use case is: here's my app as a container, run it for me, and most importantly keep it running for me. Scale it, heal it for me, and I don't care how. If you think about it, that's enabled by Diego, obviously, and by the work done in Garden and Garden-runC, previously Garden-Linux, with the goal of standardizing container libraries on the same runC, part of the Open Container Initiative, that Docker is using. Fundamentally, in that use case the container becomes the unit of currency. Even if the container contains the app, the user is exposed to the container: he has to build the container up front, rather than the platform building it on the fly, and then he does cf push with a container. What does that mean? It means you actually cf push a reference to a container image that lives in a system the ecosystem calls a registry. The registry is kind of the namespace; it can be secured, it can be completely online or it can be private; and it has a repository to store the images. So there are a number of moving parts. As a user, you have an app, and you need to care about the app as well as its runtimes.
If it's a Java app, you need a JVM, or maybe Tomcat, or whatever else the app needs, plus all its dependencies. So in the Docker world you write a Dockerfile to describe that, you use the Docker tools on your machine, or better on a pipeline, to build the image, and you upload the image to the registry. And then you use Cloud Foundry. That creates a dependency between Cloud Foundry as the runtime platform and Docker as the registry, because Diego will pull the image from the registry. So the registry becomes a fairly sensitive component for your runtime platform. There are many, many things to think about when you start doing that, and as the use cases appear, I think you need to be cautious about what you put into your Dockerfile; the previous session was pretty much about that.

So let's go through a quick example. As I said, I recorded the demo, because building and pushing Docker images is not going to be fun over conference Wi-Fi; I did it at home over fiber, and it was good. If it's too small, I have the zoom thing. It's a basic app; it doesn't matter which. We have a Dockerfile: a parent image, which itself has a parent image, and so on; my app; the port; and the start command for the app. It's Java, Spring Boot actually, but it doesn't matter. Then, using the Docker tools, I build it locally on my machine; the local Docker daemon is running in case I want to try it. The layered file systems kick in, and if you look at the bottom, my local image list gets updated. Then I can try it without Cloud Foundry, with just Docker: map my local machine port to the container port (you need to remember what you put in the Dockerfile, by the way), and it looks to be working. Then I shut it down, because I want to run it in the cloud. So: docker push to my registry. I'm not providing a server name, so it goes to the public Docker Hub registry, which I logged into beforehand; you haven't seen that. Big uploads, especially if you remove the image every time. In parallel, I connect as a user to my Cloud Foundry, check that I'm running Diego, and check that my admin has enabled the Diego Docker feature flag so that I can push a Docker image as is. That's fine, it's enabled. In the meantime, my image has finished uploading to Docker Hub; it was updated a few seconds ago, so it's good to go. Then I can do a cf push with dash o, o like the Docker image, or the OCI image, which happens to be a pretty good name for the flag, rather than dash d or whatever; point it at the registry, and give the app a name: cfs, like CF Summit. Then the staging process happens in the background, so we watch the two things in parallel. This is running on a Pivotal Cloud Foundry in my home lab, on private DNS, so if you see a password, I don't mind; it's on my private VPN. It takes a little while. As much as I understand the argument that buildpacks can be slow, I don't think Docker is that fast to start either, because a number of things happen in the overall process as a whole, from Diego to the image running, to the health check passing, to the router being able to reach the app, et cetera.
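(For reference, here's a minimal sketch of that flow; the base image, image name, and app name are hypothetical, not the exact ones from the recording.)

```sh
# Sketch of use case one: the user builds the container up front (names are hypothetical)
cat > Dockerfile <<'EOF'
FROM openjdk:8-jre                          # parent image, which has its own parent image
COPY target/demo-app.jar /app/demo-app.jar  # the app (Spring Boot here)
EXPOSE 8080                                 # the port
CMD ["java", "-jar", "/app/demo-app.jar"]   # the start command
EOF

docker build -t myuser/demo-app .        # layered file systems kick in; local image list updates
docker run -p 8080:8080 myuser/demo-app  # try it with plain Docker: local port -> container port
docker push myuser/demo-app              # no server name, so it goes to the public Docker Hub

cf feature-flags                         # check that the admin enabled diego_docker
cf push cfs -o myuser/demo-app           # -o: push a reference to the Docker/OCI image
```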
And of course, there's demand on the storage. I haven't heard it said much, but I think the storage back end of a Cloud Foundry is really, really critical to overall Cloud Foundry performance and staging. Once you're there, you don't care whether it's a container or not; it's just an app. So you can scale it with cf scale, just as you normally would. You can look at the logs, aggregated across the different cells; you can see cell zero and cell one, because the two containers have been distributed. High availability and placement all survive. Then I can access the app and look at it running in the container network on the cells. I can kill a container: when I click that button, I can see it's index one that actually got killed, because it says "hey, it's me" and then gets killed. Then there's only one instance left, but auto-healing kicks in, so another one starts, and it's the container with index one that was just killed. If my admin has also enabled access to the container, and I really mean the Garden Diego container, it doesn't matter whether it's OCI-based or buildpack-based, I can SSH in using my UAA credentials, provided my admin, org manager, and space manager have given me access. And here we are, inside the container. And we have auditability, so it's a truly generic platform: I can check who got in over SSH, whether the container crashed, whatever; all the Cloud Foundry goodness kicks in. So that was a quick overview, very fast, but all of this is what we already know. Let's switch to another use case.

As you know, in Cloud Foundry we have apps, but we also have services. So the second use case is really the service as a container. Unlike the first use case, service-as-a-container is not so much part of the stable core of Cloud Foundry; it's been more of an R&D research project. But let's talk about the use case before talking about the implementation. The use case is: get me an instance of X, a database, caching, NoSQL, whatever; bind it to my app; and do that over and over and over again, maybe each time with a different size of service X. And I don't care how. All of that is not specific to containers; it's specific to the service broker. The service broker is really, really valuable in Cloud Foundry because it gives the user a level of abstraction. How the service is implemented, whether it's a dedicated VM, a pre-provisioned VM, or a container created on the fly on some container back end, Swarm or Kubernetes or whatever, is a service implementation detail. And the way the application talks to the back-end service is managed by the service broker, which provides credentials from the service to the application using the mechanism many of you know: the VCAP_SERVICES environment properties injected into the app, as sketched below. It's pretty interesting, though, to look into how this works if you really want to do it with containers, using your favorite container back end, for example Swarm. Some people have been actively looking into that: it was an R&D project started by Ferran Rodenas at Pivotal, put on GitHub, and I know that at Stark & Wayne, Dr Nic is fairly actively maintaining it.
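(As a sketch of that abstraction, here's roughly what the injected credentials look like from the app's side; the service name, host, and port are invented for illustration.)

```sh
cf env my-app    # prints, among other things, VCAP_SERVICES; roughly:
# "VCAP_SERVICES": {
#   "memcached": [{
#     "name": "cache",
#     "credentials": { "host": "10.244.0.34", "port": 32768,
#                      "username": "...", "password": "..." }
#   }]
# }
```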
So let's see what we can learn by looking at it. What are the moving parts? Somewhat the same. We need an image with the service, and someone has to build and prepare it with Cloud Foundry in mind, because the image pretty much has to publish credentials back to the Cloud Foundry service broker component. Then, on some kind of container back end, Swarm or Kubernetes or whatever, we need a service broker that interacts with it to request the container provisioning, synchronously or asynchronously, et cetera; the service broker can do anything. The key point, of course, is that once you're there, you actually need to operate two platforms: the app platform with Cloud Foundry, and the container platform with the container back end. So then you decide: hey, why don't we take this container back end and make it a BOSH release? That's also part of this GitHub project, with the docker-boshrelease. It's actually Docker Swarm as a BOSH release, which has many benefits if you want to look into it and you already know Cloud Foundry, or can be a bit of a surprise if you come from the Docker world and know nothing about BOSH and Cloud Foundry. As a Cloud Foundry user, I thought: let's give this project a try and look into the architecture.

So again, let's look at it from a usage standpoint. In this scenario I start with BOSH, so I'm really the platform admin. I look at my deployments, and it's already there, so we see Docker Swarm as a BOSH release on my platform. The deployment is very small: I have a control-plane VM, which runs the service broker and the Swarm manager (it should really be more highly available, by the way), and then two Docker Engine VMs, to get a bit of high availability. Then I have a manifest file for that BOSH release. The most interesting part is the service broker configuration: you can see a memcached service pointing to an image in a registry, on Docker Hub, and then two plans, one with small memory, one with big memory, et cetera. That's really specific to this service broker implementation; you could configure your broker any way you like. The important part is that this image publishes credentials. The image is already in the registry; it has a Dockerfile you can look at, and it's pretty ugly, because it's a Dockerfile that installs and unzips memcached, et cetera. Then, as a user, I still have the app running from before, and I can check the service brokers: I have this service broker to deal with containers. At the bottom, I've just looked into the Docker Swarm cluster to see, as an admin, what we're about to create on it. So let's create a cache service: create-service with memcached 1.4 and the 128-meg service plan, and call it cache. That should kick off, on the Docker Swarm cluster, an image pull and a container creation with memcached. Then I can bind the service to my app, and here we go, we are given the credentials. If I want my app to pick these credentials up, and the app needs that, I probably need a refresh: maybe restart, maybe restage, depending on how the app image was built. And then in my tooling, whether UI or command line, I can see the service and the VCAP_SERVICES credentials, like any service broker and service implementation would provide to me. The commands are sketched below.
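(A minimal sketch of that user-side flow, assuming the app from the first demo and approximate service and plan names:)

```sh
cf marketplace                           # the broker's memcached plans show up in the catalog
cf create-service memcached 128mb cache  # broker asks Swarm to pull the image and start a container
cf bind-service cfs cache                # the container's credentials get published to the app
cf restage cfs                           # restart or restage so the app picks up VCAP_SERVICES
```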
So if I access this app, my app source code can now access and discover the services, and here we go. It doesn't need to know whether it's Docker or not; it just happens to be Docker in the back end. It's kind of a great use case, because it shows how to simplify Docker Engine management with BOSH, if you really want to go down that road, but most importantly it shows the value of the service broker as an abstraction over how the service implementation is made and how the service plan is provided to the user. Think about it: you can switch the back end, and it should be transparent for the user. Definitely not transparent for the people doing platform architecture and platform operations, though. What's pretty cool about this use case is that your service catalog can be really, really broad; I think in the first prototype they came up with about a dozen services. More development-ready than production-ready, obviously, because then a whole lot of questions come up around how you manage service upgrades, service high availability, et cetera, and all of that is service-specific: the way you manage database clustering is not the way you manage caching clustering or NoSQL clustering.

[Audience] It's been good because it's let developers start pushing their apps, knowing that there will be a Postgres, knowing that there will be a Logstash, so the ops team has some runway to actually build the production version.
Exactly.
[Audience] Instead of mucking around on a laptop, like we just tried to save them from.
Exactly, yeah, exactly.
[Audience] All right, I'll come back to you with Mongo, and you'll say, no, we're not doing Mongo.
Yeah, exactly. So it happens to drive consumption: you have the service catalog, you do the homework to provide these as images, and people get started. And as you ramp up your production system, you decide whether it's still that implementation or another one. At Pivotal we happen to work with many partners: with Stark & Wayne and their Dingo PostgreSQL distro for PCF, and with DataStax and MongoDB, et cetera, to provide production-ready back-end services. And it's not up to us or the Cloud Foundry community to decide whether containers are best for that kind of technology; that's more for them, DataStax, Mongo, et cetera: are they happy to run in containers or not, and what are the shortcomings?
[Audience] It's probably worth noting that, as you said, all those samples don't have backups and everything else. For the Dingo thing, we took that whole idea and just said, let's productionize it.
Exactly, and that's great. But it's kind of a full-time job and engineering effort if you go down that road.
[Audience] Absolutely.
Yeah. So the next use case is pretty interesting. It's not directly related to Cloud Foundry itself, but actually it is, because it's all about velocity: it's really pipeline tasks as containers. The use case can be summarized as: build this for me, with a clean environment and clean build tools, so that I can build it over and over again and know what's going on, then go to the next step in my pipeline and on to production.
And I want to build that stuff anywhere, anytime, because maybe people are committing to the project, maybe people are contributing images that are dependencies of my project and my pipeline, et cetera. As you know, very close to the Cloud Foundry ecosystem we have Concourse. Concourse kind of burst into the Cloud Foundry ecosystem through the engineering group: the whole of Cloud Foundry is built with Concourse, and Concourse happens to be an open source project, concourse.ci. It's pipeline as code: you have a file describing what you want the pipeline to do. And quite interestingly, given the pipeline as code, the Concourse server happens to be quite ephemeral, if you will, unlike maybe other CI architectures. Of course, in an enterprise you may want access control and so on around Concourse, but fundamentally you can spin up a Concourse on your developer machine and get going, rather than needing a whole back-end build infrastructure. In this use case, it's also interesting to observe that Concourse is pipeline as code with a container-first architecture: most Concourse tasks, the elementary units of work that make up a pipeline, actually run from Docker images, if you will. And if you deploy Concourse, it happens to leverage Cloud Foundry components, sort of eating your own dog food: it relies on Garden-runC as well, and you can run Concourse as a BOSH release inside Cloud Foundry and operate it there. Of course, if you do that, the Docker registry kicks in again as an important component, because it becomes the registry for the build tools, if you need things like Maven or Java or the cf CLI, because the pipeline is going to push stuff, et cetera. So, the building blocks: you need images for the build tools, and you need the Docker registry. It might not be serving a production runtime anymore, but if you think about continuous deployment and how sensitive the pipeline is, you'd better treat the registry, and Concourse itself, as production-ready components. We've seen many enterprises that have build tools and a build system but are completely unable to connect them to the production platform because of their network segmentation, governance, et cetera. Having Concourse as a BOSH release inside Cloud Foundry, in specific service networks and VM pools, solves a lot of those problems. The way you interact with Concourse is with the pipeline YAML file and a command line, fly. And with fly you can also inspect the container that just ran your pipeline step, even after the container has stopped, because Concourse keeps it around for a little while.

So let's have a look. I'm going to show you a fairly basic pipeline, but for the sake of it, we're running Concourse as a BOSH deployment, using Garden-runC; it actually uses the route registrar, so my CF wildcard domain has a Concourse hostname in it. It's a very basic three-VM deployment, one of which is the worker that will spin up the containers. Then I have a pipeline, pipeline as code: a Git repo as a source, so if I change and commit stuff in there, it kicks off on that branch; CF as an endpoint too, so I can use it as a destination; and Docker as a registry, a possible source or destination. Something like the sketch below.
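(Here's a trimmed sketch of what such a pipeline file can look like; the resource names, repo URL, and variables are hypothetical, and the real demo pipeline certainly differs.)

```yaml
# pipeline.yml -- a minimal sketch (names and URLs are hypothetical)
resources:
- name: app-source
  type: git                               # commits on this branch kick off the pipeline
  source: { uri: https://github.com/example/demo-app.git, branch: master }
- name: app-image
  type: docker-image                      # the Docker registry as a destination
  source: { repository: myuser/demo-app, username: ((docker-user)), password: ((docker-pass)) }

jobs:
- name: unit-test
  plan:
  - get: app-source
    trigger: true
  - task: mvn-test                        # each task runs in a container
    file: app-source/ci/unit-test.yml
- name: build-and-push
  plan:
  - get: app-source
    passed: [unit-test]
    trigger: true
  - put: app-image                        # builds the Docker image and pushes it to the registry
    params: { build: app-source }
```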
Then the pipeline itself: a unit-test task that maybe triggers Maven or whatever you need in your project; and that task is going to be a container, and the build is going to be a container, and the push is going to be a container, for all these tasks in the pipeline. What the pipeline does is take the source, unit-test it, build it, and push it to the Docker registry; it could then go on to cf push the Docker image and bind it to things. The task is another YAML file; that's the task that does the Maven build: Maven, Java, compilation, whatever. Obviously, to do a Maven build you need a Java runtime, so you can see the reference to a specific Docker image. It isn't my-name/java, it's just java, because that's one of the official Docker root repository images; you could also build your own compilation image. So it essentially points to the registry and then kicks off some commands inside the container; the task file is sketched below.

So let's look at the pipeline from a deployment standpoint. Using fly, I deploy the pipeline. I target a specific Concourse endpoint, my server, and I pass the credentials in a different file, so they're not in my Git server; they come in as the parameters of my pipeline. Concourse gives the pipeline a name, let's call it docker, and updates it if the pipeline is already there. Then you can access the pipeline: you can see the resources, the app being one, Docker, in the sense of the Docker registry, being another. Each green box is a task, and they're green because they've run successfully before; they'd have another color code if they failed. You can look into the history of each task's activity: some failed, some worked. And if you look into the details of what this task has been doing, this is the docker push task: it's been building the Docker image and pushing it to the Docker registry, all orchestrated by Concourse. I could trigger the task manually as well, which I just did there for the demo. And while Concourse is working and executing that task, I can use the intercept command in fly to get access into the container and maybe check the file system or the log messages, to see what went wrong in my pipeline step. So, picking one stage of my pipeline and getting in, I'm inside the container, and what can I see? I can see the docker push, which is really the task that's running. So it's using a container to run a pipeline task that builds another container, and so on and so on. That gives you an idea of what we mean by containers everywhere.

So, going back to the deck and the discussion. By the way, did you try to guess how many containers we've built today, in just 25 minutes of demos? I'd say between 10 and 20; I'd need to know the staging process better, because there might be hidden containers somewhere. So, let me remove this. Yeah.
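(For completeness, here's roughly what the task file and the fly commands can look like; again a sketch, with a hypothetical target name and endpoint.)

```yaml
# ci/unit-test.yml -- sketch of the Maven task; 'java' is an official root-repository image
platform: linux
image_resource:
  type: docker-image
  source: { repository: java }
inputs:
- name: app-source
run:
  path: sh
  args: ["-c", "cd app-source && mvn -q test"]
```

```sh
fly -t lab login -c https://concourse.example.com                     # target a Concourse endpoint
fly -t lab set-pipeline -p docker -c pipeline.yml -l credentials.yml  # params stay out of Git
fly -t lab trigger-job -j docker/build-and-push                       # trigger a job manually
fly -t lab intercept -j docker/build-and-push                         # shell into the task container
```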
So that's quite interesting. But if you step back a bit, beyond the buzz of Docker and containers, and the pretty nice engineering topics like layered file systems and all this, at the end of the day it's all about apps, it's all about services, and it's all about putting them in production with high quality and velocity, while making sure your process and your platform survive innovation. If you think about the service broker, about cf push and the Diego abstractions, all of this makes the platform a fairly solid abstraction, and you can survive the next wave of innovation, including innovation from the platform group if they decide to change this or that component. I think this is really important.

Now, the devil's in the details. I don't know if you looked carefully, if you're doing Java: my Dockerfile was pretty rubbish, with -Xmx 500 meg hard-coded. Then you allocate a container, and you have to remember which size, because the heap size is hard-coded in the Dockerfile. That's pretty bad. Whereas if you use the buildpack, there's a whole mechanism that computes the heap size for you based on the container size. That's a massive benefit in the enterprise. So as much as containers are fun, don't forget these little details in the enterprise; this is what matters to them. So it's not just about adoption; it's also about abstractions and architecture. Don't forget day-2 operations, the ecosystem, and the moving parts you need in addition to the Cloud Foundry platform, because if you don't do anything about those, you will frustrate people. I think it's good to showcase the joint use cases between the ecosystems, and of course this can only work long-term if good standards emerge, not just ad hoc R&D demonstrations. Containers everywhere, you've seen that, but maybe you don't need to care; it depends what you do with the platform. If you focus on app dev and app velocity, it's a means to an end. And my word of caution, having been around for a while and having done dev and production deployments of Cloud Foundry in different scenarios: garbage in, garbage out. It applied to VMs in the past, and it applies to containers. Keep that in mind: look at what's inside the container, and look into the architecture. This is why we focus so much on microservices, twelve-factor, cloud-native; you've seen my slides, and that's on purpose. I think the future is an end-to-end cloud-native architecture on a cloud-native platform, not just throwing stuff at a platform and expecting it to run. So, thank you for your time today. Happy to take any questions.

[Audience] Of the public PaaSes, do you know if any of them offer the Docker flag?
That's a good point. I think very few of them are actually Diego-enabled, first of all. And I'm not aware of any really public, multi-tenant one that offers the Diego Docker flag. On PWS, Pivotal Web Services, we don't enable it yet; we don't think it's ready for highly secured multi-tenancy.
[Audience] Running a Docker image is just a lot harder to do securely, because it's an untrusted image, right? Whereas with a buildpack rootfs, you know everything about it. Now, if you're running Garden-runC, then out of the box, by default, without extra configuration, every container is unprivileged, is AppArmor-secured, and has a seccomp whitelist; it's basically as secure as you can really get. We're turning everything on. So I think that recommendation is probably going to change.
It's going to change, yeah, definitely. You might want to repeat that for the rest of the audience.
[Audience] Oh, okay. Yeah, maybe come and chat with the folks there.
Well, I think it's important. I mean, I can repeat it. Yeah, okay. So the question was whether any public platform today enables Diego Docker, which is the first use case. And the reality is not so much; I haven't seen any. But it's expected to change short-term thanks to Garden-runC, because runC has all the primitives to run containers in unprivileged mode with AppArmor, SELinux, and all the other stuff, which makes it a good candidate. What we've seen instead is more like spinning up a specific Docker cluster for a specific tenant and having them work in that; I think even Amazon does that, they spin up a specific cluster for you. Okay. Enjoy the rest of the day. Thank you.