I'm from London. I'm very jetlagged, so it's very much good night. How is everyone? Thank you for that. So, thank you very much for coming along. I'm going to be talking about Rocking the Lattice — about two technologies, Cloud Rocker and Lattice. My name is Colin Humphreys. I am the CEO of CloudCredo. We are a Cloud Foundry and BOSH consultancy based in London. I'm speaking with a very good friend of mine, Mr James Bayer, from Pivotal. Hello. He's going to be giving the second half of this talk. He's going to be trying to use my Linux laptop, so that's going to be very amusing for everybody, watching him try and move between applications. Good luck with that, James. Thank you.

OK, so, I want to start this with a question: what is an application? I feel this is a very relevant topic for this conference. We're talking a lot about pushing code and source code, but we're also talking about containers a lot. So, what actually is an application? I think there's value in pushing source code. You push the code as you're used to doing — that cf push — and your code goes into Cloud Foundry. Cloud Foundry then has a look at the code, combines it with a buildpack, and produces a container. So, there's definitely value in that. There's also value in pushing containers: you have a known, good, tested artifact that has all your dependencies baked in. So, what's the right thing to do? I don't know.

As I just alluded to, Cloud Foundry has two jobs, and these are actually very clear, distinct jobs. The first job is staging, where it takes your application code, combines it with a buildpack, and produces a droplet. The second job is running, where it schedules that droplet to run inside containers, and that's where it's scaled and has the services brought in behind it and routed and all of that kind of thing. But these are two distinct jobs. So, I posed the question: do we want to take our applications down the left-hand side?
And I apologise for my incredibly bad diagram. This is what happens when you use Linux as a desktop: you get diagrams like this. So, we have the application being pushed into Cloud Foundry, and a droplet is produced and deployed. That's one pathway to production, versus the application being built locally into a container, and then the container being pushed into production. So, we have what's on the left, and what Cloud Rocker and Lattice give us is what's on the right: we do our staging locally. And I think there's value in doing this. We get fast feedback about whether our application has staged correctly. If staging fails, or there's an issue with the buildpack, it's easy to diagnose. We also create an artifact that is known good, and that artifact can be moved between environments, can be moved down your CI pipeline.

So, you're thinking: stop talking, Colin, show me how to actually do this. Let's have a look at how this process works. Firstly, installation. This is a directory with a Java application inside it — my Java app, a very straightforward set of files that comprise a Hello World Java app. Now, if I want to install Cloud Rocker, I can just go get it — it's Go, very straightforward. This installs the Go binary, so we've now got the rock command-line tool available to us. If we take a quick look at rock, this is the API you can use, and we see a few things we can do there. The first one we're going to want to do is download the Cloud Foundry base image, so we would run rock this. Now, I'm not going to actually run that command, because it would pull down a 500-megabyte image — the same Cloud Foundry base image that's used to power all the applications running in a normal Cloud Foundry setup — and I don't trust the conference Wi-Fi to let me do that in a reasonable period of time. So, I've already done that on this machine. So, I have a Java app; I also need to have the buildpack available to me locally.
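As a rough sketch, the installation and base-image steps Colin describes might look like the following. The repository path is an assumption for illustration — the talk doesn't name it — while the rock this subcommand is the one mentioned in the demo:

```shell
# Install the rock CLI with go get (repository path is an assumption)
go get github.com/cloudcredo/cloudrocker/rock

# List the available rock subcommands
rock

# Download the ~500 MB Cloud Foundry base image used for staging
rock this
```

The base image only needs to be fetched once; subsequent staging runs reuse the local copy.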
So, if I run rock buildpacks, we can see that I've already added the Java buildpack. If it wasn't here, I could just run rock add-buildpack with a URL to fetch a buildpack from. The GitHub repository of any of the open-source Cloud Foundry buildpacks will work here; it will pull the buildpack down and add it for you. So, as we have the buildpack ready and the app, let's run this — very simply, rock up. What this is actually doing now is starting a container in Docker that runs the same staging process that would run inside Cloud Foundry, but on my local machine. It's using the Java buildpack, which we've got installed, combining it with the application code, and these logs are exactly the same logs that are fired out by the buildpack during normal Cloud Foundry staging. So, that's now completed, and it's running. You see the bottom line there: connect to your running application at localhost:8080. So, that is the application staged and running locally. In case you haven't noticed — quickly look there — a little bit of unsubtle advertising.

So, we've installed, and we've rocked the application locally. What happens if you want to take this container away from your local machine, deploy it to production, maybe move it down your CI pipeline, do something else with it? We can do that too. Very simply, rock build, and then we give it a tag — I've given it my user at Docker Hub, and java-test as the name of the application. This stages the application, and then it builds us a container which we can push up to the Docker registry. So, it's done our staging, and it's creating the container's Dockerfile. In just a second, it will start running through that Dockerfile and building us that container. We can see a series of steps there that have been run, building our container, which can be exported and run.
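Put together, the local workflow from this part of the demo might look like this. The add-buildpack spelling and the buildpack URL are assumptions; rock up and rock build are the commands used on stage:

```shell
# Show buildpacks already installed locally
rock buildpacks

# Add a buildpack from any open-source Cloud Foundry buildpack repository
rock add-buildpack https://github.com/cloudfoundry/java-buildpack

# Stage the app locally and run it (serves on localhost:8080)
rock up

# Stage the app, then build a Docker image tagged for your Docker Hub user
rock build hatofmonkeys/java-test
```

The same commands work for any app with a matching open-source buildpack, not just Java.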
As you see there, step number nine is the command that actually starts the Java application — complete with double quotes and everything. There was a lot of pain in getting that to build, but now it's working. With this, we could then run docker push and push this to the Docker registry. I'm not going to do that, again because of conference Wi-Fi, but we've built our container, which we can then work with however we choose — for example, pushing it to a registry. All the code for this is available on GitHub in the CloudCredo account.

Here's an idea of the direction Cloud Rocker is going in. We're going to add Rocket appc containers. We're going to improve the environment-variable handling, because it's not exactly the same as Cloud Foundry at the moment — that's an area that needs a little bit of work. I'd love to have a single command to push to Lattice, so you can build the whole thing locally and it goes straight into a local Lattice setup. At the moment, this will only run a single application on your local machine, so if you're doing microservices — lots of small services talking to each other — it won't currently work, but we're working on that; it's very simple to fix. And as you may have noticed, my laptop is Linux, so this works natively. I don't have a fantastic journey for Mac users at the moment: I'm providing a Vagrant virtual machine, which gives you a Linux-like environment, and then I'm mapping in directories. So, I'm going to hand over to James. Thank you very much.

Thank you, Colin. We were really excited to see what Colin was able to do in a pretty short amount of time with Cloud... Something. It was called something else, right? It was always Cloud Rocker. It was called Cloud Rocker. You can type these fun commands like rock up and rock this, and it was pretty fun to type back in the old days. So, it leads to the question of what you can do once you rock up and rock build something, and the answer to that is Lattice.
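The "work with it however you choose" step Colin skips over Wi-Fi might look like this — the image tag comes from the demo, and the docker run port mapping is an illustration, not something shown on stage:

```shell
# Publish the image built by rock build to the Docker registry
docker push hatofmonkeys/java-test

# Run the same known-good artifact anywhere Docker runs,
# mapping the app's port 8080 onto the host
docker run -p 8080:8080 hatofmonkeys/java-test
```

This is the payoff of staging locally: the pushed image is the exact artifact you tested, with its dependencies baked in.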
Well, we wanted to build something that was incredibly fun to use with Lattice. Something we had heard from people who had tried Cloud Foundry is that it was a steep hill to climb, because the first experience — the five-to-ten-minute experience — with Cloud Foundry was: all right, first you get to go learn BOSH. And if, like me back in the day when I first started using Cloud Foundry, you were working at a Java app server vendor, the first thing it said was: gem install something. A gem? What's a gem? Because I'd only ever worked with Java. So you have to go learn about all this stuff, and several weeks later you might have your Cloud Foundry up and running, if you're a typical Java developer. That's not that great an experience. So we wanted to bring a 10-to-15-minute experience that puts Cloud Foundry technologies into people's hands. And I can tell you that, in my view, we actually succeeded on this. Lattice is really fun and simple to use.

One way of talking about it is that it's just enough Cloud Foundry. We're still actually very opinionated on the Cloud Foundry team: we think BOSH is the best way to do production operations, and we still very much believe in that. But if you want to introduce this technology to people — if someone has a Docker image and they just want to run it in a Cloud Foundry-like environment with Cloud Foundry technology — do they really need to start there? The answer is no, not with Lattice. It's just enough Cloud Foundry. Andrew Clay Shafer put together a metaphor for some people who were asking about the difference between Cloud Foundry and Lattice. He said Cloud Foundry is a fully operational battle station. It has every bell, whistle, and operational experience you need. It runs everything on separate virtual machines, split out by default, and everything is scalable.
And if you look at Lattice, it's really a little bit less than that. Probably a better metaphor is a Star Destroyer and a TIE fighter, but this one works pretty well too. Lattice doesn't have all the things that Cloud Foundry does, but that's okay, because on a laptop you don't need all the things that Cloud Foundry has, and you want a simpler experience. I talk about it as Cloud Foundry by subtraction. You get Diego — which is one of the things Onsi referenced today in his Diego talk — so you get clustered scheduling of containers. On your laptop you really only need one virtual machine for that, but you can scale up and have a cluster; everything Diego can do is going to be inside of Lattice. You get the Gorouter, so you get all the very nice DNS-based load balancing for your applications: if you scale up to several containers, it will load-balance across them, and that's really nice. You get Loggregator, so you get all the streaming logs from your containers: if you scale up to 10 or 15 containers for an application, you're going to get all those logs in one place. And really nicely, you don't have to deal with BOSH as your first experience. You can just do vagrant up, or, if you have a DigitalOcean, Amazon, OpenStack, or Google Cloud account, you can use Terraform: terraform apply, and in several minutes you have a cluster up and running.

So you won't have BOSH, and, importantly, you're also missing out on a couple of other things. We've taken away the Cloud Controller. On your local laptop, you don't really want multi-tenancy: you're the only tenant, you're cluster root, and you're not going to be sharing your laptop with lots of people. If you have a DigitalOcean account and you spin up five VMs, you don't necessarily want to share that with your whole organization — it's just for a small team of people.
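The two bring-up paths James describes might be sketched like this. The repository location is an assumption for illustration; vagrant up and terraform apply are the commands named in the talk:

```shell
# Single-VM Lattice on your laptop (repository path is an assumption)
git clone https://github.com/cloudfoundry-incubator/lattice
cd lattice
vagrant up

# Or, with a DigitalOcean, AWS, OpenStack, or Google Cloud account,
# bring up a small cluster using the Terraform configs shipped with Lattice
terraform apply
```

Either way, the point is that there is no BOSH deployment in the first-run experience.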
You don't need quotas and the marketplace and all the other stuff that comes with the Cloud Controller; you just want to run your Docker image. We also took away the UAA — the enterprise security and login piece, the OAuth server, also a Java component. Taking that away saves a lot of footprint and hassle as well, when you just want to get up and running quickly.

So let's take a look at some of the commands we have. The command line is very similar to the cf CLI, and we'll show you an actual demo in a second. Well, actually, let's just go ahead and get into it, using Colin's example. So let's clear this off, and: ltc list. Can you all see that okay? You can see we're already running a couple of containers on this instance. And let's see if we can just run Colin's app — you saw he called it java-test, right? So let's type: ltc create java-test hatofmonkeys/java-test. hatofmonkeys — I have no idea what the heck that's all about. What we're doing here is taking the Docker image that has already gone through all the buildpack processing. It's pre-built, so we don't have to do that build on the server side this time. And there we go — now the application is up and running.

Let's go ahead and... I forget how you did this, Colin. The Linux-laptop hilarity is coming out. Oh yeah, right click. Oh, right click — copy link address. Oh my goodness, we can do it. That's programming. Control-T. Control... what is it? Oh, V, yes. There you go. So it's the same app that Colin showed you, but this one ran against a scheduler, so you can be running it against lots of Lattice containers. Let's do ltc visualize. There you can see I'm running on one cell, and I've got three containers running. If I want to scale this up: ltc scale java-test 3 — let's go up to three containers. Scaling up just takes a moment.
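The ltc session from this part of the demo might look like the following sketch; the exact argument order is an assumption, and hatofmonkeys/java-test is the image tag from Colin's demo:

```shell
# List what is already running on the cluster
ltc list

# Run a pre-built Docker image as an app named java-test
ltc create java-test hatofmonkeys/java-test

# Show the cells and the containers scheduled on them
ltc visualize

# Scale the app to three container instances
ltc scale java-test 3
```

Because the image is already staged, ltc create skips the server-side buildpack step entirely.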
And what happens is that Diego basically does the scheduling for you, just like Onsi showed earlier in his talk, so that works. So you do ltc status java-test, and now we're running those three separate instances. It's kind of interesting here: you can see how we do the port mapping for you. All these containers are listening on 8080 inside, but they get different ports assigned on the host they're running on, and then the Gorouter automatically does the load balancing for you. So it's pretty straightforward.

Let's go back to the presentation here. We also wanted to put a little bit of a UI on top of Lattice, so Pivotal has been working on something called X-ray. Let's go ahead and look at that — you saw it was one of the applications already running on Lattice when I first typed ltc list. What Colin did was use Cloud Rocker on it. It's just a Node.js app, this thing called X-ray we're going to show you here. Colin created a Docker image out of X-ray, and it's just another app that you can push on top of Lattice. So going back, let's check ltc list here — I'll just do ltc status x-ray. And there's what I want. Oh boy, Colin, you and your Linux — this is going to be fun. Control-Shift-C. Control-Shift-C, oh boy — it's a secret code. There you go. Control-T, Control-V — look at all this, I'm getting good.

So this is a really simple user interface on top, where you can do some simple things to visualize what's going on in your Lattice cluster. If you had more cells, you would see them scaled out underneath here, and you can have availability zones and everything. If we were running on a full AWS deployment, this would fill up the page with a whole bunch of stuff. But if I go here and do ltc scale java-test 1 to get rid of stuff, you'll see it has already quickly scaled down to one container there. So if I hover over here, you see java-test.
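The port-mapping behaviour James points out can be seen with the status command; this is a sketch, and the argument shapes are assumptions:

```shell
# Per-instance details, including host-port to container-port mappings:
# each instance listens on 8080 inside its container, but gets a
# different port on the host it is scheduled on
ltc status java-test

# Scale back down to a single instance
ltc scale java-test 1
```

The Gorouter tracks those host ports, which is how traffic to one route gets balanced across all the instances.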
Those are the other processes that were already running, and there's the X-ray process itself — it's monitoring itself. X-ray is something we're going to add new capabilities to over time. What we really wanted to do was visualize what's happening in your Lattice cluster so you can see it really easily. And it's an open-source project that you can also get on GitHub.

So what have I seen at Pivotal from people using Lattice? I've seen an explosion of interest in making the technology much more accessible to developers. The Spring team especially has done a lot of neat stuff with it. You might have seen some talks here from Matt Stine and others using Spring Cloud and some of the Netflix OSS things. So imagine a dynamic configuration server, where you're distributing configuration to your running containers but you want to change something dynamically — say a log level. That's a component you can run with Spring Cloud as a container on Lattice, and then have another app use it. If you want to use Hystrix and some of the other load-balancing technologies, those are also built in there. The Spring Cloud team has done some really neat stuff with that.

The Spring Batch team has made it possible to use a new Diego primitive. Diego has this primitive called a task — it's actually how applications are staged. It's not a long-running process; it's a one-time batch process with a lifetime that ends. Spring Batch is a technology a lot of enterprises are using, and they've hooked Spring Batch up to Lattice so that every time you want to run a new batch job, it schedules a Diego task. So now you can have your tasks in Lattice being scheduled by Spring Batch, which is pretty cool.
The Spring XD team has done some really neat demos of scaling up Spring XD streams — these are streams of data coming from things like the Twitter firehose and other places, then being piped into other things and processed. Every time you run a Spring XD stream, it will create a new Diego task for you, and they've got that working with Lattice.

And this one is really cool: Mark Kropf and Mike Dalessio up here, and the team from Pivotal's New York office — and there are others involved at CenturyLink, and HP announced some things as well — are contributing support for Windows. You can actually run Windows containers with Lattice as well. Just like the Linux applications we were demoing, Windows applications can run side by side with the Linux applications. And let's see if we can go and see a JSON example that proves I'm not making this up. Do I still have that up here, Colin, or is it closed now? If you tab to the next one... I'll tab. Keep going. This one? Yeah, try that one. All right, there we go. So what you're actually seeing is the output of the long-running process we have on Mark's cluster, which has a Windows server and a Linux server running side by side. You see the rootfs is the Windows Server 2012 instance. So you can plug in these Linux images and Windows servers side by side, and schedule containers for both Linux apps and Windows apps with the same Lattice technology.

The last thing I want to talk a little bit about is that we have a team doing some hacking on Diego itself. When we're trying new things in Cloud Foundry, we've found it's really fun to just start with something small like Lattice. So we've had teams sending custom metrics to the Loggregator firehose from their application — I call this the foo metric.
So imagine you're tracking sign-ups, or some other thing that's not just your memory usage or a standard metric that comes out of your container — something related to a business aspect of the application. You can send that to Loggregator on Lattice, and there's a demo of a team consuming it off the firehose as well. So that's pretty cool: now you can have your applications sharing the metrics they produce, and other applications consuming them.

And with that, I'll just show you that we have a proper mailing list in the Cloud Foundry community: cf-lattice at lists.cloudfoundry.org. There's a Twitter handle there, and there are some really cool Lattice t-shirts that Andrew Clay Shafer has available if you want those. And you can also join in and contribute on the project — it's in incubation right now. And I think we're ready for questions.

Okay, any questions? There's two over there. Okay, great question. So the question is: if you get something working in Lattice, how translatable is that to getting it working in Cloud Foundry? Today, Lattice only speaks Docker images. What we're working on — what we want — is for Lattice to understand droplets as well. One of the things Cloud Rocker does is produce a full Docker image from your application source code. Well, what if we went the intermediary way? If you look at what Heroku did last week, they did the same kind of thing: their CLI tooling runs a Docker image locally, but when you push to Heroku, they actually upload just the droplet part — they call it a slug. So we're looking at something like that. If you're a developer and you don't want to know anything about Docker — you just have your Java code, and Docker is one more thing you'd have to think about — we want a path for that to work well on Lattice as well.
And that's also translatable to Cloud Foundry. So it's something like that: you can take your application source code, get it running on Lattice, and then also get it running on Cloud Foundry. That should be a seamless experience.

Could you please elaborate on the metrics that you can collect from the container? I couldn't hear that, unfortunately — can someone else repeat the question? Is it better now? Just about. Could you please elaborate on the metrics that you can collect with Lattice from the container? Metrics — oh, elaborate on the metrics. Okay. The metrics coming out of the Loggregator firehose, basically. With Diego, we're emitting the container metrics: if you ever saw the stats endpoint in the Cloud Controller for applications, it shows you the CPU, the memory, and the disk footprint. You're going to get those for free just by listening on the Loggregator firehose. What another team has done is start producing metrics directly to Loggregator from their application. That would be a custom metric they make up — say the number of sign-ups in your application you want to track, or the amount of revenue you've sold that day. You can send that metric along to the firehose, and it's a structured thing, as opposed to a log message, which is a kind of opaque thing. You can set a structured value that downstream applications can interpret, so they understand exactly what that value is — this is a gauge, or a value, or an error of some kind. That's the kind of thing I'm talking about: being able to send those structured metrics down the pipeline.

Okay, I think we're about done. All right, thanks so much. Appreciate your time.