OK, we'll go ahead and get started now. So this talk is the Java Buildpack in 2017. My name is Ben Hale. I'm the Cloud Foundry Java Experience Lead, which is a title that's suitably vague about what I do. But basically, if you're looking to run Java applications on top of Cloud Foundry, I'm right in the center of that: putting in the foundational bits that make all Java applications run, making sure that they run in the best possible way, and then moving up from there, managing things like the Java client and acting as a representative for Java users in various other projects that we do inside of Cloud Foundry.

And so the goal of the talk today is to talk about what it means to run Java applications on Cloud Foundry today. The problem is we've only got 30 minutes, so you're not going to get quite as much as we might want. I'll be around to talk in the hallways and things like that if there's anything I don't cover that you're absolutely interested in.

No good Cloud Foundry talk would be complete without starting with the haiku: "Here is my source code. Run it on the cloud for me. I do not care how." But the interesting thing is, developers might not care how, but someone cares how. And it turns out I specifically care very, very much about how, and specifically for Java applications. That's because buildpacks, like the Java Buildpack, are the glue between an operations team over here, whose entire job is to keep UNIX processes up and running inside of a Cloud Foundry foundation, and application developers over there, who have applications that need to become well-behaved UNIX processes. A buildpack is where the rubber meets the road. It says: OK, I need to take a JAR file, WAR file, ZIP file, what have you, and make it into that process. And so the Java Buildpack actually cares about three very specific things.
The first is the JRE that you're going to run your application with, and specifically how the memory of that JRE is managed inside of the container. We'll talk about that in more detail later on. We also care about what container to run your application in. And this is not container in the Docker sense; this is container in the sense of: do I want to run it in Tomcat? Do I want to run it inside of a Jetty container? Is this a Java main application? So when we say container here, really what we're saying is: what kind of application is this, and what do I need to provide to your application so that it can run as it expects to? And then the final thing we care about turns out to be the broadest possible list here, and we call it zero-touch integration with services. All the buildpacks have slightly different views on what exactly it means to be a buildpack, but the Java Buildpack takes a very aggressive position on this, saying that you as software developers should do as little as possible to integrate with the services that the platform provides you. So if you want to hook into an APM, it should be sufficient for you to just say: bind service, I want to be attached to New Relic. You shouldn't have to do anything else; the buildpack configures everything else for you.

So if we break down what has happened in all of these categories over the last year, we start with the JRE. OpenJDK has been drastically improved, and we're going to talk about two huge improvements to it towards the end of this session. But probably the most notable one for a lot of people is that our version of OpenJDK, the one we build and distribute as part of the platform, comes with unlimited crypto strength. It basically means you can be as secure as you want to be. If you want 16K RSA keys because you're totally crazy, absolutely, you can have that. There are no limitations.
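You can verify which crypto policy is in effect from inside an application. This is just an illustrative check, not part of the buildpack itself; on a JRE with the unlimited-strength policy, the reported maximum key length is effectively unbounded:

```java
import javax.crypto.Cipher;

public class CryptoStrengthCheck {

    // Returns the maximum AES key length the runtime permits. With the
    // unlimited-strength policy in place this is Integer.MAX_VALUE rather
    // than the old export-restricted 128 bits.
    static int maxAesKeyLength() {
        try {
            return Cipher.getMaxAllowedKeyLength("AES");
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("Max AES key length: " + maxAesKeyLength());
    }
}
```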
You don't have to work around Oracle's downloads and put certain files in certain places; we just give that to you out of the box. Almost since the inception of the project, we've integrated with Oracle JREs if you want to do that. Over the last year, Azul came to us and, in a great community contribution, integrated the Zulu JRE. And currently finishing up in the final stages of the PR submission, IBM's J9 JRE will also be an option inside of the core Java Buildpack.

On the container side, we haven't really added anything or made any significant changes in the last year. You might see there are what, seven listings here of different containers, but that's a bit misleading, because something like Java main certainly encapsulates Spring Boot, but it also encapsulates a bunch of other kinds of Java applications, anything that starts with a Java main. distZip is another really great one where it's not just Java applications: Ratpack applications can be distZip applications, for example, or Scala applications or Play Framework applications. So while there are only seven broad ways that we commonly support to distribute an application and hand it to us, they encapsulate huge amounts. Every single day some new Java framework comes out, and these days it's packaged almost exclusively as a Java main. So while we haven't made any changes to the core code about containers, we are supporting more applications simply because they're falling into these broad categories.

As I said, the service integrations are by far the broadest area where we've made huge improvements over the last year. It's so broad, in fact, that I had to break it out into four or five different slides. Just in APMs alone, for example: if you're AppDynamics customers, we've made dramatic improvements in the way the AppDynamics integration works. Dynatrace has both improved its integration and added another product to it.
So if you're any kind of Dynatrace customer, we've got you covered. There's a PR that's coming towards the end, just waiting on a little bit of legal at the moment, for a new APM that I had never heard of, coming out of Germany, called FusionReactor. We support Introscope and, of course, New Relic, the very first and most widely used integration we have in the buildpack.

But beyond just APMs, which I think everybody probably knows they need, we've also done a ton of work over the last year to make developer integrations a lot nicer. You can hook the JVM debugger into your container: you just use cf ssh to hook into it, and all the configuration is done for you. The Google Stackdriver Debugger is brand new; we knocked it out in, I don't know, a couple of days with the Google team. Really, really cool. I thought I'd seen everything in enterprise Java, but this Google Stackdriver Debugger, if you've never seen it in action: it can inject log lines with zero performance cost and take snapshots of data back to an IDE. When we did the integration, I was sitting in California, the server was somewhere in us-east-1, and the Googler I was working with was in Paris at the time. I'd make some change, we'd push something, and all of a sudden a debug snapshot would show up on the other side of the Atlantic Ocean. So I definitely encourage you to go see "Analyzing Pets and Live Debugging" with Colleen later today in the theater; I believe it's at 2:25. That thing is crazy. It's really, really good. Beyond that, we support JRebel if you like instant code replacement, and JMX. And if you really want to get crazy and do a full profile inside of a Cloud Foundry container, we support YourKit for that. And we are always soliciting more improvements like this.
If you're a bank or otherwise a high-crypto kind of enterprise, we have integrations with three HSMs now, and there are more already in discussions to go in. So there's Dyadic, which is one of the new virtual HSMs on the scene. The Luna security provider has had a lot of work done to it in the last year. And Gemalto has also added support for ProtectApp in there as well.

And then the final slide here is a bit of a grab bag. The Apache Geode session cache, which some of you may remember as GemFire, is going to make it in; a PR, again, very close to merging. There's the Contrast Security framework, which is an active security agent that runs inside of your JVM, inside of your container, looking for vulnerabilities, looking for exploits, and taking proactive action against them. And we now have technology so that Spring Boot metrics, if any of you use those, will be written directly out to the Cloud Foundry Firehose, to be consumed by the metrics-gathering tool of your choice, whether that's something like Splunk or PCF Metrics, or however you choose to go about that.

So there are three things that I didn't touch on as part of this, and we're going to do some live demos. The first one of these is the container security provider. The container security provider is really interesting because I didn't think I wanted to do it at first. Over the last couple of releases of Cloud Foundry, let's say over the last year or so, we've steadily been taking a look at what it means to be secure: how exactly we achieve Justin Smith's three Rs, rotate, repave, and I can't even remember what the third one is. But the rotate one specifically comes to mind here. The idea is that BOSH now has something called trusted certificates, and trusted certificates are interesting in that they allow you to tell BOSH, at the very top level: here is a certificate that I want to show up on every single VM in my entire system.
And it shows up in a place that's OpenSSL-compatible, effectively: it's OpenSSL's trust store. So a bunch of different languages that all integrate with OpenSSL suddenly have a certificate that they will all trust. And this means that different things that need to communicate across SSL, database connections or something like that, can all trust one another. The big issue is, of course, that Java doesn't really play nicely with OpenSSL, and when you add this kind of certificate to that trust store, there's no easy way to get it into the JVM. So we wrote a container security provider. Let's see how my demos are going to go today: let's restart the container security provider demo and take a look at what happens when it boots up.

So we knew that we wanted to have all of these trusted certificates inside of Java applications, so they would be able to participate in the same kind of "I trust everybody inside of my Cloud Foundry instance," but we needed to get around the fact that Java and OpenSSL don't play nicely. There were a couple of stabs at this. First we did a thing where we called keytool repeatedly. The trusted certs file normally contains about 175 certs straight off, and if you call keytool 175 times, it takes about 90 seconds, which was doubling how long applications took to stage. That wasn't very good. Then we rewrote it a little more programmatically, so we built a key store in memory, and that got us to about three tenths of a second. But that key store just sat on the file system, and once you started rotating certificates and rotating keys, it wasn't good enough at that point. And so the third stab at this is actually a legitimate Java security provider. You will have seen these: Bouncy Castle and things like that. But because the security provider architecture is totally modular, you don't have to implement the whole thing.
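The second stab described above, building a key store in memory from the OpenSSL-style PEM bundle in one pass rather than calling keytool per certificate, might be sketched like this. This is illustrative only, not the actual provider code; the real provider also handles delegation and dynamic reloading:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class PemBundleLoader {

    // Parse every certificate in a concatenated PEM bundle in a single
    // pass and drop them into an in-memory KeyStore. Reading the whole
    // bundle once is what takes ~90 seconds of repeated keytool calls
    // down to a fraction of a second.
    static KeyStore load(Path bundle) throws Exception {
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null); // empty, in-memory only; nothing touches disk
        try (InputStream in = Files.newInputStream(bundle)) {
            int index = 0;
            for (Certificate certificate : factory.generateCertificates(in)) {
                keyStore.setCertificateEntry("trusted-cert-" + index++, certificate);
            }
        }
        return keyStore;
    }
}
```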
I haven't gone off and written a bunch of crypto implementations, which would absolutely have a ton of bugs in them. But what I can do is become a trust manager factory. So now what happens is, when the system starts up, it notices a particular file: OpenSSL's CA certificates file, which is where BOSH trusted certs go, things like that. If that file exists, we automatically add it into the system dynamically. Your application, with no configuration of your own, will get all of these certificates. But one of the key things here is that we haven't taken away any of the other ways to configure a trust store. If you are modifying the cacerts inside of your JRE, that takes precedence. It basically becomes a delegation chain, and we're the last one in the chain. So anything the system would normally have done, whether you modify the cacerts in your JRE or you set javax.net.ssl.trustStore or something like that, still takes precedence, but we will transparently add this other source for you. And the goal is that if things are going properly, applications no longer need to care about how they're getting certificates; it's now a platform-level decision. If your platform has a single CA, and this is super common (you're in an enterprise that has a single CA, and all certificates flow from that particular CA), then adding that CA to the BOSH trusted certs means Java applications no longer need to know about each individual system or each individual service they're going to connect to. Instead, we will trust up that chain to your CA, transparently, for you.

But even more interesting than that, and the primary reason we went for a security provider, is that there is a big push these days for TLS mutual authentication, which you might have heard previously called SSL mutual auth.
And basically the idea here is that in addition to a server providing a certificate that must be trusted, verifying its identity with some signing using private keys, we're now actually giving identity to Diego containers. This morning's keynote talked a little bit about the CredHub project. One of the key things is that CredHub actually requires this mutual TLS authentication in order to get access to it: the idea that I need to provide a certificate that CredHub trusts, that identifies exactly who I am, what my app ID is, what my instance ID is, some other metadata about this particular application. And we knew we needed to do this because there's a lot of stuff that's going to need to talk to CredHub to extract credentials for services and things like that. But we were able to also transparently add this to everybody's application. So effectively what happens, with no changes in your code, is this: here's the RestTemplate that you know and love from Spring, and you can simply say new RestTemplate(). The system knows to go find a key store, so that if I make a REST call through any standard Java API out to a service that then challenges me and says, "OK, you know who I am and you trust my certificate; now I want you to provide identity to me," then without you doing any configuration, your container identity is presented to that server. And this is one of the big things we want: all of a sudden, mutual TLS, which might have been very, very difficult to configure because you'd have to deal with different key stores, getting them into the system, and configuring them the right way with the Java options and things like that, just all goes away. You just get it, and the identity you use is guaranteed to be put there by Diego every single time. And I don't think it's quite the right time for it.
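The demo uses Spring's RestTemplate, but the same transparency applies to any standard Java HTTPS client. Here's a plain-JDK sketch of the idea; the URL is hypothetical, and the comment describes what the container security provider supplies rather than anything this code configures:

```java
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class TransparentIdentity {

    // Open a connection with zero TLS configuration. With the container
    // security provider in the JRE, the default SSLContext already carries
    // the BOSH trusted certificates and the Diego instance-identity key
    // pair, so if the server requests a client certificate during the
    // handshake, the container's identity is presented automatically.
    static HttpsURLConnection open(String url) throws Exception {
        return (HttpsURLConnection) new URL(url).openConnection();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical service URL; opening the connection performs no
        // network I/O yet, the handshake happens on first use.
        HttpsURLConnection connection = open("https://internal-service.example.com/");
        System.out.println(connection.getRequestMethod());
    }
}
```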
I don't think we're going to see it in the demo, so I'm going to go back to a previous slide. But one of the keys about this identity is that it rotates. The Diego identity is configurable; this is one of the Rs, we want to rotate these keys, so you can configure it to say: I want a new set of keys for this container once a week, or once a month, or once an hour, or once every ten minutes, or once a minute. And this security provider, because it works in memory rather than being just a key store file containing a bunch of keys, means that we will notice that these files have changed, that the certificate and private key that are your identity have changed, and we'll just cycle that underneath. The same thing goes for the trust store: if a new certificate is added to the trust store dynamically, it doesn't require you to regenerate some sort of key store, and it doesn't require a restage or restart of the application. It just happens. And we think this is going to make a big, big difference to key rotation, because one of the key pillars is that you should just assume every single key you ever create is going to be compromised. There's no way to 100% prevent this, so you need those keys to have a very, very small lifespan: ten minutes, five minutes, one minute, who knows. Now all of a sudden we have a system where these keys just get rotated underneath transparently, and the next time an outbound connection is created and a new SSL handshake needs to happen, you'll just use the new set of keys, and your application never knew anything about it. We think that's going to be a pretty significant improvement for security.

So, as I said: BOSH trusted certificates, we automatically notice those and add them at the end of the chain; the Diego identity, we notice it, again added at the end of the chain.
So if you want to use certificates other than these for that handshake, your keys always take precedence; your explicit or system configuration always takes precedence; we sit at the end as a fallback. I suspect, I hope, that more of you will start using this, because I think it's an amazing "we just get a bunch more security for free," right?

So the second thing I want to talk about is the JVM's memory calculator. I said a little bit earlier that one of the key things the Java Buildpack does is configure the memory of the JVM that runs inside the container. You might say: well, I'm a Node developer, and when I develop Node applications that doesn't seem to be a thing, or Ruby applications, it doesn't seem to be a thing. That's because those particular runtimes don't have this concept; they'll just grow to whatever the operating system allows them to use. If you want a Ruby VM to take up a terabyte of memory, it'll happily do it. But the JVM has always been different. For better or for worse, the JVM needs to be told, when it starts up, exactly how much memory it's going to use. And by default, the JVM and your container will disagree with one another: the Diego container has an idea of what it thinks the memory limit is, and the defaults in the JVM have no respect for any of that. So we've long attempted to configure the appropriate memory based on heuristics about what we think most Java applications look like. It could be overridden, and it was somewhat complicated: there was this thing called weightings, and then you had to take into account limits, and the limits could be open or closed, and it was very, very complicated. But the critical flaw was that, while we did what I suspect a lot of Java developers do, we managed only the three memory regions that we all know about: we know about heap, we know about Metaspace (or PermGen), and we know about thread stacks.
But it turns out there are way more than three memory regions. It turns out there are like six. So today, in Java Buildpack 4.0 (we'll go in this direction here, restart this, and see some stuff go by), we introduced a new memory calculator, and this new memory calculator calculates against all of them. So instead of just the heap (-Xmx), the Metaspace (-XX:MaxMetaspaceSize), or even stack sizes (-Xss), we now also control the compressed class space size and the reserved code cache, which show up in here as well, and direct memory is the sixth one. In a lot of cases these were actually completely unbounded previously. Direct memory was by far the biggest offender, which basically meant that if you did a bunch of stuff where you memory-mapped files (and you might not even know you were doing it; you might be using some underlying API that did such a thing), you could, again, just grow to a terabyte. There was no bounding on it whatsoever, and you'd immediately violate the container's memory limit. So we now take all six of these into account when the whole thing starts up.

Now, one of the interesting things about this is, of course, that I said it's in Java Buildpack 4.0. If any of you are familiar with the buildpack to date, we've been in the 3.x line; I think 3.18 is either out or the next one that's going to go out, I can never quite remember what version we're on. And 4.0 is a major version change: we consider this new memory calculator to be an incompatible change. And the reason is this: if I scale this application down to 512 megs, which I'm sure a lot of you probably do, just trying to optimize your JVM a little bit, then when the whole system starts up, what you're going to see is that it will not start with 512 megs anymore.
And you'd be like: what? How have you made my JVM even larger than all my Node friends already make fun of it for being? The answer comes down to this: it turns out that in the old days, in the 3.x line with the memory calculator we were using, you could absolutely start a JVM in 512 megs, but you could not run a JVM at maximum in 512 megs. But nobody realized that, because Cloud Foundry is so amazing about guaranteeing uptime for instances. You'd just come back one morning, look at PCF Metrics or something, and realize it reboots once an hour. We go out of memory once an hour, the container kills the thing, it comes back, and we never even knew it was happening. And we had this problem with a lot of enterprise customers, where all you can say is: don't worry about it, the platform is doing absolutely what it's supposed to do. It's great, Cloud Foundry is awesome, you didn't even know this was happening. So we needed to have another go at it. It turns out all of these things that we had not bounded were growing beyond what the container wanted, so we had to make some hard decisions. In the end, what we decided was: we are going to use the defaults that the JVM uses, wherever they are already bounded. And we said the same thing for Tomcat, which is far and away the largest container currently in use. So what does that mean? It means that the reserved code cache and the compressed class space size together are about 350 megs. You don't know that, almost nobody knows that, but there is a bit of your JVM's memory space that just occupies 350 megs worth of space to do optimizations with the JIT. And all of you using the defaults in Tomcat are looking at 200 threads, each with a stack size of one meg. So if you went to a full 200 threads, you would in fact have another 200 megs.
And when you start adding all of these things up, what we find is that by default the system requires something like 650 megs of memory before a single byte of heap is allocated. So from this perspective, while this is technically an incompatible change for some people, we truly believe it's not really an incompatible change. This was the truth all along; you just didn't realize your application was having trouble before. So we've decided to make it a 4.0, something you opt into; it's not currently the default, and I encourage you all to use it right now, just to kick the tires and make sure your applications don't have any issues with it. But one of the key things to know is that configuring these memory calculations has also improved pretty significantly. Nowadays you can set the environment using standard JVM flags. So this example shrinks the thread stack size down to 512K instead of a meg, and says: maybe I don't need the Java 8-era 350 megs worth of JIT space, maybe I just want to go back to the Java 7-era 100 megs. I don't recommend doing this; there's some great literature from Netflix and others that says doing this really impacts performance. But you could do it, and it uses standard JVM flags and shrinks all of that memory space down to something under 512 megs. I'm not going to run this at this particular point, because we're running a little bit low on time and I want to do one more thing. So: the memory calculator covers way more regions than it used to, it's configurable with standard Java flags, and it's incompatible with some previous versions, but only in the sense that your stuff didn't actually run properly the first time and you just didn't realize it.
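You can see most of these regions from inside a running JVM via the standard management beans. This is a quick illustrative report, not buildpack code; note that direct memory is tracked separately (via the buffer pool beans), so it won't appear in this list:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryRegionReport {

    // Print every memory pool the JVM knows about. Heap is only part of
    // the story: Metaspace, Compressed Class Space, and the JIT code
    // heaps all appear as NON_HEAP pools, and each maximum counts
    // against the container's memory limit.
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-35s %-8s max=%d%n",
                    pool.getName(), pool.getType(), pool.getUsage().getMax());
        }
    }
}
```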
4.x (I believe we've got a 4.1 out, with a 4.2 coming very shortly) is not the default. It doesn't come shipped in any particular CF, either open source or any distribution, at the moment. We're going to keep the two lines around in parallel for about six months, I think, allowing people to test them. You can do a cf update-buildpack or cf create-buildpack with our artifacts, test it out on your own, and get back to me; tell me if you have any issues with it.

But the last thing I want to talk to you about today is a new tool that we call jvmkill. What happens when your JVM actually does go out of memory, when you have run out of Metaspace, you have run out of heap? The answer has been unsatisfying for a long time. Everybody sort of expects: oh, I just want a heap dump out of this. But the big issue is that inside of a container, first, it's quite likely that your heap dump will be larger than the file system you run on. By default, Cloud Foundry containers have one gig underneath the hood. You can raise it to up to two gigs, but there is a BOSH manifest flag that says you can never have more than two gigs inside that file system, and unless somebody does some serious surgery, it's not exposed through UIs and things like that. And even if you somehow managed to make the file system larger than the heap dump you wanted, as soon as your application has died, that container gets recycled, and that heap dump is just gone. So we took two different tracks to attempt to solve this problem. So I think we come over here: the jvmkill tool, yeah, good. I have a little test utility here that's going to make this thing go out of memory, by basically creating a bunch of un-garbage-collectible byte arrays in memory. And so now what happens is, given the idea that I cannot... oh, that's not great. Right, cf scale; let me put that back to one gig and restart this thing.
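A minimal version of that kind of test utility might look like the sketch below. The names are illustrative; the real demo tool just allocates until the JVM dies, whereas the helper here is bounded so the behavior can be exercised safely:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapExhauster {

    // Allocate byte arrays and keep strong references to every one of
    // them, so the garbage collector can never reclaim anything. Run
    // without a cap, this ends in OutOfMemoryError: Java heap space,
    // which is exactly the point of the demo utility.
    static long retain(List<byte[]> retained, int chunks, int chunkBytes) {
        for (int i = 0; i < chunks; i++) {
            retained.add(new byte[chunkBytes]);
        }
        return (long) chunks * chunkBytes; // bytes now pinned on the heap
    }

    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        while (true) {
            retain(retained, 1, 1024 * 1024); // one un-collectible megabyte per pass
        }
    }
}
```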
OK. Given that we can't really write out a heap dump with the standard container configuration, what information did you actually want from that heap dump? We've settled on something we're referring to as a histogram. So when we go ahead and kill this application now, we print one out, and I'm going to shrink this so it renders a little better and doesn't look totally horrible. Here we go. We kick out one of these, and it has, in this particular configuration, the top 100 offending types. Think about the primary thing you do when you get a heap dump: you throw it into your profiler, YourKit or whatever the Eclipse one is called, MAT maybe, and the very first place you look is: tell me what I had a bunch of in memory. So now, whenever the JVM dies, we're going to kick this out. And as we can see, there's a byte array here, and it takes up 309 megs, which is about what the heap is in this particular system. In addition to that, we're going to kick out all the details about your memory pools, which is, again, a nice thing you might want if you had a profiler hooked up. And we do think that in general this is going to suit a lot of cases. It gives you a pointer and says the biggest offenders were this, or, if I don't know exactly what kind of memory ran out: was it Metaspace, or was I out of heap? We'll be able to get all of that kind of metadata out. And even if that's not necessarily enough, even if having a pointer to the top two or three offenders isn't enough, I think it gives you enough of a pointer that you can hook a profiler in and do some targeted testing against that particular area. But there is a small minority for whom this still isn't enough: they absolutely demand that they get heap dumps.
And so, actually, I committed this yesterday morning, so it has not come out in a release yet, but it will very, very soon. Cloud Foundry recently added the idea of volume services: you can now mount persistent disks inside of Cloud Foundry containers. So now, if we notice that you have a persistent disk tagged specifically for heap dumps, then in addition to printing all of this same information out, which should be enough, we will also come down here and dump a heap dump for you. It tells you exactly where it's going to go and exactly what the name of it is, and we use a unique identifier, since space names are unique only within organizations, application names only within spaces, and instance numbers aren't really unique anywhere. So we have a nice little naming model: if everybody's using a communal NFS share, the most traditional way volume services are backed, you can go find exactly which heap dump is yours, put it into your profiler, and get at it. But again, I think it's going to be super rare that people mount these persistent file systems into their containers; if it's something you desperately, absolutely need to have, though, you can put in a file system that is not only large enough but also persistent, and get this information out. On a side tangent: all the speakers were asked to submit their slides, fully done, last week, and I got a nice little joke on Twitter along the lines of "we haven't written any of this stuff; how can we submit slides for our talks?" Well, we were literally writing this stuff yesterday, right? So yeah: jvmkill notices all kinds of resource exhaustion, heap, Metaspace, stacks, threads, basically anything that's going to happen inside the system, and the behaviors are currently pluggable.
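The heap-dump-to-a-volume behavior described above can be sketched with the JDK's own diagnostic bean. This is not the jvmkill implementation (which is a native agent); it's an illustrative sketch of the same idea, and the directory is hypothetical:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

public class HeapDumper {

    // Write an .hprof heap dump with a unique file name, the same idea
    // as dumping to a mounted volume service: space and app names are
    // not globally unique, so a UUID keeps dumps from colliding on a
    // shared NFS mount.
    static Path dump(Path directory, boolean liveObjectsOnly) throws Exception {
        Path target = directory.resolve("heap-" + UUID.randomUUID() + ".hprof");
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        diagnostics.dumpHeap(target.toString(), liveObjectsOnly);
        return target;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical mount point for a heap-dump-tagged volume service.
        System.out.println("Wrote " + dump(Paths.get("/var/vcap/data/heap-dumps"), true));
    }
}
```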
So today we have the histogram, the memory region summary, and the heap dump if there's a volume service mounted. If there's something else you'd like to see, opening issues against the Java Buildpack GitHub is a good place to make suggestions; we're open to other ways of getting the same kind of diagnostic information out.

So, in summary: we have additions, or pending additions, of new JREs and new service integrations. I can tell I'm really falling behind; we have seven open PRs that are basically all just waiting on legal to get merged in. So there's a very vibrant contributor ecosystem going on right now. Even if you don't want to submit a full-on PR, I highly encourage you to just open GitHub issues. That guy over there pays me to do the work for you; you just have to tell me what work you need me to do. Java Buildpack 4.0 has been released. It is not a pre-release in any way; it's supported as if it were any other Java Buildpack. It is just not the default. So I highly encourage you, if you want to have a say in what this thing looks like six months from now: opt in, go download it, go install it in your test systems or your production systems. I absolutely think you can run it in production systems. See how it works with your applications, and see what you'd like to change about the new memory calculator and the new resource-exhaustion behavior. And that's it. My name was Ben Hale, I'm the lead of the Cloud Foundry Java Experience, and I will take questions either here or privately down below. Thank you.