Thank you for joining us this afternoon. I hope you've had a really wonderful Cloud Foundry Summit day so far. We are here to tell you a little bit about our newest BuildPack feature, Multiple BuildPack Support. I'm Katie Gross. I'm a software engineering manager at Pivotal Cloud Foundry out of the New York office. I'm Stephen Levine. I'm the core BuildPacks product lead. So OK, let's get started and talk about Multiple BuildPack Support. First, I want to discuss what a BuildPack is. This is a pretty basic concept. It's core to the system. But judging by some of the questions that we receive in our Slack channel nearly daily, I'm not completely sure everybody does know what BuildPacks do. So what is a BuildPack? BuildPacks are responsible for setting up your environment and injecting dependencies for your app. It's that simple. So operations, things that relate to deploying your app, we do them so you don't have to. If you've been using CF for any length of time, you've probably encountered Onsi's cf push haiku before: "Here is my source code. Run it on the cloud for me. I do not care how." I like to think that BuildPacks really embody this sentiment. When we think about the developer as a user, we're just trying to make it as seamless as possible: cf push. Which brings us to how they work. Every BuildPack, as they work currently, has three scripts: detect, compile, and release. The detect script won't necessarily run. It only runs if you haven't specified a BuildPack in your cf push command or in your app manifest. But if you haven't, if you just cf pushed and let 'er rip, then the detect script will run for each BuildPack in order to determine, hello, is it me you are looking for? And if the answer is yes, yes, I do need a Ruby BuildPack, then staging will continue on to the appropriate compile script for whichever BuildPack was detected. The compile script is responsible for really the meat of what you think of BuildPacks as doing.
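Before moving on to compile, the detect contract described above can be sketched in a few lines of bash. This is a hedged illustration, not any real BuildPack's script: the Gemfile check and the "ruby" name are made-up stand-ins for whatever file check a given BuildPack actually performs.

```shell
#!/usr/bin/env bash
# Hypothetical bin/detect sketch for a Ruby-style BuildPack.
# The platform invokes it with the app's build directory as $1;
# exit 0 (and print a name) to claim the app, nonzero to pass.
detect() {
  local build_dir="$1"
  if [ -f "$build_dir/Gemfile" ]; then
    echo "ruby"   # name reported back to the platform
    return 0      # yes, it is me you are looking for
  fi
  return 1        # not our app; the platform tries the next BuildPack
}

# Demo: a directory containing a Gemfile is claimed.
app=$(mktemp -d); touch "$app/Gemfile"
detect "$app"     # prints: ruby
```

The platform runs each installed BuildPack's detect in order and stages with the first one that exits 0.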
It's responsible for any of the transformation steps of the app. To give you some examples, and sorry, I'm gonna reference Ruby and Go a lot because that's my bailiwick, but translate it into whichever language makes sense to you. If it's a Ruby app, maybe it's determining whether you're using the Rails framework or Sinatra. If it's a Go app, or any number of other apps, it might determine which package manager you're using. So do you need godep? Do you need gvt? And then it installs that, allowing you to appropriately vendor the dependencies. Then, once the app has been nearly entirely staged and set up in that staging environment, it passes it off to the bin release script. I looked at the documentation today for what a bin release script is. We say that it passes metadata to the runtime. And I don't think that's the clearest way to say that at all. What it really does is write the default start command that the droplet is going to use. And that's it. Take a look at the BuildPack sometime; that is not a complicated part of it. Everything is just in compile. So this is how we're working today. And this is a good basis for me to begin talking about the droplet component of the BuildPack model. So one of the benefits of using BuildPacks, as opposed to other app life cycles in this system, is that you have your app bits, they combine with the BuildPack, and then you stage it and you get a droplet. And then that droplet you can combine with your root file system and launch the app. It's significant that those are separate steps, because it means that if you want to replace the rootfs, you can replace the old root file system with the new root file system without recompiling your app into a droplet. That means some significant time improvements as far as how long it's gonna take to update your file system.
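To make the bin release contract from a moment ago concrete, here is a hedged sketch of roughly what such a script boils down to. The YAML shape follows the general staging contract; the rackup start command is an illustrative stand-in for whatever a real BuildPack would emit.

```shell
#!/usr/bin/env bash
# Hypothetical bin/release sketch. It receives the build directory as $1
# and prints YAML whose key content is the droplet's default start command.
release() {
  local build_dir="$1"   # unused in this sketch; real scripts may inspect it
  cat <<'YAML'
---
default_process_types:
  web: bundle exec rackup config.ru -p $PORT
YAML
}

release .   # prints the YAML above
```

As the talk says: that really is about all there is to it; the heavy lifting happens in compile.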
And then it also means that we can guarantee ABI compatibility: the droplet will run on top of an ABI-compatible rootfs. Which means the app abides. And with that I'm gonna pass it to Stephen to talk about multiple BuildPacks. Thanks, Katie. So what does multiple BuildPacks mean? From a user's perspective, it means that you can specify multiple BuildPacks to apply to the same application. So how does that work? We've introduced two new scripts that replace the bin compile script in that contract Katie was talking about, called bin supply and bin finalize. The order of the BuildPacks is really important when you use multiple BuildPack support. The bin supply script of every BuildPack runs in order, and then the bin finalize script of the last BuildPack runs, and that's how a droplet is staged. And then, like Katie explained, you combine that droplet with the rootfs and you get a running application. So let's take a deeper dive into how that works. Each bin supply script writes to a special sandbox directory, and then the bin finalize script is allowed to write to the application directory, make any necessary adjustments there, and make those sandbox directories accessible to the application. The bin supply scripts also have access to the previous sandbox directories, so they can make use of any dependencies there during staging. If that seems complicated, I totally get it. The original proposal was to run them all in order like the Heroku BuildPacks, make them coordinate really carefully, and hope for the best. We got a lot of pushback from enterprise users concerned about support, and that resulted in our current implementation. I think people were worried about the sort of combinatorial explosion of different BuildPack orders and all the edge cases that could happen there, and they were pretty worried about what supporting that would mean.
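The supply contract just described can be sketched as follows. This is a hedged illustration of the general shape (build dir, cache dir, deps dir, and the BuildPack's index in the chain); the "agent" it installs is made up, and the exact argument order should be checked against the BuildPack documentation.

```shell
#!/usr/bin/env bash
# Hypothetical bin/supply sketch. Each BuildPack in the chain gets its own
# sandbox at $deps_dir/$index; later BuildPacks (and finalize) can read
# earlier sandboxes, and the platform exposes them to the running app.
supply() {
  local build_dir="$1" cache_dir="$2" deps_dir="$3" index="$4"
  mkdir -p "$deps_dir/$index/bin"
  # "Install" a made-up dependency into this BuildPack's sandbox,
  # leaving the app directory ($build_dir) untouched.
  printf '#!/bin/sh\necho monitoring agent running\n' \
    > "$deps_dir/$index/bin/agent"
  chmod +x "$deps_dir/$index/bin/agent"
}

# Demo: supply as BuildPack index 0 into a fresh deps dir.
deps=$(mktemp -d)
supply "$(mktemp -d)" "$(mktemp -d)" "$deps" 0
"$deps/0/bin/agent"   # prints: monitoring agent running
```

The key property is the one the talk emphasizes: supply writes only to its own sandbox, which is what keeps BuildPack combinations from stepping on each other.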
So we think this implementation has fewer edge cases but basically the same use cases as the Heroku one. And this implementation also makes it especially easy for partners to write extension BuildPacks. They just have one script that follows a really specific contract, does one thing, and it works great. If you're worried about compatibility, that totally makes sense too. This is 100% compatible in all directions. All old BuildPacks will still work by themselves without other BuildPacks. And the core BuildPacks that we maintain will remain compatible with old foundations. I'm also gonna talk a little bit later about how you can use a sort of shim BuildPack now to get this functionality in your current foundations. And I'm gonna turn it over to Katie to talk about who should care about this. Yay, this is my favorite part. Why should you care? I think the most obvious beneficiaries of the multiple BuildPack setup are the developers of apps that were previously incompatible with the BuildPacks model. Whether they needed more libraries than we provided or they needed a particular mixture of tech, if they couldn't use one of the BuildPacks that we provided, they had two options: either fork the BuildPack, maintain it, and deal with pulling from upstream to make sure they get our updates, or use an image, in which case they just don't benefit from the BuildPack structure at all. You don't benefit from our CVE-related updates, security patches, et cetera. We, the BuildPack maintainers, also really benefit from this. So I'm excited. We benefit because we can support a larger ecosystem of extensions. So when you ask, what about fill-in-the-blank language that I love so much and use on the weekends, we might be able to respond, hey, yeah, we are looking into that, instead of having to spend quite so much time thinking about all of the possible edge cases for each BuildPack and how it might be used with other tech that might need some other dependency.
So it frees up some of our time. It also helps Cloud Foundry partners who would like to distribute their tooling into the container in which the application lives. Currently, vendors that need to do this, let's say they do security monitoring or traffic monitoring, either have to fork every single BuildPack and then provide that to customers who want to use it, or they have to submit pull requests that we accept upstream. But either way, they're making a change to multiple BuildPacks when all they really want is the same change for each different language. This allows them to create an extension BuildPack. The extension BuildPack couldn't be used on its own; it wouldn't have a finalize script, just a supply script that you could include and say, hey, I'll do security monitoring with any of the other BuildPacks. This last one is also implied by the first point. There are people who have to maintain the forked BuildPacks for the developers who required the forked BuildPacks, and this is a benefit for them too. Whether it's Python or Java or Go or whichever one your company ends up having, like, 10 versions of, you no longer have to worry about that as much, so that frees up overhead for them too. Some examples of the tech. Let's say you have a Python app, or a static file app, or really any of them, and you want to use NPM to compile your assets. Right now we have a couple of different BuildPacks that include node for this purpose even though they aren't the node BuildPack, but that's our decision. This puts it at your discretion. If you'd like to have NPM available to compile assets, you can combine that with any single BuildPack, and you don't have to leave it up to us. Also, let's say you have proprietary database drivers you've been needing to include, especially to support legacy apps. Right now that means that a lot of people are forking BuildPacks again to include these.
This means that you could have an extension BuildPack, say an Oracle one, that can work with whatever the system BuildPack is. Adding custom PHP modules would be exciting. That's also exciting for us, so that we don't have to field your requests to add new PHP modules all the time; a lot of this also benefits us. And RubyGems that use the JVM. There are a few of them out there. People do like them. They can't use them under the current system. If none of these particular examples or technologies resonates with you, I hope it gives you an idea of how this applies to your working space, because all of these patterns work: just fill in the language with another language, and you'll be able to use the tech that you need together. So Stephen, can I try it now? Why yes, you can. So we have a thing called the multi-buildpack that works on existing foundations. It won't let you use BuildPacks that are installed into your platform, but you can use BuildPack URLs. And it's compatible with all current, or at least recent, CF versions. It uses the same code as the platform components, so all the implementation we did in the platform to make this work is the same stuff in the BuildPack. And it let us transition the existing BuildPacks over to this sort of new multi-buildpack world, to prepare them for when full support is available in the platform. You can use it to get your BuildPacks to behave nicely before full support is available. And it lets you try it out right now. So I'm gonna turn it back over to Katie to talk about the future. Thanks, Stephen. The future, it's so bright. So some of you may be sitting there already thinking about the possible applications of having multiple BuildPack support. Some of you may even be thinking, what if I want a mandatory BuildPack? And that is on the roadmap. It is not in this first release, but it's absolutely in our sights as far as what we wanna be supporting in the future.
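Before going on: the shim multi-buildpack Stephen just described is driven by a config file at the app root. This is a hedged sketch of the usual shape as we understand it; the file name, key, and BuildPack URLs should be double-checked against the multi-buildpack repository's README.

```shell
# Sketch: trying the shim multi-buildpack on an existing foundation.
# List the BuildPacks to apply, in order, in a multi-buildpack.yml at
# the app root. URLs only -- system BuildPack names won't work here.
cat > multi-buildpack.yml <<'YAML'
buildpacks:
  - https://github.com/cloudfoundry/go-buildpack
  - https://github.com/cloudfoundry/ruby-buildpack
YAML

# Then push with the shim itself as the app's BuildPack:
# cf push my-app -b https://github.com/cloudfoundry/multi-buildpack
```

Order matters just like in the platform feature: each listed BuildPack supplies in turn, and the last one finalizes.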
A mandatory BuildPack would be an operator-specified BuildPack that always has the opportunity to run before the other BuildPacks. And it would do this by, again, having a slightly different script structure than the other ones we've described. You'd have a detect script that always runs. So even if the user specifies a BuildPack, the detect script for an operator-specified BuildPack would always run, decide whether or not to run its supply script, and then the BuildPacks that the users requested would run afterwards. This might be familiar to some of you. There was a proposal earlier to the Foundation for Staging Policies. This is the same concept with a different implementation, and it takes advantage of the multiple BuildPack feature we were already building. So some examples of this. Let's say you as a company want to be able to run an antivirus every time one of your devs pushes an app. You could include just a thin BuildPack that's always run. You don't have to rely on your developers to use it. And you then have veto power over whether the app stages. At this point, I'm gonna explain why this is important to devs. If you're looking at me and thinking, does that mean that somebody can mess with my code every time I push? That sounds miserable. No, the answer is no. A mandatory BuildPack would only have read access to the app root. So it could see what you were doing, but all it can do about it is veto if there's something that doesn't look good. It can, however, add dependencies that following BuildPacks can use and that will be available to them. But don't worry, your app won't be changing under you in ways that you can't predict because of something somebody else did. What's more likely to happen is your app fails to stage and gives you a very clear warning as to what security vulnerability is present, or something like that. Which brings us all the way to the end of our very brief and pithy talk. Does anybody have any questions?
Is there a question person who's bringing the microphone around? There's not a question person. Cool, who raised your hand? Can you say your name? Pranay Sharma from Manulife. When is it expected to come out of incubation? Oh, you don't have the mic, okay. Yeah, within the next quarter. So the work on this, we sound so confident about it because our team's already done the work. So you know how that is. But within the next quarter, it should be available on the platform. That said, if your company isn't expected to update every time we release with the oh-so-frequent release cycle, it might take longer to get to you. The multi-buildpack BuildPack, interesting. One thing that, depending on how you tried to use it, we didn't describe about the multi-buildpack BuildPack: you can't use it with system BuildPacks. You can only use it with, say, a GitHub URL that points to an available BuildPack. If you try to use it with the system BuildPacks, it won't work. Part of that is the legacy of it being something we were just using to develop ourselves, and that was what was convenient for us at the time. If you end up having different questions, please reach out to the team on Slack. Sorry, did you say all the supported BuildPacks are multi-buildpack compatible at this point? I think Java was left out, right? So it's an ongoing effort. They're almost all converted over. I think Java and .NET Core are the remaining two, but we don't expect that it'll be too much work. What about making a finalize BuildPack mandatory? So you have mandatory BuildPacks at the start of the chain; what about mandatory BuildPacks at the end of the chain? Like, let's say we wanted to tripwire on all the content, including stuff that NPM might have generated. So we've considered different implementations of that. This mandatory BuildPacks idea is just the first iteration of something that we'd like to try. That's totally not out of the question.
We need to do more user research first to validate whether that makes sense. Excellent. Are there questions about how this relates to your specific use case that I've never heard of? That's awesome. Cool. Well, it's been great talking to you all today. If you do end up having questions, as I indicated at the beginning, we're available on Slack, and you can reach out and we'll be there to help you with your problems. Thank you.