Today, we're going to talk about two new features in BOSH, both shipped in fall 2017. One of them is BOSH package vendoring, which gives you modular BOSH packages, and the other is co-located errands. Oh, gosh. None of that is switching slides, is it? There we go. I'm Maya Roscrance. I am the anchor of the PCF Redis tile. Hi. Oh, hi. I'm Maria. Until recently I was on the PCF Redis tile as well; I'm currently on a team that makes contributions to open source Kubernetes.

All right. So just before we start, your attention please for a quick mandatory fire safety announcement. Please note the locations of the surrounding emergency exits and locate the nearest lit exit sign next to you. In the event of an alarm, please go out the exits to the concourse area; stairwells leading to the outside of the facility are located along the concourse. And for your safety, please follow the directions of the public safety staff.

Cool. So let's move into the first half of the talk, which focuses on co-located errands. We'll walk through the concept of what co-location is in terms of BOSH jobs, and how BOSH operators use co-located errands to reduce the footprint of their deployments and their deploy times.

Okay. So let's start with a quick refresher of some useful BOSH terminology that will come in handy in a second. A BOSH job is essentially a deployment unit for BOSH, a chunk of functionality that a BOSH release offers. And a BOSH release is essentially a piece of software that is packaged in a way that BOSH can then deploy. There are two broad types of BOSH jobs based on their life cycle. We have long-running ones, for example a web server, or the Redis software that is running on a BOSH-deployed Redis instance. And then we have short-lived jobs, also called errands.
And these could be either one-off tasks, for example the registration of a service broker with Cloud Foundry, or short-lived tasks or scripts that an operator might periodically want to run on their platform, like smoke tests. From the perspective of a BOSH release author, errands look like any other BOSH job, except that in their template they need to specify a bin/run script, which is the entry point for their functionality once the job is deployed. When it comes to composing the BOSH manifest for deployment, so the specification that describes what this job will look like in real life, errands also look very similar to any other job. At the top of the instance groups we have the specification for a long-running instance, and at the bottom we have the specification for an errand job. The only difference is that the instance group, so the VM where the errand job will live, needs to be labeled with lifecycle: errand, so that BOSH knows it's a one-off task and not a long-running thing.

The previous manifest translates into something like that in real life. Once an operator goes ahead and deploys it, BOSH will create an instance and place the long-running process on that VM, and it will just keep running it, no problem. When the moment comes for the errand process to run, the operator would use the BOSH CLI to trigger that and say bosh run-errand smoke-tests. BOSH will then go away and create a new instance, which in BOSH world almost always means a VM for most cloud providers. It will place the errand script in there, execute it, collect the logs, and then tear down that VM.

Now there are advantages and disadvantages to this approach. The main benefit is that it offers an implementation of single responsibility, in a way. In this setup, software does not coexist with other software unless it has a good reason to.
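As a sketch, the manifest shape being described might look something like this; the release and job names here are hypothetical, and a real manifest would also need vm_type, stemcell, and network settings:

```yaml
# Hypothetical manifest excerpt: a long-running group plus an errand group
instance_groups:
- name: redis-server          # long-running instance group
  instances: 1
  jobs:
  - name: redis               # e.g. the Redis server process
    release: redis
- name: smoke-tests
  lifecycle: errand           # marks this group as a one-off task for BOSH
  instances: 1
  jobs:
  - name: smoke-tests         # errand job; its template provides bin/run
    release: redis
```

The lifecycle: errand label on the second group is the only structural difference between the two.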
So that gives us isolation, which in turn gives us very interesting things like security and making sure that processes do not pollute each other. On the flip side, it's very common for errand scripts to take a few seconds or maybe up to a minute to complete. On the other hand, creating a VM on most infrastructures takes about five minutes or so, and is also subject to minimum costs at the infrastructure level. So in the end, this is time and money spent that's not really balanced against the amount of time the errand functionality takes to complete. And five minutes may not sound like much, but when we're looking at running those scripts periodically, or at upgrading a big deployment like a Cloud Foundry installation with many releases supporting it, it ends up adding hours to the process.

So this is the problem that co-location of errands comes to solve. The main concept here is that the errand job stops being placed in its own VM and instead moves into a VM instance with another pre-existing job. It doesn't get executed until the right time comes, but there is a copy of the script there. That way BOSH does not create a new VM; it just places a copy of the script on that pre-existing instance. Once the operator decides it's time for the errand script to run, they would go ahead and run bosh run-errand tests. And you might have noted that the argument for run-errand has changed; we'll talk about that in a second. BOSH will then go away, find the job called tests on every instance group where it's co-located, execute it, and go on from there. Let's look at requirements before we see how we'll go about implementing co-located errands.
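For contrast with the earlier shape, the co-located version being described might look like this, again with hypothetical names; the errand job simply joins the jobs list of an existing instance group instead of getting its own:

```yaml
# Hypothetical co-located excerpt: no separate errand instance group
instance_groups:
- name: redis-server
  instances: 1
  jobs:
  - name: redis               # long-running job, keeps running as before
    release: redis
  - name: smoke-tests         # errand job co-located on the same VM; a copy of
    release: redis            # its script sits here, idle until run-errand
```

No new VM is created for the errand; BOSH runs the script in place when asked.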
So to start using co-located errands, you'll need a minimum BOSH Director version of 263, which is just so your Director supports co-location; a minimum Linux stemcell version of 3445 and above, so the BOSH agent supports co-located errands on the instance side of things; and a BOSH CLI of 2.0.31 and above, for the increased granularity at the CLI level. All of these were released, I believe, around August 2017, so they're fairly mature by now.

OK, so let's look at what needs to change from the current approach and what stays the same for all parties involved. On the release author side of things, in most cases nothing much needs to change. They would still write a BOSH job the way they used to, and they still need a bin/run script, so in most cases it's completely transparent to them. As far as the composition of the manifest is concerned, and that could be done either by a human operator or by a system that composes the manifest, you'll notice that the job moves from its own instance group to another instance group that contains other jobs.

OK, so there are, again, trade-offs with this approach. There are some obvious benefits in terms of time savings, and obviously a smaller footprint means lower costs, which is all good. But it is also a more elegant design for a very specific use case. In the past, there were deployments that would have a specific instance whose responsibility was to reach out and perform a task on many other instances of the deployment, but not on the deployment as a whole. Something like that, really. A very common example is health checks. So we would have an instance that gets configured with a bunch of endpoints and a bunch of credentials for those endpoints, and it would reach out to each and every one of them and run a specific health check or a clean-up script or whatever else.
This, with co-located errands, can now be converted to an architecture that looks a little bit like this. What that means is that each and every instance now has the logic on itself to perform these things independently.

The flip side, and the places where co-located errands may not make much sense: the main concern here is isolation. As far as BOSH is concerned, a co-located errand is just another process on that instance. So at the very minimum, it's worth taking a step back and considering: what is the risk for me when I'm co-locating an errand script with another long-running and maybe business-critical piece of software on my VM? It can be security. For example, I might need to consider: do I need to change my access controls, do I need to think about the data on the VM? It can also be considerations around pollution. When errands were not co-located, you would have a brand new VM to run your errand script on, and when the errand finished execution, all logs and artifacts would be taken away and the VM torn down. That's no longer the case. So it's worth considering what side effects I have, what sort of logs remain on my VM, et cetera.

On top of isolation, another consideration is that you might now end up having multiple copies of the errand script, because BOSH will place a copy of the script on every instance of the instance group that the manifest specifies. There could be scenarios where this is a requirement, like we saw before with the health checks. But it might not be. And in that second scenario, it's worth considering: first of all, is it OK? Is my errand script idempotent, so it doesn't care if it gets called multiple times? And if not, can I leverage the more granular BOSH CLI to better control how many times my errand script is called? Some approaches that we've seen, apart from the vanilla use-co-location-everywhere approach, are the following.
A lot of operators have used a hybrid approach, where their deployments have some of the errands in their own instance groups and some others co-located, so that you sort of get the best of both worlds. Specifically for process isolation, some operators have used tools in combination with co-located errands. BOSH Process Manager (bpm) is one of them, which essentially adds a layer between the monit process manager on the BOSH VMs and the actual operating system to make sure that processes are namespaced and don't interact as much with each other. Something else that's also quite common is the usage of the more granular commands that the BOSH CLI now offers with co-located errands, and we'll have a look at those in a moment.

OK, so invocation can look just like it did before co-located errands, mostly. What I mean by that is that bosh run-errand x is still a valid command. However, it might have slightly different side effects, or slightly different results than you're expecting. Before co-located errands, when you said bosh run-errand tests, BOSH would look for an instance group called tests and would run any errand script it had on it. Now when you say bosh run-errand tests, BOSH will look for any job called tests on however many instance groups you've co-located it on, and run all of those. As I said, this might or might not be OK. If it's not, you can say: I only want it to run once, I don't care where, but just once. So you can use the --instance flag and say you want it to run on the first instance of that instance group. You may want it to run on all instances of a specific group, so you can say bosh run-errand tests with --instance broker. You may know a specific instance that you want it to run on, and again you can say --instance with the broker group and the instance GUID, or even a set of many specific instances.
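The invocation variants being described might look like this at the CLI. The deployment, job, and group names are hypothetical, and `<guid>` stands in for a real instance GUID, which you'd get from bosh instances:

```shell
# Run the errand job `tests` on every instance it is co-located on
bosh -d my-dep run-errand tests

# Run it exactly once, on any single instance of the broker group
bosh -d my-dep run-errand tests --instance broker/first

# Run it on all instances of the broker group
bosh -d my-dep run-errand tests --instance broker

# Run it on one specific instance, by GUID
bosh -d my-dep run-errand tests --instance "broker/<guid>"

# Run it on a set of specific instances, by repeating the flag
bosh -d my-dep run-errand tests --instance broker/first --instance api/first
```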
And again, you can extend the previous sub-command and pass exactly the instances that you want it to run on. OK, so a quick recap: we looked at how co-located errands allow for faster deployments and smaller deployment footprints, and how they become a trade-off of isolation versus speed. And now Maya is going to take you through BOSH package vendoring.

All right, so we're going to step down an abstraction layer, from errands and jobs down into packages themselves. This is also a new feature, only shipped a few months ago. If we step back a bit more and look at what a package is: packages basically wrap source code and binaries and provide them to jobs. Jobs do your process start, stop, and monitoring. All of that eventually goes into a BOSH release, which then gets deployed onto a VM. BOSH vendored packages really only make sense for wrapping binaries, but we'll touch on that a bit more.

Before the benefits of vendored packages become apparent, it's good to contrast them with the existing flow. To start with, if you want to pull something like Golang into your release, it's a multi-step process, and we can walk through it pretty quickly. You download Go, you add it to your release, then you upload it. At this point you have a blobs.yml and a final.yml that point to where in the blob store this thing lives. Typically this is off your computer somewhere, but it can be local. And then you have to actually provide it to your jobs. Running this command generates this tree of files, and then you populate it. Your spec file will specify any compile-time dependencies and the files that will actually be included in your package. And your packaging file will actually install it into the BOSH install target; that's the highlighted line there. In this case, it also has to untar it.
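The packaging step described above can be simulated outside BOSH. The sketch below is a stand-in, not a real packaging script: a real one runs on a compilation VM where BOSH sets BOSH_INSTALL_TARGET to /var/vcap/packages/&lt;package-name&gt;, and the tarball would come from bosh add-blob; here both are faked locally just to show what the script itself does:

```shell
set -eu
workdir=$(mktemp -d)

# Stand-in for the downloaded Go tarball you would have added with `bosh add-blob`
mkdir -p "$workdir/go/bin"
echo 'stand-in for the go binary' > "$workdir/go/bin/go"
tar -czf "$workdir/go.tar.gz" -C "$workdir" go

# Simulate the variable BOSH exports on the compilation VM
export BOSH_INSTALL_TARGET="$workdir/install"
mkdir -p "$BOSH_INSTALL_TARGET"

# The heart of the packaging script: untar the blob into the install target
tar -xzf "$workdir/go.tar.gz" -C "$BOSH_INSTALL_TARGET"

ls "$BOSH_INSTALL_TARGET/go/bin"
```

After compilation, everything under the install target is what the job sees under /var/vcap/packages.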
Then, in order to use it in your job, all you have to do is specify the package, and you can just reference it under /var/vcap/packages and it's there. When you create a release with a tarball path, you'll actually just see this package binary in your release.

Now, stepping back a second, this becomes really painful when all of a sudden you have to do this across multiple releases. So if I ask you, as an operator of 10 releases: oh, you know what, two weeks ago Golang released a 1.10.1 patch, so now you have to update that everywhere. Well, you actually have to repeat a large chunk of those commands we just ran through, and you have to do that across all of them. The other thing is you can name it anything you want. Some will call it go, some will call it go-1.10, and some might still be called go-1.8 when it's actually not that at all. So there's no naming convention, and you kind of have to figure out what it is in all these different places.

All right, so now we'll go to the vendored package flow. This is a two-step process. The first part is you're actually going to copy the release down locally, and this release has already uploaded the binary to a blob store. I think these are actually in S3, but we'll get back to that. And then all you say is bosh vendor-package. What this will do is copy a spec.lock file into your release, and that will point to where the blob is in the blob store. Then you can just use it in your job like normal, and you get the exact same thing as you did from the previous steps. Currently there are five managed ones; they're in this open source repo. I think go-1.10 might have just been added. The other point to bring up is that the docs were redone like a week ago, and the vendored package docs went from like 10 lines of explanation to like 40, so kudos to that team. Another reason to use them is they all support cross-OS use.
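The two-step flow just described might look like this. The paths and the exact package name are assumptions for illustration; the repo referred to is the open source bosh-packages golang-release on GitHub:

```shell
# Step 1: copy the release that manages the binary down locally
git clone https://github.com/bosh-packages/golang-release ~/workspace/golang-release

# Step 2: vendor the package into your own release
cd ~/workspace/my-release
bosh vendor-package golang-1.10-linux ~/workspace/golang-release
# This writes packages/golang-1.10-linux/spec.lock into your release,
# pointing at the blob already uploaded to that release's blob store.
```

From there your jobs declare the package by name, exactly as in the manual flow.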
So in them is a Linux binary, a Darwin binary, and a Windows binary, so you can be sure that if you're pulling this in and some other team is pulling this in, you're at least looking at the same strain of things. It also simplifies packaging: since it is an atomic command, you can script it, and it will bump multiple releases without anything really changing unexpectedly. And it standardizes package interaction. Basically, if you wanted to interact with a package previously, you had to set things like GOROOT and GOPATH, or JAVA_HOME if you're using Java. Now these packages all, for the most part, ship with something called a runtime.env or a compile.env, and this will do all the sourcing for you. So it's no longer up to each BOSH release author to figure out what they need to be doing to interact with these packages.

I don't know how many people have seen this error before, but it used to be that BOSH could not tell two packages apart, even if they were identical, if they had the same name. So it would give you this error: BOSH cannot currently co-locate two packages with identical names. Awesome. Granted, in November 2017, BOSH 264, they did ship a feature that says if they are identical, if they have the same fingerprint, then it's okay. But the advice at the time was to rename one to go-1.9.1 or something else that's not the same.

In terms of requirements, you have to be fairly recent with all your versions. In terms of when not to use them: sometimes the old flow does make sense. If the package is wrapping your source code, it may not make sense to make it reusable. Also, if you need specific control over patch versions, it may not make a ton of sense. Right now, for the Golang release, you can only specify major and minor; if you need patch control, you're kind of on your own there. Another pattern to bring up is job co-location.
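As a sketch of the compile.env idea mentioned above, the file a vendored package ships might export the environment so that consuming release authors no longer do it by hand; the exact contents here are hypothetical:

```shell
# bosh/compile.env (hypothetical contents): sourced before dependent
# packages compile, so authors don't set GOROOT/GOPATH/PATH themselves
export GOROOT=/var/vcap/packages/golang-1.10-linux/go
export PATH="$GOROOT/bin:$PATH"
```

A runtime.env serves the same purpose at job runtime rather than at compile time.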
Jobs do the start and stop and process management for you. So if you have some sort of requirement, like all my VMs must ship with syslog drain functionality, maybe you don't want to use a package and have to manage that yourself. Or if many versions are valid; sometimes you don't just want to patch everything with the latest. Or there's this objection: there are only five of them, how useful is this tool really? I'm gonna say that's not really an excuse. They're actually really simple to make, so we're gonna cover that next. All it is, basically, is a BOSH release. If you're gonna do anything to the path, you'll probably want a compile.env or a runtime.env. And then you just need to upload binaries to some sort of blob store. This could be a private blob store or a public one; you just have to be willing to share the credentials to it with anyone who's gonna consume this vendored dependency.

Let me try to drag this over. Oh, that's not working. I keep losing my mouse. Well, I can't see what I'm typing, so I hope this goes well. I'm actually using a tool called doitlive, so it doesn't really matter what I type, as long as I can explain what's going on. Actually, do you mind just typing as I, okay. So basically, like I said, it's just a BOSH release. Just hit enter. Smash the keyboard a bit more. These are executing live, by the way. They're just, keep going. Nope, keep going. All right, so this is, we need the Linux build of this. I'm gonna try to deploy it on BOSH Lite, so. Keep going. Enter. Keep going. So this looks similar, because this is the exact same flow to add a blob, like you would normally do. And here's the tree. So this is a little awkward. Keep going. Basically, I'm just making the ginkgo binary executable and adding it into my BOSH install target. Yep, copying the ginkgo file in, and then saying I'm gonna use a local blob store for now. Upload it. And you are required to create a release.
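The demo steps narrated above might be scripted roughly like this. The binary name (ginkgo) and the paths are assumptions reconstructed from the talk, not a verified transcript of the demo:

```shell
# Hypothetical reconstruction of the demo: a tiny release wrapping one binary
bosh init-release --dir ginkgo-release
cd ginkgo-release
bosh generate-package ginkgo
bosh add-blob ~/Downloads/ginkgo-linux ginkgo/ginkgo   # the Linux binary

# packages/ginkgo/packaging would then contain something like:
#   chmod +x ginkgo/ginkgo
#   cp -a ginkgo/ginkgo "${BOSH_INSTALL_TARGET}/"

bosh upload-blobs        # push the blob (a local blob store in the demo)
bosh create-release --final
```

Anyone who can reach that blob store can then bosh vendor-package from this release.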
All right, and that is literally it. Granted, I probably shouldn't have called it ginkgo; I probably should have called it ginkgo-linux, because that's what it really is. And maybe I should have included a runtime.env, because ultimately ginkgo is gonna be a little better to interact with if it's already on the path. Oops. And what I wanted to demonstrate is the vendor-package command. I do that, and all of a sudden I have a spec.lock file, and it points to my release. That's it.

All right, in conclusion, can I switch slides? Vendored packages allow you to create more reusable, modular packages, so the toil of it isn't spread among every single BOSH release author. It's also a great space if you are an open source contributor and you're looking for something to do; there are a lot of things that would really benefit from this tool. Yeah, and they're not that hard to make. All right, with that, I think we'll wrap up. And if you have any questions, yeah, probably.

So I don't actually know who maintains them. I know the CF CLI binary is managed by the CAPI team, I believe. I don't know how up to date they keep it. The Golang one, I know it took like a week or two to actually ship, but I don't know if that would change with a CVE. I believe, however, they are sort of open source anyway. So if there are requirements in terms of maintenance, anybody can PR and contribute the latest. But as far as I know, they're not officially supported at the moment, at least. Point it to your blob store. Any other questions? Hello? Oh, yeah.

Yeah, so the question was, has anything changed in terms of deployment locks when running errands? I don't believe so, at least at the moment. I think the fact that you cannot run multiple things against a deployment is still the same; the only thing that has changed with co-location is where that script gets run.
Yeah, so the question was, can you co-locate multiple errands on the same VM? Yes, you can; you have complete flexibility over where these things run. The one big change compared to the previous setup is that you now have to call them by job name, not instance group name. So you just need to remember what your errand job is called. I think we're actually out of time, but if you have any other questions, come talk to us afterwards. There were a few resources, if you'd like to have a look at those. I think the slides will be uploaded later, but other than that, thank you, and yeah, enjoy the rest of the conference.