Good afternoon. Thank you for coming. We're here to talk about the Apache Brooklyn project and the service broker and plugin for Cloud Foundry. I'm Alex Heneveld, one of the committers on Apache Brooklyn and the CTO at Cloudsoft. Hi, and I'm Robert Moss, a developer, also at Cloudsoft. And Robert built much of the service broker and plugin, which we'll show you. So first up, we'll tell you about Apache Brooklyn very quickly, and then we'll really focus on the service broker and the plugin and how they can help make working with services a lot easier. So, Apache Brooklyn is an incubating project at Apache designed to make deployment and management of applications much easier, focusing on blueprinting, building up a model of what we want to deploy, and then looking after it, so you get a live model of the system after it's deployed. What we mean by that: the blueprint is built using YAML; it's the OASIS CAMP spec. I don't know if you can make this out, but there are plenty of examples at brooklyn.io, which takes you to the Apache project. The general idea is that you can describe machines we want to create in a portable way and attach some scripts or cookbooks that we want to run. More interestingly, we can then compose other blueprints with that. So if someone's built up a blueprint for Riak, we can just reference the Riak blueprint, and that will deploy it. You can also describe the management policies that we want to attach. Under the covers, Brooklyn is an autonomic management system. The model which it builds is plugged into the metrics, and it defines some operations, effectors, which we can invoke. So once something's deployed, you get a view of the topology here. It's really the management hierarchy, where we see the sensor feeds coming in from all of the elements, there are effectors that we can invoke, and then there are policies which close that autonomic loop at every level. So down low, we might have a process restarter; higher up, something that's scaling a cluster out and back, or throttling back performance if we'd rather do that than scale. And even above that, we might have a fabric which is operating in multiple clusters, looking at follow-the-sun type logic, or follow-the-moon if you want to optimize for cost. Where Brooklyn gets interesting is where people have been writing and sharing blueprints. There's quite a rich catalog included with Apache Brooklyn for a lot of the Apache projects and for quite a few other open source projects. These blueprints can be used off the shelf: we just reference them, and then you can build up a definition which composes many of these pieces together. You can configure them, you can attach custom policies, and many of these, it's worth saying, are quite complex clustered blueprints. We've also worked with a number of the ISVs, so we have blueprints we've developed with Basho for Riak, with MongoDB, and with DataStax for Cassandra. So it's building up an ecosystem of blueprints where we can grab what we need and quickly build up the service composition that we require. That's been existing fairly independently of Cloud Foundry for three and a half years. It's been open source for over a year, and it's been in the Apache Foundation. But we're hearing people in the Cloud Foundry world saying, look, we really need a lot of services. And well, we've got blueprints for those.
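To give a flavor of the kind of YAML just described, here is a minimal sketch of a blueprint that composes an existing catalog item and attaches a policy. It is illustrative only; the entity and policy type names and config keys are examples and will vary by Brooklyn release and catalog.

```yaml
# Illustrative CAMP-style Brooklyn blueprint (type names and config keys are examples)
name: my-app-with-riak
location: aws-ec2:us-east-1
services:
- type: org.apache.brooklyn.entity.nosql.riak.RiakCluster   # reference an existing catalog blueprint
  brooklyn.config:
    initialSize: 3                                          # start with a three-node cluster
  brooklyn.policies:
  - type: org.apache.brooklyn.policy.ha.ServiceRestarter    # example policy closing the autonomic loop
```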
So, to try to make it easy to integrate (and we discovered that it was very easy to integrate, and we're kicking ourselves for not doing it earlier), Robert started work on the service broker just a few months ago. So let me pass over to him to describe the service broker. Hi, yeah. So I hope you're all excited now about the prospect of Brooklyn managing your services, because that's exactly what we've done: we've made a service broker for Cloud Foundry. The purpose of this is to expose the blueprints that we have, the many really good blueprints that we've worked closely with the ISVs to develop, and bring those into your Cloud Foundry marketplace. The way that we do that is by mapping the services, and the locations on which they're deployed, to the Cloud Foundry service and plan, so that you can use these and bind them as you would any other service in Cloud Foundry. So how do we do this? We use the community Spring Boot service broker project. It's excellent; if any of you have made a service broker before, you'll know what I mean. This is a good project. We use the Brooklyn Java client in that project to make calls to the Brooklyn server. We do that by getting all the applications in the Brooklyn catalog and all the locations, and by default we pair these together, so each catalog item becomes a service and each location becomes a plan. We also persist the data in Brooklyn itself, so if any of it goes down, it's easy to get it back again. We also add a few extra endpoints. These enable us to talk to Brooklyn itself to do some of the things that Brooklyn's particularly good at. So some of the things we've got are the ability to pass a YAML file describing your service topology into Brooklyn through the create route, and we can also delete those. We've got a sensor endpoint that enables you to get the sensor information from the Brooklyn server about the service that's running. That can be quite useful to find out what's going on, how much is being used, or any of the other sensors that are available there. We've also got effector endpoints that enable you to control your services in some way, such as scaling out if you've got a cluster; there are two routes for those, one to list the effectors that are available for all of the entities, and another to invoke them. And we've got an is-running endpoint as well, so that you don't bind your application too early. I don't know if anybody went to the excellent talk yesterday about the asynchronous features that are coming; this is kind of akin to that, in that we poll to find out whether the service is running. So how do we deploy it? Typically, we would deploy the broker to Cloud Foundry itself, and we have to set a few environment variables. We have to tell it how to find Brooklyn and how to log in with the credentials, but we also want to set some credentials for the broker itself: that's the security username and password. And then we just use it as we would any other broker: cf create-service-broker, then service-access to look at what items are in the catalog, enable-service-access to put them in the marketplace, and then we just create-service or delete-service and bind-service or unbind-service.
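As a rough sketch of that deployment step, a manifest for pushing the broker might look something like the following. The environment variable names here are illustrative assumptions, not confirmed from the talk; check the broker's README for the exact names it expects.

```yaml
# Hypothetical manifest.yml for pushing the Brooklyn service broker to Cloud Foundry
applications:
- name: brooklyn-service-broker
  memory: 1G
  path: brooklyn-service-broker.jar
  env:
    BROOKLYN_URI: https://my-brooklyn-server:8081   # where the broker can find the Brooklyn server
    BROOKLYN_USERNAME: admin                        # credentials for logging in to Brooklyn
    BROOKLYN_PASSWORD: change-me
    SECURITY_USER_NAME: broker-user                 # credentials the broker itself requires
    SECURITY_USER_PASSWORD: change-me-too
```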
So now I'm going to hand you back to Alex, who's going to give you a demo. Let's hope the demo gods are smiling. I think Robert described in some detail the actual implementation and some of the details under the covers; many of these you don't need to know, and even some of the ones I'll show now get obviated by what we'll show in the next part of the talk, but not to get ahead of ourselves. Very quickly, brooklyn.io brings you to the Apache site where you can find out more about Brooklyn itself. The plugin and the service broker are in the CF community; they've recently been accepted into the incubator, so we're very excited about that, and you'll find them in the CF incubator very soon. It is all Apache licensed, so download it and play with it. There's a very good blog post that Robert wrote on the Cloudsoft Corp site, and ActiveState have done a nice job of describing the process in a blog post on their site. So, enough with the preface. This is a Pivotal Cloud Foundry deployment; I've got some applications here. And this is the Brooklyn instance. It's actually Cloudsoft AMP, which is the commercial product, but I'm not going to plug that and I won't mention it again; it looks and feels very much the same as Brooklyn, and everything I show here works with Brooklyn. We've got a number of applications that we've deployed. The general process is you can come and add an application or describe a service. You can take things that have been added to the catalog, or you can plug in any YAML that you've got available. And here is a sample application. This is a very simple one that's just going to provision an 8 GB Ubuntu machine in Amazon and then run a script. The script is going to pull down another script, which just loops running netcat, so it's a very simple server that says hello. But this illustrates how, if we can run a command, then we can do almost anything. And if we want to have multiple servers, we can do that. If we want to reference other blueprints, we can. You'll see some examples of that later.
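For reference, a blueprint along the lines of the one pasted in this demo might look roughly like the sketch below. The VanillaSoftwareProcess type and the provisioning keys reflect common Brooklyn usage, but the exact names, the script URL, and the commands are assumptions for illustration.

```yaml
# Sketch of a simple "hello" blueprint: provision an 8 GB Ubuntu VM in AWS and run a script
name: alex-aws-vm1
location: jclouds:aws-ec2:us-east-1
services:
- type: org.apache.brooklyn.entity.software.base.VanillaSoftwareProcess
  brooklyn.config:
    provisioning.properties:
      minRam: 8gb
      osFamily: ubuntu
    launch.command: |
      # pull down another script which loops running netcat, then start it in the background
      curl -L -o hello.sh https://example.com/hello.sh
      nohup bash hello.sh > hello.log 2>&1 &
      echo $! > hello.pid
    checkRunning.command: kill -0 $(cat hello.pid)
```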
But let me cut and paste this and hit return, and we'll add this to the Brooklyn catalog. At this point, we've got a new instance of that blueprint available at the left. So this is now in the Brooklyn catalog, which is exposed via REST. The next step is to make sure that it's available within the Cloud Foundry marketplace. As those of you who use it will know, the catalog import in the marketplace is relatively static. This is the service-brokers command, so we see the Brooklyn service broker. In order to import this new item, we need to do an update-service-broker; this will force it to go out, re-read that catalog, and bring the new items in. And it's not happy with that, but we ran through this earlier and grabbed the screens. Once it is imported, it's listed there, but access is none. So the next step is to enable service access, and when you do enable-service-access, the next time we list it, it is there. And if we go to the marketplace, we have Alex AWS VM1 available as a service. And the next step, we can simply create the service. In the interest of time, I'll show something being created live shortly, but I'll skip the creation of this. The point I want to make is that this shows that, if you're infrequently adding things, an administrator ahead of time can set up this catalog, add the services to Brooklyn, and then run through four commands to get them available within Cloud Foundry itself, in the marketplace. If you're working with services more frequently, then there are a lot more things that you'll want to do, and this is where the CLI plugin comes in. So back to Robert to describe this. Yeah, so the CLI plugin is basically a way to talk to the Brooklyn broker. We make use of those extra endpoints and surface them in the CLI. So we've got an add-catalog command, plus commands to list effectors, invoke the effectors, look at the sensors, and find out whether a service is running. But we also do something else which is pretty cool, and that is to automate some of the steps that Alex showed you behind the push. You can list, in a brooklyn section of the manifest, the service and plan (in our case, location), and it will create it on the fly for you. So basically it looks like this: you've got a brooklyn section, and you specify a name, location, and service. What that will do is create the service instance in the background for you, then replace the actual service name in your manifest, and then continue with the push as normal. Now, if you want to add extra items into the catalog, we've got an add-catalog command: you create your YAML blueprint and then you go through the steps that Alex showed you. That's quite manual, but it allows a nice clean separation for developers to do that work. But if you've got a bit more in the way of admin rights, then you can make this process even nicer still. You can specify a new service topology, with a blueprint, directly in the manifest file itself. What that will do is automate add-catalog, update-service-broker, enable-service-access, and create-service all in one go, and then it will poll the broker to find out when it's ready. When it's finally ready, it will then bind the service. And here's an example of that: we've got a sharded MongoDB running on SoftLayer, and we specify the type that we've got from within Brooklyn. We can give it some provisioning properties, too; here we've given it 16 gigabytes. And we can specify that we've got clusters of routers, shards, and replica sets: so we've got one router, five shards, and three replicas in each shard. And once we've got a service that is managed by Brooklyn, we'd like to effect some change within those services. We can see those effectors with the effectors command and invoke them. So if you've got a cluster, as we did in that last example, with lots of different types of entity, then you will be able to reference each individual one by its unique identifier and perform your scaling out or scaling in as appropriate. OK, so we're going to take a look at another demo with Alex. So part two of the demo will look at using multiple different services as potential back ends within an application. To begin with, let's look at the manifest. Is that readable in the audience? I'm seeing more nods than shakes; let's maybe try to increase it by one size. So the basic manifest: this is just one of the example node apps that says hello, which has been tweaked to be able to talk to different back ends. And in this case, I'm referencing the catalog type that comes with Apache Brooklyn for a Riak cluster. Keeping it very simple, I just say I want size three, and I'm going to run this in Amazon Virginia.
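That manifest, with the plugin's extra section, would look something like the sketch below. The key names (brooklyn, name, location, service) follow the shape described in the talk, but treat the exact spelling and the catalog item name as assumptions; check the plugin's documentation for the precise format.

```yaml
# Sketch of a manifest.yml using the Brooklyn CLI plugin's extra section
applications:
- name: cf-sample
  memory: 512M
  path: .
  brooklyn:
  - name: my-riak-cluster        # the service instance to create (or reuse) and bind
    location: aws-ec2:us-east-1  # the location, which the broker exposes as the plan
    service: riak-cluster        # a blueprint already in the Brooklyn catalog
```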
And if I do a cf brooklyn push, as Robert says, the first thing this does is check to see if that service is in the catalog; if not, it will add it. It then checks to see if an instance has been created. In this case, it has been created, so now it's just done that substitution, and it's binding the Riak cluster into the application. I should say it extracts quite a lot of the sensors from the root of the blueprint, the root of the deployment, so things like the IP address and URL automatically get injected. If you want to have some credentials populated, have them published as sensors within the blueprint, and they'll automatically get plugged in. One of the things that we're looking at is what type of information people want to inject. As a system that's keeping this live model, we can collect whatever information is relevant and pass that through, but then it's a question of, well, how much is too much? We don't want to dump all of the stats; I'll show you the stats shortly. So this is being deployed, and if we flip back to our Cloud Foundry view, we will see the cf-sample app is stopped, but it's slowly starting up. And if we flip back to the Brooklyn instance, we have the Riak cluster sitting here with its three nodes. But we've also got a couple of other manifests which are interesting. So this is a manifest which, instead of Riak, we're going to attach to the MongoDB service. In this case, it's exactly the same configuration Robert described: five shards with three replicas in each, plus the config cluster. And same things; this one has a slightly different name, of course, cf-sample-mongo.
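The inline-blueprint version of that manifest, for the sharded MongoDB on SoftLayer, might look roughly like the following. The entity type exists in Brooklyn's catalog, but the specific config key names and values here are illustrative assumptions rather than the exact YAML used in the demo.

```yaml
# Sketch of an inline service topology in the manifest; from this the plugin drives add-catalog,
# update-service-broker, enable-service-access and create-service automatically
applications:
- name: cf-sample-mongo
  path: .
  brooklyn:
  - name: MyMongoAlex
    location: softlayer
    services:
    - type: org.apache.brooklyn.entity.nosql.mongodb.sharding.MongoDBShardedDeployment
      brooklyn.config:
        # one router, five shards, three replicas per shard (key names illustrative)
        initialRouterClusterSize: 1
        initialShardClusterSize: 5
        shardReplicaSetSize: 3
        provisioning.properties:
          minRam: 16gb
```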
So the cf brooklyn push here, that would push the default manifest, so I push with the Mongo manifest instead; I've given it a different name. So we've got MyMongoAlex, and this one is not there yet, but we'll give it a couple of minutes. And MyMongoAlex is now here, starting up. It would be created if necessary; in this case, in the interest of speed, this is also one we created earlier, but here is the MyMongoAlex cluster with the five shards and three nodes in each shard. Now, just as an example of what Brooklyn does when we talk about the sensors: we can go in, and for an individual leaf node we can look at the stats that are being pulled out of Mongo, off the box. Those are getting aggregated, and there are some root stats, if you want the high level: is the service healthy? What is the mean queue length? You can also attach policies, so that information can be used to drive scaling out. So with each cluster, if we wanted some logic that looks at how busy a particular cluster or shard is, a policy can monitor that sensor, and queue length or transactions per second could drive a policy like the auto-scaler, so that an individual shard or cluster could get scaled out. In order to effect that, it's just invoking this resize operation. To give you a flavor of what that does, I can resize it to five, and two machines are immediately getting spun up. We can track their activity, down to provisioning, down to the operations it needs to run; standard in and standard out are all there, so you've got full visibility into what it's doing. But meanwhile, Mongo is quite happy to keep running while we scale it out. As for the other command line extensions besides cf brooklyn push: I tend to just do the brooklyn push now, because I can put my service descriptions right in the manifest, so I don't really need the others like enable-service-access, and for the operations I tend to come to the GUI. But if you prefer to use the CLI, the nice thing is you have the CF-idiomatic user experience to invoke any of these effectors and to look at any of these sensors and policies. So, interestingly, this one has failed, and it came back and showed us why: Amazon gave us a 503. So I don't know if it was us or someone else making too many requests. But this is a good example of where you might have a restarter or a retry policy, so that if there are failures, they get stuck into the quarantine group and the cluster keeps on working. The final piece we have is a Couchbase manifest, and this won't look terribly different. We have our application, and then in the brooklyn section we have described the services. In this case, it's just one service, but it's a Couchbase cluster this time. Both the Riak and the Couchbase blueprints have quite a lot of config which they'll pass through in the right way, so I can do things like setting up the application: in this case, specifying top-secret credentials, creating the buckets that my application might need, as well as saying that I want quite a powerful machine. And then the final section shows how, within Brooklyn, we can define policies. So this is one of the auto-scaling policies, attached to the operations per node, averaged across the cluster, keeping it within a given range.
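A manifest in that style, with the Couchbase cluster and an auto-scaling policy, might look something like the sketch below. The Couchbase config keys and the sensor name fed to the policy are assumptions for illustration; the AutoScalerPolicy settings follow its usual metric/bounds/pool-size pattern but are not taken verbatim from the demo.

```yaml
# Sketch of the Couchbase manifest with an attached auto-scaling policy (names illustrative)
applications:
- name: cf-sample-couchbase
  path: .
  brooklyn:
  - name: my-couchbase
    location: aws-ec2:us-east-1
    services:
    - type: org.apache.brooklyn.entity.nosql.couchbase.CouchbaseCluster
      brooklyn.config:
        initialSize: 3
        adminUsername: Administrator          # hypothetical keys for the "top secret" credentials
        adminPassword: top-secret
        createBuckets:
        - bucket: my-app-bucket               # bucket the application needs
        provisioning.properties:
          minRam: 16gb                        # "quite a powerful machine"
      brooklyn.policies:
      - type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy
        brooklyn.config:
          # keep average operations per node within a target range (sensor name assumed)
          metric: $brooklyn:sensor("ops.perNode.cluster.avg")
          metricLowerBound: 100
          metricUpperBound: 1000
          minPoolSize: 3
          maxPoolSize: 8
```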
And creating that is the same: just a cf brooklyn push, and we'll shortly see the Couchbase cluster get spun up in the same way those first two services got created. So, we're not quite sure exactly how people want to use this. Some people want to continue to manage individual services separately from applications, and the create-service and add-catalog atomic operations let you do that. If you're working on a more complex stack, we think the two languages, the two YAML varieties of services and applications, are going to marry quite neatly and let you compose blueprints in one file that might describe a very complex application tier. So, just to summarize, we've shown what Apache Brooklyn is, a project for blueprinting and management of services; the service broker, which connects these services through the low-level APIs, which are nice and clean and easy to work with; and then the CLI plugin, which lets us put a more elaborate experience on top of that for users. The import of this is that you have access to all of the services that are in the Brooklyn catalog, things like ActiveMQ and Cassandra through to Riak and MongoDB, as you've seen. In the ecosystem at Brooklyn Central, there are quite a few projects for more elaborate systems. There's a Brooklyn Ambari project that lets us create Ambari nodes and then leverage Ambari to stand up a lot of the tools in the Hadoop ecosystem, and there's a Brooklyn Spark project sitting there. One of the other benefits, though, is that if you've got custom services, where we just need to run some scripts to get something running, or we need to build up a blueprint for something that we're using, the Brooklyn YAML language lends itself to constructing these blueprints from whatever first principles you need. And those can also be easily added to Cloud Foundry, like the Amazon shell scripting example, and that, of course, could just be stuck into my manifest. One of the difficulties that we've seen with services, though, is that when you're working on these blueprints, getting them just right can take some time, because you do a deployment, and unlike in Cloud Foundry, if we're working with a cloud we're waiting for machine creation, so we can wait 20 minutes for a round trip only to discover that we misspelled one of the parameters. There's a project, clocker.io, which is itself a Brooklyn blueprint for Docker hosts. It lets you set up a cluster of Docker hosts, which we can then treat kind of like a cloud, a very rudimentary cloud, similar in idea to Docker Swarm, but focused on integration with Brooklyn and also on the management and the scale-out; there's some work in progress to figure out how this can relate to and reuse Swarm. But one of the nice things that it does, and anyone who's worked with Docker knows that setting up networks between containers on different hosts is a little different from the usual way, so in order to get around that, one of the values of Clocker is that it builds on Calico, who have nicely come along today, which is an open source networking project; it can also use Weave. It will set up networks for all of your services, so within your blueprint you can describe the network topologies that you want, and it can isolate individual services should you need to, or grant some network access between different services. So Clocker's worth a look. And the final piece is that services don't just get stood up; they require care and feeding to keep them running. Where we want to get to is where a lot of this care and feeding is automated, and the management policies let you describe that. And finally, what's next? This is kind of one of the reasons that we're here: we want to find out what should happen next. The biggie that we learned is that this needs updating for the new async and parameters work. We glossed over one of the uglier implementation details, and you may have noticed it: when we did the cf brooklyn push, it's calling directly to Brooklyn to create that service and then polling it, waiting for it to fully exist, before we then try to bind it to the application. It's kind of going behind Cloud Foundry's back in order to achieve an asynchronous, long-running creation. Well, with the async API for services, we can now do that in a much more principled way. Similarly, when we have that big YAML, that's a big parameter. We discovered pretty quickly that we couldn't just do some sanitizing of that string and try to create a plan named after the YAML; it exceeded the plan name length pretty quickly. But with the parameters API, you can pass a JSON hash map, and so we should be able to use the parameters to have all of the calls go properly through the API and cut down the work that's needed in the plugin. Some of the other things that we're interested in: figuring out how, when these services get deployed, they can be plugged into arbitrary monitoring and log aggregation, whatever you want to put into the blueprint. But we would like it if the automatic default was to wire them into the Cloud Foundry services for doing health checks and for aggregating logs. We know that BOSH has a lot of those capabilities, so we're interested in that: can we reuse it and actually make sure these services are by default built on the stemcells and plugged in in the idiomatic, or natural, way? Diego starts to let us do some interesting things. In some ways analogous to Clocker, there's already some work where you can specify Docker images or Dockerfiles or Compose syntax right within your blueprint and have that passed through; we'd like to be able to do similar things with Diego. That seems a natural fit. Coming at it from a different tangent, though, one of the requirements that we get in the real world is that services have more complex relationships with each other and with applications.
The default, I think, is that we'll create a database system and then different applications, when they bind, get a database created. Well, it may be that some applications need their own database system, and then they might need to create three databases within it and then make some of those databases read-only accessible from other services. The new service keys feature helps with that, but it doesn't entirely solve it. There's a lot of work within Brooklyn on tracking services at different scopes, so I think, together, we come close to having the answer, but we're still trying to work out exactly what the question is. So if that's of interest to you, please come talk to us; we'll be here afterwards, and we'll be at the Cloudsoft booth for the next couple of hours. And then the logical conclusion of that, though, is that if we've got a lot of pieces that we want to describe, this gives us a very nice way within our manifest to describe richer microservice architectures, where we can describe all of the services that are needed and possibly just update some of them. So thank you very much for coming. We've got a couple of minutes, and we can go over if there's interest, but do we have questions from the audience? We've got a gentleman here. There are several different policies: there's a restarter policy, a replacer policy. You discover pretty quickly that different services need repair in different ways, and if you're running some of the older Cassandra versions, when we repair we actually need to do quite a lot of other work in the cluster. So there's quite a lot of work on policies to be able to do the right thing for different systems within that. But it's not tied into the BOSH health check, which I think would be a natural fit for us; we can get these metrics, these sensors, from any source, and that would be a nice sensor to drive the integration. And I thought you were coming to kick us off, but I'm glad we've got a couple of minutes. There are two sides to this. The first side is who's allowed to access it, and the service keys give you one way to enforce who can use certain pieces. The networking work that was born in the Clocker project but is moving into Brooklyn gives us other ways to enforce that only certain other pieces can use it. But I think the other part of your question was who actually is using it. You can track that; it simply comes down to whatever metrics or API gateways you're installing. Brooklyn out of the box is completely hands-off; we just do to the box what you tell us to. So we're not automatically gathering any of that information, but it's a natural thing people want to do, and there are quite a lot of ways that people do that. So, one of the most common requirements for in-life management: people start with how do we keep it running, and then it's how do we maintain it over time. It's a very complex area, and there are a lot of tools that people use. Our favorite, in a cloud-first world, is to throw it away and create a new one, but often that's not realistic. So one of the beauties of the abstract approach is that we can, and we have, implemented quite a few different strategies. If it's on-box upgrades like patching, sure, you can give us a bash command and we'll run it. There are tools that do this for a living, so you probably want to investigate that kind of server config management; there are good integrations with Brooklyn where we'll use Chef cookbooks or Puppet manifests in order to tell nodes to converge.
So you're not tied to bash. If you are working in that world, though, that gives you an elegant answer for the server upgrade. Where it gets more interesting is where you might want to roll an individual server upgrade through a cluster, and so there are policies which operate at a cluster level which, if we're dealing with a simple load-balancing case, take some nodes out of the load-balancing pool, update them, test them, and stick them back in. They run as effectors in Brooklyn, in the same way as some of the low-level effectors. Those effectors and policies apply at the lowest levels, where we're typically looking after a process or a machine, or at the much higher levels, where we're figuring out which clouds around the world we should be running in based on demand and cost and various other concerns. I am now getting the signal to stop, so thank you all for coming, and I look forward to talking to you.