Hi. [Introduction largely inaudible in the recording.] This is Robert Moss. So, Apache Brooklyn allows you to model, deploy and manage services, and it does this using a blueprinting mechanism, so you can declaratively specify what your service should look like. You can compose blueprints either using supported components that are already part of the Brooklyn ecosystem, or indeed write your own. Brooklyn automatically configures the components and wires them up, so if it's a service with multiple machines that need wiring up, it'll take care of that for you. It'll also monitor key metrics about the service and perform policy-based management, so you can have confidence that while the service is running it's going to be taken care of, rather than just deploy-and-forget. The way that we bring these services into Cloud Foundry is with the Apache Brooklyn service broker, and that allows us to create and deploy Brooklyn-managed services using the same blueprints that are stored in Brooklyn's catalogue, slightly augmented with a syntax that the service broker understands. So one of the key abstractions in Brooklyn is a location. This is the way that Brooklyn irons out many of the differences between different clouds. For these different cloud locations, the provisioning tasks will usually be carried out by Apache jclouds behind the scenes. You can configure the locations to include a wide range of configuration properties, such as images and machine specs. So this is what setting up a location looks like. It's very simple YAML. This is in fact a Brooklyn BOM file, which is just the Brooklyn Object Model; it describes items that should go into Brooklyn's catalogue.
So in this case we've got an OpenStack location, and you can see from the item type that it's a location. The driver that it's using is the jclouds driver for OpenStack Nova, and it's got three basic configuration properties there: an endpoint, an identity (using the tenant and user name for OpenStack) and a credential, and then some further configuration that you might want to include for this location. Some of these additional configuration options could be the image ID that you might want to deploy the software with, the hardware types (that's the flavours, in OpenStack terms), the user that will be used to log into the image, any floating IPs, and then some further templating options such as any networks you might have configured, floating IP pools, and security groups or key pairs. Once you've got the BOM file and you want to add it to Brooklyn, it's just a simple br catalog add and then the name of the file. That gets your location into Brooklyn, so you can now use it as the deployment location for any of the blueprints that you might want to model your service with. So how do we go about modelling a service? Well, it's very similar to setting up the location: again, it's a catalogue item. It's a very special catalogue item, though, because it contains a section called broker.config, and that means the service broker will be able to read part of this blueprint and formulate its catalogue items, that is, the plans of the service that might be required. Much of the metadata here is picked up by the service broker, and it creates the catalogue according to the Cloud Foundry service broker API, soon to be the Open Service Broker API. And the way that plans work in these blueprints is that the plan config section defines how you would like the service to be configured differently for the different plans that you might set up. So we'll take a look at that in a minute.
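A location catalogue item along the lines described above might look like the following. This is a sketch: the endpoint, identity, credential, image and flavour values are all placeholders, and the optional keys shown are just a sample of what's available.

```yaml
# Hypothetical Brooklyn catalog.bom file defining an OpenStack location.
# All endpoint/identity/credential values below are placeholders.
brooklyn.catalog:
  items:
    - id: my-openstack
      itemType: location
      item:
        type: jclouds:openstack-nova
        brooklyn.config:
          endpoint: https://openstack.example.com:5000/v2.0
          identity: "my-tenant:my-user"    # OpenStack tenant:username
          credential: "s3cr3t-password"
          # Optional further configuration for this location:
          imageId: "RegionOne/1234-abcd"   # which image to provision
          hardwareId: "RegionOne/m1.medium" # the flavour to use
          loginUser: centos                 # user to log into the image
```

It would then be added to Brooklyn's catalogue with br catalog add my-openstack.bom, after which the location id can be referenced from any blueprint.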
So I mentioned that there are some supported components in the Brooklyn ecosystem that you might want to use to fill out that services block at the bottom. If you're looking for a search-type service, you might use Apache Solr or Elasticsearch. If you need a relational database: MariaDB, MySQL or PostgreSQL. If you're doing messaging, perhaps Apache Qpid, Apache Kafka, ActiveMQ or RabbitMQ. There are various NoSQL solutions available as well, such as Apache Cassandra, Apache CouchDB, Redis, Riak, Couchbase, MongoDB and others, and I've got Apache Storm, Apache ZooKeeper and BIND listed there as well. At blueprints.cloudsoft.io we've got sixty-plus blueprints that can be used. So if we're going to compose a blueprint from these supported components, we just need to set the service type to the namespace and type of the component, and then fill out that broker.config plans section with the particular config that you want for each plan. In this case we've got a clustered service and we want to set the initial size of the cluster to three, and that's done with the config key there, cluster.initial.size; you can see this is a MongoDB replica set service. Or you can compose your own blueprints as well. There's a particular component called a vanilla software process, whose config keys allow you to specify the launch command, or the check-running or stop commands, that will be used while the service is running, and these are simply bash commands. We've also got sensors and effectors, which are key abstractions within Brooklyn, and they allow you to monitor what's going on in the service. Again these are typically set up using SSH commands, so a sensor will periodically run its SSH command to watch what's going on in the service, and an effector might run a command to change the service in some way.
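A catalogue item for the MongoDB replica set case might look roughly like this. The broker.config key names here are illustrative of the shape the talk describes, and the plan names and sizes are invented; the entity type is Brooklyn's stock MongoDB replica set component.

```yaml
# Hypothetical catalogue item for a Brooklyn-managed MongoDB service.
# The broker.config section is what the service broker reads to build
# its Cloud Foundry plans; key names here are illustrative.
brooklyn.catalog:
  items:
    - id: mongodb-replica-set
      itemType: template
      name: "MongoDB Replica Set"
      item:
        brooklyn.config:
          broker.config:
            plans:
              - name: small
                description: "Three-node replica set"
                plan.config:
                  cluster.initial.size: 3
              - name: large
                description: "Five-node replica set"
                plan.config:
                  cluster.initial.size: 5
        services:
          - type: org.apache.brooklyn.entity.nosql.mongodb.MongoDBReplicaSet
```

The same blueprint works in plain Brooklyn; the broker.config block is the augmentation the service broker understands.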
So once we're ready to deploy, we first need to import the catalogue into Cloud Foundry. The way this is done is that the Apache Brooklyn service broker queries Brooklyn for its catalogue and then creates its own catalogue based on that, using only the catalogue items that have that broker metadata in them, and this happens every time a create-service-broker or update-service-broker command is issued. The service broker itself can be configured with a default deployment location, such as the OpenStack location that we created earlier, and this is very simply done by setting an environment variable; but if you wanted individual plans that deploy to different target locations, you could also specify that within the plan metadata. So these Cloud Foundry commands should be fairly familiar to most people. Once you've created the service broker, you can look at the access settings of all the plans that have been imported, and enable those particular services in the marketplaces of the different orgs within Cloud Foundry. Once we're ready to create a service, we issue the create-service command using the standard syntax, cf create-service with the service name, plan and instance name. When this happens, the broker looks up the relevant catalogue item and reads that broker.config section to get the plan details; it then generates a new blueprint that's specifically configured with the config section from that plan, and fills in the location. This generated blueprint is then sent to Brooklyn for it to deploy to the location specified. The state of the broker is stored in Brooklyn in a particular entity called the repository; we'll talk a little bit about entities in a moment. This simply stores the mapping between the Cloud Foundry ID and the Brooklyn ID, so that the broker can query the state of a particular running service at any point during asynchronous provisioning or update.
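Put together, the flow with the standard cf CLI might look like the following. The broker name, credentials, URL and service/plan names are all placeholders for whatever your deployment actually uses.

```shell
# Register the Brooklyn service broker (URL and credentials are placeholders);
# this triggers the broker to query Brooklyn and build its catalogue.
cf create-service-broker brooklyn admin s3cr3t https://brooklyn-broker.example.com

# Inspect which services and plans were imported, then expose one
# to the marketplace of the orgs that should see it.
cf service-access
cf enable-service-access mongodb-replica-set

# Create a service instance from one of the imported plans:
# cf create-service <service> <plan> <instance-name>
cf create-service mongodb-replica-set small my-mongo
```

Re-running cf update-service-broker brooklyn later refreshes the imported catalogue after Brooklyn's catalogue changes.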
Once the service has been created, we can do a cf bind-service to bind it to the application, and then restage to ensure that the environment variables are updated with the credentials that are passed back from the broker. The broker queries Brooklyn during bind to get all of the sensors that are monitoring the service at that particular time, and it sends a snapshot of those sensors back to Cloud Foundry within a credentials block, to populate the VCAP_SERVICES variable. You can also specify a whitelist or a blacklist within the broker.config section, if you like, to make sure that certain sensors are not sent back in the credentials object. On to management. After all, this is one of the key reasons for using Apache Brooklyn. I just want to explain a couple of key abstractions. The main abstraction that we use is called an entity, which represents a particular resource under management, whether that's a virtual machine or a software process. Entities are arranged in a hierarchical fashion, and they have events and operations with processing logic. They also have a lifecycle, such as start and stop, and these are tracked with tasks, which allows operators to look inside the service and see how it has been processing up to this point. I mentioned sensors earlier. Sensors monitor the state of a service; these are typically implemented with SSH commands defined in the blueprint, as we saw earlier, and they can run periodically or indeed just once. They're often used to expose endpoints or credentials of the service, as well as any other key metrics. Effectors change the state of the service: again using an SSH command defined in the blueprint, an effector runs whenever it's triggered, and these could be used, for instance, to scale out clustered services by adding nodes, or to change the service in some way. And policies: policy-based management allows you to combine the power of the sensors and effectors to act automatically.
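A custom blueprint built from the vanilla software process, with an SSH-based sensor and effector attached, might be sketched like this. The commands, sensor name, effector name and period are all invented for illustration; the entity and initializer types are Brooklyn's stock classes.

```yaml
# Hypothetical blueprint for a custom service using Brooklyn's
# VanillaSoftwareProcess; the commands and names are illustrative.
services:
  - type: org.apache.brooklyn.entity.software.base.VanillaSoftwareProcess
    name: my-custom-service
    brooklyn.config:
      launch.command: "nohup ./run.sh > service.log 2>&1 &"
      checkRunning.command: "pgrep -f run.sh"
      stop.command: "pkill -f run.sh"
    brooklyn.initializers:
      # A sensor: periodically runs an SSH command and publishes the result
      - type: org.apache.brooklyn.core.sensor.ssh.SshCommandSensor
        brooklyn.config:
          name: service.connections
          period: 30s
          command: "netstat -an | grep -c ESTABLISHED"
      # An effector: runs an SSH command whenever it is invoked
      - type: org.apache.brooklyn.core.effector.ssh.SshCommandEffector
        brooklyn.config:
          name: rotateLogs
          command: "mv service.log service.log.1"
```

During bind, a sensor like service.connections above is the kind of value that would be snapshotted into the credentials block, unless filtered out by the whitelist or blacklist.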
So you could, for instance, monitor for failure within the application and then effect recovery, or you might want to monitor for usage and then autoscale the service. Let's take a closer look at the autoscaler policy. This increases or decreases the size of a resizable entity, such as a cluster, and it does this based on an aggregated sensor value. The metric is monitored to check whether it's between an upper bound and a lower bound, and if it goes outside this range, the resize effector will automatically be triggered to correct it. Again, this is just simple YAML that's put into the blueprint using the brooklyn.policies section, and it's configured with a metric, such as a sensor name, and given those high and low watermarks. If, as an operator, you want to look deeper into the service, you might use the br command-line tool: you can list all of the services that Brooklyn is managing at that time with br app, or look more closely at one by specifying its name, or at any of the sensors of a particular entity. As well as all of this, Cloudsoft provides a UI around all of these components, which we call Cloudsoft Service Broker. This allows operators to add services without using the command line, so it's a nice UI for those that are not so comfortable with the CLI. But it does more than just that: it also talks to Cloud Foundry to give a richer user experience, so after adding services you can control their visibility using the UI, or display the sensor values that are coming in for a service, and you can also use it to add services that use a particular blueprint for connecting to pre-existing databases. This is a particular blueprint that we created, and the UI will fill in the configuration of that blueprint using the user's input, to allow them to connect to a pre-existing database.
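A minimal sketch of such a policy attached to a cluster follows. The cluster type, metric name, thresholds and pool sizes are illustrative; the policy type is Brooklyn's stock autoscaler.

```yaml
# Hypothetical cluster blueprint with an autoscaler policy attached.
services:
  - type: org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster
    brooklyn.config:
      cluster.initial.size: 2
    brooklyn.policies:
      - type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy
        brooklyn.config:
          # Aggregated sensor to watch, and the band it should stay within;
          # leaving the band triggers the resize effector automatically.
          metric: webapp.reqs.perSec.windowed.perNode
          metricLowerBound: 10
          metricUpperBound: 100
          minPoolSize: 2
          maxPoolSize: 10
```

An operator could then watch the policy at work from the CLI, for example listing applications with br app and drilling into a named entity's sensors from there.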
So typically in Cloud Foundry people have been using user-provided services for this task, but that means the user has to get the credentials from operations rather than getting a new user on demand, and that's not so good if you've got compliance requirements. So if you want a blueprint that will connect to a pre-existing database, and you want that to happen in a self-service way, then this blueprint is pretty good for that. Just to reiterate: Brooklyn can easily deploy services to OpenStack, modelled using the blueprint syntax; the Apache Brooklyn Service Broker makes it simple to add those services to Cloud Foundry, again modelled using an augmented form of the blueprint; and Apache Brooklyn can autonomically manage these services. What I mean by that is a kind of self-healing automation that uses the sensors and effectors, combined using policies. And the Cloudsoft Service Broker wraps all of this together to provide Cloud Foundry operators with a simple interface, and we provide enterprise support for that too. So that's it, folks. I've got the link here if you're interested in the service broker, but other than that I'm happy to take questions. So this will see whether you were paying attention to Jeff's talk or not. Joking aside, I really like what those guys are doing with Fissile and Furnace, creating what you might call a cloud-native Cloud Foundry: Cloud Foundry running on Kubernetes, which is entirely legit in my view, although I nearly got thrown out of a Cloud Foundry Pivotal meeting in China for saying that a few months ago, but that's by the by. In that scenario, one of the things that Jeff did talk about was handling the ecosystem of services, in other words the services that you're talking about here. Now obviously I work for Cloudsoft, so I have some knowledge of what you're talking about, but would it make sense to use the service broker to then deploy services onto Kubernetes itself, as a location, in other words?
Yes, I think so. We certainly have a location that we've been working on at Cloudsoft for deploying to Kubernetes, and you could, for instance, have your Kubernetes running on OpenStack, then configure the location to target that and deploy services there as well. I don't think that's the wrong OpenStack, by the way, but they'll probably call security at that point. Thanks. I'm a Cloud Foundry newbie, so forgive me if the question is not very interesting: I reckon that the service broker will eventually align with the Open Service Broker API? That's right. The Open Service Broker API group is currently developing the first spec to be released, but they're basing it off the current Cloud Foundry spec, just an incremental release, so when it comes out it should align very simply. Thank you.