Hello, Boston, and thank you all so much for indulging me today. This is my first time in Boston in about 22 years. It's a real privilege to be back. Let's get started. If you're not here for this, you're in the wrong room.

So you know how this goes. In summary: walk, don't run, follow the nearest exit sign to the concourse, and head down the stairwell to street level. And please remember to remove any high heels or whatever so that you don't tear the emergency slide on the way out. That would just spoil the fun for everyone else.

If you just want to zone out and catch the gist of what I'm talking about, here's a GitHub repository. You might want to take a snap of this, or if you're quick enough typing, go grab it now. We're going to talk about this towards the end of the presentation.

A quick introduction to me. My name is Alan. I work at Pivotal as a principal technical instructor based out of London. I am the primary contributor to a course known as PCF Admin (soon to be Operator). I consider myself very lucky in this industry: I do for a living what I used to do for a hobby, hence the comment there about the 10-year-old hacker. I used to work in finance. I wrote trading systems for investment banks for years and then more recently became an instructor. And it's a great path, by the way, if you ever fancy doing that. That's a really great path to take. I'm interested in all things cloud native these days, and I use a little bit of whatever makes it work. So I don't consider myself an expert in anything in particular. I just get the job done.

I'm expecting that you are existing or prospective Pivotal customers. I would say that you're probably looking at cloud native at scale. You might consider yourselves rock stars. I'll let you decide. You could be interested in automation, or just nervously curious about this type of stuff. You could be a fellow educator, in which case this might also help you. Friends and colleagues as well.
Thanks for coming along. So I'm hoping that some of these bullet points will resonate with you. We want to achieve operational efficiency and developer productivity. Perhaps even operational productivity as well. Who knows? We want to make our actions repeatable. We don't want to have to repeat our actions by hand, because that sounds like toil. We don't like that. We'd like to embrace the command-line interface. You're operators, I'm guessing, so you're probably there already. And we're thinking that maybe we want to leverage some of this CI/CD tooling that's been around in the application space for so many years.

Part of the reasoning for this talk was that I wanted to bridge a gap between staying manual with the Ops Manager and just dipping your toes into platform automation. I think there's a gap between what you can do very easily by standing up PCF and what you can do if you go to all the Concourse talks. They'll talk to you forever about all the wonderful things they're doing, and I think it's quite difficult to make that leap. So I just want to dip my toes in. I'm an educator, so that's my challenge on a day-to-day basis: stepping people through slowly, in a way that is meaningful to them.

I think some of the solutions that are out there are fairly opaque. They may involve you learning more languages, which you don't necessarily want to do. And I feel quite strongly that their version of vanilla might be different to yours and to mine. You might want to do something that's very particular to your industry and your environment, and it's important to stay flexible with regard to that, because there's no one-size-fits-all solution here. Now, maybe I'm part of the problem by putting this presentation together. I'll leave you to decide.

So I picked up on a couple of educational use cases here. One involves the operators; the other is the developers themselves.
For the operators, we tend to give them a blank environment and say, hey, stand up a platform. And for the developers, it's the instructor's responsibility on, say, a Sunday night to stand up a platform so they can use it on the Monday. In the first case, we find that the students' experience is rather error-prone, because we're teaching them manually how to use the Ops Manager. In the second case, the instructors find it rather boring to waste their Sunday manually standing up an environment, something you feel you should be able to do better whilst maintaining control of that environment.

So automation can help with this, but it doesn't have to be a binary choice. You don't have to have everything manual versus everything automated. You can mix and match, and you can see cause and effect, and I think that's an important educational tool. So I wanted to make use of automation for both the instructors and the students. And the work I'm presenting now is helping inform some of our future direction with regard to initial client engagements and education in general. I'm hoping there might be something in here for you as well.

A couple of the items that we will be covering: I think it's important to know what the boundaries are here, and 30 minutes is not long to work with. The Ops Manager UI on Google Cloud Platform. We're going to look at the om and pivnet CLI tools briefly. We're going to talk probably a little bit more about the underlying APIs for those tools, and a couple of bash scripts which will help us glue it all together.

We're going to be swerving platform upgrades. I think there are other talks that deal with that. We're also not going to talk about BOSH. Ops Manager is an abstraction on top of BOSH, so we'll leave that to the big boys, and I will concentrate on the top layer that we have. Concourse: I'm not going to go into detail there.
If you want to grab your time machine and go back to yesterday, you'll find a nice talk about the Concourse platform by Yuri, Therese and Ryan. We're also not going to talk about other IaaSes. The whole point of having the platform is that we're abstracted from that.

So this is a familiar view to you: the Ops Manager installation dashboard. It has things that are green and things that are orange, and our task is to turn orange things green so we can hit the big blue button. It's very heavily mouse-and-keyboard-orientated, but it does allow you full configuration over the tiles, which we download from Pivotal Network. More about that later.

Some of the justification for using the Ops Manager? Well, BOSH is hard. The Ops Manager isolates you from BOSH. BOSH is all about distributed systems. We know they're tough, and the barriers to entry are very high. We could use a hand in this respect. A typical BOSH manifest could potentially be thousands of lines of YAML. I'm not sure about you, but I see YAML as a bit of an ugly baby. It's not really doing it for me right now, but who knows, I might get there. I'm slightly concerned that a lot of that complexity is genuinely difficult. You realise it's very hard to retain your best staff, but only your best staff are really going to be able to get their hands dirty on a daily basis with something as dangerous as BOSH.

Isolation from BOSH is one aspect of using the Ops Manager, but it's not enough justification. We need more. That's where Pivotal Network comes in. The Ops Manager works in conjunction with PivNet, and this is where the Ops Manager really shines for me. It's that collaboration between the two pieces. PivNet is a one-stop shop for Pivotal products as well as third-party offerings, and it provides us with shrink-wrapped BOSH deployments for easy consumption through that Ops Manager interface.
Another key part here is that Pivotal attempts to respond to critical vulnerabilities within 48 hours, but that is on the assumption that you're using PivNet. And if you're using PivNet, the assumption is you're also using the Ops Manager, so these two things go together.

But let's be pragmatic about this for a second. There are some concerns that come with all of this niceness. The web interface makes our configuration appear easy. It's not. It's still hard. We still need to understand the detail. Web page form validation won't catch everything. Sure, you can put a regular expression against an email address field and check that the value actually looks like an email address, but what does that mean to the outside world? If it's wrong, it's wrong, and you're still going to get it wrong, and that could end in a silent failure, which is probably worse than a catastrophic failure, because you won't find out about it until much later. So it's important to protect ourselves from fat fingers. Trust me, I work in the field with lots of students, and for them it's like a week off work to do some funky stuff, and they forget how to type quite a lot. Sorry, that's the truth. Operations staff often prefer using the command line. This is an important point, and there are various other reasons why we have alternative viewpoints about the Ops Manager itself. So we want to narrow the scope for error here, and we want to find out if there's a better way of doing this without losing all the benefits I just described.

So in this typical configuration, the browser is obviously on your laptop, which could be sat on a plane above the Atlantic, as mine was a couple of days ago when I wrote these slides. The bandwidth requirements in the browser are quite high, because I need to download from PivNet, which is essentially a database, in order to import into the Ops Manager, which is essentially another database.
So I've got a lot of data flying around on networks, and that increases the local bandwidth requirements. A better solution is to use a jump box as our primary interface, via SSH. If we put the jump box into the cloud alongside the other two databases, this helps us reduce the local bandwidth requirements. The jump box itself could be a Windows machine, but it's typically a headless Linux box. So the bad news is that our comfort blanket has now gone. We don't have a browser anymore.

So we're going to talk a little bit about APIs. We can consider the applications we use as abstractions that help us get our work done. Similarly, APIs are abstractions that help our applications get their work done. So there's a chain of command here, or an onion-ring architecture. There are definitely layers involved, and we want to peel off a couple of those layers.

In this case, we're looking at the cf CLI. You're probably familiar with this. The top command there is just a `cf domains`. That does what it does, brings back some results and prettifies the output. If you inject the CF_TRACE flag, it allows you to see what Cloud Controller APIs are being called under the hood. So the cf CLI is an example of a command-line tool which abstracts an API and makes it more digestible. The Apps Manager, if you're familiar with that, is using the exact same API. Different applications, same API. And it turns out that the Ops Manager is another example of an abstraction over an API.

So, back to a traditional view that you might have of the Ops Manager. I can tell you that this is not the path to operational efficiency. It's a nice starting point. If you look at the bottom right-hand corner of the screen, you'll see a link through to the API docs. It's kind of hidden away, but it's there. And if you click that link, it will take you straight down a rabbit hole. It will take some time, but it's worth the investment.
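To make that tracing behaviour concrete: assuming the talk's example command is `cf domains`, and assuming a logged-in cf CLI, the difference is a single environment variable. The block below only prints the two command lines (they need a live Cloud Foundry to actually run).

```shell
#!/usr/bin/env bash
# Sketch: the same cf CLI command with and without API tracing.
# Shown rather than executed, since both need a real, logged-in cf CLI;
# copy the strings into your own shell to run them for real.
plain="cf domains"                # pretty, human-readable table
traced="CF_TRACE=true cf domains" # same call, plus the raw HTTP traffic to the Cloud Controller
echo "$plain"
echo "$traced"
```

With CF_TRACE enabled, every request and response to the Cloud Controller API is echoed to the terminal, which is exactly how you discover the endpoints the CLI is abstracting.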
So we're going to see what's behind that link and try to take the cover off the Ops Manager. Historically, APIs weren't built around standardised wire protocols. We used statically linked libraries, often with exotic data types which made them difficult to consume. I'm looking at C++ here, by the way. Loosely coupled REST APIs built on top of HTTP help us enormously, but the details can get really scary. We have to deal with things like headers and authentication, payloads and formatting. And in the post-XML world, we're wholly dependent on good documentation... shall I finish that sentence? It's difficult to know sometimes what you can and can't do. You don't necessarily have the best help available to you all the time, and it feels like you're scrabbling about in the dark a lot of the time. So CLIs might be able to help us.

Take a look at this screenshot from the first page of the Ops Manager API documentation, which is accessible at /docs. This is us down that rabbit hole, looking at a POST method for /api/v0/installations. This endpoint represents the Apply Changes button. If you send a POST to this endpoint, it will effectively click that button for you.

Now, if the /api/v0 syntax seems familiar, you're not wrong. You have seen this before. Do you remember the first time you used the cf CLI and you had to extract the admin password? You had to go digging into the credentials, and you said, I want to find the admin credentials for UAA, this strange thing called UAA. And up popped a screen that looked like JSON. If you look carefully at the address bar, you'll see /api/v0 in there. You've seen this before. We've probably been using it lots of times. What's happening here is that the Ops Manager is saying: you're cool, you know what you're doing, so I'm going to lift the lid for you. And I'm now going to assist the Ops Manager and lift the lid a little bit more.
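For reference, the Apply Changes endpoint just described, sketched from a shell. The host and token are placeholders, and the `ignore_warnings` body field is my assumption from the public Ops Manager docs rather than something the talk specifies, so treat the payload as illustrative.

```shell
#!/usr/bin/env bash
# "Apply Changes" as an API call. The endpoint path comes from the talk;
# the curl stays commented out because it needs a real Ops Manager and a
# valid UAA token on the Authorization header.
OPSMAN="https://opsman.example.com"   # placeholder Ops Manager URL
ENDPOINT="$OPSMAN/api/v0/installations"
echo "POST $ENDPOINT"

# curl -sk -X POST "$ENDPOINT" \
#      -H "Authorization: Bearer $TOKEN" \
#      -H "Content-Type: application/json" \
#      -d '{"ignore_warnings": true}'
```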
So it's worth mentioning here that the Ops Manager is not alone. PivNet has also got an API, and that's critical to this. But more about that one later.

Focusing our attention on the Director tile, the IaaS tile, the thing that abstracts you from the infrastructure, we have here two views of the same information. I've picked on this one, the VM Resurrector plugin, because it's disabled by default. One of the first things you'll probably do is come in here and tick that box. Now, if you want to see what that looks like in /api/v0 format, take a look at the link I've put up on the page. You can pop that into a browser, because it's an authorised GET request, and it will pull back that information for you. You'll be able to see whether the checkbox has been ticked or not, because it will say true or false. So we're starting to lift the lid.

Moving our attention to the products, in this case MySQL, but it could be any product, we find that the configuration behind the product tiles is broadly subdivided into four parts, each with its own endpoint. The network config endpoint is where we configure our tiles to isolate certain workloads. The errands endpoint is where we configure one-off tasks, like pushing the Apps Manager or registering a service broker, for instance. The resource config endpoint is where we size our VMs and scale them horizontally. And the properties endpoint, which is the big group in the centre, is product-specific; it represents arbitrary groupings of properties that relate to, in this case, MySQL. These endpoints are all accessible via a browser, so you can see all of this.

So we've covered GET requests quite easily in the browser. The important thing now is to know how to make a modification using these same endpoints. Returning to the Director tile, I'm going to show you one half of a set of CRUD operations.
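Both halves of that pair can be sketched as follows. The host, the property path and the token retrieval are all assumptions to verify against your own Ops Manager's /docs page; only the general shape (an authorised GET, and a PUT carrying just the changed snippet) comes from the talk.

```shell
#!/usr/bin/env bash
# Sketch: reading and flipping the VM Resurrector checkbox through the
# Ops Manager API. Placeholders throughout; the real calls are commented
# out because they need a live Ops Manager and a UAA token.
OPSMAN="https://opsman.example.com"
ENDPOINT="$OPSMAN/api/v0/staged/director/properties"

echo "GET $ENDPOINT"
echo "PUT $ENDPOINT"

# Once you have a UAA token (for example via uaac, as the talk describes):
# curl -sk "$ENDPOINT" -H "Authorization: Bearer $TOKEN"
# curl -sk -X PUT "$ENDPOINT" \
#      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
#      -d '{"director_configuration": {"resurrector_enabled": true}}'
```

Note that the PUT body is only the small snippet that changed, not the whole Director configuration, which is the point the talk makes next.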
There's a GET and a PUT combination for that checkbox we were talking about. Now, we don't need to send all of the Director configuration. We just need to send the elements that have changed. That's why the last line there, where we're performing the PUT operation, is just a very small snippet of the full set of properties. Notice also, at the top, how much work is required to authenticate us. We have to use the uaac command-line tool to authenticate ourselves with the Ops Manager. That produces a token we can pass around in a header, but it's detail that we don't really want to have to worry about.

So that was us looking at the Ops Manager. We said we need to gain API control over Pivotal Network as well, and it's a similar story when interacting with PivNet. In all these examples, we're just using basic curl calls. We want to simulate a typical workflow that you might encounter when using Pivotal Network. I want to search for a product: the first line is searching for MySQL. The second line is looking for the latest download available. Once you've got that, you need to accept the EULA, and at this point you need your PivNet API token, which you can get from PivNet. And finally, you need to download the product. The download is going to take some time, but at least now it comes to your jump box and not to your browser.

So we've spent a bit of time talking about the APIs. We looked at curl, and we talked about the problems of dealing with authentication and payloads. We want a bit of help; it was rather painful doing it that way. We'd rather use an abstraction, something like a CLI. Shown here, from top to bottom, is again a typical workflow: downloading from PivNet, uploading to the Ops Manager, then configuring, and finally deploying, which is the equivalent of an Apply Changes button click. The APIs work fine, but the CLIs are just nicer to use.
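That top-to-bottom workflow can be sketched with the pivnet and om tools. The verbs below are the tools' standard subcommands, but the slug, versions, file ID and filename are all placeholders of mine; check `pivnet --help` and `om --help` against the versions you have installed.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the download -> upload -> stage -> configure -> deploy
# workflow; it only prints the commands. Run the array entries for real
# once credentials and product details are filled in.
cmds=(
  "pivnet download-product-files --product-slug elastic-runtime --release-version 2.1.0 --product-file-id 12345"
  "om upload-product --product cf-2.1.0.pivotal"
  "om stage-product --product-name cf --product-version 2.1.0"
  "om configure-product --product-name cf ..."
  "om apply-changes"
)
printf '%s\n' "${cmds[@]}"
```

If you do not yet know the slug, version or file ID, the pivnet CLI can list them too: `pivnet releases --product-slug <slug>` and `pivnet product-files --product-slug <slug> --release-version <version>` (again, verbs I believe are standard, worth confirming against your CLI version).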
I've done all the hard work for you. There's a lot of information in this repo, which I encourage you to take a look at. I would definitely look at the READMEs. They will tell you how to set up the jump box, how to set up the Ops Manager, and how to start making use of these scripts, and they'll introduce you to the configs, whose sole responsibility is turning orange things green.

If we're using the APIs, or indeed the CLIs, directly, we need to provide lots of contextual information on every call we make. My scripts abstract this away into variables which capture the specifics of your own environment, and this is what a typical environment file might look like.

So: from an empty Ops Manager to a fully operational Elastic Runtime, or PAS, in eight lines of bash script. That sounds a bit grand, but behind these scripts, in some cases, there's really just a one-liner. What I'm trying to do is abstract away all of that authentication stuff, the bit that turns your six-character line into 600 characters. First: configure authentication. This sets up the admin account on your Ops Manager. Second: configure the Director tile. It's important to do that in isolation, because other tiles have important dependencies on it, like on the networks, the things that have been decided in your abstraction across the network and down to the infrastructure. Then we're going to need to create a key. There's a bunch of different ways to do this, but I've thrown in a script which generates a self-signed certificate. And then we import the product, stage the product, configure the product and apply the changes. In this case, the product is Elastic Runtime, and that's the little elements you see in yellow there. We're interested in things called product slugs, product versions and product file IDs. We'll come back to those in a second.

What we get from using this is the flexibility to tailor it for your own requirements.
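For illustration, a hypothetical environment file of the kind described above. The variable names and values here are my own, not necessarily the repo's exact ones; the idea is just that every script sources this once, so each call stays a one-liner.

```shell
# Hypothetical env file the scripts could source before every om/pivnet
# call. All names and values are illustrative placeholders.
export OPSMAN_HOST="opsman.example.com"
export OPSMAN_USER="admin"
export OPSMAN_PASSWORD="********"
export OPSMAN_DECRYPTION_PASSPHRASE="********"
export PIVNET_TOKEN="<pivnet-api-token>"
```

With those exported, the contextual detail travels in the environment rather than on every command line.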
So if you want different config, you can write different config. If you want a different product, you can wire up a different product. The ability to learn by seeing step-wise causality is really important. In an environment where we're trying to teach people, I think it's dangerous to give them a black box and expect them to be a master of everything. You want to encourage them to experiment, and I think that's what giving them these scripts does. It just lifts the lid a little bit.

On the next page, we'll talk a bit more about those yellow environment variables, because I think they need clarification. These environment variables are in addition to the variables which describe your environment; they're product-specific. If you want to find out how to identify the slug, version and file ID, take a look at these magic numbers in Pivotal Network. That's where you're going to get this stuff from. This screenshot comes from the README in the repo, so we've got it in there as well. Pick these values out, put them in the variables, inject them, call the script, and the rest is just done for you.

I've got a few minutes now to do a little demo. I might need to reconsider my configuration here. I'm going to try to mirror my screens. Different resolutions, so I need to change that. Put that down there, these up here, fingers crossed.

First of all, I'll start with a working environment. I've got three environments here; this is a fully working one. I'm going to show you my favourite setting of all in the Elastic Runtime, or PAS, tile. It's this X, the setting where you have to acknowledge that you understand this message, which obviously we've all read. This X is obviously data. It sits in a database somewhere, and I want to try to locate that piece of information. If I cut across here, I can navigate to... I don't know how easy it is to see that. I can navigate to /api/v0/staged/products.
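As an aside, that staged-products call can also be made from the jump box with `om curl`, om's authenticated passthrough to the same endpoints. The GUID shape and the jq filter below are illustrative assumptions; real invocations also need om's target and credential flags.

```shell
#!/usr/bin/env bash
# Sketch: find the PAS (cf) tile's GUID, then fetch its properties.
# Only prints the calls; fill in --target/--username/--password to run.
guid_call="om curl --path /api/v0/staged/products"
props_call="om curl --path /api/v0/staged/products/<cf-guid>/properties"
echo "$guid_call"
echo "$props_call"

# Pulling the GUID out in one line (assumes jq is installed):
#   om curl --path /api/v0/staged/products \
#     | jq -r '.[] | select(.type=="cf") | .guid'
```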
In doing so, I can find out what the GUIDs are for the installations I have available here. In this case, interestingly, it considers the IaaS tile, the BOSH Director tile, to be a product. Sometimes it's not considered a product, but here it is. It's certainly an installation, and it has a GUID. I'm interested in this cf one, which corresponds to the PAS tile. If I cut across to this screen here, you'll see that all I've done is extend that products path to include the GUID I've just pulled out. If I request this one, it's going to tell me, again, that this is the correct one, the installation I was looking for.

Now I can start looking at things like those... remember I said each product has got four subdivisions of config. I can start looking at things like properties. The properties endpoint brings back all of the properties that exist for, in this case, the PAS tile. As luck would have it, a quoted X only appears once, so if I scan down through the file, I very quickly see which key-value pair it is. If I wanted to automate this, that's the key I'd be looking for.

I'm going to move on to a couple of environments which need configuration. In this one, nothing has been set up yet. I need to set up authentication first. I need to configure authentication, and we've seen we've got a script for that. Under the hood, I promise you, you can have a look at this: it's making a very simple call down to om, but we're not having to pass all these variables to it, because the scripts make sure they're available and injected in.

Let me make sure I'm on the right environment. How much time have I got? Can't see now. I've got a few minutes. I want to configure this authentication. The first thing I should do while I'm here is prove to you that I've got some environment variables. If I pull out the first four lines of this file, you'll see that I've got some environment variables in here.
I'm not going to show you my password. Now that I've shown you that, I can show you which scripts I have available, because I've cloned this repo. There are the scripts; those are the scripts that are available to you. I want the one called configure authentication, and I'm going to fire it off straight away. It's just scripts/configure-authentication.sh. I think it's got everything it needs. Just double-check: spring one. This is the right environment. This is going to take a little while.

While this is running, I'm going to flick across to my second environment, which has already been preconfigured. In this second environment, you'll see nothing has been turned green yet, but I do have a bunch of imported tiles. I did the import in advance because imports take forever. Some of these downloads are like 10 gigs; that's why you don't want them on your laptop. It also takes time to get them installed in the Ops Manager. So I've done that hard work for us. What needs to happen now, first of all, is to configure that Director tile. If I flick over to my second environment here, and again just double-check that I've got the right settings in here (that looks like spring two, so that looks correct to me), I'm going to run the configure director script. This again has everything it needs; it's all built in there. It's got the environment variable to tell the script which environment it's targeting, and it's referencing the 2.1 configuration. The configuration matches the version you're targeting. This again is going to take a few seconds.

While this happens, I'm just going to throw it open to the room. Are there any questions at all? I've got a microphone if somebody wants to come and grab it. Please, can you? They won't be able to hear it on the recording otherwise. Let's get this down to you.

[Audience] Are there any thoughts around multiple tile deployments through Ops Manager? Right now it's all single-threaded, sequentially, one after another. Right now you have one tile.
[Audience] What about if you have ten? Are there any thoughts around running multiple at once?

[Alan] As I understand it, the tiles themselves do have some interdependencies. There's a degree of parallelisation which could occur, but I'm not aware of anything that's been done to make that happen. I'm not sure.

That looks like it's configured the first tile, and it looks like the second piece is done too. Let's go back to the spring one environment here. If I do a quick refresh, you'll see that this has been configured. I know it's been configured because it's asking me for a login. So that's the authentication configured. In the second example, I configured the Director tile, so a refresh there is going to tell me that it's gone green.

Now I can choose any one of these products. I think I'll go for something really simple. I don't know, MySQL? Let's do that. Actually, I think I've got an example in here; I'm going to do the PAS. I'm using the imported name, which can be different from the downloaded name, and there's a script here to help me. This makes a call down into om to tell me which items are available for me to stage. You'll see that cf is in there and the product version is 2.1.1. I can create those variables and feed them into the stage product script. When I do this, it should happen fairly quickly. As I flick back to the other screen and do a refresh, you'll see that the PAS tile has come in. There's an orange PAS tile.

If I jump back into my shell, my SSH session, I can now configure that product, again by combining my variables with my script and the config that knows how to configure a 2.1 tile. After a little while (remember, it's got a few bits to do here: it's got to set the properties, set the network, set the errands and set the resource config, so there's a fair bit to do), after a couple of seconds you'll see it working through the VMs, sizing them and setting up load balancers as well.
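The configure step just shown maps onto `om configure-product`, which in om versions of that era took the tile's subdivisions as separate JSON flags. The flag names and every value below are from memory and purely illustrative, so verify against `om configure-product --help` for your om version; errands were typically set through their own endpoint or command.

```shell
#!/usr/bin/env bash
# Sketch: one configure-product call covering properties, network and
# resources. The command is built as a string and printed rather than
# executed, since it needs a live Ops Manager and real values.
cfg="om configure-product --product-name cf"
cfg+=" --product-properties '{\".properties.example\": {\"value\": \"x\"}}'"
cfg+=" --product-network '{\"network\": {\"name\": \"pas-subnet\"}}'"
cfg+=" --product-resources '{\"router\": {\"instances\": 3}}'"
echo "$cfg"
```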
Anything that you might want to do is available via the API. That's done now. If I refresh, you'll notice, again, it's gone green. I haven't touched the UI at all. The final step is... it's gone to sleep. Come on. scripts/apply-changes.sh, and that is going to kick off the big blue button. I'll come back to that before I finish.

I'm going to wrap up very quickly. Let's see if I can get this working. Where to next? If you want to learn about Concourse, lots of people are talking about Concourse now, so go dip your toes in there and see where that takes you. PCF Pipelines, if you want to consolidate some of what you've learned into pipelines that you can put into Concourse: go for that. Dell's Pivotal Ready Architecture: this is a fairly new offering in the on-prem space. We have William with us; if you want to speak to him, I think he's in the room. There he is, William. Otherwise, go catch him down at the Dell booth. Where next? Maybe you want to go to Washington, because we'll be there in September. Here's a discount code if you want some money off the ticket price.

Just to finish off, if I do a refresh here, I'm hoping this should be mid-install. Sure enough, it's on its way. I am done. I'm out of here. Thank you very much, Boston. This has been a wonderful experience for me. Thank you very much.