Welcome, and thank you for coming. This is, I know, all that's between you and the very end of the conference, so we're going to try to get through all sorts of stuff very quickly. Those of you who tuned into the Cloud Foundry Foundation webinar that I did with Swarna will find some of the stuff in here is the same. Basically, I'm talking about how to customize things, but Bo Yang from IBM has actually done real work with this platform. I'm Troy Topnik, product manager at SUSE. My product is Cloud Application Platform, which is a Cloud Foundry distribution, and our team was the one who brought Stratos into the foundation. So Bo, did you want to introduce yourself?

My name is Bo Yang. I'm the project manager and lead in the open source development of the autoscaler service for Cloud Foundry, and I also work on delivery services for IBM Cloud.

OK. Some of you are Stratos converts already, but some of you might be new to it, so I want to give you the basic value proposition of why you'd want to use Stratos. And it's because it's great. A lot of people are adopting it as their go-to interface to Cloud Foundry, even in preference to the CLI. People are using it in addition to the UIs that come with their distribution or their hosted version of Cloud Foundry, and it's proving really useful for a lot of people. That's why we gave it to the Cloud Foundry incubator, so it is an official Cloud Foundry project. The work started a long time ago at HPE, then continued at SUSE, and SUSE is very much into an upstream-first contribution model, so it came into Cloud Foundry. If you want to work on a UI that has something to do with Cloud Foundry, doing it in Stratos means everyone in the Cloud Foundry community can benefit from it. To that end, we've made it as extensible as we can, and we intend to make it more easily extensible as we go forward.
Because Cloud Foundry is gaining functionality, we want to make sure we can very quickly reflect that in the Stratos UI as it evolves as well. And we'll help you: there's a really responsive team of developers working on this, mostly based in Bristol, UK, a very active Slack channel, and really helpful people who want to see this adopted, and extended, by a lot of people.

We can do a number of cool things with Stratos; I'll get to a quick demo in a minute. One of the many cool things about Stratos is that it's an API-driven UI. Instead of being closely coupled to a component inside Cloud Foundry, it just communicates with the API, which means you can create additional endpoints. You can have multiple Cloud Foundry endpoints, and multiple different endpoint types for different APIs that you bring into Stratos, including your own. If you've got your own project that you want to bring into your single pane of glass on your Cloud Foundry environment, you can bring that API in as well.

But the demo I'm going to show in a second is more about how you can customize Stratos. If you're running Cloud Foundry (I'm kind of surprised IBM didn't do this on the main stage the other day) and you're using Stratos, you can make it look like it comes from your company. You can make it look like it's your own, and these are the elements that you can customize: the login screen, the about page, and various views. The way you do this right now, and this applies both to simple customization and to deeper customizations like the stuff that Bo's team has worked on, is you fork the repository and then add your custom code in a custom source directory, which sits outside of the changes that happen as Stratos is updated.
So you can keep that in sync with upstream, keep pulling in the new changes, and the interface between the core and the custom source structure should remain stable enough that you can get the new stuff from Stratos without merge conflicts when you're updating, and probably with good backwards compatibility too. We'll see how that goes as the project evolves; it's a new project, so it's changing, but it's been a stable interface for the last several months.

This custom source folder I'm talking about sits in the root directory of the project, and when you build, you just run npm run customize in that directory. It'll bring in your customizations, if you've built them properly, and add them to Stratos. These are the main pieces for simple customizations: you can put on your own favicon, and the logo that appears throughout the login screen and the about page. There's one specifically at the top of the nav that you can change; you just have to make sure it's within a certain range of sizes. And there's the background image for the login, which I can show in just a sec. You might have a different EULA for your foundation; if you're running it yourself as a service provider, you might have certain terms and conditions that you want people to see. There's a EULA page, and you can extend it using a structure like this, an Angular provider. I should apologize: I'm not an Angular developer myself, so I'm probably going to defer to Bo when I hit a really hard question.

These extensions are done with Stratos decorators in Angular, and I'm going to breeze through these pretty quickly because we've got some good stuff to show in a second. But you can change the login screen, you can change the side nav menu, and you can put new tabs on some of the views that you get: the application view, the Cloud Foundry view, that is, a view on the foundation.
Maybe we should change the name to the foundation view. There's also the organizations page and the space view, and other things like new actions, which is what I'm going to show in just a second.

To get started with the project, the first thing you do is install the Angular CLI, clone the repo, go into the repo, and do an npm install, and a whole bunch of npm packages get installed. Then you start prepping to make your own extension. In that custom source directory, which you'd want to make, you create a new front end and, if you need to, a new back end, though that's beyond the scope of what we'll talk about here. You generate them with the Angular CLI and then npm run customize.

This is what you would do to create a new decorator and an Angular component for the tab: go into the front end app custom directory, generate a component, and, with a lot more TypeScript expertise than I have, go in and edit the app tab example. I should mention that everything I show here, which I'm going to blaze through pretty quickly because I'm going to actually go through the directory to show this stuff, is all available in the examples directory of the Stratos repo. So you can go in, do what I'm about to do, see how all this stuff works, and poke around. This is the end result: once you do that build, you get an extra tab that appears, and it comes up in the side of your window.

So why don't we try doing this right now? Actually, I'll get to that in a second. This is the command I will run to start the all-in-one Docker image. The reason this is the easier way to get started is that Stratos has two important components: the Angular front end and a Go-based back end called JetStream. If you want to do front-end work but you don't want to build JetStream separately, you can just run this all-in-one Docker image.
You'll always find it on Docker Hub under the splatform org (the SUSE platform group), splatform/stratos, and you can run the latest with this command. The important thing to see here is that the UAA endpoint and the auto-registration CF URL should be set to something that's real and working, whether that's a BOSH Lite instance locally, something running on Minikube, or, as in my case, an instance running in the public cloud. But it has to be a working UAA endpoint and a working CF API endpoint. Then you go in, run ng serve with AOT set to false, and the code that you're working on becomes available on localhost:4200. That Docker image also exposes the vanilla Stratos interface, whatever is current upstream, on port 5443, so you can switch back and forth and see what's running.

For other extension types, the approach is the same. I'm not going to get into showing the side nav additions, but again, that examples directory has all of the code examples for hello-world-ish kinds of things. Another great place to look for examples of extensions is our downstream fork for SUSE Cloud Application Platform. We actually have some things there that are not in core Cloud Foundry; they're still all open source, but they're not generally relevant to everyone who's using this for Cloud Foundry. You can go into the SUSE Stratos repository to see the difference between vanilla upstream and our extensions, which are also kept in this custom source directory so we can keep them separate and track them. Anything I say here is said better in the documentation; the team has done a great job of listing all the things you need to know to get started working on this.

So why don't we take a look at the custom source directory now. I'm going to run this Docker image. Is that big enough? You can see here I've pulled this earlier, and I'm going to connect to a demo cluster of SUSE Cloud Application Platform that I've got running on EKS.
And it says it's starting the HTTPS server at 443, but we know that was actually mapped somewhere else, so this is going to be tricky. Oops, bear with me a second here; I'll make it bigger in a sec.

So this is vanilla Stratos, the latest build from upstream. I can see the applications that I've got running on the demo system that I'm using for the summit. We've got the various application views, and this is where you can add some tabs, so we can see generally what it looks like. This is the upstream version; the SUSE version has SUSE branding. I'm logged in as an admin, so I can see administrative functions that I wouldn't be able to see as a regular user, and I can modify user permissions. Basically, all the things that you can generally do with the CLI, you can do with this, because you're using the API.

Now I'm going to go to the right repo and make the font bigger. There's this examples directory, and that has an example custom source directory, so all of those examples that were mentioned earlier are listed here under the app directory, and you can go through those and look. What I'm going to do here is just move that into the root directory, so now I have that in the custom source location. What did I say the commands were? npm run customize. That just pulls in the customizations. And then we should be able to do ng serve with aot equals false. This is going to reuse the back end that I have running in the Docker image and use that as the API communication point with the UAA and CF API endpoints. Once this is done, we should see it running on localhost:4200. Not running yet... okay, compiled successfully. Hmm, this looks completely different; I don't know why. If anyone was at my talk in Basel, I had even worse projector issues there. We'll go back to the login screen, where we should see that we've rebranded it ACME. So this is your company.
We can log in and see that we've changed the logos all around (not all of the logos, clearly), and we'll see on the application view that we've got this example tab. It's basically a hello world sort of thing, and there's one here as well. If you want to see how these are put together, go through that examples directory; you can build it the same way. And this is the flow for when you're doing development in Stratos. These are the basic building blocks of how we work in Stratos, and the team has done a great job of putting some examples together that show you how it's done. Now, those are all very simple hello-world-ish applications. I'm really glad that Bo's team picked this up and decided that Stratos was going to be the way they were going to build, or rebuild, I should say, a UI for the autoscaler.

So firstly, I'm going to introduce what the App Autoscaler project is. We actually started this project two years ago as a collaborative effort. I guess everybody knows what autoscaling does, right? Basically, what we provide is a capability to adjust the number of instances of your Cloud Foundry application based on a policy you define. We support two types of policies. One we call dynamic scaling: you can scale the application automatically based on performance metrics like CPU, memory, throughput, and response time. The other type we call scheduled scaling: you specify a time slot, whether it's recurring or just a specific window, and when the time comes, we automatically scale your application to the instance number that you defined. Last month we actually graduated this project from incubation to a core Cloud Foundry Foundation project, so it's not an incubating project anymore. There are several deployments in production, and starting from version 2.1, the IBM Cloud Foundry Enterprise Environment on IBM Cloud already includes this autoscaling service.
From a user perspective, we also provide a command line interface, as a CF CLI plugin, so that you can manage your policy, see what's going on with your application's performance metrics, and retrieve the scaling history: when your application was actually scaled, what the number of instances was, that sort of thing. Besides the CLI, we also had a project in the Cloud Foundry community to build a web GUI. This is the one that we built previously. It's a separate project and a standalone UI, so you deploy it separately using BOSH, and we do the login through UAA, through SSO. So when you have a Cloud Foundry web GUI, something like Stratos, you go to this URL and there's SSO, so you don't need to sign on again. The project repo is here, so you can go there to find it. Here are some screenshots; it looks nice.

So why are we rewriting it? The main reason is that Stratos is now becoming the de facto web GUI for Cloud Foundry, and by extending Stratos, we get quite a few benefits. The first is that you don't need to deploy a separate UI and you don't need to go to another URL to manage the application's autoscaling policy; you just go to Stratos and manage it all. So it's a unified experience, and the look and feel will be consistent: if you customize your Stratos with a different look and feel, the autoscaler pages will use the Stratos look and feel. So it's a seamless integration, with the same look and feel and a better user experience. We can also simplify development and deployment. Stratos already provides a very good foundation for building UIs, so we don't need to build all of that code; we can focus on just the UI side and some extension on the back end to invoke our autoscaler service API.
So we have less work, right? And for deployment, we can leverage the Stratos deployment model: you can use different options to deploy this piece of work, like deploying it as a Cloud Foundry app, through a Kubernetes Helm chart, with Docker Compose, or even as a single Docker image. So it's pretty flexible. The only concern is how we can evolve this UI separately. Today, as Troy described, you have to fork the project and add what you want to do; you still need to do it at the code level. But because the extension goes into the custom source directory, and there is a symlink into the core source directory of Stratos, when there is a change on the upstream master branch of Stratos, you rebase from that and there will be no code conflicts, so you don't need to deal with conflicts. So it's not a big deal; you still have flexibility there. There will be further work on how to make this easier; we've got a slide a little later for that.

Before I go into the details of how we extended Stratos, I want to show what it looks like when you have the autoscaler web GUI in Stratos. From Stratos, I can deploy an application, it runs the buildpack, and it's up and running. Then you can go to the application dashboard, and what you can see here is that we have an additional tab, called Autoscale. This is our extension. When you click the Autoscale tab, it brings you to this page. Because it's the first time and you haven't attached any policies, it shows that the policy is undefined. Then you can define the policy from the very beginning: you set the minimum and maximum app instance counts and also the dynamic scaling policy. In this case, I want to scale based on throughput, so I define the scaling policy.
This means that if the average throughput is less than 10 for 60 seconds, I remove one instance, with a cooldown of one minute, meaning that after I scale down, it will wait at least one minute before the next scaling action kicks in. I also define another policy for scale-out. Then you can specify schedules for the scheduled type of scaling. In this case, I specify a recurring schedule, which basically means that I will scale to at least five instances from 10 a.m. to 6 p.m. every Monday, Tuesday, and Friday. You can also add specific dates to the scheduled scaling policy. Then you save the policy, and the autoscaler service retrieves the policy, starts to collect your performance metrics, and triggers the scaling actions.

Then you go to the metrics page, and you see the throughput, because you defined dynamic scaling based on throughput. Let's give some load to the application; you can see the throughput go up until it hits the upper threshold. After some time, there's a scaling action showing that your application scaled from one instance to two, and it will continue to scale out if you keep loading the application. Then I stop the benchmark client, so there are no requests going to your application, and your application scales back in to the initial count, which is one instance. If you check the scaling history, the scaling events basically show that first it grows from one to two and two to three, and then it shrinks from three straight back to one instance. So that's the demonstration.

Let's see, where is the... how did we lose that? Did you close that? I think we can work with this. Sorry about this, folks. Why don't we do questions now? So next, I will go into a little bit of detail on how we extended this, how we implemented it.
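The policy from that demo can be sketched as a small data structure plus an evaluation rule. This is only an illustration: the field names below are invented for readability (the real App Autoscaler policy schema differs), and the scale-out threshold of 100 is an assumed value, since the demo only shows that a scale-out rule exists.

```typescript
// Illustrative sketch of the demoed policy. Field names and the
// scale-out threshold are invented; not the real App Autoscaler schema.

interface ScalingRule {
  metric: 'throughput' | 'cpu' | 'memoryused' | 'responsetime';
  operator: '<' | '>';
  threshold: number;
  breachDurationSecs: number; // how long the condition must hold
  adjustment: number;         // instances to add (+) or remove (-)
  coolDownSecs: number;       // wait before the next scaling action
}

interface AutoscalerPolicy {
  instanceMinCount: number;
  instanceMaxCount: number;
  scalingRules: ScalingRule[];
}

const policy: AutoscalerPolicy = {
  instanceMinCount: 1,
  instanceMaxCount: 5,
  scalingRules: [
    // Scale in: throughput below 10 for 60s removes one instance.
    { metric: 'throughput', operator: '<', threshold: 10,
      breachDurationSecs: 60, adjustment: -1, coolDownSecs: 60 },
    // Scale out: throughput above 100 for 60s adds one instance.
    { metric: 'throughput', operator: '>', threshold: 100,
      breachDurationSecs: 60, adjustment: 1, coolDownSecs: 60 },
  ],
};

// Toy evaluator: given a sustained metric value, compute the next
// instance count, clamped to the policy's min/max bounds.
function evaluate(p: AutoscalerPolicy, metricValue: number, current: number): number {
  for (const rule of p.scalingRules) {
    const breached = rule.operator === '<'
      ? metricValue < rule.threshold
      : metricValue > rule.threshold;
    if (breached) {
      const next = current + rule.adjustment;
      return Math.min(p.instanceMaxCount, Math.max(p.instanceMinCount, next));
    }
  }
  return current; // within thresholds: no change
}

console.log(evaluate(policy, 150, 1)); // load spike: 1 -> 2
console.log(evaluate(policy, 5, 3));   // idle: 3 -> 2
```

Scheduled scaling would layer time windows on top of this (overriding the minimum count during the window); that part is omitted here.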
Give me a second. Is that the wrong one? It's the wrong one. Weird. Yeah, there we go. Okay.

So Stratos actually has a front end, which is Angular-based, and a back end written in Go, and you can extend both. Troy focused on how to extend the front end. The back end is what we call JetStream; it's basically a proxy: it receives HTTP requests from the front end, does some processing, and forwards them to the back-end CF endpoint. Let's first look at how we extended the back end. The JetStream proxy has a plugin interface, so if you want to extend the back end, you just implement the plugin interface. There are different things you can extend: you can add middleware to handle your HTTP requests, you can add another endpoint type, and you can customize the routing for HTTP requests from the front end. In the autoscaler case, we don't need the former ones. What we did is extend the routing: we get the HTTP request from the front end, compose the right API endpoint for the autoscaler service at the back end, and proxy the request there. It's pretty easy. After you implement the plugin interface, all you need to do is add the plugin to the list so that it gets initialized.

Next is extending the front end. We went the same way Troy described: create the custom source directory, use the Angular command line to generate the module, custom module, and components, and then use npm run customize to link the custom source directory into the Stratos core source directory. Then we decorate the component so that Stratos knows it's actually an extension of the application tab, right?
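That decoration step can be reduced to a toy model. The names here are invented, not the real Stratos decorator API; the sketch only shows the underlying idea, which is that decorating a component records its tab metadata in a central list that the application view reads when it builds its tab bar.

```typescript
// Toy model of decorator-based tab registration (names are invented,
// not the real Stratos API).

interface TabMetadata {
  label: string; // text shown on the tab
  link: string;  // route segment the tab navigates to
}

// Central registry the application view would consult for extra tabs.
const registeredTabs: { meta: TabMetadata; component: Function }[] = [];

// Decorator factory: in real code you would write `@ExampleTab({...})`
// above an Angular @Component class; here we apply it manually so the
// sketch runs without any decorator compiler settings.
function ExampleTab(meta: TabMetadata) {
  return function <T extends Function>(component: T): T {
    registeredTabs.push({ meta, component });
    return component;
  };
}

class AutoscalerTabComponent {
  // A real extension would render the autoscaler UI here.
}
ExampleTab({ label: 'Autoscale', link: 'autoscale' })(AutoscalerTabComponent);

console.log(registeredTabs.map(t => t.meta.label)); // prints [ 'Autoscale' ]
```

The point of the pattern is that the core UI never imports the extension directly; it only iterates over whatever the registry contains at render time.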
The tricky thing here is that we have a requirement to detect whether the autoscaler is available, because Stratos manages multiple Cloud Foundry environments, and some of those environments may not have the autoscaler service. So we need a way to detect whether it's available, and if it's not, we don't show the tab. What we did is add an action for a health check, and for this action we have an NgRx effect that retrieves the health information from the back end to detect availability. If the check is successful, we push the tab to the tab list so that it shows up; if not, we just skip it.

Next is how we render the pages for the autoscaler. I'm going to show a very simple case: how we render the policy. First, we create an action for the autoscaler policy service so that we can fetch the autoscaler policy from the back end. Then we register an effect that listens for this action; the effect actually fetches the policy and stores it in the state tree, so the front end can get the policy using this entity key, in this group here. Next, we define an observable for the autoscaler policy service. When the DOM is rendered in the browser and this observable is subscribed to, it triggers the app autoscaler policy service action to retrieve the policy, and then on the front end we get the policy from the state tree and render the whole page. So these are the basic ideas of how we extend things and how we render the policies in the application's autoscaler tab. There are many other pieces, and we follow the same pattern to extend them. You can get more details from the code. We pushed the code, not to our own repository, but to the autoscaler branch of Stratos.
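As a rough mental model of that action, effect, and state-tree round trip, here is a self-contained sketch. It is not the Stratos or NgRx API: the action names, the state shape, and the synchronous "effect" are all simplifications for illustration.

```typescript
// Toy model of the dispatch -> effect -> state tree flow described above.
// In the real extension the fetch goes through the JetStream proxy and
// the store is NgRx; here everything is inlined and synchronous.

type Action =
  | { type: 'FETCH_POLICY'; appGuid: string }
  | { type: 'FETCH_POLICY_SUCCESS'; appGuid: string; policy: object };

// State tree: entity key -> app guid -> stored entity.
const state: { [entityKey: string]: { [appGuid: string]: object } } = {};

// Stand-in for the HTTP call that the back end would proxy to the
// autoscaler service API.
function fetchPolicyFromBackend(appGuid: string): object {
  return { instanceMinCount: 1, instanceMaxCount: 5 };
}

// Reducer: only success actions write into the state tree.
function reduce(action: Action): void {
  if (action.type === 'FETCH_POLICY_SUCCESS') {
    if (!state['autoscalerPolicy']) state['autoscalerPolicy'] = {};
    state['autoscalerPolicy'][action.appGuid] = action.policy;
  }
}

// "Effect": listens for the fetch action, does the work, dispatches success.
function effect(action: Action): void {
  if (action.type === 'FETCH_POLICY') {
    dispatch({
      type: 'FETCH_POLICY_SUCCESS',
      appGuid: action.appGuid,
      policy: fetchPolicyFromBackend(action.appGuid),
    });
  }
}

function dispatch(action: Action): void {
  reduce(action);
  effect(action);
}

// The tab's observable triggers this dispatch when first subscribed to;
// the page then reads the policy back out of the state tree to render it.
dispatch({ type: 'FETCH_POLICY', appGuid: 'abc123' });
console.log(state['autoscalerPolicy']['abc123']);
```

The design point is the indirection: the component never calls the back end itself, so the same state-tree entry can be shared by any other view that needs the policy.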
So if you want the details, just go there to see all the code. Yeah.

I'll wrap it up really quickly; I know we're a little bit over time, sorry about that. We are working on a way for extensions to be published in separate repositories, so that we have a proper plugin mechanism and we don't have to do this thing with the custom source. It works pretty well right now, but the long-term plan is that we want people to be able to maintain extensions for Stratos completely separately, because that's going to make it easier for users to add and remove components and easier for maintainers to keep their code separate. We want to improve the back-end plugin mechanism, and we want to improve and extend the documentation. Neil, the project lead, wrote that line item; I think his docs are pretty great, but he's always looking for continuous improvement on those. We also want to make it easy, just like when you create a component with the Angular CLI, to generate a skeleton for a Stratos-specific extension. And one more thing: the availability check that Bo was talking about, looking to see if there was an autoscaler to talk to, there's probably something internal for that. I've got to check with Neil whether there's already something like that, and if there isn't, we'll make sure it gets added. So thank you very much for coming.