[Inaudible introduction; the transcription garbled the opening remarks.] The horizontal axis is ease of infrastructure and scaling. The vertical axis is ease of development: delegating logic and implementation to runtimes, and making it simpler to develop. Along it, it has infrastructure as a service, container as a service, platform as a service, and then function as a service. Has anybody seen this before? OK. The idea here is, if you're doing virtual machines with something like OpenStack, you have to bring the entire stack of software. So, you are creating a virtual machine with an operating system, with an install of a runtime like Node.js. You then bring an application that has all of the packages that you want, and you develop the code. You then deploy the virtual machine. If you want multiple instances, you deploy multiple virtual machines, and you deal with a proxy load balancer in front of it. [Inaudible.] As you move across the axis, you move to Kubernetes, and this will now orchestrate multiple containers for you. So, you don't actually have to worry about managing virtual machines or instances; that can be done by the platform.
You're bringing a Docker container, that is, an operating system which you've then pre-installed and pre-configured. You're bringing your own Node runtime, and you're bringing a Node application on top of that. You move over to platform as a service, and you don't have to worry about operating systems or installs of runtimes; you just bring it all in an application. And when you get to FaaS, you don't worry about any of that; you just bring individual functions, right? So, individual small sets of lines of code, and that's it. So IaaS, as I said: most complicated, most control. CaaS, container as a service: you start to delegate scaling to the platform, and some of the configuration. Platform as a service: you now focus on applications and not how they run. And functions means you get to concentrate on a function, and not even how the rest of the application works. So, Wikipedia defines function as a service as something that provides a platform that allows you to develop, run and manage application functions without having to build and maintain infrastructure. So, that's pretty much what we've just described. It also says it's one way of achieving serverless, right? Traditionally, when people hear about cloud functions, they think serverless, and that's true, because function as a service is one type of serverless, but it's not the only type of serverless. So, if we look at this axis, you have servers in the bottom left-hand corner. This is where you build servers yourself, you deploy servers yourself, you manage everything. At the far end of it, when you move to function as a service, you have serverless in terms of scaling. When load comes in, you actually have instances of that function based on the load. Serverless is effectively an on-demand scaler. The more requests that come into the system, the more it scales out to handle the load. So, you can actually scale from zero to infinity, as long as you don't mind paying a large enough bill.
Now, the other axis, which is what you get from FaaS, is functions programming. This is reducing the amount that you actually have to develop and maintain yourself, and delegating a lot to the stack. So, the horizontal axis is all about scaling, not having to deal with multiple instances and load balancers, and the vertical axis is all about ease of development. When the two come together, you get function as a service. So, function as a service is both serverless and functions, but you can actually have them completely independently of each other. So, FaaS is serverless plus functions. So, how does FaaS actually work? How does it let you write small units of code and scale to infinity under it? So, this is the model for Apache OpenWhisk, which is an open-source implementation of a function as a service platform. And what happens is, as a developer, you write a function. So, this is the function signature that you've got for Apache OpenWhisk for Node.js. You have a function which you then export. So, you do exports.main equals the name of your function, and you declare something that receives one parameter, which is params, and you return. So, that's the function signature: one parameter in, one parameter out, and you can make it do whatever you like. But that's the entire function signature that you get. When you write that code, you generate what they call a function handler. If you then want to test that function handler, you have to deploy it to the function as a service platform. So, you deploy your function handler, you get the ability to do run, test, debug on the functions platform, and that function handler is now running live. It runs on top of a runtime, because something has to invoke your function when a request comes in. And then you can have a real user, and that real user can make a request, and your function gets invoked. And that's how OpenWhisk largely works. And, as shown on the right-hand side, it will scale on demand.
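That signature can be sketched like this (assuming the conventional `main` export that OpenWhisk's Node.js runtime looks for by default; the greeting logic is just illustrative):

```javascript
// Minimal Apache OpenWhisk-style Node.js action: one object in, one object out.
// The platform invokes main(params) on each request, and the returned object
// becomes the response.
function main(params) {
  const name = params.name || 'world';
  return { greeting: `Hello, ${name}!` };
}

// OpenWhisk's Node.js runtime looks for this export by default.
exports.main = main;
```

Notice there is nothing HTTP-specific here: no request or response object, no headers, just parameters in and a result out.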
As new requests come in, it will make sure there are enough runtimes to handle them. And the simplified development is: I didn't need to build a Docker container. I didn't need to deal with versions of Node. I didn't need to have a package.json if I don't want to. They do let you require in packages if you want to. But I only had to write a single function declaration, put some code into it, and it works. Now, let's actually dig a level deeper and see how that actually gets put together. So, the OpenWhisk runtime is actually a few layers itself. At the bottom layer is a Docker container. So, it is actually building a containerized runtime. On top of that, it has Node.js running an HTTP server. And inside that HTTP server, there are two REST endpoints. There is a POST on slash init. And what happens is, when you deploy your function handler, it actually calls the runtime on slash init and gives it your code. And that runtime then becomes initialized with your function handler so it can be executed. And when a request comes in, it calls a POST request on slash run and gives it the parameters from the request. So, you're actually running a full Node.js stack, a full microservice, in order to run that function. And the full microservice under it is responsible for making it run on the platform, being able to do monitoring, being able to do request tracking across functions, and so on. So, what you've actually got there is a full microservice that's just been built for you. In terms of scaling, the way it works is: when you have a second request, it actually spins up a second instance. So, if I have four requests, I have four instances of my function that are running. So, it has a model of one request, one instance. So, what are people using FaaS for? Well, the biggest use case for serverless and FaaS is actually to build REST APIs. So, the number one use case: 73% of people surveyed about what they're using functions for said, I'm building REST APIs.
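The init/run contract just described can be modeled in-process. This is a hedged sketch of the lifecycle, not the real OpenWhisk proxy code, which does this over HTTP inside the container:

```javascript
// Sketch of the OpenWhisk runtime's two-endpoint lifecycle, modeled in-process
// rather than over HTTP. The real runtime exposes POST /init and POST /run
// inside a containerized Node.js server; this only mimics the contract.
let action = null;

// POST /init: the platform sends the deployed handler once, at startup.
// (The real runtime receives source code; here we accept a function directly.)
function init(handler) {
  action = handler;
  return { ok: true };
}

// POST /run: each incoming request carries the parameters for one invocation.
function run(params) {
  if (!action) throw new Error('runtime not initialized');
  return action(params);
}

// Usage: initialize once, then invoke per request.
init((params) => ({ echoed: params.value }));
const result = run({ value: 42 }); // → { echoed: 42 }
```

The one-request-one-instance scaling model means the platform simply creates more of these initialized containers as concurrent requests arrive.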
Another 26% said they were building mobile backends, which is essentially REST APIs. So, the largest use case by a long way for functions is actually just to build REST APIs. But the problem with that is the function signature. Who has built REST APIs using something like Express? So, you're used to request, response, next. You're used to having a bunch of APIs that you can then call against the request to deal with headers if you want to. Response lets you set headers. Unfortunately, this just gives you one parameter in, one parameter out. And it's not designed to help you build REST APIs. It's designed to be flexible instead. It's designed so that it can be invoked by any kind of request. It doesn't have to be REST. It could be a cron job. It could be a change in a database. But if you want to build REST APIs, it's not a very helpful API. And OpenWhisk isn't alone. This is the same for all functions implementations. This is AWS Lambda. This gives you a slightly different function signature. You've got event, context and callback. But again, you've got this problem that it's not designed for a specific use case or a domain. You have a generic API, but you're probably going to be opting only to build REST APIs. So, wouldn't it be nice if you could build functions using the APIs that you're used to? And given that it's actually going to build it into a full microservice anyway, we could build that microservice using well-known frameworks like Express. So, that's what I'm actually going to show you how to do. So, there's two technologies I'm going to cover. One of them is called Appsody, which was talked about by Michael Dawson in the previous session, to a certain extent. And the other one is Knative, which is a project that builds on top of Kubernetes that provides serverless. And when we go back to that axis, what happens is Appsody will let you provide a functions programming model.
So, it basically allows you to delegate lots of development to a stack, to a runtime, and then lets you develop on top of it. That then builds a microservice, and then you can use Knative to deploy it in a serverless fashion, so it scales on request. Now, that then says you bring the two together, you bring the two axes of FaaS, so functions and serverless, together, and you can build a FaaS platform that does exactly what you want it to, but lets you use Express APIs if you want to. So, if we start off with Appsody, it consists of three things. There's a CLI that lets you do development, and there's also some IDE plugins for it. There's then a set of stacks. Now, these stacks are actually available at multiple levels. I'm going to show you the top level, which is building a function. But what Michael Dawson showed in the last session was how you can build an application with this style, and you can build an entire Express app. You can have access to the Express app object itself, you can work with routers, you can register middlewares, and it will still deal with metrics and monitoring and how you deploy to Kubernetes for you. And there's a lower level where you can say, actually, I've got a Node application, I just want you to deploy it, and deploy it serverless. And then there's a bunch of technologies around how you deploy and manage it in Kubernetes. But rather than kind of talk about how it works, I'm going to show you how it works. So, if I bring up the console and make sure my directory is empty. Yep. So, I'm going to make a new directory for a backend. And then I'm going to do appsody init, and I'm going to tell it the type of project that I want to create, which is Node.js functions. Now, this is doing two things. It's going to create a very, very simple project structure for me, so I've got something to develop from, and then it brings down that functions runtime that I'm going to use.
If I then open this in VS Code to see what I've got, and let's make this a little bit bigger. So, what I've got in my project is: I've got a package.json, so I can add dependencies as normal. I've got a gitignore file and so on. I've got an Appsody config file that gives my project a name, and it says which version of the runtime I want to use. And then I've got a function. And this function is for GET requests. So, module.exports.get says: if there's a GET request on the URL of slash, run this function, which returns hello from Appsody. So, if I then just run it, and I can do that with appsody run, what that does is it just takes my code and applies it to the functions runtime locally. So, this is all going to run locally on my laptop. I now have a project that's running on port 3000. So, if I go to localhost 3000, I get hello from Appsody. So, I just built and deployed a function into a container, and I've got a local running version of this. Now, to do iterative development with that function runtime: if you're used to using something like nodemon, where you make changes and they get reflected immediately, the same system applies. So, let's say I want to start making some changes to my function. So, let's create an array with some data in it, and I'm going to have an orange, and I'm going to have an apple, and I'm going to have a banana. And then, instead of saying hello from Appsody, let's get it to return a random entry. So, hopefully, and people can correct me if my programming is wrong: we want Math.floor, so Math.random, and then we want to multiply that by fruits.length. Hopefully that looks okay to everyone? We'll see. I'll go back to my browser, and it says apple, it says orange, orange, apple, banana. Okay, so it's working. So, it's immediately reflected the changes that I am making in the container as I build and run it. So, that's given me a function that I can run locally. Now, as well as giving me a function programming model, in terms of iterative development I can also set a breakpoint if I want to.
If I just close that down and then say I want to rerun in debug mode, I can run as a debugger. And now that's up and running, I can connect the debugger. It's connected, and if I reload it, it jumps to the breakpoint. So, even though this is all running in a functions runtime inside a container, I've got everything that I'm used to. I'll just let that go again and disable the breakpoint. Now, as well as giving me this iterative development environment, one of the advantages of me not owning the entire application is that the rest of the microservice can do stuff on my behalf. So, one of the things it gives you is very simple: it's a health check endpoint. And that's because this is going to get run inside Kubernetes, and Kubernetes will do a couple of things for you. It will automatically restart any application that is struggling. It checks for liveness, and when it pings your liveness endpoint, if it doesn't get a response, it restarts you. But you can also register a number of callbacks, which you can do as promises, to get some checks running against your application to determine whether you want to tell Kubernetes to restart you. As well as liveness, there's also readiness, which works in exactly the same way, but rather than being restarted, it will take you out of load balancing until you report that you are ready to receive workload. So, that's built into the functions runtime for you.
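A hedged sketch of what promise-based readiness checks look like in that spirit. `registerReadinessCheck` and the database check are illustrative names for this sketch, not the stack's actual API:

```javascript
// Illustrative model of promise-based health checks, loosely modeled on what
// the functions runtime wires up to Kubernetes liveness/readiness probes.
const readinessChecks = [];

function registerReadinessCheck(name, check) {
  readinessChecks.push({ name, check });
}

// Kubernetes polls the readiness endpoint; the runtime runs every registered
// promise and reports ready only when all of them resolve. While not ready,
// Kubernetes keeps the pod out of load balancing rather than restarting it.
async function isReady() {
  try {
    await Promise.all(readinessChecks.map(({ check }) => check()));
    return true;
  } catch {
    return false;
  }
}

// Example: don't accept traffic until a (hypothetical) database is connected.
let dbConnected = false;
registerReadinessCheck('database', () =>
  dbConnected ? Promise.resolve() : Promise.reject(new Error('db not ready')));
```

Liveness checks follow the same pattern, except a failure triggers a restart instead of removal from load balancing.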
You also have the ability to say I want to have metrics, so there is a metrics endpoint. This isn't designed to be human-readable; you'll see what it's for later. And during development time, we also add a metrics dashboard. No, that's not it. That's the URL. There we go. So, what this provides, and I'll just hit the endpoint a few times, is an inbuilt view of the performance of my function. So, it tells me about the CPU usage of the function, it tells me about its memory usage, it tells me about incoming HTTP requests and my throughput. It also knows about the JavaScript heap, it knows about the event loop time, latency, and every other request. And this Socket.IO request here is this dashboard itself running. As well as that, I can turn on profiling, and as part of profiling, it will start to generate an inbuilt set of flame charts for me. So, it's got full profiling of my function as part of the functions runtime. So, I'll now disable the profiling, and in fact I will just stop my app. So, that's now inside my function. Now, one of the differences about building functions this way is I have a full framework, and that means it's not just one function I can run; I can put as many into the application as I like. So, if I drag in another function, and what this function does, if I just expand the screen: it hosts a function on slash proxy, and what it does is it makes a request against the host and reports fruit plus whatever the response was from the host. By default, the host is localhost 3000. It will also look up a default backend URL from an environment variable, and you'll see why we're doing that in a second. So, if I again go and run that and do an appsody run, this is doing exactly the same thing. It's just restarting the runtime, which I stopped from debug mode and put back into run mode. And now, when I run my application and I go to slash, I still have my random fruits. If I go to slash proxy, it's just got an annotation in front,
because it's calling back to itself. So, I've got two functions running inside the runtime, and this is all running locally on my laptop. So, the next thing we're going to do is we're actually going to deploy it to Kubernetes. Now, the way I do that is super simple: I just say deploy it. Now, this is doing two things on my behalf. First of all, it's building a best-practice Docker image with my application in. So, all of your local development uses a standard Node image, but for production we're using the Node slim image, so you don't need things like npm in a production image, because all of your modules are already installed. So, this generates a production image that's 190 meg in size rather than 900 meg. And then it's deployed it to Kubernetes, running locally on my laptop, and that says it's now available here. So, if I go to that URL on slash, I have my random fruits, and if I go to slash proxy, I have exactly the same thing that I have locally, but this is now running inside Kubernetes. Now, a way of showing you that that is true: I can go to this, which is a Kubernetes application viewer, and it has now detected that I've got my deployed backend. I can click on it. It's deployed as two resources. So, in Kubernetes terms, you have deployments, and then you have services, and services expose a deployment. So, I have a deployment and an exposed service. It knows that it's a project called backend, and that it's made with Node.js functions version 0.1.6. If I actually create a Git project for this, it knows where the Git project is, what the commit was, and it will jump you back to the Git project. I can also go to the liveness endpoint that I showed you earlier if I want to, or I can go to the metrics dashboard. Or can I? This is always the danger of doing things live, isn't it? What happened to my metrics dashboard? Ah, there we go. I just need to make that connection available. All right, so let's reload that. There we go. So, this is my metrics dashboard. So, it jumps me through to the metrics dashboard
inside Kubernetes that knows about my application. So, registering my application with the monitoring is done with Kubernetes automatically; that's part of using the functions runtime. It knows about request durations and the URLs that they're on, it knows about request count, CPU usage, memory usage, etc. So, I've written just a simple piece of code, but the whole part of getting that to Kubernetes and scaling it has been done for me. Now, this is just one microservice, but I want multiple ones to talk together, and that's actually pretty easy to do as well. So, for my backend, what I'm going to do is I'm just going to take its configuration, which was generated for me, and I'm going to say that, under the service, I want to provide something that people can connect to. I'm going to set its category as openapi, so I'm saying it exposes a REST endpoint, and by default I would like people just to connect to slash. Now, once I've done that, I can do a run task of an appsody deploy to redeploy it, and now I've got a function which I'm saying I want other functions to be able to use. So, whilst that's happening, I'm going to start a second project. So, if I create a new project called frontend. Let's call it frontend. There we go. Yep. And do appsody init, Node.js functions. So, this is doing exactly the same thing I did when I built the backend project; it's just building a frontend function. If I then open VS Code for my new project, what I'm going to do is very simple. I'm going to go to my last project, where I had this proxy function, and I'm going to drag it over. Now, what we said with this proxy function is that it decides where it connects to by using localhost 3000 or this environment variable, default backend URL. So, what we're going to do is we're going to make the frontend function look up that value when it's deployed, to find my backend function and connect the two together. Now, I'm just going to do a build to make sure that it builds properly, which
it should do, and this is going to result in a local Docker container. But by building it, it runs the tests as well. And then, once I've done that, I'm just going to set it up to connect to the backend, and once I've done that, I can deploy it. I should be able to connect to the front end, and the front end will know about the backend. But actually, just to make it clear, let's change the text that it prints as well while we're at it. So, I'm going to take my deploy configuration, and I'm going to go to the service, and I'm going to say that this wants to consume something. So, it consumes something of category openapi that has a name of backend, and it's in the default namespace. And then, if everything comes together, I can do an appsody deploy, and I'm now going to have two functions deployed, and one will connect to the other. Again, this is building a production Docker image, and that Docker image you can actually run any way that you like; you don't need to be using this deploy system to run it, because it is just a full microservice. Oh dear, let's try that again. Let me check I didn't typo. So I did typo. Yeah, you're probably right. Yes. I mean, Kubernetes deals with all of that for you, because it sees each of these as its own containerized runtime. This is one of those things where, given the time, anyone that wants to see it working, I will show them afterwards. But I will quickly drop back to the slides just to show you that. So, that's kind of the function programming piece. What actually happens here is: in the CLI, it does a lookup of the types of applications that I want to build. We built a function. There's a local dev and debug cycle. It pulls down the stack as you build, and then it takes that function, and once you've built it, it builds a productized version of the stack and then deploys it to Kubernetes. And then it integrates with everything in the platform, so we've got the monitoring, we've got the dashboarding, and so on. Now, what Knative does for you, just to
finish that piece off: when you deploy with Knative, its style of serverless and scaling is very different to OpenWhisk's. When you have one request with Knative for serverless, you have one instance of your microservice. But one of the other advantages of using a full framework is, well, who here runs 50 parallel, concurrent requests through Express? I would hope most people don't worry about the concurrency of Express; it's more than capable of handling multiple requests at the same time. So, by default, Knative will say: well, if I have a second user, I don't need two instances of Express to handle two concurrent users. It still only has one. In fact, if there's four of them, it will still only use one. If there's a hundred, it will still use one. If there's a hundred and one, it'll add a second one. That is configurable; you can make it any number you like. You can make it one if you wanted to. But because frameworks already provide concurrency, it lets you put more requests through a system. So, it's a system where it does serverless based on full microservices, and Appsody helps you build those microservices. And that's where I'll leave it. So, those are the two projects. I'll be at the IBM booth today and tomorrow if you've got any questions, and everything's an open-source project. Thank you.
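As a hedged illustration of that configurable concurrency limit: Knative Serving exposes it as `containerConcurrency` on the Service spec. The service name and image below are placeholders, not from the demo:

```yaml
# Sketch of tuning Knative's request-based scaling: with containerConcurrency
# set to 100, one instance serves up to 100 concurrent requests before the
# autoscaler adds another. Setting it to 1 gives OpenWhisk-style
# one-request-per-instance behavior; 0 means unlimited.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: frontend        # placeholder name
spec:
  template:
    spec:
      containerConcurrency: 100
      containers:
        - image: example.com/frontend:latest   # placeholder image
```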