Welcome to the next presentation in the 101 Essentials Cloud Track. Ned Jameson, who works for A Cloud Guru, will help you get started with serverless computing. After the presentation there will be some time for a Q&A, so feel free to type in any questions you may have for Ned. Ned, take it away.

Thank you. I'm going to share my screen now, so bear with me one moment. There we go, and we're away. So hi, everyone. Welcome to this session in the Serverless 101 track. I'm Ned Jameson and I'm an educational design architect at A Cloud Guru, a cloud skills training provider. You'll see the Linux Academy logo on the slides as well; that's because we're all one big family now. Part of my role involves looking at what technologies our customers, and industry more broadly, are investing in. That involves looking at what businesses have planned and what focus areas we'll need to address, so that industry has the training available for their workforce to achieve the transformation they want. I've spent about 15 years in the tech sphere in a variety of roles, from development and architecture to strategy, and, probably most importantly here, I've been involved in many production projects involving serverless architectures.

I'd like to set some expectations for this session from the outset. This session is targeted as an introduction to serverless through an open source lens. We'll go through what we mean when we use the term serverless, how it's different from other architectures and patterns, some appropriate use cases, and some of the benefits and drawbacks of the available open source options. I'll then do a hands-on demo of deploying some functions using one of those options as well. Bear with me as we cover some of the more fundamental concepts, so that we can get everyone up to speed.

So what is serverless? The term serverless has been around in a production sense for about five years, and often when people are talking about serverless they're talking about what's called FaaS, or functions as a service. The most basic thing to get across here is that serverless doesn't mean there's no server. Really, there are many, many servers; they might be in your data center or they might be part of a cloud provider's infrastructure, and these servers are needed to deploy and manage your application. As part of this broad grouping of serverless, there are a couple of concepts we need to cover: serverless computing and serverless platforms. The term serverless computing is really about the shifting mindset in how developers approach building and delivering applications. It involves abstracting the application infrastructure away from the code, which simplifies the development process as well as providing cost and efficiency benefits. The term serverless platforms, on the other hand, refers to platforms that provide APIs that allow users to run actions or code functions and return the results. These platforms might provide HTTP or HTTPS endpoints to let a developer retrieve the results of those functions, and, as we'll cover later on, the endpoints might be used as inputs for other functions, providing a chain of functions that each perform a particular task. On most serverless platforms, the user will create and deploy the functions before executing them.
So everything is ready to go, and the platform will execute it when it receives an event that meets the criteria it's been configured with: for example, an HTTP request or a file upload. To really understand serverless, though, there are two parts to the history we need to understand. The first is the evolution of computing infrastructure, and the second is the evolution of our software architectural patterns. Bear with me here. Originally, back in the early days of the internet, there were hosting providers distributed across the globe and you'd rent space or servers from them. As things got bigger and applications got more complex, data centers became far more prominent, and scaling to meet demand meant essentially adding more servers. For the most part, you were responsible for managing the application stack, the OS, the storage and the networking, as well as the hardware. Upgrading resources on a server required someone to be there physically doing it.

As things matured, infrastructure as a service (IaaS) platforms came along. Platforms like Amazon Web Services, Microsoft Azure, Google Compute Engine and OpenStack became really prevalent. What these platforms did was abstract, or hide away, those infrastructure components behind APIs for managing compute resources, whether bare metal or VMs, object and block storage, or networking services, and they charged you based on how much you used. Your data centers became virtual, because the underlying infrastructure was abstracted away from you. Instead of having to add more servers, you scaled your capacity by allocating more resources, like virtual machines. When it came to management, you'd be responsible for the application stack, the data and the OS, and the IaaS provider would be responsible for the virtualization, the servers, the hard drives, the storage and the networking.

We then had the widespread adoption of platform as a service (PaaS), which added another layer of abstraction on top of the IaaS components. It did this by providing a unified computing platform, in most cases with a self-service portal, to deploy applications. Examples of these platforms are AWS Elastic Beanstalk, Heroku and Red Hat OpenShift. PaaS abstracted away the management of infrastructure services, with scalability, high availability and multi-tenancy as its core principles. Under the PaaS model, consumers managed the application stack and data, while the PaaS provider managed the OS, the virtualization, the servers, the storage and the networking components.

And then containers came along. Containerization, and containers as a service (CaaS), extended the virtualization solution by making it lightweight, using far fewer resources, which meant faster boot times. It meant developers could create portable runtimes across OSes, providing a really lightweight mechanism for distributing applications and their dependencies. It sat between IaaS and PaaS; many PaaS platforms use containers to manage and orchestrate applications, so there is some overlap there. The containerization approach really contributed to the adoption of what's known as the microservices architecture pattern, by isolating functional components as their own services. You've almost certainly heard of container services like Docker, Amazon ECS or Google Cloud Platform's offerings. Container runtime engines abstracted the OS.
In the container approach, consumers manage the application stack and the data, while the container service provider manages the container engine, the host OS, the servers, the storage and the networking. Of course, there are open source systems that also take care of a lot of this, Kubernetes being the most widely adopted, and we'll touch on containers and Kubernetes a bit later on. This all led to the past five or so years, where AWS introduced Lambda, which sparked a desire for serverless computing platforms and implementations. The functions as a service (FaaS) approach uses self-contained, stateless chunks of code, packaged into what are called functions, that can be run or triggered in containers, and we use events to trigger these functions. This is all done without managing the underlying infrastructure or the language runtime needed by the code. In the FaaS model, consumers manage the application code as individual functions, while the serverless provider manages the execution environment and everything that supports it: the host OS, the storage and so on.

But this idea of separate functions is pretty different from how we might traditionally have developed software, so let's look at how that has also changed over time through different architectural patterns. The traditional pattern is the monolithic architecture. This pattern meant that applications were developed and deployed as a single unit on shared infrastructure. You may have segmented the code in the development phase, but those segments were tightly coupled, so incredibly dependent on each other that everything had to be deployed together. Monoliths can be practical in some situations, but when a platform's code base grows to the point where you have multiple development teams, some tasks can become really quite difficult. Versioning was done at the application level, which meant that if you had a problem in just one part, the entire application would need to be rolled back to its previous state. The implication was that you often had pretty thorough approval processes for deployments, and there were just fewer of them.

Eventually this was replaced by service-oriented architecture, or SOA, and this is where the three-tier architecture concept became popular. Your architecture was split into separate tiers: presentation, business logic and data. The presentation tier was your front end, what users would see and interact with. The business logic was kept decoupled in the logic tier, and the data sources were abstracted behind the data tier. Because each layer was decoupled from the others, they were remotely accessible via web service APIs, and by design they could be deployed and scaled independently of the others. This meant that releases for each tier could be versioned, and when something went wrong, rollbacks could be limited to just the applicable tier.

Then we saw another pattern emerge. As I mentioned a few minutes ago, containerization enabled the adoption of the microservices architecture pattern. So what is that? It came in response to the inefficiencies of managing servers and scaling applications, because what a microservices architecture allows is for developers to split up an application into smaller, limited-scope services which communicate with each other. Each component works as part of a bigger system, but can be developed, tested, deployed and scaled separately.
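To make the function idea concrete before we go further, here's a minimal sketch of the kind of stateless, event-triggered handler we're talking about. It's a hypothetical example, not tied to any particular platform, though the (event, context) signature mirrors what most FaaS platforms pass in:

```python
# A minimal, hypothetical FaaS-style handler. Parameter names and the
# exact signature vary between platforms.
import json

def handle(event, context):
    # Everything the function needs arrives in the event payload;
    # no state is kept between invocations.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```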
And it was the adoption of the microservices approach that enabled the move to serverless functions, because you were able to have a microservices architecture without having to manage any of the underlying infrastructure or scaling. So the way to think about serverless architectures is that they're really just opinionated microservices architectures. They combine functions as a service for compute with backend as a service offerings for roles like authentication, databases and caching (however you'd like to say it). In many ways, adopting the serverless approach changed the focus of application development from being infrastructure-centric to being code-centric and driven by events.

So we've mentioned functions as a service, or FaaS, but what does that look like for a developer? As a developer writing code that requires any backend, you're asking a bunch of questions: how can I make sure that my servers respond to client requests with low latency, regardless of where the client is? Is there the possibility of human error in my code deployment process? Can the servers running my software handle rapid increases in request volume without wasting money or falling apart? And how much time and energy am I prepared to spend monitoring infrastructure? The thing about developing for serverless is that you don't need to worry about most of this, because you've got a serverless architecture. What you do need to worry about is how you handle redundancy: by design the platform will handle it, but you need to make sure your code can cope. What this means in practice is that your application logic, for the most part, needs to be stateless, because if your code is running on a platform with multiple edge locations, you need to make sure that if traffic is switched to another location, or you deploy a new version of the code, the experience is consistent for everyone. If you're storing state in a function, you're going to have issues. You also don't need to worry about deploying updates to each location, because serverless orchestration tools, or cloud platforms if you're going down that path, will handle automatic deployment for you, with no potential for human error in the deployment step; your code itself is a different story. And you don't need to worry about autoscaling, because serverless functions are by design able to scale infrastructure capacity up and down to meet demand without manual management.
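Before we get to the drawbacks, here's a contrived sketch of the statelessness trap I just mentioned. This is hypothetical code, not from any real project:

```python
# Anti-pattern: module-level state. Each platform instance gets its own
# copy of this counter, and instances come and go, so the count is
# meaningless across invocations.
request_count = 0

def handle(event, context):
    global request_count
    request_count += 1  # unreliable: resets on cold start, differs per instance
    # Better: keep state in an external store (a database, cache, etc.)
    # and let the function itself stay stateless.
    return {"statusCode": 200, "body": f"seen {request_count} requests (maybe)"}
```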
So we've covered some of the benefits, but what about the drawbacks? A big one: it's difficult to understand what's happening, because you don't get the same insight into how your functions are running, particularly when we're talking about cloud vendors. And because serverless really embraces the concept of event-based architectures, some of the tooling for observability is relatively immature; it can be difficult to do things as basic as stack traces. There are, of course, third-party monitoring tools that do a pretty good job, but you're getting into vendor dependency there, so it's something that needs to be planned out in advance, to work out how comfortable you are with that as a core part of your architecture. And they may never get to the same level of detail, because functions are hard to debug due to their stateless nature, so you're often limited to logs. The other aspect is related to this: because the team developing and deploying is using a cloud vendor, or isn't directly responsible for the server, you lose control over runtime updates and deprecations, which can potentially impact your development timelines quite significantly. And if you're coupling your approach to a particular database, you need to consider how easy it is to migrate, or you're likely to experience lock-in.

You might have heard me say earlier that serverless is not just technology; it's also a mindset, and some take quite a structured and strict view of this. A few years ago David Potes and Ajay Nair from AWS published their serverless compute manifesto, and I'm going to quote them here. Essentially: serverless means functions are the unit of deployment and scaling. There are no machines, VMs or containers visible in the programming model. Permanent storage lives elsewhere. They scale per request, and users cannot over- or under-provision capacity. You should never pay for idle, so no cold servers or containers and the associated costs. They're implicitly fault tolerant, because functions can run anywhere, and that goes back to the redundancy I just mentioned. It's about bringing your own code. And metrics and logging are a universal right. You may feel that these are appropriate or you might disagree with them, but what I'd like to get across is that serverless isn't a lazy approach. It really does require consideration one way or the other.

There are four aspects that make up serverless applications. First, near-zero administration, and I say near because in the vendor world it is almost zero, but in the open source world you're likely configuring your own environments and your own Kubernetes clusters, so you may be involved in that. However, unlike abstractions like containers or VMs, which still require configuration, a serverless approach takes a lot of that administration out of the hands of developers; when it comes to deployment, you don't need to provision anything beforehand or do anything afterwards. Then there's pay per execution. You may think that if you're running the infrastructure outside of a public cloud vendor this doesn't really apply, but that's not true: when you're building a serverless application you're always thinking in terms of efficiencies in cost or time, because you don't want to be paying for resources you're not using. You wouldn't be taking a serverless approach if you did. Then the function is the unit of deployment. We've mentioned this many, many times, but it's core. We know that serverless architectures are a form of microservices architecture: they're composed of small, independent bits of code that work together but are loosely coupled, with the advantage that the components of the system are contained and can be developed and deployed independently. And finally, they're event driven. This is a really underappreciated part of serverless architecture, because serverless applications are stateless: they need something, anything, an event, to trigger their actions. In triggering the function, the event passes along the information the function needs to perform its task; if there were no events to trigger the function, it just wouldn't do anything.
So what are some of the use cases where a serverless approach really shines? The first might be auto-scaling web applications or APIs, where you don't need to worry about traffic spikes because the backend will scale automatically with demand. For example, maybe it's a scoring system for a mobile application or a game, or maybe it's an API to get the latest stock count for a shopping website. Another might be event streaming. Maybe you've got a lot of data coming in from IoT sensors and traffic patterns are unpredictable, or logs from something that you need to be able to process, so you set up a serverless pipeline to feed analytics systems or update data stores. Or maybe you need to dynamically resize images to create thumbnails, transcode video for different target devices for streaming, or perform image recognition on number plates, like one of my colleagues did for fun. Serverless is a really great option here. Another use case for serverless is those situations where you need to integrate with third parties. It might be authentication, payment processing or sending SMS notifications. The way you do this serverlessly is by subscribing to events and then using them as triggers. For example, one use case I've been involved with was sending receipts to customers after they'd made a payment with Stripe, so performing actions in response to events using functions is perfect. Or maybe you've got a situation where you need to split your architecture across a hybrid environment: part of it's on-prem, but you need scalability for a particular task, so you put that part in the public cloud. By decoupling the components, you're able to scale just the parts that need it. That would be complicated, and you'd run into issues, but it's possible; it's definitely not for beginners. As another use case, serverless can be part of a continuous integration and continuous deployment pipeline, which of course allows you to incrementally improve your code and ship more frequently. Taking a serverless approach here could involve code check-ins triggering builds and automatic deployment, or you could even have PRs triggering automated testing. So those are just a few examples of the types of use cases where serverless approaches can be really beneficial, and I'm sure you're already thinking of places where this could come into play.

Before I talk about platforms and orchestrators, I'd like to talk about a framework that gets mentioned a lot. Confusingly, it's called the Serverless Framework, but that's just a naming thing; serverless computing is not dependent on it, although you will come across it a lot. The Serverless Framework isn't a platform and it doesn't run any functions. It's an open source software development kit for serverless, providing an abstraction and packaging mechanism to deploy your functions to a range of providers or platforms such as Lambda, Google Cloud Functions, Azure Functions, Apache OpenWhisk, OpenFaaS, Kubeless or the Fn Project. For those that want to deploy to public cloud providers, it provides a cloud-agnostic approach that makes it easier to deploy cross-cloud, providing code portability essentially.
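To give you a feel for it, a minimal Serverless Framework configuration might look something like this. Treat it as a sketch only: the provider section, runtime names and event syntax depend on your target platform and framework version.

```yaml
# serverless.yml: a minimal, hypothetical example targeting AWS Lambda.
# Swapping out the provider section is what gives you portability.
service: hello-service

provider:
  name: aws
  runtime: python3.9

functions:
  hello:
    handler: handler.handle    # module.function to invoke
    events:
      - httpApi: 'GET /hello'  # an HTTP event triggers the function
```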
When we look at it from the perspective of open source serverless frameworks, for some of them it means we can try one out, and if we don't like it we can deploy to something else without having to change any of our code. So that's pretty cool. So that's the Serverless Framework, with a capital S; just think of it as the SDK here, if you choose to use it. Now, I really won't spend much time on the cloud vendor offerings, because we're talking about open source approaches today, but just so it's clear what the comparative offerings are, I'll quickly list them. On AWS there's Lambda as the functions as a service offering; it's been around since 2015, so it's been used pretty widely in production environments. There's Microsoft Azure Functions, which has been around since 2016, Google Cloud Functions, which has been around since 2017, and IBM Cloud Functions, which is IBM's serverless compute offering; it's based upon Apache OpenWhisk, which we'll touch on in just a moment, and it launched in 2016.

So, woohoo, we're at the open source options. Now that we've covered what serverless is, where it came from, and the cloud vendor implementations, we can get into the exciting part: the open source frameworks. As a starting point, let's look at the serverless landscape from the Cloud Native Computing Foundation. This is a great document; you'll find the QR code for it there on the slide. It maps out what the landscape looks like as categories: tools, frameworks, security, hosted platforms, and the one we'll focus on, installable platforms. If we drill down into that (and thank you again, Cloud Native Computing Foundation, for creating this), we can see that the majority of these use Kubernetes. The ones we'll look at today are Apache OpenWhisk, Fission, Knative, Kubeless, OpenFaaS and the Fn Project, which isn't listed there but has a major sponsor, so I've included it for a bit of context.

So let's take a look at each of these individually, so that we've got a clearer idea of what they do. The first we'll cover is OpenWhisk. It's a pretty mature serverless framework supported by the Apache Foundation and backed by IBM. OpenWhisk is used as the basis for IBM's Cloud Functions service, as I've just mentioned, and the main committers are IBM employees. There are many underlying components involved in the implementation, which can make it more complicated: it uses CouchDB, Kafka, NGINX, Redis and ZooKeeper. The advantage in using it is that there's a big focus on resiliency and scalability, but it does require knowledge of those underlying tools, which you or your team just may not have. Another drawback is that there is duplication of features that are already in orchestrators like Kubernetes, such as autoscaling. Ultimately, functions are bundled into Docker containers that run alongside the framework. OpenWhisk can be installed using a Helm chart, but it does require some manual configuration. You can deploy your applications using the CLI, or even using the Serverless Framework that we mentioned. When it comes to monitoring, Prometheus metrics can be exported. So the way to think about OpenWhisk is that it's essentially the Big Blue of the offerings: the enterprise end of town.
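For a taste of the developer experience, creating and invoking an OpenWhisk action from the CLI looks roughly like this. This is a hedged sketch from memory; check the OpenWhisk docs for the current syntax:

```bash
# Hypothetical OpenWhisk CLI session: create, invoke and list an action.
# Assumes wsk is installed and configured with an API host and auth key.
wsk action create hello hello.py   # package a Python file as an action
wsk action invoke hello --result \
    --param name world             # trigger it and print just the result
wsk action list                    # confirm the action is registered
```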
The next one we'll look at is OpenFaaS, however you'd like to say it; excuse my Australian accent. OpenFaaS is an easy-to-use serverless framework. It's on par with, or even more popular than, OpenWhisk, and commits are made on an individual basis. My understanding is that VMware employs a team to work on OpenFaaS full time, along with individual contributors. The founder, Alex Ellis, is heavily involved, and he runs a business supporting it that uses the same name. The architecture of OpenFaaS is reasonably simple. The API gateway can be invoked synchronously or asynchronously, via Kafka, SNS, CloudEvents, cron or other triggers. Asynchronous invocations are handled by NATS Streaming, and autoscaling is performed using Prometheus and Alertmanager by default, so it's got a lot of the good stuff there; this can be changed to use the Horizontal Pod Autoscaler, or HPA, when using Kubernetes. OpenFaaS has a supported Kubernetes installer, available via Helm or kubectl, including an operator that allows the use of custom resource definitions, or CRDs. Applications can be deployed using the CLI, or using the Serverless Framework, though that's not considered the desirable route. An official function store is also provided, which is a curated list of functions that you can deploy in one click to your existing OpenFaaS API gateway using the web UI, without needing the CLI. Like OpenWhisk, Prometheus metrics are available and are exported without requiring much in the way of configuration. So that's OpenFaaS.

Next, we'll look at Kubeless. Kubeless is a Kubernetes-native serverless framework, and it's supported by Bitnami. It works by adding the idea of a function as a custom resource definition in Kubernetes. It then runs an in-cluster controller that watches these custom resources and launches runtimes on demand. The controller dynamically injects the function's code into the runtimes and makes them available over HTTP or via a pub/sub mechanism. One of the advantages here is that it allows you to take advantage of all the Kubernetes primitives. In short, this means it turns Kubernetes into essentially a function machine, with no add-on complexities like a messaging bus, which you find in the other frameworks. You can manage functions like typical Kubernetes objects, which means all the usual Kubernetes tooling, Helm and so on, just works. Interaction is performed using standard kubectl, so there's no need for extra tooling, and it has Serverless Framework support built in. The general consensus, though, seems to be that Kubeless isn't widely used enough in production for it to be relied upon at this point in time. But some people are quite bullish on it and feel that it will be a big player quite soon, and that's why we've included it.
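For a flavour of that kubectl-adjacent workflow, a Kubeless session looks something like this. This is a hedged sketch; runtime names and flags vary between versions:

```bash
# Hypothetical Kubeless session: deploy a Python function and call it.
# Assumes Kubeless is installed in the cluster and hello.py defines
# a hello(event, context) handler.
kubeless function deploy hello \
    --runtime python3.7 \
    --from-file hello.py \
    --handler hello.hello                        # file.function to invoke
kubeless function call hello --data 'Hi there'   # invoke with a payload
kubectl get functions                            # functions are just custom resources
```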
Another option based on Kubernetes is Fission, and we can see the screenshot there, which doesn't tell us a lot. Fission is dependent on Kubernetes features, but it's not entirely integrated, which theoretically means it can take advantage of the things Kubernetes is good at, like autoscaling, but take a different approach to get better performance when needed. Fission maintains a configurable pool of containers, so your functions have very low cold-start latencies, and that's one of the things they really push. It's backed by Platform9 and sponsored by InfraCloud and Source Mesh. It can be installed via Helm. It uses InfluxDB to handle state and provides logging out of the box, and it uses NATS for its message bus and caching, which isn't really done by default in the other frameworks. Fission has a feature called Fission Workflows, which is a workflow-based serverless function composition framework, and that's quite a mouthful. Unfortunately, it's in maintenance mode due to the time constraints of the core Fission team. Fission appears to have releases every month or so, but those time constraints are something you probably want to bear in mind, because they indicate the team may not be able to get things out as quickly as they might like.

Another option is the Fn Project. It may be pronounced "function", I'm not sure, but I'm going to say Fn so things don't get any more confusing. The Fn Project is a container-native open source serverless platform. Its primary contributors are from Oracle, and that's why I've included it here. The main workflow uses the Fn CLI, but the base of it is Docker containers. The documentation isn't great, but installation can be done with Helm. The most important difference with Fn is the way in which you work: Fn is focused on being easy to use, but some feel it does this at the cost of being quite opinionated. It provides hot functions, which are pretty standard fare across the other frameworks, and streaming functions, which are a bit more unique. The Fn platform has function development kits, or FDKs, which are sets of helper libraries that handle the system internals. They're available for Java, Node, Python, Go, Ruby, all the usual. You can also use the Serverless Framework with it.

Finally, what about Knative? I've left it until last because it's a little different from the other frameworks we've covered. It's a set of components that, to quote Google's announcement here, focuses on the "common but challenging parts of running apps, such as orchestrating source-to-container builds, routing and managing traffic during deployment, auto-scaling workloads and binding services to event ecosystems". So Knative is really about providing a solid and fast foundation for functions, and it may end up forming the basis for many of the other frameworks here that rely upon Kubernetes. Developing in Knative requires almost exactly the same set of steps as any other Kubernetes application would take, and there's a stream of thought out there that this isn't necessarily serverless if a developer has to manage the Dockerfile, build against Docker locally or deal with base images. But the interesting part of Knative is the serving component, and this is where it really does become more serverless. Here, Knative takes a different approach to the other frameworks: rather than creating a function as a file and deploying it through a CLI command, Knative makes any service available as a function. It also allows a service to scale to zero after a configured period of time, meaning the service stops running, so there are no CPU cycles or disk activity during idle time until it's called again. This essentially gives you de facto serverless functions: you can create a RESTful service that handles, say, four routes, and it will behave as a serverless function. The other aspect of serverless is how you trigger your functions, and eventing in Knative allows you to fire off services by using events. What this means is that you can put events into a queue and you get an application with an event-driven architecture. It uses services including gRPC and the Istio service mesh to make it easy to create networks of deployed services, with load balancing, service-to-service authentication, monitoring and all that jazz.
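To show how service-centric Knative is, a minimal Knative Service definition looks something like this. It's a sketch based on the standard Knative samples, so treat the image and values as placeholders:

```yaml
# A minimal Knative Service (sketch). Knative scales the underlying
# deployment with traffic, down to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
          env:
            - name: TARGET
              value: "serverless 101"
```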
The diagram on the slide really just broadly shows how those parts fit together. Like many of the other frameworks, you can use the Serverless Framework to streamline configuration and deployment here. So that's it for the theory; I hope you're still hanging in there. Let's get into a simple practical example. What we're going to use for this is OpenFaaS, and we're going to do it with Docker Swarm, which you don't want to use in production; it is being deprecated. You would normally use Kubernetes, but setting up Kubernetes doesn't really work within a ten-minute demo, so we're going to use Docker Swarm, which is just a great, easy way to get into it. To make sure nothing went wrong that would push us over time, I've prerecorded this part, but I'll be doing the instruction and the narration over the top of the prerecorded video. So it's just the video; I'm still live with you. If you don't want to use your local environment, you can use the Play with Docker site, play-with-docker.com, which allows you to log in with your Docker credentials and use a lab environment. I found it quite slow, so I haven't used it for this, but it's worth checking out.

The first thing we're going to do is install the OpenFaaS CLI, which requires me to enter my password there; that's what I've done, and we've got the CLI installed. I then clear my screen, run docker login and provide my credentials; fortunately it doesn't show my password. Good. Then I initialize the Docker swarm and clone the OpenFaaS repo. So we've installed the CLI; now we're installing the actual framework. You can see the folder that's been cloned, and what I'm going to do is clear my screen, list the directory contents, and then run the deploy stack script from that folder to get OpenFaaS going. And there we go: you'll see that it's created some credentials for us, and it's also given us an echo command to set up the CLI with those credentials. I copy and paste that and run it, so the credentials are saved as part of the CLI.
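If you'd like to follow along later, the setup steps look roughly like this. This is a sketch from memory; the repo layout and script names may have changed since this was recorded:

```bash
# Hypothetical recap of the OpenFaaS-on-Swarm setup shown in the demo.
curl -sSL https://cli.openfaas.com | sudo sh   # install faas-cli (needs the password prompt)
docker login                                   # authenticate to Docker Hub
docker swarm init                              # single-node swarm
git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh                              # deploys OpenFaaS and prints credentials
# The script echoes a command like the following to store the credentials:
echo -n "<password>" | faas-cli login --username admin --password-stdin
```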
Then, very quickly after that, I jump over to a browser, go to the localhost address for the GUI, and insert the credentials it provided as part of running that deploy script. If you can't work out the password, just go back, copy it and paste it in, like I had to do. This is the OpenFaaS GUI, and let's add a function from the store I mentioned earlier. This one's called figlet, and it converts text into ASCII art. Once we've deployed it, we have to wait for it to be ready. We click on it, we can see that the status is currently not ready, but very quickly it fires up and it's ready to go. Once it's ready, I type some text down here, invoke the function, and you'll see it converts the text to ASCII art. Then we type something else and invoke it again, just to make sure it's all working, and let's see how we go. Yep, great. Now we're going to try another function. We'll use the GUI again, deploy a new function from the store, and pick sentiment analysis this time, which determines the sentiment of a given sentence or piece of text. Again, it's not ready at first, it switches over to ready, and then we put our text in the request body. We can see that it returns a polarity of 0.8, so it's quite positive. Then we change the text to something unhappy, to check that it's actually working, and the polarity is -0.4, so yep, it's definitely working. Now we delete that function using the GUI, up at the top there, because we don't need it anymore.

Then we jump back to the CLI, and we use the OpenFaaS CLI to list the currently available functions, so we can see figlet's been invoked twice. Very quickly there, I installed some additional functions; you don't need to worry about those, but we're going to invoke one of them, the markdown function, which converts Markdown to HTML. I'm sorry this happens very quickly, but you can see that we're adding and invoking functions using the CLI as well as the GUI. There we can see it's done its job: yep, it's converted the text to HTML. Then we run the list command again, and we can see markdown is in the list now, invoked once. We run it again with another example, just to see the invocation count increase, and then list again to confirm it has. Now we remove that function; we used the GUI to remove one before, and this time we remove it through the CLI, then list again to make sure it's gone. And it's gone.

So far we've used prebuilt functions from the store; now we're going to create our own. We make a directory called serverless-demo, go into it, and pull down a whole set of language templates. You can see the list there that I'm highlighting: Python, Node, Java and so on. We're going to use Python 3, so we run faas-cli new, specify the language python3, give our function a name (here we'll call it serverless-hi), and give it a prefix. The prefix is the Docker username from earlier, and the reason it needs this is that when it builds the function, the prefix is an essential part of being able to push the image to Docker. I speed up this next part, but what this does is create a folder called serverless-hi and a YAML file called serverless-hi, and if we run cat on that YAML file, we can see the contents: it includes the handler and the image name with my Docker username prefix. Then we can see the files it's placed in the directory, the handler and the requirements file, and we're going to change the Python handler so that it adds some text and returns the input text. This is probably the most time-consuming part of everything we've done so far, just me slowly typing the text that needs to be inserted; it's just some Python 3.
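The handler ends up looking something like this. It's a sketch; the template faas-cli generates may differ slightly between versions:

```python
# handler.py: a hypothetical version of the demo's serverless-hi function.
# faas-cli generates the module; we only fill in handle().
def handle(req):
    # req is the raw request body passed in by OpenFaaS
    return f"Hi from serverless! You said: {req}"
```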
We save and exit, and then we run the faas-cli up command, which will build and deploy it. So we clear the screen and run the up command, and I skip over the delay here, because in practice it took about five minutes, and you don't need to be sitting around watching that load. And it's done: we can see that it's deployed the function and given us a URL for it. Then we're going to invoke the function and put some text in it, so faas-cli invoke serverless-hi, we type some text in, and this happens very, very quickly, but it returns a result. Then we jump straight to the GUI, do the same thing there, and we can see that it returns the correct output of that function. It's a boring function, it doesn't do much, but we can see that it works. Then we jump back to the CLI and check the stats, just as we've been doing for the other functions, and we can see that, yep, it's been invoked twice. And now, as we've done before, we remove the function: faas-cli remove serverless-hi, and that's it.

That was very, very fast paced, so I'd highly recommend checking out the OpenFaaS workshop that the OpenFaaS team have created. I was really impressed with it; you'll find the URL at the top of the slide. What we've done today is a few isolated parts from a selection of those workshops, so be sure to check it out. It's really great. So that's it. I hope that's given you an understanding of serverless in general and a starting point for the open source options, and that you've been able to see just how easy it is to get started, particularly with OpenFaaS. So thanks for coming along. I'll stop sharing my screen now so that I can see the rest of it. Okay, just checking through the questions. I can't see any questions there at the moment, but if you do have questions, or if there are questions and I can't see them, maybe the engineers can tell me. Thanks for coming along.