in the DevOps track. Our next speaker is Michael Bright. He is a developer evangelist at Containous and he's going to be talking to us today about serverless computing. So without further ado, Michael. Thank you. Okay. Good afternoon. So I'm going to talk to you about serverless computing. I can't change slides. Okay. So about myself briefly: I'm British, I live in Grenoble in the French Alps, and I recently joined a company called Containous, who are the developers of Traefik. It's a fairly well-known, widely deployed reverse proxy and load balancer which you will see a bit later. I'm also a Docker community lead, I run a Python user group in Grenoble, and I'm crazy about open source and cloud technologies: serverless, unikernels, orchestration, all that stuff. I don't know why my slides aren't advancing. Okay. So just a word about Traefik, which you will see a bit later. As I say, it's a reverse proxy and load balancer. It has an advantage in that it was designed from the beginning for hot reloads of its configuration. It interfaces with various back ends, things like Docker Swarm, Kubernetes and so on. It can pull out information about the services running in these systems, and it automatically configures itself to front-end those systems. It has Let's Encrypt support with automated renewal of certificates, which is a pretty nice feature. As I mentioned, it's very widely deployed, and it can act as a Kubernetes ingress controller, something I demonstrated in the Kubernetes tutorial on Friday. Okay. Anyway, we're here to talk about serverless computing. So what I want to do is give an overview of the serverless space. That means: what is serverless? I guess you generally have an idea about that. Then a very brief review of the different cloud provider offerings, really just to say that they exist. It's also important to see that the Cloud Native Computing Foundation is working on this; they have a working group to tie things together and encourage standards in the space.
Very briefly, open source tools. And then we'll look at various open source platforms which exist, which would allow you to do an on-premises deployment of serverless, for example, or run it on an IaaS cloud provider. And then I'll do a demonstration of OpenFaaS, one of the main platforms, integrated with Traefik. Okay. So what is serverless? Well, first of all, does it mean no more servers? Of course not. Our software still has to run somewhere; there are physical machines. But it's about taking away the responsibility for the maintenance of servers from the developer. The developer now is just involved in writing the business logic, the main functional code. I see it as the ultimate cloud native step: we've moved from bare metal to virtual machines to containers, getting lighter and lighter, and the developer has less and less responsibility for the overall infrastructure. Serverless is the next step on that journey. Serverless also is not just about the functions. It's actually functions as a service plus back end as a service. Functions as a service is really the developer just providing his functions, his business logic. In practice, running in the cloud, it makes sense to use a lot of back-end services via their APIs. They may be APIs running in the same service provider or elsewhere. And so serverless is also about writing glue code to tie those pieces together. This does mean there's a risk of lock-in, but actually no one's forcing you down that route. If you develop for AWS Lambda, which is one of the leaders in this space, quite likely you will choose to use some of their back-end services like DynamoDB or their messaging systems. But you could use other services, or none. You've chosen that lock-in. Okay. Otherwise, a lot of people don't like the term serverless. I do. It's the paradigm, which is both function as a service and back end as a service.
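To make that concrete, here is a minimal sketch of what "just the business logic" can look like, assuming the `handle(req)` convention used by OpenFaaS's Python template (the platform passes the request body in as a string and sends whatever we return back to the caller):

```python
def handle(req):
    """Word-count function: all the developer writes is business logic.

    The serverless platform owns everything else -- packaging, routing,
    scaling, and invoking this entry point with the request body.
    """
    words = req.split()
    return f"{len(words)} words, {len(req)} characters"
```

Everything around this function, from the HTTP plumbing to the container it runs in, is the platform's concern, which is exactly the FaaS half of the serverless paradigm.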
But it's also the name of a company and a tool, which I don't like. Okay. Just another look at it from another angle. So as I say, the developer focuses on his apps, and it's the platform provider who is responsible for provisioning the platforms, auto-scaling them if needed, and generally maintaining them when there are problems like Meltdown and Spectre. It's the platform provider who will be responsible for upgrading the systems, and you might not even see any downtime, given the way these things are load balanced transparently. It's very much a pay-as-you-go platform, in principle at least. With several of the cloud platforms you typically get one million requests a month free, which of course is a huge number if you're just developing. Once you're deploying, then you'll start paying, but it's still not too expensive. Though I say in theory, because in practice you will want to use back-end services, and one of those is something called the API gateway, which can have costs. There's no initial investment: you can just open an account on AWS and start using Lambda. If you're not using the back-end services, then you can just use the free tier. So there's no capital expenditure buying servers, and no operating expenditure for the actual server installation either. You get high availability for free if the cloud provider is doing his job of scaling out as needed, and you can deploy the same functions across regions in the cloud. So in that way you get high availability for free, and you won't be paying more as long as you're not using the back-end services. It allows a short time to market and enables innovation: you can create a new service extremely quickly if all you have to worry about is the code of your functions. And this can scale massively. So the overall architecture is like this: you have a developer who develops his code and loads it up into the platform, be that a cloud provider or not.
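As a back-of-the-envelope illustration of that pay-as-you-go model, here is a rough cost sketch. The prices and free-tier figures are assumptions, roughly matching Lambda's published rates at the time ($0.20 per million requests and about $0.0000167 per GB-second, with one million requests and 400,000 GB-seconds free each month):

```python
# Illustrative pricing assumptions, roughly AWS Lambda's published rates.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # dollars per request
PRICE_PER_GB_SECOND = 0.0000166667     # dollars per GB-second of compute
FREE_REQUESTS = 1_000_000              # free requests per month
FREE_GB_SECONDS = 400_000              # free GB-seconds per month

def monthly_cost(requests, avg_ms, memory_mb):
    """Estimate the monthly bill for a function, ignoring back-end services."""
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# A service handling 5M requests/month at 100 ms each with 128 MB of memory:
print(f"${monthly_cost(5_000_000, 100, 128):.2f}")  # prints "$0.80"
```

Note that a purely development-scale workload (say, half a million requests a month) lands entirely inside the free tier, which is the "no initial investment" point above; the unexpected costs tend to come from the back-end services and the API gateway, which this sketch deliberately leaves out.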
With the functions, nothing's running until events start arriving; it's a completely event-driven architecture. Events could be coming from some of your back-end services, maybe a trigger in your database, or files being uploaded. With Amazon, for example, a user might upload an image file to S3, and you might have a trigger configured to do some processing on that image and maybe store it back in S3. Or you might have messages coming in from different sources, including SMS, why not, via an external provider. And then, of course, web requests coming in over an API gateway. These could be just standard browser requests, or maybe Git webhooks, or Docker Hub, or whatever. So your functions are responding to things happening externally. If we look at the use cases for serverless: obviously event-driven ones, and that could be regular events that represent a real peak, like a monthly payroll run or end-of-day accounting processing. The important thing, where serverless becomes interesting, is where these are real peaks of load. Because there is hardware somewhere, you want to be making efficient usage of that hardware and getting a better utilization rate. So if you're using serverless just for peaks, it's going to be cost effective. If you've got a fairly constant load, well, you should maybe be looking at other technologies like platform as a service or just straight containers. It's applicable to quite a few domains: DevOps, CI/CD, banking, IoT. What's important is that the workload has certain characteristics. So, as I mentioned, some sort of peak processing. Your application has to be latency tolerant; I think we're typically looking at something like 100 milliseconds of response time. And those treatments should be relatively short-lived: Amazon Lambda, for example, I think kills functions after about five minutes. Okay, let's look very briefly at some different cloud providers.
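The S3 upload trigger above can be sketched as an event-driven handler. The event shape below follows AWS's documented S3 notification format, but the processing step itself is elided; a real implementation would use something like boto3 to fetch the object and write the result back:

```python
def handler(event, context):
    """Invoked by the platform each time an S3 upload notification fires."""
    processed = []
    for record in event["Records"]:
        # Each record identifies the bucket and object that triggered us.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Here we'd download the object, process the image (resize,
        # colorize, ...) and store the result back to S3.
        processed.append(f"s3://{bucket}/{key}")
    return processed
```

The key point is that this code never runs until an upload happens, which is what makes the model cheap for peaky workloads.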
There are more, but we can see these are the fairly standard cloud providers. AWS Lambda, I've got another slide on them so I'll talk in more detail, but they were the first in this space. Since then, in the last two years or so, we've seen Microsoft with Azure Functions, Google Cloud and IBM Cloud. Azure is a pretty nice platform with a nice user interface; of course, they understand developer tools, so I think we can expect interesting things from Azure Functions. Google Cloud, I don't find it very clear where Google are going with serverless; I'm not convinced that they're convinced. IBM Cloud Functions is quite interesting. They started an open source project called OpenWhisk, which they then donated to the Apache Software Foundation. So IBM Cloud Functions now uses Apache OpenWhisk, which is something that you too can use: you can deploy it yourself on premises. If you look at AWS Lambda, just as an example, there are a certain number of milestones. They launched it in beta at the end of 2014, and it wasn't until about the end of 2015 that it was stable. But there's been a huge developer uptake since. And they did this because they realized that more and more of the functionality that developers were running on top of Amazon EC2 was tending to be glue code around the back-end services that they have. So suddenly this sort of low-cost function-as-a-service model isn't so low-cost after all: it's actually very interesting for them as a gateway through to the higher-paying back-end services. There are a number of language choices. Typically these platforms are based on container technology, though we don't necessarily know how Lambda is implemented exactly. Some platforms allow you to bring any container to them; others are more limited in language choices, maybe because of the need to have bindings to the back-end APIs as well. And recently there were some new features announced, including Go as a new language, though this is Go as a static binary.
So this is very effective, it's quite performant, but on the other hand you don't have the same integration in the IDE currently. And they also provide some tooling for offline debugging, which is something that was missing before. Okay. And there are potentially huge cost savings with serverless. There are some real examples where services have been implemented at like 5% of the cost. And there have been other examples where the promise looked good, but at the end of the day something like the API gateway, which is billed differently on different platforms, was quite costly. So in this slide, sorry, it's a bit small, but the bottom line in that table is showing that the API gateway is free on IBM Cloud and Azure Functions, but it's paid for on Lambda and Google Cloud. Just to say there are characteristics you need to be aware of that could bring in unexpected costs. I mentioned that the Cloud Native Computing Foundation has created a working group, so they're creating a lot of materials around this space. They're trying to get to the point where there's common terminology and hopefully common definitions of, say, events and this sort of thing. Tooling, very briefly. Actually, there are hundreds of open source tools, and if you search on GitHub for "awesome serverless", you'll find a very impressive list. There are some particular tools that are worthy of note. Serverless, which is the company serverless.com, provide a tool that interfaces with the various cloud provider platforms. It gives you a framework where you can easily deploy code to the different platforms using the same deployment commands. Maybe the code won't be exactly the same, because you will have to adapt; all these systems are quite heterogeneous, but at least at the deployment level you can have the same processes to roll out across these different platforms. There are others, Apex in particular. There's also Chalice from AWS themselves, which is open source and is specifically Python for Lambda.
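For a flavor of what those common deployment commands work against, here is a sketch of a Serverless Framework descriptor. The service and function names are made up for illustration; the general shape follows the framework's `serverless.yml` format:

```yaml
# serverless.yml -- illustrative sketch, not a complete configuration
service: hello-service

provider:
  name: aws                  # could target another supported provider
  runtime: python3.6

functions:
  hello:
    handler: handler.hello   # module.function containing the code
    events:
      - http:                # expose via the provider's API gateway
          path: hello
          method: get
```

With a file like that in place, `serverless deploy` packages and pushes the function, and the same workflow applies whichever provider block you target, even though the function code itself may need adapting between platforms.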
But if you look at the open source platforms: Lambda is proprietary, as are Azure Functions and Google Cloud Functions; IBM is the only one using an extended version of an open source platform. But there are these open source platforms that you can deploy on-premises or on an IaaS platform yourself. Let me see: Kubeless and Fission, a couple that run on Kubernetes specifically. There's Apache OpenWhisk, originally from IBM. One detail about OpenWhisk is its bring-your-own-container model, as with OpenFaaS, which is quite nice; that allows a lot of innovation, as you will see. There's the Fn project from Oracle, and Nuclio, which is particularly suited to high-speed data stream processing, so they're focusing on the latency issue. But I'll look at just one of those, and I'll demonstrate OpenFaaS. OpenFaaS comes from the Docker community. It's bring your own container, and that's really nice; it means there's been a lot of innovation using this tool. It has a portal, now with a function store integrated into that portal, which is quite nice, and a command-line tool, as most of these platforms have. It also has a flavor that will run on Kubernetes. And it's quite easy to get started; there are a lot of guides available. One example was a colorizing project from Finnian Anderson, who did a blog post on colorizing video with OpenFaaS. It wasn't a real-time thing; it's actually frame-by-frame conversion of images. And I actually have a demo of that, the colorization of images. Okay. So time to have a quick demo. Let me just introduce it very briefly. I'm going to combine OpenFaaS with Traefik. Basically, Traefik allows you to front-end your set of services, in an environment where you have a lot of microservices running within some orchestrator: Docker Swarm, Mesos or Kubernetes. Traefik allows you to route domain names or paths or even ports, or some combination, basically URLs, through to back-end services.
It allows you to control how those appear when you connect to them from the Internet, so you're not exposing your internal infrastructure. As I mentioned earlier, it can also do Let's Encrypt certificate renewal. And it's a load balancer, of course. So for this demo, I've integrated it with OpenFaaS. With OpenFaaS, we'll actually have a set of functions. In this case, I've configured just my localhost to correspond to each of these functions, and Traefik, based on the host name we're querying, can feed through to the appropriate function running in our serverless platform. This is running on Docker Swarm. So I'll go to the command line to show this: basically just how I integrate Traefik itself into the Docker Compose file, and then how I integrate one of the functions with the Traefik-specific characteristics. Okay, this gets tricky with the microphone. So this is the Docker Compose file for OpenFaaS, into which I've integrated Traefik itself; you can see how I've done that. Basically, we're configuring access to the dashboard of Traefik itself, and we connect to the Docker socket to be able to interrogate Docker Swarm and discover the services already running in our platform. We're going to have a web interface, and we're using the functions network; basically, OpenFaaS itself creates a separate network for all the functions. And I'll show the dashboard of Traefik now. Okay. So Traefik provides basically two tabs at the top here. Providers: we have Docker as a provider, and Traefik has automatically discovered the services running in our platform. In fact, we have a front end for each of the functions we have; we can see here base64, colorize, echoit, all of these functions. And then we have a pointer to the back end, which is the actual service we're going to route to. We also have a health tab where we can see request statistics for what's been going on.
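As a sketch of what that per-function integration can look like in the stack file, here is a hedged example using Traefik 1.x's Docker label syntax; the image, host name and network name are assumptions for illustration:

```yaml
  echoit:
    image: functions/alpine:latest   # a simple OpenFaaS function container
    networks:
      - functions                    # the separate network OpenFaaS creates
    deploy:
      labels:
        # Tell Traefik which network and port to reach the function on,
        # and which host name should route to it.
        - "traefik.docker.network=functions"
        - "traefik.port=8080"
        - "traefik.frontend.rule=Host:echoit.fn.localhost"
```

Because Traefik watches the Docker socket, deploying a service carrying labels like these is all it takes for the corresponding front end and back end to appear in the dashboard; no Traefik restart is needed, thanks to the hot configuration reload mentioned earlier.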
Okay. So this is the portal of OpenFaaS, with some fairly simple functions included to start with. If I just do a hello world, if I can type with my left hand, then we have a simple word count: zero lines, I didn't do a carriage return, two words, 11 characters. And OpenFaaS has a "deploy new function" option where we can manually define a function, or we can take it from the store. One of those is this colorization application. I'm not going to install that one; I've actually already installed it here. I'll just show you now accessing the platform from the command line. Oh dear, getting out of vi left-handed. Okay. So let me just show an example. One of the functions is markdown. I've scripted it just because I want to send multi-line input into the curl command I'm going to use; it's easier that way. So I'm just sending some simple markdown via a curl command to my server, and we can see that it's generated some HTML. Okay. So just look at the curl command itself. The --user demo option is the basic authentication that Traefik performs; we could configure that to go out to some back-end authentication server, but here it's just basic auth. We're doing a POST request against the function. Remember, on the initial slide I showed that Traefik is translating from some domain name; it's actually mapped onto localhost in my /etc/hosts, but Traefik will detect that host name and then forward the request to the back-end markdown function. And the last option is just passing in the POST data, the markdown that we saw there. Okay. So I'm just going to do a display of an image: a black and white image, which I'm pulling from a local web server on my machine. And I'm going to do a curl again with the user login, posting to colorize.fn.fast, sending that same URL of an image. So that's now going to be processed by OpenFaaS. I've redirected the standard output to a file. Okay.
And so we now have a colorized version of that image. I didn't say much about that application, but that was basically a container someone has created which is actually using machine learning inside it. So I think it's quite powerful: you can see several things put together here. We can easily have a machine learning function available just like that, and we can also route through to services in our infrastructure through the use of Traefik. If I come back to my presentation. Okay, so serverless is still a young technology and we see it evolving rapidly. There's a lot of uptake with Lambda, but there are other platforms that are very interesting, including open source platforms. I think we'll see a lot of evolution in capabilities, performance and pricing models. There's a lot of promise for many workloads, though there are constraints on latency requirements, this sort of thing. There are real cost savings, provided the services have some sort of unpredictable peak traffic. All the major cloud providers are investing in this technology. There are many deployment choices, and so we need an open cloud; today, there's not a lot of homogeneity across the platforms for the moment. So thank you. If you have any questions, I'll be happy to take them. So we don't have any time for questions right now, but I'm sure Michael will be around and willing to field anyone's questions. If the next speaker, Hector Martinez, is here, please come up and we'll be starting as soon as he shows up. Thanks.