Welcome to my talk, FaaS and Furious: zero to serverless in 60 seconds, anywhere. I want to talk to you about what serverless is, what you can use it for, and how you can deploy it in 60 seconds. Then we'll move on to some cool demos and talk about how some of our users are going on a journey to production with it too.

Now, when we talk about serverless, what we're really looking at is a new architectural pattern, a way of designing systems. We're all familiar with the monolith. We used to build these three-tier applications. They had a database, they were very slow to test, very hard to deploy, and they did far too many things, so we broke them down into microservices. Those microservices had more constrained responsibilities, but at the same time they're actually quite hard to manage, and you see things like Istio and OpenTracing that have come along to bridge the gap. Now, functions are the next step in the evolution, and a function is even more specialised. It tends to do one thing, and one thing really well. It's a discrete, reusable piece of code. It can be deployed, and you can pretty much forget about it. So when we're looking at functions, it's that next step in the evolution. Now, they do not replace your existing microservices and monoliths. In fact, they work best as an integration, as a kind of connective tissue to bring them together. You'll see how you can build an event-driven architecture quite easily. An example might be an event from a payment gateway that needs to go into a larger part of your system; I'm actually going to show you a real-life example of that later on.

So, this is the cloud-native landscape, and we're all quite familiar with it; we've been seeing it every morning and evening. OpenFaaS is actually on here now. I think somebody's head's in the way there.
OpenFaaS is on there, and I'm really proud to say so. This is kind of the chart of all the technology and upcoming projects, and what's hard is knowing how to navigate it. You'll find many of these projects work together to create new value; we make use of Prometheus and some other technologies there, like containerd, to add value. I'm just going to focus on OpenFaaS today.

First, I want to talk a little bit about something from the industry, from a cloud provider: AWS. I have an Echo Dot up on the stage. This is a voice assistant, and it's a great example of serverless applied to an IoT device. You speak to it, and your voice is uploaded to the cloud. Probably a monolith will parse that, and then it will invoke a function, a short-lived function, and that's it. We'll see it makes use of a weather service, probably Yahoo! Weather, another monolith. The way this works is you upload a zip file; you maybe install dependencies on your local laptop, and then that's it. You upload it, and Amazon will look after the billing for you. But there are some restrictions here, and that model of editing code in a web editor, or throwing a zip file over the wall, falls short of what we've come to understand as cloud native. Now, you can actually replace Lambda with OpenFaaS and Docker, and then you get that whole ecosystem we saw on the landscape, alongside your existing applications. But why would that matter? Well, you should be able to write functions in whatever programming language suits your business and your team. You should be able to run them for however long is necessary, and that means more than five minutes. And you should be able to run them on whatever hardware you have available. When you come to use confidential data, it's often really important to run the code where the data already lives. With OpenFaaS, you can do that today.
Now, let's take a quick look at a skill that I built out. The Wi-Fi is actually being a bit patchy here, so let's keep our fingers crossed. At ADP, I have to go through around 20 clicks to book a day off; maybe it's a similar story for you, and I've built a skill to make that easier. "Alexa, ask self-service how many days I've got off." "You have nine days remaining." Nine days. "Alexa, ask self-service to book a day off." "There was an error." There was an error. "Alexa, ask self-service to book a vacation." "Sure. Which day would you like?" "Friday." "I will book off Friday right now." "Alexa, ask HR how many days I've got." We'll come back to that later. What you can see now is a new interface, and we're able to completely swap out that code and build a conversational UI.

Now, you may be thinking: what are the use cases for serverless? This is a very common question; it comes up a lot. When you look at OpenFaaS, you can actually make any binary, even a C++ binary from 25 years ago, into a serverless function without changing it in any way. Any binary for Windows or Linux will work that way, but I want to talk about some specific use cases and then go into them. So: machine learning. You may have seen Colorisebot in the closing keynote with Weave yesterday; we're able to take a black-and-white photo and turn it to colour just by packaging it up as a function. Batch jobs: many of us will have our payroll processed at the end of the month, and that will often be done with a batch job. If you have a function with OpenFaaS, you get batching for free, along with asynchronous invocations; you don't have to build that yourself. Image and video conversion is a very popular workload, and you'll find things like ImageMagick are still widely used in the industry today. Well, you can take that binary, without writing any code, and package it as a function.
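Packaging a binary as a function relies on a very simple mechanism: pipe the request body into the process's standard input, and read the response from standard output. Here is a minimal Python sketch of that idea; the choice of `tr` is just an illustration, and ImageMagick or a decades-old C++ binary would be driven the same way.

```python
import subprocess

def run_binary(cmd, request_body: str) -> str:
    """Pipe the request body into an arbitrary binary and
    return whatever it writes to standard output."""
    result = subprocess.run(
        cmd,
        input=request_body.encode(),
        stdout=subprocess.PIPE,
        check=True,
    )
    return result.stdout.decode()

# Any Linux binary works the same way -- here, upper-casing text with `tr`:
print(run_binary(["tr", "a-z", "A-Z"], "hello, faas"))  # prints HELLO, FAAS
```

No code changes to the wrapped program are needed; anything that reads stdin and writes stdout qualifies.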
This is a perfect example of a mobile back-end, and then there are chatbots, and I'm going to show you a really exciting collaboration with the Moby project around that. Now, any function you create directly gets an HTTP API, and it gets metrics through Prometheus, so you have observability.

Here's a little bit about the story so far. I started this project by looking at how to build skills for the Alexa, and I found the experience somewhat lacking. Uploading text, uploading a zip file, installing dependencies locally: that is not the container-native way. I was a Docker Captain, and I thought, let's try to do something. I wrote a POC, it was quite popular, and I figured out that I hadn't done everything right. I heard about the DockerCon Cool Hacks contest, which was in Austin, and you needed to push Docker beyond what it had been designed to do. I ended up pivoting the whole project and rewriting it in Go, and they accepted it, so this is a bit of a homecoming: I was actually here in the keynote in May, and it's great to be back. Since May, I've carried on going, and it became the top trending project on GitHub overall for about a day. It's gained over 8,000 stars, and best cloud computing software from InfoWorld, along with Docker and Kubernetes. I'm very humbled by that. There are people taking this to production, and then we have the Kubernetes support, which came in a few months ago, along with Nomad, DC/OS, and Cattle. Now, this project would not have this much traction, it would never have grown as big as it has, unless we had a community behind it, and that's a key difference with this project. We have over 65 contributors, and I've got some of the highlighted ones here, and top influencers. Up on the left here, on my left, we have Stefan from Weaveworks. He came along to do a cool demo as a dev advocate, but he actually liked the project so much that he stayed, and he's contributing.
Finnian, with the sunglasses there, you may have seen in the keynote; he's 17, and he helped me build Colorisebot. I have a story for each one of these guys, including Burton, who's a local here, and we actually got to meet face-to-face. The community is a very important part of this project. We've got over 1,400 commits and a lot of momentum right now. Just to go into that a little bit more: we built this Twitter bot, myself, Finnian, and Ollie over there, and this is us at DockerCon a couple of months ago, staying up late, adding error handling to the bot, making sure it was productionised and ready to go, and this is an example of what it can do. That's just been a great experience. I met Finnian in February this year. He'd seen one of my Raspberry Pi clusters, and I advocated for him with Docker and managed to get his expenses paid so that he could come and present at the community theatre, and then he came to Copenhagen and spoke again. So I think that's OpenFaaS; personally, I just love to invest in people and see them grow.

So let's talk about the stack. This really comes in one flavour: cloud native. The API gateway is written in Golang. That is the central point where all of your functions are defined and all your traffic is routed; it's like a load balancer. There's a RESTful API, and every function you deploy will get a route. Prometheus is baked into that, which means you don't have to do it yourself. As each of your functions is called, we're tracking statistics on that, and we can then use them to call the Swarm or Kubernetes API natively to scale horizontally for demand. Docker's image format, I think we'll all agree, is the way we should be building and shipping software. We don't want to go back to dropping JavaScript files into a black box.
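Since every deployed function gets its own route on the gateway, invoking one needs nothing more than the standard library. This is a sketch: the `/function/<name>` path follows the OpenFaaS convention, but the gateway address and function name below are assumptions for illustration.

```python
import urllib.request

def function_url(gateway: str, name: str) -> str:
    """Each deployed function is routed at /function/<name> on the gateway."""
    return f"{gateway.rstrip('/')}/function/{name}"

def invoke(gateway: str, name: str, body: bytes) -> bytes:
    """POST a request body to a function and return the synchronous response."""
    req = urllib.request.Request(function_url(gateway, name), data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# e.g. invoke("http://127.0.0.1:8080", "figlet", b"hello") against a local gateway
print(function_url("http://127.0.0.1:8080", "figlet"))
# prints http://127.0.0.1:8080/function/figlet
```

Because it's just HTTP, any client, curl, a mobile app, or another function, can call a function the same way.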
We've learned from that. The function watchdog is a component we stick inside the container as a shim, a sidecar if you like, in a similar way to Istio, which allows it to become serverless; your application doesn't actually have to know anything about it. This is the function watchdog. It works a little bit like CGI. It becomes the entry point for your container, and as we get a web request, we'll fork a process and push the whole body in through standard in, so it's something every application can work with, and read the response back through standard out. In this example I'm saying python3 and main.py; normally you would have had to build an entire Flask application and wire up your own Prometheus metrics. You get it all for free, and that allows it to do some really smart stuff.

Now, OpenFaaS is more than just an open-source project, another FaaS framework. We have values that we use to drive where the project should go. So, this is developer-first. It should be unsurprising: we do not want magic and widgets. People come to this project and say, "We tried the alternatives, and we just didn't know how to debug it when things went wrong." We have a CLI and a UI, and they're really treasured by the community. Most of our contributions have come around the CLI, because it's where the developers come first; it's the first touch point for them. Operational simplicity means it's very easy to deploy in 60 seconds. It's portable: it can run on ARM, it can run on the cloud. We're even looking at technologies like AKS and ACI from Microsoft to let you run OpenFaaS very easily in a managed Kubernetes environment. And because it's an open platform, the community can come around it and add the things they need, which has led to developer love.
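Coming back to the watchdog for a moment: the real component is a small Go binary, but the CGI-like contract it implements can be sketched in a few lines of Python. Accept an HTTP request, fork the function's process, pipe the body through standard in, and return standard out as the response. The `fprocess` value below is an assumption for illustration; any stdin/stdout program works.

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

FPROCESS = ["python3", "main.py"]  # assumed function process, as set via fprocess

def run_fprocess(fprocess, body: bytes) -> bytes:
    """Fork one process per request; its stdout becomes the HTTP response."""
    return subprocess.run(fprocess, input=body, stdout=subprocess.PIPE).stdout

class Watchdog(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        out = run_fprocess(FPROCESS, self.rfile.read(length))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(out)

# As the container's entry point the watchdog would serve on port 8080:
#   HTTPServer(("", 8080), Watchdog).serve_forever()
```

The application being wrapped never sees HTTP at all, which is why existing binaries can become functions unchanged.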
Burton, one of the guys who is local here, has actually been working in his own time on a POC, because he's so into the project and functions, and he's presented that back to the architects within his team, and they're really excited about moving forward with functions. That reminds me of the story of Docker, where I did the same thing at ADP.

Now, this is the rough architectural diagram, if you like, from 10,000 feet: we have the CLI, the UI, or any web service coming into the gateway, and then you have your functions on the other side. The key difference between OpenFaaS and other projects is that you can actually swap the orchestration framework out, and everything else works exactly the same. That's what enabled me to make a really tight, native integration with Kubernetes in just a few days. We've been running that for a few months now. We have support for RBAC and Helm charts; we use services, deployments, and Kubernetes secrets, so there's nothing surprising. It's a native integration.

So let's look at deploying this. It's going to take 60 seconds, and I'm going to show you that right now. There are two demos that you'll see. The first is a collaboration with the Moby project, where we've built a store that can create you a LinuxKit VM: we'll pick which containers we want, and several functions will then talk to LinuxKit and a database and eventually deploy a live VM to packet.net. The other demo I'm going to show you is how we, as a project, scaled the community and were able to address some flaws in GitHub's permission system. So let's go into that now. Over here I have a terminal, and I'm logged into a cluster, and you can see there are no pods. Now, I just have the YAML here, and I'm going to run the YAML in using, I think we've got the consensus it's now called "kube CTL", I heard it from Kelsey, and if we take a look at these pods with kubectl get pods, we're good to go.
Now I can show you the OpenFaaS UI, and we see we've got no functions yet. This is brand new: we actually have a function store, and the community is creating it. I can click on a function, this is the inception function, it will tell you what's in a photo, and hit deploy. And this one I really like, again from Weave: this is an SSL checker. So, because I'm not sure if I have automation on my certificate, I'm typing in the request body at the bottom, I hit invoke, and we've now got a response back from the cert checker: it's telling me that I've got until the 4th of January to renew my certificate. Now, you can imagine how you could build a pipeline out of this, where that data is used programmatically and sent to Slack or a Jira ticket or something similar. The inception function is one we've packaged with TensorFlow, and what we can do is take an image, paste the URL here, hit invoke, and we get back a categorisation: this is 48% sea lion. It might not be right the whole time, but what we've been able to do is take something very complex and, in a few lines of code, just package it as a function, and you can imagine how you could then commoditise this.
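The SSL-checker demo boils down to reading the certificate's notAfter field and counting days. Here is a rough stdlib-only sketch of that idea, not the store function's actual code:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_remaining(not_after: str, now: datetime) -> int:
    """Days until expiry, given the 'notAfter' string in the format
    ssl.SSLSocket.getpeercert() returns, e.g. 'Jan  4 12:00:00 2018 GMT'."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    if expiry.tzinfo is None:
        expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - now).days

def check(host: str, port: int = 443) -> int:
    """Fetch the live certificate over TLS and report days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_remaining(cert["notAfter"], datetime.now(timezone.utc))

# e.g. check("example.com") -> days left before the certificate needs renewing
```

Wrapped as a function, the integer result is exactly the kind of programmatic data you could forward to Slack or a ticketing system.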
So let's move on to the demo of the Moby store. Now, Abbey in the front here has worked on this with me. What we're going to do is select a couple of containers, Redis for instance, it's quite a quick one to build, and I've added that to the cart. Now I click on the cart, and I can see that Redis is listed there up at the top. The concept of LinuxKit is that you build multiple containers into an immutable virtual machine, so when you come to upgrade your server, you just replace the entire virtual machine: there are no upgrades, there are no vulnerabilities. I'm going to hit build and deploy, and I believe this takes about 30 seconds, and then over on my Packet project we'll see that pop up.

So let's just go on to the next demo. Now, does anyone recognise this GitHub repository? Do we know what's in there? No? This is Docker's codebase, and the bot that we built for OpenFaaS, to help our contributors manage the code without actually having full write access, is now running on the moby/moby organisation. I can create a new issue here, "test for KubeCon", and because I'm a Docker Captain, I've actually been put in a special file that says I'm allowed to curate these issues, and I can say "Derek add label invalid", and the bot will pick that up via a webhook. He'll then apply the label that you can see there, and then I can do things like "Derek close". He also does smarter stuff for our project, like checking that people have signed off their licences, so he won't allow us to merge a pull request until they've said, "Yes, I agree this can be licensed under MIT", and you can see he's closed the issue there. And let's take a look at Packet: it's actually popped up, we can see root on LinuxKit, and it has an IP address there as well, and that's booting up now on their infrastructure. Now, the demo that we built, the Moby store, and we'll see the cart is empty now, is actually a Node.js application, and we were able to build it very fast by putting together functions
that wrapped LinuxKit and talked to MySQL, and that's what that architectural diagram was about.

Now, one tenet of a serverless system is the ability to scale for demand. This is figlet; it will give you an ASCII logo. What I'm actually going to do is call into this with Apache Bench and try to generate a significant amount of traffic, and what we should see, because our metrics are being tracked by Prometheus, is some data appearing within our graphs. And there we go: the function rate, it's a bit faint, has just shot up here, and the replicas are on the right-hand side. As that traffic continues to be monitored and observed, we'll actually see the function scale up, and that's horizontal pod scaling. We've just gone up to five replicas there, and now we have more capacity, more compute, to process that. When it finishes, it will back off and go down to one replica again. So, you may be wondering: how would I actually write a function myself?
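As a preview of the answer, this is roughly what a function body looks like with the Python template. The exact handler signature has varied between template versions, so treat this as an illustrative sketch: you fill in one function, and the watchdog plus template take care of HTTP, packaging, and metrics.

```python
# handler.py -- the only file you edit with the Python template (illustrative).
import json

def handle(req: str) -> str:
    """Business logic only: take the request body, return the response body."""
    return json.dumps({"you_said": req, "length": len(req)})

# The template's entry point then does roughly:
#   print(handle(sys.stdin.read()))
# which is exactly the stdin/stdout contract the watchdog expects.
print(handle("Friday"))  # prints {"you_said": "Friday", "length": 6}
```

Everything around the handler, the web server, the image build, the metrics, comes from the template and watchdog.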
Well, we have a CLI that makes it really easy for you, and we're not going to put you into any boxes: you can actually use a full Dockerfile and put the watchdog in manually, but we have some templates that make it very easy. This one says Python. We'll then combine that with a best-practices Python template that has an Alpine Linux base, a non-root user, and anything else we think is necessary for your image. We'll combine them and build the image, which can be stored in something like Harbor, a Docker registry, even Quay, and from that point you have an immutable artifact, and this is a key difference from other projects: it can be deployed into dev, QA, staging, and production, and you know you've got exactly the same thing. Once you start to get a few of these functions, you can put them in a YAML file, and we generate this for you. Here I've got something that will ping a URL and something that will notify me in Slack. You can also specify secrets here, so we're using Kubernetes secrets to keep your API token secure. We can put constraints here, we can set memory limits, and then faas-cli build, push, deploy. That's your workflow.

So let's talk about how you might invoke a function, and you might be surprised by the simplicity of this. We have a route for a resize-image function, since we've deployed it. We have our data, a binary, on the client side, and then we'll just call the gateway URL, posting the binary. It will look up the function in the service catalogue, post to the watchdog shim, and then respond immediately, synchronously, on the TCP connection with your data. That's it. It's very simple, and this will probably be enough for most people; we don't need Kafka, we don't need to involve complex systems. However, sometimes you may want a deferred execution, and this is where you can use an asynchronous way of running your code. I talked about batch jobs earlier. In fact, our Colorisebot takes around 8 seconds to run, so we don't want a
persistent connection; we run it in the background. So your image comes into the gateway, and it will be queued somewhere. Our default implementation uses NATS Streaming, which is a very lightweight, fast message bus. A queue worker will then call the function for you, but then how do you get the result back? Normally it would just be lost, or you'd have to code something to store it somewhere. We have a built-in mechanism, an X-Callback-Url you can pass with your original call, and when it's finished, the result will come back to you, either to a function or to an external website. A great example is to use the inception function in the store with RequestBin, or something with ngrok, and you'll get the result back as soon as it's ready. We also have a Kafka connector that's been built by the community, a connector for Event Grid, and we're looking into AWS SNS as well.

So I just wanted to tell you a little bit about Colorisebot, because it's great to get the theory, but when you put a system together, you learn a lot of things very fast. We have a tweet listener here. That's actually a microservice, because the API will not allow you to keep connecting; you need something persistent, and that's fine. Don't use serverless functions for everything, please do not. That then invokes the gateway, but we don't send the image in; we actually store it in object storage, in Minio. The black-and-white image will be there, we'll send the URL into the gateway and asynchronously invoke colourise: because it takes eight seconds, you really can't afford to keep that transaction open. Once it's invoked, it will pull the picture from Minio and then tweet the picture for you, and if we needed to do anything else, like notify Slack, we could do that. We've been able to build a dashboard out of it too, because the metrics are just there. And you can try this later; the spelling is a little bit odd, it's American "color" and
it's English "-ise". So, one thing you can do once you start to build functions, and I think this is where the value comes in, is cookie-cut things. I spoke earlier about a payment gateway, right? Well, Dan Kohn in his keynote said there's a cycle that makes open-source projects successful: a project creates a product, generates profit, and that goes back into the project again. Now, we haven't quite figured that out yet, but one thing we have done is created a Patreon campaign, where people come back to the project with $10, $15, and just help with stickers and bits and pieces like that. So I created this function that takes a webhook from Patreon every time somebody submits a payment. We verify the origin of the message using HMAC, which is the same as what GitHub uses, and then tweet using the API token that I've stored in a Kubernetes secret. You can take that and modify it however you like; in fact, if you star the OpenFaaS repository, the same sort of code will tweet your avatar to my Twitter feed, which is kind of fun as well, and you can see there Wesley kindly helped us out.

So, this is more than just a POC. When it was launched in May, there were very grand ideas; now we're actually getting much more feedback from people about what it takes to go to production. I'll talk about the University of Calgary, we still have around 10 minutes. The University of Calgary has an HR system, and they also have a research management system. When a paper gets approved, they need to link that to their HR system. They actually have a Kafka cluster there already, with events already being generated, and they use serverless functions to connect them together. We didn't have a connector at the time, so they just wrote 90 lines of Python code, and it just worked, and that goes back to what I said earlier about having an open platform; I think you'll find the same for yourselves. Contiamo are a startup out of Berlin, and they do a lot of data science. They will take something like a Jupyter
notebook, and they will build it, deploy it, and run it as an OpenFaaS function and give you the results back. ADP have been experimenting with machine learning to see if we can detect customer churn, and what we were able to do was figure out the scikit-learn Python code we needed in a console app and then just wrap it in a function, and it just worked; it took about 5 minutes. So it's very, very quick to get that experience, to build things up, and to start going to production with it.

Now, there are a lot of integrations over on the other side of the screen. Some of these have been done by the community, and some by the project. HashiCorp have spent developer resource: their developer advocate, Nic Jackson, has really helped the project, and he's created a Nomad back-end, so as I said, you can swap the orchestration layer. Hyper.sh, who I believe are here too, created a back-end for their cloud, which gives you per-second billing, which is kind of exciting, and then Rancher and DC/OS came from the community, and we have some other integrations happening there too. I've put Qualcomm on here because they've actually got a really cool demo, with a very big Alexa, over on their booth, and you can ask it about OpenFaaS and it will show you a slide, so it's very cool; go and check that out later.

So, what's next for the project? And we have a few minutes for questions too. Well, the function store has actually launched; it launched too quickly to update this slide, so go and check it out. There's no more searching through GitHub. We said "serverless functions made simple", and that actually challenged us to deliver on it, so now you can one-click and get a really useful function. Derek, as a public GitHub app, is now installed on moby/moby, and I hope it will be installed on other projects too, so you can start leveraging the benefits of fine-grained permissions. We have a documentation site, and there's a little preview of it, we're just ironing that out. And then, on a
technical level, there are things that are really interesting, and also some that are really helpful. Integrating with CloudEvents, and SNS queues from AWS: you push to S3, you get an event; it's currently quite hard to do, but I think we can build a connector for it. Multi-tenancy is something people ask about time and time again: how can we create secure isolation between functions? I think some of the technologies we've seen in the keynote could make that very easy. Observability with OpenTracing, again, could allow you to build up a very detailed picture of all these functions that you're now managing: where are the bottlenecks? And then scaling to zero, and I think this is largely overcooked. People look at serverless and say, "Well, doesn't Lambda scale to zero? You should do that too." I don't have hundreds of thousands of concurrent users with an over-provisioned cluster; I have a team with a payment gateway that needs to run 24/7 and have high availability. Perhaps it's okay if we have one replica running the whole time; it's only 10 meg of RAM, right? So I think we need to apply some common sense, but these are things we're looking at, a direction we're moving in. And of course, we're open to contributors; I've met some of you here today and had great conversations with you. We're really open on the project: whenever anyone wants to join Slack, we'll give them a warm welcome, introduce you to the community, and help you get your first pull request merged, whether that's code or just helping in some other way. So thank you very much for listening, I really appreciate it, and I think we may have time for a few questions. Who's first? Abbie, can you run the mic? "Thank you. Can the functions be developed in other languages? Is there a way to create a plugin or something, so that you can use other languages?" You can write your function in any language; any binary for Windows or Linux can be made into a function. We
have templates for about eight common languages that make it even easier for you. A question at the back? "So, OpenFaaS looks to provide back-end implementations for a number of different providers. If I'm looking at this as Kubernetes-first and running a service platform on it, does OpenFaaS support RBAC, or execution in different namespaces?" Can you repeat the last part of your question? "Does OpenFaaS support Kubernetes RBAC for limiting access?" Did you say RBAC? Yes, we use RBAC, and you can choose which namespace to deploy into, so you could have two concurrent deployments. "So I'd deploy two different instances of OpenFaaS, then, in two namespaces, rather than one instance that can deploy into multiple namespaces?" You could do two instances in different namespaces at the moment, but if that's something that interests you, please get in touch, and we can see if there's another way of doing it. There was a question up the front as well; do you just want to shout it, and I can repeat it? "Is a sample for ffmpeg available?" There are lots of samples available, and ffmpeg is there. We have a community repository called faas-and-furious, and that's where a lot of the store images are, so you can get hold of it, and you can actually get the GIF maker: you put a YouTube URL in, and it will give you a GIF from it. Okay, well, look, I think people are trying to get off to other sessions, but if you want to, come up to the front and ask me anything else, and tomorrow as well. My GitHub account is alexellisuk, and openfaas.com. Thank you.