I'm going to talk about the Fn Project, which is an open source project, sponsored by Oracle, that I contribute to, and I try not to let my boss know how much time I spend on it. So, serverless. Is anybody here using serverless at the moment? What are you using? No, but which platform? Okay. Yeah, serverless. The CNCF definition of serverless (and I've put some of the words in bold) is that it refers to the concept of building and running applications that do not require server management. So it's not like a traditional application: we're not going to kick it off and then worry about how it runs after that. It also describes a finer-grained deployment model, where applications are bundled as one or more functions, uploaded to a platform, and then executed, scaled and billed in response to the exact demand needed at the moment. So not only do we not want to know about them when they're running, we actually just want them to die when they've finished. This is actually my favourite working definition. What it says to me is: you want the nice thing, but you don't want all the crap that goes with it, the unpleasantness of, for example, killing a cow. So it's really about abstraction. You've still got servers, you've still got network, you've still got all of the infrastructure and the operations associated with them; what you want is the capability all of that delivers, without having to look after it yourself. That enables us to focus on delivering the functionality the business wants. And we're going to deliver that functionality using functions, or functions as a service.
So, if we're in the FaaS world, what we're doing is writing small functions. If they get big, then you should reappraise whether you need more functions, or whether you should be doing it a different way. They should do one thing, preferably well, and they should be easy to understand, which should make them easy to write, easy to get right and easy to fix. And then we're going to run them on top of our serverless platform, which is abstracted infrastructure. One of the upsides for the developer is that it avoids this kind of situation: the business has asked you to go and build something, and you know there's a whole load of setup work that needs to be done beforehand, like installing a database or an LDAP server or whatever. And then the boss comes along and thinks you've just been sitting around wasting time for three days, because he can't see anything on a screen to play with. Another advantage is in terms of resource utilization. Because our functions are only going to run for a short period of time and then go away, we should be able to get better utilization of our infrastructure; we're not going to have stuff running when it's idle. About a year ago, UC Berkeley did a report on serverless, and they said customers benefit from increased programming productivity, and in many scenarios we see cost savings as well. So this is lovely. Sorry. There are no silver bullets; there are only double-edged swords. And for the record, I am not telling you to write everything from now on as serverless, or to go and rewrite everything you have as serverless. I didn't say that. So, amongst the downsides: it's shiny. It's a new thing. People go, hmm, I would like to play with this new thing. And this leads to what I would call CV++ programming: I want this for my next job, so let me go and write this other thing that doesn't need serverless in a serverless way. The other problem that we tend to come across is lock-in.
You can't just take a serverless function that's written for one serverless platform and go and run it on another one; you're tied to the serverless platform. And then, and this is a big one for me personally, you might find that the platform is tied to a specific cloud vendor. Which means that if you decide you don't want to be with that cloud vendor anymore, or you find you're writing for customers who want to be on different clouds, you've got a bit of a problem. So it can be a cage. It's a very nice cage, but I would make sure that you know you've got enough room in there before you get in and close the door behind yourself. I've been involved with serverless now for two, two and a half years, and in that time I've come across people who've been concerned about all of these different freedoms. Portability: the ability to move between clouds and on-premises. Decentralisation: people who say specifically, I want to run across multiple clouds in case one has a problem. Packaging: they might say, I want to package my cloud functions in the same way that I package my microservices. Local testing: the ability to actually test your stuff locally and see what's going on. And the one that led me in, which was language, because my favourite language is Ruby, and two and a half years ago, if you wanted to write serverless functions in Ruby, there was very little out there that would let you do it. So I started looking around for something, and I came across the Fn Project, which was one of the few that treated Ruby as a first-class citizen. You can get the Fn Project at fnproject.io, which will give you a link across to GitHub and a whole bunch of guides. It's a serverless platform. It's container-based. It's open source, Apache 2.0. It's part of the CNCF landscape and has representation with the CNCF. And what we do is model functions as containers.
So you take the function and its dependencies and you create a self-contained Docker image. Then, when it's invoked, that container gets stood up, it runs, and then it goes away. It's ephemeral. That's also a good reason to make your function stateless, because if you write something inside the container, you shouldn't expect to find it there on your next invocation. And the good news is it will run on any cloud, as long as that cloud will let you run Docker. It will run on-premises, or you can run it on your laptop; in fact, anywhere that you have Docker installed. So, if you go to the site and look at the guides, you'll get an install guide, and there will be a curl script you can run if you're on Linux. After that's all finished, you run fn start -d. It'll tell you not to use that for production, because for production you're supposed to use a Helm chart, since production should be on Kubernetes. If you run fn version, you should get back a client version and a server version. If you're on Windows, I know some people who run this on Windows Subsystem for Linux; I've not tried that myself. You can run it on Windows in a VirtualBox VM. The Mac world is even easier: you just go brew install fn, and again you do fn version, you should get the two, and do the start. Right. So now we've got it installed, we want to create our first function. You do fn init --runtime, and if you're me, you go Ruby, and then you give it a function name. It creates a function in a directory with the name that you gave it, and you get a boilerplate function. And because it's cloud native, there's a YAML file created for you as well, because it can't be cloud native unless there's some YAML. So we have a look at our function. We've got require 'fdk', which is the Function Development Kit. And then a boilerplate hello-world function will be generated for you. So, def myfunction(context:, input:): the context is like metadata around the call.
And the input is the input you've been given. That's passed to the handle method of the FDK: you point it at the function and it will go and run. The YAML file carries some data about the function, in particular the entry point: we say that when the container comes up, it's going to run ruby func.rb. So what we've created is our function code inside a function container, usually sitting on top of the FDK. The FDKs basically make it a lot easier for you to write functions. You include the gem or the library or the package, whatever your language calls that kind of thing, write your function to the function interface, and the FDK provides input data to the function and writes the output and the errors. We have these FDKs available out of the box. Right. Come on, quickly. Damn thing. Right, the Ruby FDK. This is what happens if you make helpful suggestions on an open source project. There. And then the person who started it left, so I am now, kind of, the Ruby FDK maintainer. All the FDKs work pretty much the same way. The container comes up, the FDK opens a socket, the Fn service connects to the socket, and then we get passed the input. It's passed as HTTP over the socket, which we call HTTP stream. We execute the function, giving it the input and the context, and write the result back on the HTTP stream. Errors are written to standard error, which then goes to syslog. You're not playing, are you? If you don't have an FDK, it's not actually a problem, because it's Docker-based. If you run fn init in a directory with a Dockerfile in it, it will assume you want to build a function out of that Dockerfile. So you can either bring your own Dockerfile or, alternatively, bring your own image.
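And the func.yaml alongside it carries the metadata just described, in particular the entry point. Roughly (field names recalled from the Fn docs and generated output, so check against your own generated file):

```yaml
schema_version: 20180708
name: hello
version: 0.0.1
runtime: ruby
entrypoint: ruby func.rb   # the command run when the container comes up
```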
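Putting those pieces together, the generated func.rb looks roughly like this. It's sketched from memory of the Ruby FDK template, so treat the exact boilerplate as approximate; the begin/rescue guard and the FN_LISTENER check are mine, added so the handler can be exercised even without the gem or the platform present:

```ruby
begin
  require 'fdk'  # Fn's Ruby Function Development Kit (the 'fdk' gem)
rescue LoadError
  # Guard added for this sketch so the handler below can be tried
  # outside a function container, without the gem installed.
end

# Boilerplate hello-world handler: context is call metadata, input is the payload.
def myfunction(context:, input:)
  name = input.respond_to?(:fetch) ? input.fetch('name', 'World') : 'World'
  { message: "Hello #{name}!" }
end

# Inside the container, hand control to the FDK's event loop.
# (FN_LISTENER is the socket the Fn service connects to.)
FDK.handle(target: :myfunction) if defined?(FDK) && ENV['FN_LISTENER']
```

Outside the container you can call the handler directly, which is handy for unit tests.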
We also have a facility called hotwrap, for when you've got a command-line program and you want to turn it into a function without doing too much work. You include the hotwrap binary in the Dockerfile, make it the entry point, and tell it the command to run; it will take the function input, feed it into the standard input of the command-line program, take the output back on standard out, and ship it back out again. So, now we've created our function, we need to deploy it. We create an application, and then we do the deploy, telling it which app it's going to go to. What you'll see is it will do a multi-stage build and push the image to your Docker registry. It could be your own registry; in this case it's Docker Hub. And then your function's created. You might decide that you don't actually want to push it to a registry. If you're sitting on a train or something like that, you don't really want to be pushing to Docker Hub, in which case you say deploy --local, and it does the build of the container but doesn't actually push it anywhere. And then you want to invoke it. From the command line: fn invoke fosdem hello, and back comes hello world. Now, obviously you're not going to be invoking it from the command line most of the time, so you can do an inspect of your function, and that beautifully memorable URL comes out, which is the one it's using internally. You also see things like the idle timeout on the function, which I'll talk more about in a sec. You can do a curl to that URL, but it's easier if you can create a more meaningful one. So we have a thing called create trigger; the only type is HTTP at the moment. You give it a hello URL, and then you wind up with a more meaningful trigger endpoint, which you can hit and get the same result back. You can also have that created for you automatically when you do the init, by saying --trigger. And I'm not biased.
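Going back to hotwrap for a moment, the Dockerfile shape is roughly this. The COPY source path is from the hotwrap README as I remember it, and my-cli-tool is a hypothetical stand-in for your own program:

```dockerfile
FROM alpine:latest
# your existing command-line program
COPY my-cli-tool /usr/local/bin/my-cli-tool
# pull in the hotwrap binary and make it the entry point
COPY --from=fnproject/hotwrap:latest /hotwrap /hotwrap
ENTRYPOINT ["/hotwrap"]
# hotwrap runs this command, piping function input to its stdin
# and shipping its stdout back as the function output
CMD ["/usr/local/bin/my-cli-tool"]
```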
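The deploy-and-invoke cycle just described, as a command-line session. The app and function names are the ones from the talk; the exact flags, in particular the create trigger syntax, are from the Fn CLI docs as best I recall them, so verify with fn --help:

```shell
fn create app fosdem
fn deploy --app fosdem            # multi-stage Docker build, then push to your registry
fn deploy --app fosdem --local    # build only, no push (handy on a train)
fn invoke fosdem hello            # invoke from the command line
fn inspect function fosdem hello  # shows the internal invoke endpoint and the idle timeout
fn create trigger fosdem hello hello-trigger --type http --source /hello
```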
I'm including Node in this --trigger example, and it will create the trigger for you. Or you can even just include the trigger in the func.yaml file for the function, and it will be created for you on deploy. So: we've got our function container, we've got our function code, we've got our FDK, and that is running in Docker. The Fn server itself runs Docker internally, and within that we run the function containers. There are a couple of reasons for that. In the early days it made things a lot more stable, because if something blew up, it would contain the problem. These days the main reason is that it gives us much more control, or much less sensitivity to the external Docker version, because we control the internal Docker version. The full architecture for deployment would have Kubernetes; on my laptop I only ever run Docker: Docker, the Fn server, the function. We've also got a container registry, a metadata store which holds things like the endpoints, and a syslog service for errors. I'm going to need to go quicker. When an invocation comes in, the Fn server will look at the metadata store and ask: what function maps to that URL? Have I already got an instance of that function running? If I have, I just give the work to it; if I haven't, then I stand one up, pulling from the container registry if necessary, and send the output back to the user. Anything that's gone wrong gets written to the syslog service, so if you raise an exception, that is going to wind up in the syslog. That's a joke, don't worry. There is actually a book, Prisoner in a Toothpaste Factory. Assuming that you're not a prisoner in a toothpaste factory, but you do want to get messages to the outside world, you can just write to standard error and hope that the police are reading the log and will come and rescue you, because you're being held prisoner in a toothpaste factory. I mentioned the idle timeout.
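That dispatch decision, reuse a warm container or cold-start a new one subject to the idle timeout, can be sketched like this. This is hypothetical Ruby for illustration only, not Fn's actual scheduler code; the clock parameter is an assumption of mine so the behaviour can be demonstrated deterministically:

```ruby
# Hypothetical sketch of warm-container reuse with an idle timeout.
# NOT Fn's actual scheduler, just the shape of the decision it makes.
class FunctionPool
  def initialize(idle_timeout:, clock: -> { Time.now.to_f })
    @idle_timeout = idle_timeout
    @clock = clock
    @warm = {}  # function name => timestamp of last use
  end

  # Returns :warm if a live container can be reused, :cold_start if a new
  # one has to be stood up (e.g. pulled from the registry and started).
  def invoke(name)
    now = @clock.call
    last = @warm[name]
    state = (last && now - last <= @idle_timeout) ? :warm : :cold_start
    @warm[name] = now
    state
  end
end
```

A burst of invocations inside the timeout window all hit the warm path, which is exactly why you aren't paying the start-up cost every time.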
Well, you were useless anyway. So, the idle timeout. The function container runs at least once and then waits for that idle timeout before terminating, so that if we get a lot of invocations of the same function, you're not having to pay the start-up cost every single time. You can configure that up or down. Your FDK needs to handle multiple invocations, or if you're not using one and writing your own, you need to remember that. It's another good reason to make your function stateless, because you don't want dirt hanging around from a previous use. Orchestration. The functions are quite small, so they're not going to do a great deal individually, and we need a way to orchestrate them. In Fn we have something called Flow. It's promises-based orchestration. Unfortunately for those who like XML and YAML and things like that, it's not a new dialect of those; it's Java or Python, and you use it to compose your functions together. It can be synchronous or asynchronous, in series or in parallel, and it's written in your code as a function and deployed within the app as a function. Only your good sense stops you putting business logic into the flow. Remember, the flow is supposed to be a conductor, not an instrumentalist; if you get it to do both at once, things will go wrong. I think I've got just time, possibly, for some Shakespeare. There is always time for Shakespeare. So, Fn-joy. Well, let's hope so. What I have, on Bitbucket, is this: I've taken Shakespeare's comedy As You Like It and come up with all the different functions that are required for the play to be performed. We've got the right result. That was it. Result. So, if you look at a flow, what we see in here is the action of the play: various characters are added, and then we invoke different things. They get disguised.
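Flow's real API is Java or Python, not Ruby, so purely to show the shape of the promise-style composition the conductor does (chain a dependent step onto a resolved value), here is a hand-rolled stand-in. This is emphatically NOT the Flow API, and the play-themed names are just illustration:

```ruby
# A toy promise, for illustration only. NOT the Fn Flow API.
class MiniPromise
  def initialize(&blk)
    @thread = Thread.new(&blk)  # resolve the value on a background thread
  end

  # Chain a dependent step that runs once this promise has resolved.
  def then_apply(&blk)
    prev = @thread
    MiniPromise.new { blk.call(prev.value) }
  end

  def value
    @thread.value  # block until resolved
  end
end

# The "conductor": compose invocations; business logic stays in the functions.
marry   = MiniPromise.new { %w[Rosalind Orlando] }  # stands in for invoking a function
wedding = marry.then_apply { |couple| "#{couple.join(' & ')} get married" }
```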
At the end of the day they should all, preferably, have a happy ending and get married, and yes, we've had four weddings and no funerals. So, with apologies to Shakespeare. Yes, so: serverless is abstracted infrastructure on top of which we run ephemeral functions, for higher productivity and lower costs, but you should beware of lock-in, and you have an option. You don't have to just go with AWS or Azure or Google and be locked into the cloud as well as the platform. The Fn Project is an open source serverless platform. It's container-based, you can write functions in any language, and you can run it in any cloud, or on-premises, or on your laptop. Questions? I guess I've got a minute or two; otherwise grab me afterwards. I'm really easy to convince to talk with beer, or tweet me, Ewan Slater. We were looking at CRDs; I would have to go and check, because actually, as I said, most of the time I don't run it on Kubernetes, I just run it on Docker. The question was: how do you connect such a function to the database? There's a couple of ways I've seen this done, one I approve of and one I don't. I have seen people do things like wrap a JDBC driver and credentials into a function container and then use that, and I just look at that and think it's horrible. I don't think the credentials should be in there, and you're opening a JDBC connection, which was designed to be a long-lived thing. Me, I would go for a REST API on the database, if it has one. If not, then I would either be sticking stuff into an event service or a queue, which was then going to be polled and read into the database, or, if it's me, probably something with a little bit of Ruby and Sinatra that stood in front of the database: you send something to it, and it sorts out writing to the tables appropriately. OK. Thanks a lot.
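For what it's worth, the little Ruby front-end suggested in that last answer could be as small as this. It's a hypothetical Rack-style handler (Sinatra would read more nicely, but plain Rack keeps the sketch gem-free), and the actual database write is deliberately elided:

```ruby
require 'json'

# Hypothetical HTTP shim in front of the database: functions POST JSON here,
# and this one long-lived place owns the credentials, the connection pool,
# and the table layout, instead of each ephemeral function opening JDBC.
DB_FRONT = lambda do |env|
  record = JSON.parse(env['rack.input'].read)
  # ... validate `record` and write it to the real database here ...
  [201, { 'content-type' => 'application/json' },
   [JSON.generate('stored' => record['name'])]]
end
```

A function then just POSTs its payload to this endpoint and never touches the database directly.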