Let me introduce our first speakers. We have David Martin and Peter Brown, both from Red Hat and the AeroGear team. They're going to present calling serverless functions from an Android app. Now, if you have any questions, they'll take questions during the presentation; we've built that into their presentation time. If you have any questions, let me know and I'll come by with the mic to make sure your question can be heard and recorded. There's no need to get up or walk over to me; I'll come to where you are so the mic can pick you up. So without further ado, please.

Okay, thanks everyone for coming. We're talking about calling serverless functions from an Android app. Our presentation is structured into two parts. The first one is about running and managing a serverless platform on OpenShift, and the second part is about invoking that platform from an Android app. So first, let's do a quick introduction to serverless architecture. It sounds like there's no server involved, so how would that work? Where does my request go and where does the answer come from? That's what serverless sounds like, but it turns out that's not really how it works. The idea is not that you don't have a server; the idea is that you don't care about the server. You don't have to know what technology stack your server is using or how many CPUs it has. All you care about is: I have my request. My request goes to the serverless provider, and at that moment the serverless provider spins up the resources, processes my request, and afterwards spins the resources down again. Sorry, I have to move the mouse across. The most common type of serverless architecture is called function as a service. In that scenario you map your requests to single functions.
The advantage here is that you pay for usage, not for having a backend that is available at all times. Only when a request comes in will your function be invoked; the resources are available at that time and are deprovisioned afterwards. Examples of this type include AWS Lambda, Google Cloud Functions and Azure Functions.

So let's go on to Apache OpenWhisk. Apache OpenWhisk was started by IBM. It implements a function-as-a-service infrastructure. It's now an Apache project, an open source project. You can write functions in a large number of languages: for example you can use JavaScript with Node.js, it supports Golang, it supports Ruby, and it scales using container technologies. To interact with OpenWhisk: well, first of all, functions in OpenWhisk are called actions. And there are many ways you can invoke an action; you don't have to make a direct HTTP request. You can, for example, define triggers, where you say: whenever an email comes in, I want to invoke an action on my serverless cluster. Or you can define feeds, where you say: I have a GitHub repository, and whenever someone pushes to that repository I want to run an action on my cluster. Or you can use AMQP: when a message is received, I want to run an action. As mentioned, all of this scales with Docker containers, so if more requests come in it will spin up multiple instances of your function to meet the demand. Usually you interact with OpenWhisk through its CLI, wsk. This is what its banner looks like, and it gives you examples of what you can do with it. You can manage your actions, your triggers, your namespaces. Namespaces are a way to group functions. And here are some useful commands. The first one just lists everything that's in the cluster: all the actions, namespaces, triggers and feeds that you have. You can also specifically list actions with wsk action list.
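To make the "you map your requests to single functions" idea concrete: the demos later in the talk use Node.js, but since OpenWhisk also supports Golang, here is a minimal sketch of what a Go-style action can look like. The parameter names and greeting are illustrative, not the talk's demo code:

```go
package main

import "fmt"

// Main is the entry point OpenWhisk expects for a Go action: it receives
// the invocation parameters as a map and returns a JSON-serializable map.
func Main(params map[string]interface{}) map[string]interface{} {
	name, ok := params["name"].(string)
	if !ok || name == "" {
		name = "stranger"
	}
	return map[string]interface{}{"payload": "Hello, " + name + "!"}
}

func main() {
	// Local smoke test; on OpenWhisk the platform calls Main for us.
	fmt.Println(Main(map[string]interface{}{"name": "world"})["payload"]) // prints "Hello, world!"
}
```

The platform spins up a container to run this only when a request arrives, which is exactly the pay-per-usage model described above.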
More interesting is creating an action. The simplest way is wsk action create: you give the action a name and you provide a file. By default it assumes it's a JavaScript file, and it will be run in a Node.js container. You can create more complex actions where you provide a zip file. In that case you have to provide a kind, to tell OpenWhisk: is this Node.js, is this Golang, because it can't infer the type from the file extension. This has several advantages. You can bundle your dependencies with your action. You could say: I have one function, but it calls out to a database maybe, and I bundle the dependencies for that in my zip file. You could just bundle your node_modules if it's a Node.js application.

On to OpenShift. So what do we want to achieve? We want to run OpenWhisk on OpenShift. That already works well. There is Project Odd, which provides templates for this and makes deploying OpenWhisk on OpenShift, or also Kubernetes, pretty straightforward. There's also an Ansible Playbook bundle. You can use that to deploy OpenWhisk to the service catalog and then provision it with one click from the service catalog. That one is experimental, but it works. That's okay, but we want to go one step further. We want to represent OpenWhisk resources in OpenShift, so we want to represent our actions in OpenShift, and we want to manage them using templates, the YAML templates that OpenShift and Kubernetes use, and we want to give you a sensible way to retrieve all the data you need to invoke those actions, like credentials and endpoints. The question is: why would you want to do this? There are several reasons. The first are operational reasons. It's possible that you have cluster admins who know how to deal with OpenShift, who know how to work with templates, but who should not have to know how to deal with OpenWhisk and its CLI.
So by giving them OpenShift templates, they can just apply their existing knowledge to work with OpenWhisk resources. It's also possible that some applications in your cluster depend on actions being available at deploy time. When you have templates, you can bundle all your templates and deploy them at once, and that guarantees that when my application runs, those actions will be available. Then there are security reasons. You might want to restrict CLI access to your cluster, because to allow it you would have to give out credentials, and it might be more sensible to rely on OpenShift OAuth for all authentication. And then there are some user experience reasons: you might want to take advantage of things like the service catalog or service bindings to provide more advanced features.

Okay, how do we do this? We are going to use operators. So what's an operator? I've taken this from the CoreOS website, and it says an operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed to Kubernetes and managed using the Kubernetes APIs and kubectl tooling. We are particularly interested in the managing aspect here. So let's go into a bit more detail. Operators are applications deployed into your namespace. They are typically written in Golang, because that's what Kubernetes and OpenShift are also written in. They watch your resources, so everything that's deployed to your namespace, like pods, deployments, routes and services, and they react to changes. You can use custom types, and we are going to use custom types to represent OpenWhisk types like actions. Kubernetes gives you the ability to define types that are not known to Kubernetes out of the box, via custom resource definitions. I will show you an example of that later. And there's also the CoreOS Operator SDK. This was announced pretty recently, and it provides a pretty good way to start developing operators.
It's a nice SDK that abstracts away a lot of the nasty parts of the Kubernetes API. Okay, we're working on the serverless operator, and I mentioned the custom resource definition. We want to have a custom resource that represents an OpenWhisk action. This is what that template looks like. We say that our custom resource definition is called ServerlessAction, and that's more or less all we have to provide; the rest is pretty standard. You don't define here what the content of this custom resource will be; that's handled later. With this, you just tell OpenShift or Kubernetes: hey, I want to have a resource called ServerlessAction, and I want to make the cluster aware of it. In the operator, we also have to represent this custom resource as a type in Golang. What we have is the type ServerlessAction, which represents our resource. It has a spec part and a status part. The spec is basically: what do I need to create an OpenWhisk action? I need a name, to say what the action is called. I need a kind: is it Node.js, is it Golang? I need the code that I want to run, of course. Then I have a username and a password here, because OpenWhisk is protected by its own authentication that I need to provide, and then we have the namespace. And then we have the status struct. This stores the status of this resource; in this case we only track whether it's created or not. The next slide will go into a bit more detail to make sense of why we need the status here. So, the reconciliation loop: how do operators actually work? They don't really react to events. They implement something called a reconciliation loop, and in every iteration of this loop you are presented with all the watched resources. And it's the job of the operator to sync the status of a resource with the managed service. In our case we only have a status of created or not created.
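The spec/status split and the reconciliation loop just described can be sketched as a tiny state machine. Everything below is a simplified illustration: the struct fields and function names are assumptions for this sketch, not the serverless operator's actual code:

```go
package main

import "fmt"

// Illustrative, simplified types mirroring the spec/status split described
// in the talk (field names are assumptions, not the real CRD schema).
type ServerlessActionSpec struct {
	Name string // action name in OpenWhisk
	Kind string // runtime, e.g. "nodejs"
	Code string // the function source
}

type ServerlessActionStatus struct {
	Created bool // has the action been created in OpenWhisk yet?
}

type ServerlessAction struct {
	Spec   ServerlessActionSpec
	Status ServerlessActionStatus
}

// reconcile is one iteration of the loop: compare desired state (the spec)
// with observed state (the status) and act only on the difference.
func reconcile(res *ServerlessAction, create func(ServerlessActionSpec) error) error {
	if res.Status.Created {
		return nil // already in sync, nothing to do
	}
	if err := create(res.Spec); err != nil {
		return err // try again on the next iteration
	}
	res.Status.Created = true
	return nil
}

func main() {
	calls := 0
	a := &ServerlessAction{Spec: ServerlessActionSpec{Name: "hello", Kind: "nodejs"}}
	create := func(ServerlessActionSpec) error { calls++; return nil }
	reconcile(a, create)
	reconcile(a, create) // no-op: status already says created
	fmt.Println(a.Status.Created, calls) // prints "true 1"
}
```

Note that running the loop twice creates the action only once: the status field is what makes repeated iterations safe.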
If the operator sees a resource that is there but has the status created: false, it knows: hey, I have to do something. I have to create this in OpenWhisk now and then update the status. One could say that an operator is basically a state machine for OpenShift and Kubernetes resources. Okay, this should help us understand the code here. This is taken from the serverless operator; it's a bit simplified. The first part is the watch. We have to tell the operator what kind of resources it should watch. In this case we only care about the ServerlessAction, so we tell the operator: hey, watch this resource, and then we hand over to the handler. In the second part we implement the handle function. This is called in every iteration of the reconciliation loop, and here we get the resource that is currently being watched. The first thing we do is make a copy of this resource. It's called event, which is a bit misleading, because it actually contains the resource itself. We make a copy of it because it's a pointer, and it's a pointer to a live Kubernetes object that is possibly watched by other operators or services as well. Then we check the deletion timestamp, and if it's set, we delete the action. The way this works is: in Kubernetes, when you delete a resource, it's either deleted immediately, or, if there is a finalizer attached to the resource, Kubernetes sets the deletion timestamp and will not actually delete the resource until the finalizer is removed. Otherwise, if no deletion timestamp is set, we check the status. If created is true, we do nothing, because this action is there, it's not being deleted, and it's already created, so we can ignore it. If created is false, we create the action. That calls out to OpenWhisk using its REST API to create the action. Okay, we're running short on time, so let's create an action.
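The deletion-timestamp and finalizer logic just described can be sketched like this. The finalizer name and types below are made up for illustration; the real operator works on live Kubernetes objects rather than plain structs:

```go
package main

import "fmt"

// A simplified stand-in for Kubernetes object metadata.
type Meta struct {
	DeletionTimestamp string   // empty means the resource is not being deleted
	Finalizers        []string // Kubernetes blocks deletion while this is non-empty
}

const finalizer = "serverless.example.com/action" // illustrative name

// handleDeletion returns true if the resource was in the deletion path.
func handleDeletion(meta *Meta, deleteAction func() error) (bool, error) {
	if meta.DeletionTimestamp == "" {
		return false, nil // not being deleted; caller continues with create/sync
	}
	if err := deleteAction(); err != nil {
		return true, err // cleanup failed; retry on the next reconciliation
	}
	// Remove our finalizer; Kubernetes completes the delete once no finalizers remain.
	kept := meta.Finalizers[:0]
	for _, f := range meta.Finalizers {
		if f != finalizer {
			kept = append(kept, f)
		}
	}
	meta.Finalizers = kept
	return true, nil
}

func main() {
	m := &Meta{DeletionTimestamp: "2018-05-01T10:00:00Z", Finalizers: []string{finalizer}}
	handled, err := handleDeletion(m, func() error { return nil })
	fmt.Println(handled, err == nil, len(m.Finalizers)) // prints "true true 0"
}
```

The key property is ordering: the external OpenWhisk action is removed first, and only then is the finalizer dropped so Kubernetes can finish deleting the resource.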
Let me show you: we've deployed OpenWhisk here in a namespace, and you can see there's also the serverless operator deployed. When we go to Resources, Other Resources, we now get the type ServerlessAction. That's what our custom resource definition added. You can see there is already one action, test-action. Now let's create another one. Let's have a look at the action we're going to create. This is a template of kind ServerlessAction. That means it belongs to the type of the custom resource definition that we added to the cluster earlier. We have to give the resource itself a name, and we also have to give the action on OpenWhisk a name. We say that this is a Node.js function, this is the code it should run, and here we provide the credentials to add this to OpenWhisk. If you're concerned about this and don't want to give out those credentials, you could also make them known only to the operator, by storing them in a secret for example. Then the operator would just take those params and apply the credentials only when it's making a request to OpenWhisk. And finally we have the namespace. An underscore means, in OpenWhisk: just put it into the default namespace, whatever that is. Okay, let's create this thing now. I just used the oc CLI, the OpenShift CLI, to create this action by providing the template. It says created; let's check. Okay, there is test-action-2 now. So now we've created an action on OpenWhisk, and it's represented in OpenShift. We go back to the slides, and with this I hand over to David for the Android part.

Thanks Peter. Let's get the mouse over there. Okay, we're good. So if you want to call a service from a mobile app, the first thing you might be thinking is: okay, I need some sort of SDK to call it, or I just make a simple HTTP request. But you also need some details of the thing that you're actually calling.
So you need some sort of configuration. That's the first area I looked at. The serverless action: how is it represented in OpenShift, what can I get from OpenShift, and how can I make that available to the mobile app? In this case it's a simple Android app. How can I make it available to the Android app in a way that it can actually make a call to the action? What I'm trying to show on this slide is that if we just get that serverless action, the custom resource, there's a lot of stuff in there, and we don't want all of that. So we can slim it down. The right-hand side is just to give a bird's-eye view of the amount of stuff in there. As a mobile developer I don't care about all that. I just want the very important bits: what URL do I need to call, and do I need any credentials for that? A little bit messy, but we're getting there. We can do an oc get command, pass in a template string, and the important bit is what we actually get out: the host, so here's where our OpenWhisk server is, the action name, the namespace and the credentials. That's more usable in our app. So we can pull that down and put it into a JSON file. The mobile app itself: I'll show some of this in Android Studio, but just to give an idea before we jump in, it's a simple example app. It has one action for calling the serverless action. Don't expect anything magnificent looking. The repo's up there; if anyone does like to make things look nice, by all means go ahead and create a PR. We have a module in there for the OpenWhisk client. That abstracts away the bit that actually talks to OpenWhisk. We read in the OpenWhisk config from a JSON file. Using the command on the previous slide, we can just dump that out to a JSON file and make sure it's in that location there. In our app, in our activity, we can create a new OpenWhisk client from this config, and then using that client we can invoke actions. Nothing too fancy there.
Okay, over to the studio, let's have a look at this code. I'll walk through right from the config to calling the action, and if anyone has any questions or wants me to explain a bit of code somewhere, just shout. First of all, this is in assets/openwhisk.json. That's the output of the oc command we saw before: the location of OpenWhisk, the action name and the credentials. Where is this used? In the activity, if we look up to the top here, we're just going to pass in that config, creating a new OpenWhisk client from it. And please ignore the line that talks about SSL certs and nuking them; that's only temporary. So we have our OpenWhisk client at this stage. What can we do with it? Down here is our on-click handler. This app is very simple: it has one button, and in the on-click handler for that it will construct some params that we want to send to this action and pass them along. This particular action that we created takes a name and a place. So for name we'll say world, for place Boston, and it will respond with a string that includes those words somewhere in it. Just down a bit further then: client.invoke. With the client already set up to talk to OpenWhisk, currently you have to pass in the action name; there's an idea for making that much nicer that I'll talk about in a couple of minutes. Anyway, client.invoke: that's the action name, pass in those params, and we get a response back. All we do here is update our text view, the text just above the button, to say whatever we got back from that serverless action. I'll jump into the invoke function as well, just to show there's nothing special in here either. This is using the Volley HTTP request library: setting up a new queue, formatting the URL based on the various configuration, so the host, the namespace, the action name, then we set up this new JSON object request passing the params, make sure we set the headers here for basic auth, and that just adds it to the queue for Volley down here at the bottom.
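The invoke function just described boils down to one authenticated POST. As a rough sketch in Go rather than the app's Java/Volley code, with the URL shape following the OpenWhisk REST API (treat details like the query parameter as assumptions to verify against your deployment):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// buildInvokeRequest assembles a blocking OpenWhisk action invocation:
// URL built from host, namespace and action name, plus basic auth,
// roughly what the app's invoke() does with Volley.
func buildInvokeRequest(host, namespace, action, user, pass string) (*http.Request, error) {
	u := fmt.Sprintf("https://%s/api/v1/namespaces/%s/actions/%s?blocking=true",
		host, url.PathEscape(namespace), url.PathEscape(action))
	req, err := http.NewRequest("POST", u, nil) // the params would go in a JSON body
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.SetBasicAuth(user, pass) // Authorization: Basic base64(user:pass)
	return req, nil
}

func main() {
	req, err := buildInvokeRequest("openwhisk.example.com", "_", "test-action-2", "user", "pass")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// prints "POST https://openwhisk.example.com/api/v1/namespaces/_/actions/test-action-2?blocking=true"
}
```

Sending the request with an HTTP client and reading the JSON response body is all that remains, which is why the talk stresses there is nothing special in the invoke function.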
Nothing special here, it's just an HTTP request. There's lots of potential for making this nicer for the mobile developer, but it's simple for now. Eventually the response comes back in here to update our text view with the text. So let's give this a spin and show that it does actually work. We should have the emulator running here. Perfect. Okay, the button text goes up here. Super exciting. Call action. Hello world from Boston. There we go. Peter did all the hard work; I just did the small bit that looks impressive at the end. So yeah, that's pretty much the end to end. I'll say a few things about how I think we can improve this, though, because there is plenty of scope there. First, simpler configuration. One thing we didn't cover is: what if you create more actions and want to call all of them from your mobile app? Currently that would mean a custom resource for each one. How can we bring that all together in one JSON config file? I think, Peter, you mentioned abstracting away the credentials to a secret. For security reasons you might want to do that; also, for simplification of the configuration, keep it in one place, and then the app knows where it can get it. Speaking from a strictly Android point of view, Gradle or build-time plugins could really help here. One idea is a plugin that, at build time, pulls down the latest config from OpenShift for you, rather than you having to remember or script that horrible oc command. Another nice plugin, one where I've been inspired by the Apollo client, the GraphQL client, if anyone's familiar with the Apollo libraries, would generate types or classes at build time that map to the serverless actions. So you could do something like myCustomAction.invoke, and there's type checking on that; if that action doesn't actually exist, there won't be a type there. Much safer for programming. And then integrations. I think this is where the most interesting bit is.
How can we get the mobile bit integrated more with OpenWhisk? How can we get OpenWhisk integrated with more things in general? So, mobile security. That's a big feature of the AeroGear community and the AeroGear SDK: integrating with Keycloak. What can we bring in there so only the right people are authorized to call particular actions? I know OpenWhisk has its own credentials, but can we bring Keycloak into play to keep it more centralized as part of a larger project? Server-side, or serverless-side, whatever you want to call it, integrations: something like, if people have used Fuse or Syndesis, the idea is you have these connectors that can connect to many different types of services, and you tie them together with some sort of filtering or data mapping through the UI. Serverless actions could feed into that, and at the other end you could have something like a messaging queue, and hook them up together. Third point, OpenShift UI extensions. As part of the AeroGear work we've experimented a good bit with OpenShift 3.10, and we're looking to the future with OpenShift 4 as well, because it's changing somewhat. How can the OpenShift UI be made aware that OpenWhisk is running here and that there are actions created? Well, the custom resources can tell you that. Then we could show that in the UI in some really nice way, so you have a unified view of your project within OpenShift: you can see your serverless actions alongside the other things you have running, and visualize them all working together as part of a larger project. So, that's it on the future potential. I'll hand it back to you, Peter, if you want to do a bit of a wrap-up.

Maybe I'll show you one more thing. We've seen that the resources have been created here, but what does the resource actually look like?
We can just inspect the YAML source of it. Yeah, it's hard to read from here; is that okay to read? I'll try. So, this is what the YAML representation of our OpenWhisk action now looks like. We can see that it's of kind ServerlessAction and that it has a number of annotations. Those are put here by the serverless operator. One thing it does is annotate the resource with the endpoint of the actual action in OpenWhisk, and it also provides those properties standalone: the host, the name, the namespace. This is used in David's app; this is what's parsed out and put into the JSON file. Then we can also see the finalizer here. When we create an action, we add this finalizer to the resource, so that when you delete the resource in OpenShift, it's not deleted straight away; the operator gets notified and can do its cleanup, where cleanup means: remove the action from OpenWhisk, then remove the finalizer, and then OpenShift can finally delete the resource. And then we see the spec part. This is what we talked about a few slides back; let me find my mouse cursor. Whoa, there. If we look at that code, this is exactly what we read out: name, kind, code, and so on. This is what we read from this YAML definition, stored here. And the status: in this case the action is already created, so the operator will just skip it. That was just to show you what it actually looks like in OpenShift when we've created an action.

And then back to the presentation. Just a quick recap of what we did. We have OpenWhisk running on OpenShift. We didn't do much here, because that already worked thanks to Project Odd. We now have an operator that manages our actions. We can interact with this operator by creating instances of custom resources. We can retrieve the configuration using the OpenShift CLI tools. And this configuration is then consumed in an Android app, which can use it to invoke the action.
The code for all of this is available here. You can have a look at the operator itself; it's using the Operator SDK. Here's the repository, and we also have the repository of David's Android app here. That's it. Thank you very much.

So, the first question: you mentioned Lambda, for example, and it supports the same type of languages, for example Node.js, which is commonly used. How easy is it to port the code over to OpenWhisk on OpenShift?

You mean, can I just take the code from Lambda and plug it in, and would it work out of the box? I have to say I'm not familiar with Lambda, but in the most simple case OpenWhisk just takes a function without any dependencies. So if you have something like that running on Lambda, I would imagine you can just take it and deploy it to OpenWhisk and that would work. I'm not sure how Lambda deals with bundles, where you have an action with dependencies added to it.

Well, you just upload a zip file, the same as here, and you can tell it what language, what runtime version, and so on. Can I ask one more question? It's related, on the zip file that you upload. Using your same example, let's say you have a MySQL database dependency that you're uploading, but you don't want to create the connection pool every time for your serverless request. What are the best practices for making sure it persists over multiple serverless requests?

I'm not sure that in that scenario serverless is the best approach. It's best suited to stateless actions, and maybe actions that do simple transactions with a database. But if you need a connection pool and you're doing frequent database transactions, maybe a standalone service is a better solution. That's just my take on it. What I tried was an action that used dependencies to talk to a Google Home.
And I just downloaded the dependencies, bundled everything into a zip, and that worked fine. But sorry, I can't really tell you how to deal with database connection pools. Thank you. Go ahead; we have a short time.

Okay, so: there are other open source serverless frameworks that support function as a service on Kubernetes besides OpenWhisk. Is there a special reason why you chose OpenWhisk?

Do you mean, why do we want to run OpenWhisk on OpenShift? Yeah, I can go back to the slide. The idea is that usually you interact with OpenWhisk through its CLI, and you need credentials to do that. And the idea is: imagine you have a large Kubernetes cluster and an admin who knows how to work with templates but shouldn't have to know about OpenWhisk. This administrator can apply his existing knowledge about templates to OpenWhisk: you can use those templates to interact with OpenWhisk. That's kind of the idea, giving a more general way of interacting with your services. Okay, does that answer your question?

I think that in the future it could be incorporated as a serverless framework beyond OpenWhisk. But does OpenShift have plans to...

So this is purely experimental. This is not a product, and I don't know if there will be... Oh, okay. Yes.

All right, we have two minutes left; keep it very short. My question will be short. Essentially, if you're building an Android application and you're deciding whether or not a serverless backend is the right option for you, by what metrics do you make that decision versus a standalone service?

So I think that if you have stateless transactions where you just want to do some kind of computation on the backend, that's a pretty good use case for serverless.
If you're doing simple look-ups that involve maybe a dependency on a database, but not much else, that might also be a good fit for serverless. Everything more complex, especially everything that needs routing or implements different kinds of intents, is probably better suited to a standalone service. Or David, would that be your view as well?

Yeah, that's pretty much my view as well. I just wanted to add that sometimes the choice might be made for you as well. If you're using serverless on AWS, then it's all managed for you. But if you're talking about on-premise: do you have the team to manage an OpenWhisk cluster or not? If not, it might just be ruled out completely, and you build your own app and manage that. I can give you maybe one practical example, David... Yeah, we have to wrap it up. Can you talk after? Yeah, that's okay. Can we talk after? Yeah, sure. Sorry about that, it's a very tight schedule. No, that's fine. Sure.