is Belinda Venom. She's going to tell us about Cloud Made Simple with Serverless Python. Let's give her a round of applause. Can you guys hear me? Okay, perfect. So my name is Belinda Venom. I'm a developer advocate and software developer at IBM. Right now my current focus is on both cloud and serverless. Prior to this role I was a software developer on the IBM Cloud Functions project, which is built on top of Apache OpenWhisk. So who here has heard of serverless? Okay, cool. Most people. Is anyone using serverless in their projects? Okay, so fewer people. Anyone have projects in production with serverless? Okay, so the same people. Awesome. Today we're going to talk about some of the benefits of serverless, then we'll check out a real-life use case, and then I'll show you guys a quick demo of running some Python code on a serverless platform. So let's say I've been working on a project and I'm ready to deploy my code out to the cloud. In the past, me or someone else in my company might have needed to purchase bare metal servers to run my application on. You had to manage the security updates for the servers, install all the required software, and keep the hardware up to date. Then virtual machines came into the picture, right? We were able to let someone else like Amazon or Rackspace start to manage those machines for us, and we just got a virtual machine to run our application on. You'd still have to manage the operating system and the software dependencies of your application, but you didn't need to physically own or house those servers anymore. Even more recently, we've had containers enter the scene, where we're only managing the application code and its dependencies. We package everything up into a container, and we give that container to the platform to run. So that makes things a lot easier, but we're still managing some of the infrastructure there.
We need to manage our own runtime and our own application dependencies. Building a container to run your application in is still a somewhat manual process that could be abstracted away. So software developers really need to focus on that core business logic and not get distracted by infrastructure management or configuration. And these abstractions are ultimately what serverless, or functions as a service, offerings are promising you. So the execution environment for the code is provided for me as a developer, and my code can just be run in response to events or in response to actions that I care about. I should note I'm presenting this as an evolution, but all these models are still available today, and they might even still be the right choice depending on what problem you're trying to solve or what your requirements are. So you'll notice on the x-axis it says decreasing concern over your stack implementation, but it's also decreasing control. If there's a security vulnerability or an environment update that you need, you have to wait for the platform provider to make that required update. So if you need that additional control, one of the other models might still be the right option for you. I think this is just another really nice way to visualize the increasing levels of abstraction. As we move through each of the deployment models, we see less and less that the end user is responsible for. On the far left we have on-prem, where you have your own machines and you're managing everything, and then we go through infrastructure as a service, platform as a service, and ultimately functions as a service, where you literally only write, manage, and deal with functions. These changing deployment models have also affected how we write and think about our applications. We used to write code as these big monolithic applications.
The entire application gets deployed as one unit, and introducing any new code change means a completely new deployment of the full application, even if you're only changing one or two lines. So from there, we moved more to this microservices paradigm. We started to break that monolithic application into various services that communicate with each other over RESTful APIs. In a store application, for example, you might have a microservice that manages the accounts for your users, one for placing orders, and maybe one for managing interactions with your shipping company. You can deploy each of those microservices either as an application on a PaaS, a platform as a service solution, or maybe you deploy each one into its own container. And then finally, as we've moved more towards this functions as a service model, there's been further decoupling and shrinking of the size of the unit of code. So instead of creating microservices, we can literally just write functions that we want to deploy to the cloud. And each of those functions has the goal of doing one thing and doing that one thing really well, perhaps in reaction to a particular event. So as we said, serverless promises that you can just spend your time writing business logic that has a direct positive impact on your users, and not focus on the configuration or management of VMs. Even though it's called serverless, yes, there are servers, but the idea is you don't have to see them or deal with them. So from a developer's perspective, the functions as a service platform is going to provide a runtime and an execution environment for the code they're writing. You just write the code, and then the serverless platform runs it for you in the cloud, on demand and within milliseconds. And this may actually be abstracted away from you, so you typically won't see it.
But usually what's happening under the covers is that the code is ultimately being run in a container in the cloud provider's environment. So one of the goals of serverless is to provide developers with scalability out of the box. Let's say that you have a set of APIs and you've represented each of those API endpoints as different functions. Maybe suddenly your app gets featured on Reddit and you have millions of requests from your UI to these back end API endpoints. The only thing that's going to happen is that the platform will start up multiple instances of the back end API code to process those requests in parallel. That kind of gives you horizontal scaling for free, without any additional programming or server management from you. So we talked about your code scaling out or up to n instances when it's required, but it also scales back down to zero when it's not being used. What that means, from an app developer's perspective, is that you're not paying for the time that your code is not being run. You're only charged for the amount of time your code is actively running or doing some work, so you're no longer paying for that idle time. You do have to do the testing for your particular scenario, but this can actually mean really big cost savings compared to having a server up 100% of the time. It really just depends on the project and the workload, but it can be really good cost savings. Other promises: most serverless solutions provide support for some kind of event-driven scenario. This is literally anything where an event happens and you want to react to it with some code. Some examples: maybe you have a new user who signs up to your app, and that results in their information being stored in your customers database. And then you want to execute some logic in response to that new database item. So maybe send them a welcome email or do some processing of their input data.
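That new-user scenario can be sketched as an OpenWhisk-style Python action: the platform calls a function named main with the event's JSON as a dict. The field names and the welcome-message logic below are made up for illustration, not taken from any real app.

```python
def main(params):
    # Hypothetical handler fired when a new user document lands in the
    # customers database; the event payload arrives as a plain dict.
    user = params.get('doc', {})
    email = user.get('email')
    if not email:
        # Always return a JSON-serializable dict, even on the no-op path.
        return {'sent': False, 'reason': 'no email on record'}
    # A real action would call out to a mail service here; we just
    # build the welcome message and report success.
    return {'sent': True,
            'message': 'Welcome aboard, {}!'.format(user.get('name', 'friend'))}
```

The same dict-in, dict-out shape works for the input-data-processing case too; only the body changes.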
Or a user uploads an image to the app, and maybe you want to generate a thumbnail as soon as that image is stored in your object store. You can do other things, like sentiment analysis on text that a user has input. Literally anything where you just want your code to react to a particular event. So typically serverless platforms are going to provide the infrastructure for you to easily react to those events with your code. Many of the cloud platforms offer a number of different services you can integrate with your applications. These are things like artificial intelligence, IoT platforms, object storage, or databases. So think of some of the things you can do if you connect up with, like, I'm from IBM, so the Watson APIs, or maybe the Alexa APIs from Amazon, using those in conjunction with your application. Most of the serverless solutions provided by the various cloud providers really want to make that connection very seamless. They're really incentivized for you to go off and use their other services, right? If you're using their platform, they want to make it super easy for you to connect up with their various services. So as you're looking at maybe moving some workloads over to serverless, definitely look into what integrations are provided, and see what other services you can easily and seamlessly connect up to and react to. Ultimately, all of these promises are trying to achieve this overarching goal of decreased time to market. They're all attempting to help developers focus on just the business value, and the value to the users, and not on infrastructure or configuration. I really like this quote from serverless.com. It says serverless abstracts away the most menial parts of building an application, leaving developers free to actually spend their days coding. So I think that's kind of the dream of serverless that people are hoping to realize. So there are a few use cases that are emerging as a really nice fit for the serverless space.
The first one is web application backends. You have a front end and it's making some calls to some back end APIs, and each of those API endpoints can be functions that are created in a serverless platform. A mobile back end is super similar to the web application use case, except the front end is mobile. Internet of things, just because a lot of the serverless platforms can help you connect up with some kind of IoT platform on the provider's side. Scheduled tasks: let's say you have some code that you want to run once a week or maybe once a month. You can set up a cron trigger for that code to be spun up on, say, Wednesdays to run, and after it finishes it spins back down, and you're no longer being charged for those resources. So you don't have to keep a server up all week just to run that code once a week. Conversational scenarios, so things like chat clients, where you maybe want to do some additional processing on the chats that are coming through, like translating the text, or filtering out bad words, or doing some other processing on that text. And then finally, data processing. This is literally anything where you get a new item in your data store and you want to react to that with some kind of processing. This can be things like normalizing audio inputs, image resizing, address verification, anything like that. So today I wanted to quickly check out a real-life use case of a mobile backend architecture, so you can start to get an idea of how you might architect your serverless apps. The application I'm going to talk about is called Weather Gods. It's a kind of cool, creative way for people to interact with the weather. I think it's like $2.99 on the app store. The weather is represented as various gods, so there's like a fire god for heat.
There's an ice god for winter weather, a water god for rain and precipitation, et cetera. I think there are like six gods; I can't keep up with them all. But users can click through the various weather events, or they can subscribe to specific notifications. So I actually used to live here in Austin. When I was living here, I think it's pretty unlikely that I would subscribe to notifications for high UV, since that's basically all the time; it was always high here. But downtown where I lived there would sometimes be flooding issues, or sometimes there would be hail, so I'd probably go ahead and subscribe to the water god or ice god notifications. That's kind of how the app works. This is the architecture for how they did their back end. Obviously they have a mobile front end that interacts with this back end, which isn't pictured. Oh no, actually it is, over on the right. Each of the little green functions is a function that they created. I'll quickly walk you guys through this, and then we'll take a step back to talk about why they chose serverless. So the first thing that happens is they have this cron trigger that's going to query a subset of their users, which are stored in a Cloudant NoSQL DB. Then the group scanner action is going to determine the locations and weather conditions that those particular users are interested in. The next action is a weather checker, and that makes a call out to the Weather Company data service. It parses and handles the data that it gets back, gets it into the right format for the next action, and goes ahead and writes it into the weather database. And then, as weather gets updated in the database, that will trigger a scanner for each user that's interested in that particular type of data. So let's say a user is interested in snow in New York and rain in Austin; only those two scanners are invoked for that particular user.
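That per-user fan-out boils down to an intersection: fire a scanner only where a user interest matches freshly updated data. A minimal illustrative sketch (the interest keys here are invented, not from the Weather Gods code):

```python
def scanners_to_invoke(user_interests, updated_conditions):
    # A scanner fires only when the user cares about a condition AND
    # new data arrived for it; everything else stays scaled to zero.
    return sorted(set(user_interests) & set(updated_conditions))

# e.g. a user watching snow in New York and rain in Austin, on a cycle
# where only Austin weather changed:
interests = ['snow:new-york', 'rain:austin']
updates = ['rain:austin', 'uv:austin']
```

Here only the rain-in-Austin scanner would be invoked, so that's the only scanner code anyone pays for on that cycle.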
The scanners run in parallel, so that gives them some performance benefits, and of course they're also able to scale up and back down independently. And then, once a scanner finds interesting data for the user, it'll send a notification using the push notification service. So what about this architecture makes it a really nice fit for serverless? There are a few different points I want to highlight. Integrated platform services: they were able to really easily consume some services from their cloud platform, in this case a notification service, a database as a service, and the Weather Company data API. Those are all pieces that they didn't have to write; they were just consuming services. They were also able to regularly schedule their code to run, in reaction to a trigger every three minutes. A bunch of their code was running in reaction to events. They want to run weather checks when a user's location is updated, they wanted to react to new users in their database, and of course, whenever there's a new weather item of interest, they want to react to that as well. And then finally, they had the really nice scanner section, where the code could be executed in parallel and independently. So if a ton of their users are only interested in rain, then that rain scanner code can be scaled up independently of the other events. Alternatively, let's say no one's interested in a particular weather event; that code's not being called, so ultimately they're not being charged for it. Finding those elements of your application where you can really benefit from running in parallel and scaling out independently can make a good case for serverless. And then the last thing I wanted to highlight about Weather Gods in particular is that they didn't actually start out with a fully serverless backend like this.
They already had their app out there, and they realized, okay, certain pieces of this would actually benefit from being run on a serverless platform. And so they transitioned those elements over piece by piece, as it made sense for them. So let's say you're ready to write your next serverless app. How do you get started? You have some options. These are some of the major players in this space, listed in alphabetical order. The first one is Azure Functions from Microsoft. They have a Python runtime, but it's currently in preview. So it hasn't reached general availability yet, but I would assume that because it's in preview, they're heading in that direction. Lambda from Amazon supports Python out of the box. Apache OpenWhisk is an open source Apache incubator project. This would be really nice if you need something open source, or if you want to host and manage your own instance of a serverless platform for the rest of your company. IBM Cloud Functions is based on OpenWhisk; it's basically just managed, hosted OpenWhisk on IBM Cloud, and it also supports Python out of the box. This is the one we'll check out in a demo in a couple of minutes. And finally, I know a little less about it, but I think it would be interesting to check out PythonAnywhere. I spent a few minutes just checking it out and playing with it. It seems like it's a little more focused on web hosting and education use cases, but I think it's a good one to highlight and check out. So let's take a quick peek at what a function might look like. You can see a really simple example up here; it should look familiar. It looks like Python code. It's a straightforward hello world function, and it's just going to check and see if it got a name passed in as a parameter. If not, it'll return hello stranger; otherwise, it's going to return hello with the name that was passed in.
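A reconstruction of what that slide's function would look like (not a verbatim copy of the slide):

```python
def main(params):
    # OpenWhisk hands the action its input JSON as a dict and expects
    # a JSON-serializable dict back.
    name = params.get('name')
    if not name:
        return {'greeting': 'Hello stranger!'}
    return {'greeting': 'Hello {}!'.format(name)}
```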
So for OpenWhisk and IBM Cloud Functions, the expected input and output is JSON, and you can tie your various functions together with that JSON. Let's say you have three functions in a sequence. The first one outputs some JSON, which becomes the input for the next one. That one outputs some JSON, which becomes the input for the last one. That's how you tie them together. And then the expected entry point is just a function named main. You can have other functions defined in the file, but main is the expected entry point. So we'll see a quick live demo of this in action. Like I said, I'll be using IBM Cloud Functions, which is managed OpenWhisk, but you can really take these ideas and apply them to any serverless platform you want. It should look and feel very similar to a lot of the other serverless platforms. For our example, we'll say we have this image, a really cute puppy image, and we'll say that it's getting uploaded to our cloud object storage or database from a mobile app or UI. And we want to react to this uploaded image by running some code. In our case, maybe we want to do some image analysis: what's actually in the image? So if we upload this really cute puppy picture, we want to see an output like Bernese mountain dog, or dog, or animal. This is an architecture diagram for how we might create something like this. When a new image URL is uploaded into the database, that should trigger an action to get run. In this case, I divided that action into two separate functions. One function is responsible for reading the changes that occurred in the database; that's the read changes from DB action. That action will parse the changes, output the URL to the new image in the database, and then hand that URL off to the get tags action. The get tags action will actually make a call out to a visual recognition service, do some parsing of the return value, and hopefully output what it believes is in our image.
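You can mimic that two-action pipeline locally, since dict-in, dict-out functions compose exactly the way the platform chains them over JSON. The change-record shape and the stubbed tags below are my own invention for illustration:

```python
def read_changes(params):
    # First action: pull the new image URL out of a (hypothetical)
    # database change record.
    return {'image_url': params['doc']['url']}

def get_tags(params):
    # Second action: classify the image at image_url. Stubbed here;
    # the real action calls out to a visual recognition service.
    return {'image_url': params['image_url'], 'tags': ['puppy', 'dog']}

def sequence(params):
    # An OpenWhisk sequence is effectively function composition:
    # each action's output JSON is the next action's input.
    return get_tags(read_changes(params))
```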
So this is a really simple use case, but you could definitely continue building on this, right? You could save the tags back into the database. You could add an action for compressing a really large image if a user uploads something too large. You could handle video uploads by creating another function that breaks the video apart into multiple frames and then running get tags on each frame, and that would be able to scale up independently. Each of my functions gets scaling out of the box, so I don't have to do anything special. So let's check this out. This is going to be live, so fingers crossed. This is just the get tags action that I talked about earlier. It should look super familiar; it's just some Python code. I'm going to walk through a couple of pieces of it. The first thing that's happening is we're pulling in this visual recognition SDK. What that should imply to you is that the visual recognition SDK is provided as part of the runtime; it's just already there. And then we're going to do a couple of things. I need to instantiate that with an API key and a version number. The API key we'll actually pull in from the parameters for this action, so we should be able to pass parameters into our action. In this particular case, the API key doesn't change, so I can store it as a stored parameter for this particular action. If we click on parameters, you see, okay, I've got this API key that's stored in here, and it'll just be saved for that action. I'll make sure I revoke it right after this. And then the next thing is that we'll want to pass in an image URL; that's the image that we want to get the tags for. And then finally, we'll make a call out to that visual recognition classify method right here. Then we'll do a little bit of parsing of the return data, because it has some info in there that we don't care about, and then we'll print out the tags. So let's try that out.
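As an aside, that final parsing step is just digging the class labels out of nested JSON. Assuming a response shaped like a Visual Recognition classify result (an assumption on my part; the trimmed sample below is not real service output), it might look like:

```python
def extract_tags(response):
    # Keep only the class labels, dropping scores and other metadata
    # we don't care about.
    classes = response['images'][0]['classifiers'][0]['classes']
    return [c['class'] for c in classes]

# Trimmed-down illustration of the assumed response shape:
sample = {'images': [{'classifiers': [{'classes': [
    {'class': 'puppy', 'score': 0.96},
    {'class': 'dog', 'score': 0.92},
]}]}]}
```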
This is our little puppy picture; it's cute. And if we do change input, we can go in here and make sure we have that image in there, and we do, so we can click apply. So we're passing this in as a parameter. And then if we click invoke, we should hopefully see... yes, we see some tags back. It says puppy, dog, laboratory retriever dog, retriever dog, golden retriever dog. So just some different tags of what the visual recognition service thinks is in that image. But that's not all we want it to do. We actually said we want it to react when there's a new image stored in the database, right? So let's see how we set that up. This is inside of a sequence, and that sequence contains two actions: there's the read from database action, and then there's the get tags action. And the sequence gets kicked off by a trigger. If we go down to connected triggers, we can see, okay, there's this Cloudant image trigger. Any time a new item gets added to the Cloudant database, that trigger will be fired. So let's try that out really quick. I'll grab this image, and I already have the Cloudant dashboard open. Oh, it wants me to log in again. So you can see I already have a database created called my image URLs. I can create a new document in here, so I'm just going to add the image URL and then click create document. And then what that's going to do, hopefully, is cause my trigger to fire, which will then kick off that sequence. The first action was to read from the Cloudant database whatever changes happened, and the second action was to get the tags. So if we go over here to the little monitor tab, hopefully we should see that it was called. Yeah. So at 11 o'clock, we can see, I don't know how to make this bigger, but we can see that the trigger was fired at 11. There was a sequence that was called, then the read action, and then the get tags action. And each action has an activation ID associated with it.
So if we click that activation ID, we should be able to see the results in there. We have the logs just printing out, and then of course the various tags. So that's it, and the live demo actually worked; I was getting nervous. Oh, let me stop mirroring. Okay, perfect. I don't have too much time left, but I did want to quickly highlight one of the Python-specific libraries that's out there. It's called PyWren. It has this goal of making a really easy push-to-cloud experience for Python developers, maybe if you have a really parallel workload, where a parallel workload is something where little or no effort is needed to separate the problem into a number of parallel tasks. Common examples might be 3D video rendering handled by a GPU, where you want to run some code for each frame, or maybe for each pixel if you're doing ray tracing, and those can all be handled without any interdependency between them. So PyWren is this library where you can take your Python workloads and scale them out into multiple functions to be run on a serverless platform. Their goal, like I said, is to simplify that push-to-cloud experience: you can run your algorithm against a cluster of machines without needing to figure out how to set that up, or how to have those machines talk to each other, or how to get the results back. PyWren was originally built for Amazon Lambda; it's also available for IBM Cloud Functions. Go check out this article to learn a little bit more. It talks about using PyWren for scenarios where you have a lot of data processing to do. The two examples they talk about were really large datasets, like solar flares, which can help predict storms that can hamper power or satellite operations. And then they also talked about the sheer volume of data in looking at how neurons combine to create behaviors, or disease, or cognition.
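To make the pattern concrete, here's the map-over-independent-tasks shape that PyWren promotes, estimating pi by Monte Carlo. As a stand-in so it runs anywhere, this sketch uses Python's local thread pool where a PyWren executor's map would fan the tasks out to cloud functions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(task):
    # One independent task: throw n random points at the unit square and
    # count how many land inside the quarter circle. Tasks share nothing,
    # which is what makes the workload embarrassingly parallel.
    seed, n = task
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(tasks=8, samples=50_000):
    with ThreadPoolExecutor() as pool:
        # With PyWren, this map would run each task as its own
        # serverless function invocation instead of a local thread.
        hits = pool.map(count_hits, [(seed, samples) for seed in range(tasks)])
        return 4.0 * sum(hits) / (tasks * samples)
```

Scatter the tasks, gather the counts, combine: that's the whole programming model.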
And their big goal is to really easily give access to parallel computing to data scientists and others. There's another article you can check out. Like I said, PyWren was built for Amazon Lambda and has recently been ported to IBM Cloud Functions. This article is about using PyWren for running stock market predictions with the Monte Carlo method. It uses PyWren to scale out and then combine the results from those simultaneously running predictions. So that's all I had. Just a couple quick notes: you can go check out IBM Cloud Functions at the first link, and the last link is for the Apache OpenWhisk project. So if you're looking for a new open source project to contribute to, it's a really fun one, and the community is really nice. So that's all. Thank you so much.