So, let's do something to give our eyes a break. Everybody stand up, let's stretch. Just one minute — we're family here, so we feel comfortable asking questions and everything. Let's do a few claps — one, two, keep going, keep going. So this is a standing ovation. Thank you. Thank you. Now I can say I got a standing ovation before I even talked. Everybody that loves serverless will be contributing to the project, or incubating it anyway. So let's get started. I have some people that will tell me if I go over — we have until 12:10, right? And Matt is after me, so if you stay here you get a double bonus: you'll hear about OpenWhisk twice, and we'll go deeper into the repos with Matt. I'll give an overview. My name is Carlos Santana. I don't play the guitar — not a musician. I was good with math and then good with computers. I work at IBM, and I'm also a PMC member and committer for Apache Cordova / PhoneGap. Has anyone used PhoneGap or heard of Cordova? Okay. There's a big conference going on, PhoneGap Day, so I think all the committers are over there. So, this project: we went into incubation in December, so OpenWhisk is currently incubating. The project had been on GitHub in open source for about a year; before that it was being developed inside IBM. For today I'm going to give a quick evolution of why you would care — what is this serverless thing? I've been doing servers, J2EE and Node.js servers, for a while, so those are the kinds of servers we're talking about. Many years ago I started working on storage and servers at IBM, so I had a lot of experience dealing with physical servers on PowerPC. Back then, for developers and applications, you needed to build your servers. But for developers, installing your application — the thing that actually did something — was like step 70, right? It took a while to get there.
And sometimes it took a while for another team to set up the storage, set up the network, and get the CPUs and everything. So we got an evolution on that. Then came VMware and the VM revolution, and it got a little easier: somebody just gave you an image. But you still managed that image; you were in control of the operating system, and it still took a while to get an application up and running, to get a solution or a prototype. Then we went to — I missed the slide — containers. Recently containers have become the sandbox, the methodology of deploying applications. (Sorry, having some trouble with the slides here.) So yeah, I was talking about containers, the new methodology of deploying applications. That became easier, but you still needed to do some management and orchestration of containers, something like Kubernetes or Docker Compose. You're still managing servers, orchestrating them, load balancing: how many do I have? Which regions do I put them in? And then with functions — the new evolution, popularized by Amazon Lambda — it becomes easier still. You don't have to manage the server. You don't have to worry about the infrastructure. You care about small snippets of code that you put in the cloud, into the platform, and then the system takes care of elastically deploying and running them. So you concentrate on just your code — literally functions, not one monolithic app on a server. The programming model is very simple: we call them triggers; it's event-driven programming. These are some of the market trends. I put them in just to have them in the slides.
I'm not going to go over them, but if you have business people you want to convince about trying this out or getting started, these are the market trends on the types of industries and applications getting into serverless. Bluemix is the IBM cloud platform — just a quick overview. We have many services, but in terms of compute it breaks down into a spectrum, starting with serverless, which is OpenWhisk — we run OpenWhisk in our platform. Then you have platform as a service, which is based on Cloud Foundry. Then you have containers, and you have VMs. So you have a spectrum for building different applications in a single cloud, taking advantage of — at last count — about 150 different services: cognitive, IoT, Watson, NoSQL databases, and more. That's what we offer as a managed service, but today we're going to concentrate on the open source project; this is an Apache event, not a sales pitch. So OpenWhisk is a function-as-a-service platform — everybody gives it a different name: event-driven programming model. The worst name it's been given is serverless, because there are actually a bunch of servers — they're just not managed by you; they're managed by me and my team. We have a lot of VMs, and as I was saying in a talk this week, our challenge as committers running this platform is: how do we fit so many functions in a single VM for all these multi-tenant users, while at the same time keeping single-digit-millisecond overhead running these containers, these functions? The open source project is in Apache incubation — I think everybody's familiar with Apache. We are partnered with Adobe; Adobe and IBM are the two companies that started the project. And we're looking for contributors and committers — like any incubating project, we want to grow the community. But I think at this point, we're especially looking for users.
Just to tell us: what are we doing wrong? What are the things that are missing? A lot of the committers are already very familiar with the project, so we've lost that perspective of the first person trying it for the first time and finding that maybe the documentation is not that clear, or the tooling is not that clear. That's where we want feedback. As with any open source project you've worked with, that's how you started — as a user, either at your company or yourself. So that's what we're looking for. We have it as a managed service in Bluemix; you can get a free account, and you get a certain amount of compute for free each month — similar to Amazon Lambda and the others, 400,000 GB-seconds. The concepts in OpenWhisk are described a little differently from other platforms. Our model tries to be very, very simple: you define a trigger — this is the non-blocking, async invocation path. It could come from HTTP, but really a trigger is an event that you want to respond to, fire-and-forget. Then you need to connect it to actions somehow, with a rule. So the entities you declaratively program in OpenWhisk are: you define a trigger, it becomes an endpoint, and then you declaratively define which actions you want to run when that trigger fires. It can be multiple actions — somebody was asking in the OpenWhisk Slack today, how do I run two actions with one trigger? That's the purpose: you define two rules for one trigger, and then you can run two things in parallel. Then there are the actions. An action is the code — it's the function. The entity in the programming model is called an action. You create actions, you invoke actions, you update your actions, you annotate your actions.
That's the core of a function-as-a-service platform: your actions run and they get you results. The simplest way to describe the API of an action is: it takes a JSON object in — in your programming language, Java, JavaScript, Python — and it outputs a result as JSON. It can be an empty JSON object if you don't care about the result, but it needs to put JSON out. In terms of supported languages out of the box — I'll talk a little more about the details — we support Node.js, which, looking at our analytics, is very popular. We also support Java; Swift 3, for people developing for Apple — Swift on Linux; and Python 3 was added, with Python 2 still supported. I know there's a big debate between Python 2 and Python 3; I'm not a Python guy. Then Docker covers anything else you have. If you have C++, Go, Rust — things that can be compiled — or a bash script, you basically define a Docker image, and the system will docker-run it and cache it. You pay a little cold-start latency penalty if you're using something that's not natively supported. But if you're running the open source project and deploying this yourself, you can make any language first-class, with a warm container, fully supported. If you want Go as a primary language, you can do it. Anything that runs in Docker — on Intel, that is. I've been talking to someone this week about supporting ARM; somebody told me there are ARM servers coming, and they're looking into contributing to OpenWhisk, so I told them I'm open to investigating and working together. For the invocation model, there are basically three. Fire-and-forget is the most common one: if you have data from IoT or from a message queue, changes from a database like CouchDB, or a webhook from GitHub — these are fire-and-forget.
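The JSON-in, JSON-out contract described here can be sketched as a minimal Node.js action. `main` is the default entry point the runtime looks for; the `numbers` payload and summing logic are made up just to illustrate the shape:

```javascript
// A minimal OpenWhisk-style action: `main` takes a single JSON
// object of parameters and must return a JSON object as its result.
function main(params) {
  // Hypothetical input: { "numbers": [1, 2, 3] }
  const numbers = params.numbers || [];
  const total = numbers.reduce((sum, n) => sum + n, 0);
  // The returned object is serialized as the action's JSON result.
  return { count: numbers.length, total: total };
}
```

If the caller doesn't care about the result (the fire-and-forget case), the function can simply return an empty object — but it still has to return JSON.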
Those are the non-blocking ones. Then we have the blocking ones, where you're calling it like a regular API request: you want the result back, and you want it fast. With those, you go directly to the action — you don't go through a trigger or a rule. You know the action you want to run and you want a response very quickly. And the last one is the periodic alarm; I'll go over an example of those. We also support sequences. You can have an action developed by you, or by somebody who shared an action in the system — you may not even know what programming language they used — but they could be giving you an action to talk to GitHub or to Slack. We have some system actions too. You can chain them as a sequence: an action takes JSON input and produces JSON output, and the parameters feed into the next action, and the next, and the next. I think we have a limit of 10, but again, if you run the open source version, you can change those settings. Another aspect of an action is parameter binding. These are default parameters that you set on your action, that you're not expecting the caller to pass. Often it's used for credentials or an API key that you don't want to put in your source code on GitHub — you bind them at deploy time. That's the configuration part of configuring your actions. Event providers, or event emitters, are daemons or notification systems that give you the events — they fire triggers. For GitHub, for example, it's a webhook. The alarm provider is a daemon service that basically implements a cron job and fires a trigger based on a cron expression. And we have open interfaces — basically, we have a REST API, so you can implement your own trigger source and fire a trigger whenever you want.
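The sequence idea — the JSON output of one action feeding into the next — can be sketched locally in plain Node.js. The two "actions" and the tiny pipeline helper here are hypothetical stand-ins, just to show the data flow:

```javascript
// Two small "actions": each takes a JSON object and returns one.
function extractTemperature(params) {
  // Pull out just the field the next step cares about.
  return { temperature: params.reading.temperature };
}

function formatMessage(params) {
  return { text: `Temperature is ${params.temperature}` };
}

// A tiny sequence runner: each action's output becomes the next
// action's input, which is how OpenWhisk sequences chain actions.
function runSequence(actions, input) {
  return actions.reduce((params, action) => action(params), input);
}

const result = runSequence(
  [extractTemperature, formatMessage],
  { reading: { temperature: 60 } }
);
// result.text: 'Temperature is 60'
```

Because every action speaks JSON-in/JSON-out, the actions in a real sequence don't need to share a language: a Python action can feed a Swift one.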
To fire a trigger, you just call the REST API, and that starts the chain of the sequence. These are some example providers — I think I've already mentioned them. The last one we implemented was Kafka. I heard this week there's a lot of interest in IoT, message queues, and burst loads from the field. In Bluemix the product name is Message Hub, but it's just Kafka — Kafka as a service. If you use the open source version, these providers are open source too, by the way: we have the alarm provider, CouchDB, and Kafka. Mobile Push and IBM App Connect are only available in Bluemix. Push notification is a service with an API to send push notifications using the IBM service. And GitHub is just a webhook — that's the simplest way to explain it. Granular pricing — I think that's the third dimension. I mentioned that with serverless you don't deal with infrastructure; the second one was that you change your mindset to dealing with events, an event-driven programming model. And the last one is pricing. Some other providers offer monthly payments, but in Bluemix — and this is not part of the open source project, right? If you run it on your laptop, you're not paying anyone — we charge per GB-second. It depends how much memory you give the action and how many milliseconds it runs, and then you pay — there are four zeros there, something like $0.000017 per GB-second. It's basically the same price as the other vendors. With Bluemix, the API gateway is free; other vendors charge for it. We have a calculator, by the way, and you can also just Google "serverless cost calculator" — it will tell you that you can do about five million invocations. I was doing a calculation today of something like 20 million invocations a month, with 128 MB actions running 500 milliseconds. That's enough to run for a while.
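As a rough sanity check on those numbers — assuming the 400,000 GB-second monthly free tier mentioned earlier, and that the billing unit is memory times duration — the GB-second arithmetic looks like this:

```javascript
// GB-seconds = invocations × memory (GB) × duration (s)
const invocations = 5_000_000;  // ~5 million invocations a month
const memoryGB = 128 / 1024;    // a 128 MB action = 0.125 GB
const durationS = 0.5;          // 500 ms per run

const gbSeconds = invocations * memoryGB * durationS;
// 5,000,000 × 0.125 × 0.5 = 312,500 GB-seconds,
// which would fit under a 400,000 GB-second free tier.
```

The exact free-tier and per-GB-second figures are the vendor's to confirm; the point is just that memory size and run time multiply, so halving either halves the bill.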
In terms of the architecture — this week I was excited because a lot of you were asking me: how does it get built? How do I deploy it? What's inside, versus just what can I build with it? The architecture is based on Docker containers; we encapsulate the services in Docker containers. We have nginx, which just does SSL termination and routes to the controller. The controller is built in Scala — it's a web server. It uses CouchDB for state management and Kafka for queuing. And then the invoker is another Scala web server, a container that is the worker; we have multiple invokers doing the work of running your functions. One thing I was telling folks this week: in IBM we have essentially the same deployment. There's no private code versus open source code — we take it as-is and deploy it. We just need to integrate with our Bluemix authentication, your IBM ID, so we have an authentication integration. And then we have Elasticsearch for monitoring. That's something the open source project is already looking at having in open source. We have it in Bluemix because we're managing a multi-tenant system, so we need the logs from the controller and the invoker to maintain the system. But we've found that folks want that Elasticsearch integration too, so the logs and results of your actions can go into Elasticsearch — then it's your data, and you can put Kibana or anything like that on top. The API gateway is the last major thing we did — also one of the GitHub projects Matt is going to discuss. It's about defining APIs that run your actions: you define operations and so on. The reason the team built it — and it's included for free in the Bluemix offering — was rate limiting. You want rate limiting, or API keys and API secrets, or CORS enabled by default — and along with the rate limiting, OAuth.
So they do OAuth token validation for things like Facebook, Google, and the third one is GitHub. This is a quick example of how you would define CRUD operations: you have customers, so you do a GET, DELETE, or POST, and you map each of them to a certain function instead of mapping them all to a single code base. So you can break your monolithic app into microservices, where each endpoint deals with something particular. This is an example — I promised you were going to see code. This is with the CLI; we have a UI in our Bluemix ecosystem, but in open source there's a CLI. You create an action — this one just returns a payload — then you define an API, and then you call it. I think we're missing the --web true flag in there; I'll have a demo showing that. Web actions — developers are bad at naming things, right? We didn't know what to call it. It's an action, but it's for web development, for URLs. Basically, it takes any regular, traditional action, and you put an annotation on it saying web-export, or web equals true. That gives you a public URL, and you can give that public URL to anyone. You can invoke it with any verb — DELETE, POST, PUT — and it becomes kind of your REST API. And inside the action — I don't know if we have it here; we should — this is where, as a web developer, you go: ha, finally, I see a web server. You have access to the request, access to the headers, and you can return headers. You have access to the request coming in, and you have control over the response. If you want to return a 302 or 301 redirect, you can. If you want to return HTML with a cookie, you can. Or just return JSON — you can.
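A web action of the kind described here can be sketched as a Node.js function that returns the full HTTP response shape — status, headers, body — rather than a bare JSON result. This is a sketch; the `legacy` parameter and redirect target are made up for illustration:

```javascript
// Sketch of a web action: instead of a plain JSON result, it
// returns an object describing the HTTP response to send back.
function main(params) {
  if (params.legacy) {
    // e.g. redirect old URLs to a new location (302)
    return {
      statusCode: 302,
      headers: { Location: 'https://example.com/new-home' },
      body: ''
    };
  }
  // Or return HTML with a cookie, as mentioned in the talk.
  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'text/html',
      'Set-Cookie': 'visited=true'
    },
    body: '<h1>Hello from a web action</h1>'
  };
}
```

The same function stays an ordinary action; it's the web annotation at deploy time that exposes it on a public URL and makes the platform interpret this response shape as HTTP.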
This is where some folks were asking: how do I do a web server with serverless? Basically, use web actions, where you have control of the HTTP request and response. This is part of the response — and this one is just an echo, don't get confused; this is the stuff you have access to from the request. You have the path, the headers, and the body — basically the only things you need from an HTTP request. Kong is another way of doing an API gateway. Since OpenWhisk is open source — open standards, REST APIs — we worked with them and they implemented a plugin. So if you have Kong in your shop and you want to use Kong to define APIs backed by actions or functions, you can do it. The Serverless Framework — has anyone heard of serverless.com, the Serverless Framework? It's a framework that helps you package and configure your actions, and they try to be vendor agnostic: you can target Lambda, you can target Google, you can target OpenWhisk — it's going to say IBM, but it's OpenWhisk; I've been telling them to remove IBM from the OpenWhisk name, because this is an open source project. So you can use the Serverless Framework to package your apps, and we also have wskdeploy, which we'll talk about — another way to deploy your apps as functions. So you have more than one vendor choice — and like we were saying, we're looking for users. PubNub — if you have a chat application; it's a data-stream service out there — did an integration too, again using the REST APIs, open APIs. Nothing specific about running in Bluemix or not; they're using the OpenWhisk REST APIs. If you want to connect PubNub to an instance of OpenWhisk running on your own VMs, you can just go ahead and do it.
In terms of community, developer tooling is actually the area where we need help, to grow the user base of people trying it, giving feedback, and making things better. We have an IDE plugin for VS Code, so instead of the CLI you can use the IDE to create an action, invoke it, and so forth. We have Node-RED: if you're doing IoT, some people use Node-RED to create workflows, and from those workflows you can define actions and invoke them at certain Node-RED nodes. I think we have time for a quick demo. I was going to do it live, but I'm running short on time, so let me play a few videos I recorded earlier. Let's do this one. I'll talk over it — I was going to say I don't want to make typos while coding, but there are typos in this video; it's raw. So I'll go over what's going on here. The rundown: you go to GitHub — we're now under the Apache organization — and the easiest way is to get an Ubuntu system with Vagrant: you git clone, cd into the folder with the Vagrantfile, and call the script with "hello". It takes a while to build all the containers; we're trying to optimize that, to make it faster to get a VM to give it a try. Once the hello command finishes, you have the CLI on your system and a Linux system running the Docker containers. As you can see, this is running locally on my laptop — it's a 192.168 address. So you have the same environment on your computer that's in the cloud, so you can develop, help in the community, and be a contributor. This is one of the actions that comes with the system, called echo; you test that out, it works. Then I try to create a function. This is Node.js: you create a function — those are typos in there, I know — and call it main; you can also export it with package.json. You need to return JSON, so I return an object.
Let me finish typing — come on. I didn't want to cut anything, so. You create a JSON payload and that's what you return; in this one I think I just say hello, and save it. Then, how do I send it to the system? You run `wsk action create`, give it a name, hello, and give it the source code — the JavaScript file. I think I made another typo there on the create — it was already created, so it failed, so I do an update; usually you either create it or update it. If you want to invoke it, you do `wsk action invoke` hello, and that runs the action and returns the result. `-r` says just return me the result; if I run it with `-b`, that's blocking, and I get all the metadata. It shows how many milliseconds it took — this one took six milliseconds to run, because the Docker container was already up and running; it was warm. You see a log — people ask about logs: you do console.log, and standard error and standard output are captured by the system and available later in the metadata under an activation ID. So I run it again: update the code with a console.log, invoke the action again with `-b` — or just grab the activation ID. That's your ticket. If you want to go back later and find all your activations, you basically pass the activation ID to the CLI, and you get when it started, when it ran, how long it took, and the logs — as you can see, an array of all the lines you logged. If you have an error, or an exception, you'll see that in the logs too — "undefined is not a function", right, the typical error. This one — see what I'm doing here — I just ran it multiple times. You can see the first run has some latency, because we do something with Docker: that's the cold start. The later runs go faster.
Let me explain the last one: parameters. How do I pass parameters to my function? They come in as an object, so here I'm using ES6 to destructure the parameters. It says "hello, Carlos" — I put the parameter in there and it comes through. When you invoke over the REST API, you pass the parameters as the JSON body of the request; in this case I'm using the CLI — that's the infra guy in me talking. One tip: if you want to learn what's happening behind the scenes, there's a -v flag you can pass, and then you can see what this HTTP client is doing — it's just calling REST APIs against the OpenWhisk system. If I pass Jim, it says hello Jim; hello Carlos, or Rich. With -v you can see how you'd call this with curl: you see the URL, your authentication token — it's basic auth — and how the parameter is passed in the request body. So far so good. I think that's good for the demo; let me see if it's finished. I think that's it. Let's go back to what actually happened there. Behind the scenes, your actions are sandboxed in a Docker container. When you do `wsk action invoke`, that equates to a docker run. The team did some optimizations — and is looking at performance even more. We found that using the Docker CLI commands was too slow; it wasn't giving us that low-level performance. So now we use runc, a lower level of managing containers, and we're able to pause and unpause a Docker container in single-digit milliseconds, so there's no overhead there. One thing about docker run is the penalty of downloading the image from Docker Hub — or, even if it's cached, creating the container from that image for the first time and getting it in memory.
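The parameter-passing pattern from the demo can be sketched like this — ES6 destructuring with a default, as the video shows (the greeting text follows the demo; the curl line is a sketch, with the real URL elided):

```javascript
// Parameters arrive as a single JSON object; ES6 destructuring
// with a default covers the case where no name is passed.
function main({ name = 'stranger' } = {}) {
  return { greeting: `Hello, ${name}!` };
}

// Roughly equivalent invocations, as the CLI and REST API send them:
//   wsk action invoke hello -r -p name Jim
//   curl -u $AUTH -H 'Content-Type: application/json' -d '{"name": "Jim"}' ...
```

Running with `-v` shows exactly this: the CLI is just an HTTP client doing basic auth and posting the parameters as the JSON request body.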
What we do for the languages we say we support is keep some images pre-warmed, ready to go, listening on a port — so there's a server running. That's why people joke about "serverless": there is a server listening in there. For Node.js, for example, there's a server in the Docker container ready for a function, but it's functionless — it doesn't have your code. The first thing that happens is an init, which gets your code initialized. And then the next requests coming into the system — a burst, or just REST API calls — keep the container warm, or hot I should say, and on every run after that you get single-digit-millisecond overhead. That's the kind of optimization that's different in OpenWhisk versus other systems. We're also looking at other orchestrators for containers — Kubernetes and Mesos — where it's not that trivial to get some workload, some action, to run on any container anywhere; we're doing every optimization we can. So the only thing missing is your code, and even once we have your code it's already in memory, so we can run it right away. That's why, with serverless, we're getting to the point where hitting a server and hitting a serverless API feel like the same thing — whoever's using it won't be able to tell. This is what I was saying before: we're working on ways to deploy the control plane — deploying OpenWhisk itself, the controller and invokers — with Kubernetes or Docker Compose, and seeing whether it makes sense, and how invokers can take advantage of container utilization with Kubernetes, Docker Compose, or Mesos. But that's another talk. If you're an expert in these areas, we're looking for contributors there, to either document it or work out how to orchestrate it for the project. We're open to other deployment options. What is serverless good for?
Let me see — I think that gets into use cases. People say anything, but if you have something that has to be up and running all the time with a persistent connection, then no. Somebody was asking me: I have one server and it's fully utilized, I only have one server — then this is not a good fit; you'd be optimizing something you shouldn't be optimizing. Other than that, if you have workloads you can split into smaller problems, smaller functions, it's a good fit. If you have burstiness — heavy load in a certain period, or heavy load you cannot predict — this is something that can scale. Of the users and applications being built out there, one I'm excited about and was helping with is Weather Gods. It's a mobile app using the periodic alarm provider, and it's also using Cloudant, or CouchDB. It sends push notifications in the morning and at night, with data about the weather. It needs to orchestrate those things, but it doesn't have to be running all the time — there's just a certain period when it needs to do the analysis on the data and push notifications to the users. Data processing is another of the use cases. If you insert information into CouchDB, CouchDB has a changes notification API, and we recently added support for filters. So if you want to listen to events coming from your database — say, when a customer gets inserted, run this function against that data, or fetch that customer — and maybe one customer bought a car and another bought a bicycle, and you only want to hear about customers that bought a car, you can define a filter and only listen to that data coming in. Another customer we worked with is SiteSpirit, I think it's called. They reduced their cost by 90%.
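A CouchDB filter of the kind described — only forwarding changes for customers who bought a car — might look like the function below. This is a sketch: the `type` and `purchase` document fields are made up for illustration; CouchDB filter functions receive the changed document plus the request object:

```javascript
// CouchDB filter function: the changes feed only emits documents
// for which this returns true, so a trigger wired to the filtered
// feed fires only for car purchases, never for bicycles.
function carPurchasesOnly(doc, req) {
  return doc.type === 'customer' && doc.purchase === 'car';
}
```

The filter runs inside the database, so uninteresting changes never reach the trigger provider at all — your function is only invoked (and billed) for the events you care about.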
They were paying a lot because they had servers running constantly, all the time. And by handling all that data and the bursts of submitted images differently, they made the application faster — 10 times faster. Their application is for the travel business: they need pictures, and those pictures have to be cropped and resized. It's kind of cool, because you can point at an area of the picture that you want to focus on, and it will crop the picture around where you're clicking, not just crop the whole picture. He has a lot of customers sending him images that need that processing, so he does it with OpenWhisk: the bursts just offload to OpenWhisk and get processed by his actions, using Node.js to process the images. Another example is processing checks. Santander did a POC with one of our OpenWhisk advocates where they were processing images, doing OCR recognition on the checks. It's something that happens at certain times — everybody gets paid maybe on the 15th or the 30th, or some people get paid on Fridays — and they were doing a lot of manual work to process those checks. Now they're using OpenWhisk in a serverless fashion so they can handle that burst. This week there was a lot of talk about IoT, data, and message streams. Message Hub, or Kafka, is another way of getting data into the system: data gets queued into a message broker — basically Kafka — and then you create triggers that run your functions against that data whenever it comes in. And the last one is what I was talking about: periodic. If you want to run a certain task at a certain time, every other week, or once a month — it basically supports cron syntax — you can have your function kick something off periodically. And what's becoming very popular is chatbots.
With the API gateway, you can back the APIs that get called when you type things in your channel. Slack has an API that can send those messages to an endpoint you define, where you can have OpenWhisk process them and maybe come back with analysis on those messages. One thing we were talking about: there are a lot of Apache projects using Slack, and there's a concern — or just an idea — of how do we get these messages onto the dev list, right? If it doesn't happen on the dev list, how do we know about it, how do we archive it? So we were talking about creating something generic that would work for every project, that you could install in your project's Slack — maybe a daily digest, or when a thread gets started, send an email to the dev list, create a new dev list thread. That's basically what a chatbot is: you communicate with it, or just send commands in Slack. You can create a bot that says, run me this backup process, or create me a Jira issue if you're using Jira, and the chatbot does it — but you need some backend, and that's just a function running in OpenWhisk. I think we have Q&A coming, but if you want more information, in terms of Bluemix you can get an account and use it. Let me show a demo of Message Hub in Bluemix, because I saw a lot of people interested, and this event provider — the trigger provider for Kafka — is open source; it's one of the repos Matt is going to point out. But I'm going to show it here with Bluemix: how would you implement that? Let's see if we can get this to full screen. In Bluemix, you can create a Kafka service, and here I defined — that's tiny — a topic called iot. If I go to OpenWhisk and the Develop tab — this is a nice UI for people getting started — I can create a function. What do I call it? take-data.
Yeah, so I'm taking data from Message Hub, from Kafka. I'm using Node.js, but you can use any language — you can use Python. I create the action, and then I edit the action to get the messages that are coming in from Message Hub, from Kafka, and process them. In this example — can I zoom this to 2x? Yeah, that's better — I take sensor data. So in this case you have a sensor, you have a temperature. I get the sensor data and I want to send a Slack notification. So it will take the data from Kafka, take the piece that it wants, which is the temperature, and create a text for Slack. Then I'll build a sequence where this action processes the data and outputs another object — the text — to the Slack action, and the Slack action will send me a message on Slack with the temperature of the sensor.

This is going very, very slow — let's see if I can do 8x. Yeah, so here I'm creating the demo: take data, then Slack. I'm running Slack with my account, my webhook from Slack. So I'm creating a sequence, and after I save it, I need to name it something. I think it says when-data-slack — again, we developers, that's our biggest problem, naming things, right? So I think I called it when-temperature-data-slack, and I kept messing with the name. I think we spend a lot of time naming methods and variables. So the sequence gets created; close it — that's a visualization of the sequence. Now, if I want to automate that sequence, how do I run it? I'm going to create a trigger. If I click "run the sequence," I can just run it from here and pass the temperature, which is 60, just testing it out. And here you can see all the activations of all your actions, and you can see the message that was sent to Slack through the sequence, via the trigger for iot. So I just put a message on the iot Kafka topic, and that was it.
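The first step of that sequence can be sketched as a small action: take the sensor payload that arrived from Kafka, pull out the temperature, and emit the `{ text: ... }` object that the downstream Slack action posts. The field names are assumptions, since the demo's exact payload isn't shown:

```javascript
// Sketch of the first action in the demo sequence: extract the
// temperature from the incoming sensor message and produce the
// text object a downstream Slack action would post.
// The `temperature` field name is an assumption.
function main(params) {
  const temperature = params.temperature;
  if (temperature === undefined) {
    return { text: 'No temperature reading in message.' };
  }
  return { text: `Sensor temperature is ${temperature} degrees.` };
}
```

Because each action in a sequence receives the previous action's result as its parameters, the Slack action only needs to know about the `text` field, not about Kafka or sensors.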
I think that's it — here's Slack, and there's the message that the temperature is 60. So I think we have five minutes for questions, if there are any; let me switch this back. Yeah, if you want to join our Slack, if you have questions on getting started or creating your first Java function, go ahead. We'll take a few questions.

Yeah — the question was, how do I handle persistence, or state? These functions only run for a certain amount of time — you can configure them up to five minutes — and they're stateless, so you have to write your application in a stateless way, using something like Redis, something like CouchDB or Cloudant, or MySQL. You need a connection to somewhere that you persist that data. Some serverless hackers cheat a little bit with caching: if a function has to do some work or fetch some data, it can check whether the last invocation left something behind and just reuse it — but don't assume it's going to be there. Your next invocation may land on that same container, or it may land on another one. If it's a step that can save you, say, 20 or 100 milliseconds on a function that runs 500 milliseconds, that trick can be worth it. Other than that, yeah, like you were saying, connect out to persistence. Another approach is that your whole app is the state: you can create a function that processes a record from one database — analyzes an image, does OCR recognition — and then that action puts the result into another database, or the same one, and that triggers another action to handle it. So you don't have to maintain state tracking whether this thing was done, then that thing, then I do this; just follow the programming model using sequences and triggers. Any other question? Deployment of your function? Yeah, yeah — so that UI is marketing, I guess; we use it to do demos.
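The warm-container caching trick described above can be sketched like this: state kept outside `main` survives only if the next invocation happens to land on the same warm container, so it must be treated strictly as an optional cache, never as the source of truth:

```javascript
// Sketch of the warm-container caching trick. `cached` lives at
// module scope, so it persists across invocations only when the
// platform reuses the same warm container -- never rely on it.
let cached = null;

function expensiveLookup() {
  // Stand-in for work worth caching (e.g. fetching configuration).
  return { loadedAt: Date.now() };
}

function main(params) {
  let fromCache = true;
  if (cached === null) {
    // Cold container (or nothing left behind): do the work and keep it.
    cached = expensiveLookup();
    fromCache = false;
  }
  return { data: cached, fromCache };
}
```

On a cold start `fromCache` is `false`; a second invocation on the same container returns `true` and skips the lookup — exactly the 20–100 ms class of savings mentioned above.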
In real production, you would use CI/CD, right? Something like Jenkins, something like GitHub. Matt has a cool project he was showing called wskdeploy. Adobe has another GitHub repo where, if you push something to your Git repository, a webhook calls an OpenWhisk action that packages your action and updates it. At the end of the day, you're just calling either the CLI or the REST API to submit your zip file for that action to update the code — or a JAR file for Java, or your Python zip file. So yeah, Jenkins, Bash — it's just a REST API, curl actually. But the way we do it, we use the CLI. I think the CLI is becoming the common denominator, so wrap the CLI with Gradle, wrap it with something else. The CLI is just your interface to the REST API, so you abstract yourself a little bit. Did that answer your question? Cool.

It's not there, but we're open to supporting it, and we can create a repo if somebody wants to invest time in doing that. For those types of things, you can use the pattern from Kafka, which is basically two APIs: when you create a trigger, you define that you want this trigger for MQTT, and then you make a second REST API call saying, hey, MQTT provider, start listening for these messages and fire this trigger — which basically just means sending an HTTP POST request to it. People have done MQTT through Bluemix by running a web app that fires the trigger manually, but natively, it would be nice to have that. The same thing for MySQL: I did a little research, and I think it's possible using the MySQL binary log — have a daemon that listens to everything that happens in that SQL database and then fires triggers. Also, I learned this week about RocketMQ — it would be cool to have RocketMQ firing actions on data streams. But as part of the open source project, nobody's working on it, so we're looking for contributors.
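As a sketch of what "just calling the REST API" looks like, here is a function that builds the HTTPS request for updating an action's code. The `/api/v1/namespaces/.../actions/...` route shape follows OpenWhisk's documented REST API, but the host, namespace, and auth key below are placeholders, and you should check your deployment's docs for the exact details:

```javascript
// Build (but do not send) the request that updates an action's code
// via the OpenWhisk REST API. Host, namespace, and authKey values
// are placeholders for illustration.
function buildUpdateRequest({ apihost, namespace, action, authKey, code }) {
  return {
    method: 'PUT',
    url: `https://${apihost}/api/v1/namespaces/${namespace}/actions/${action}?overwrite=true`,
    headers: {
      'Content-Type': 'application/json',
      // OpenWhisk uses HTTP Basic auth with a "uuid:key" credential.
      Authorization: 'Basic ' + Buffer.from(authKey).toString('base64'),
    },
    body: JSON.stringify({ exec: { kind: 'nodejs:default', code } }),
  };
}
```

Passing the resulting object to any HTTP client (or translating it to a `curl` invocation) is all a CI job like Jenkins needs to do — which is why wrapping the CLI or the raw API in a build tool is so straightforward.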
What's the difference between the OpenWhisk open source project and the commercial offering in IBM? Yes — so the difference is multi-tenancy. We take the open source project and we deploy it in our cloud, on our VMs. What else is different? Authentication: what I showed used our authentication, so if you're deploying OpenWhisk in your company, you'll need to implement that piece — how do I authenticate a user in my company — if you want to run OpenWhisk in your own environment. The other piece is Elasticsearch: in the proprietary system, since we're maintaining all these users, we use Elasticsearch for monitoring and logs. So if you deploy OpenWhisk yourself, you have to get the logs of the system out to some subsystem, and that's not available in open source yet — that's a part we're working to make available. Other than that, I think that's it; I don't know if Matt wants to add anything. Oh yeah, the UI: in the commercial offering you get the UI that I showed, for creating an action and linking the boxes together, and also the nice graphs about those things. Those are available only in the commercial version. In open source, you basically get what I showed with the command line interface, and the raw REST API.

Sorry? Yeah — so when you create the function, you pass a flag to set the limit, so you can set a memory limit. Basically it's a cap, and that's what you pay for. I think the max is 512 MB, and we're talking about increasing that to one gig. That 512 is today's cap in our cloud; if you use the open source version, just go to the Ansible scripts and change it to whatever amount you have.
But as a user, when you create your functions, you might create one with 256 MB, another with 128, another with 512, depending on what the function is going to do — whether it will handle a large buffer of data or not. For IoT, or for Kafka, you don't know how many messages you're going to get or how big each message is, so you may give it 512 megs. There's also the time limit, if you want functions to run for more than some number of minutes. Timeout, yeah, that's the one. Any questions or comments?

Yeah, yeah. For Java, it uses Gson — Google's JSON library — as the API. You create a JAR file; it has to have your Java code inside, and then you specify which class implements the function, and it will run. Any dependencies you have to build into that one single JAR, so you have to compile everything in. We have examples that show you how to do it with Gradle, but you'll need to adapt them, because maybe your function doesn't take Gson objects, so you have to adapt it to that API. But again, it's open source, so if you're using the open source version you can change that piece or adapt it. I was going to go to the docs — let me see if I have them somewhere here. Yeah: when you create your function, it's wsk action create with the JAR file, and you can pass another parameter, memory, saying how much memory you want to allocate to that function. And basically you're controlling the memory through the container: that memory field is similar to doing a docker run and saying, hey, this container should not have more memory than X, and then the JVM runs inside that. For Java it's very interesting — there's a Java proxy that takes your request and then runs your function. For Node.js, it's Node.js. We try to optimize for the language so the performance is better, so we have different proxies for the Docker containers. The isolation is through Docker.
So Docker is locked down with iptables and security settings, so you should not be able to break out of your container and access another container that's running another function. That's the level of isolation we have: we have a bunch of VMs running the functions, but the isolation is done at the level of the container. I think we've run out of time, okay. So I think that's it. If you want to — this is the same room, right, Matt? — if you want to stick around and learn more about the other tools, the other repos, stick around for the next talk. So, thank you for coming. Yeah.