So, good afternoon. My name is Kalanjee Bankoli. I'm a software engineer from IBM. I've been working with them for the past three and a half years or so. I started out in the Cloud Computing unit, and I've recently transitioned over to the Watson Artificial Intelligence and IoT division. Before I get started, I'd like to ask the audience: how many of you have heard of cloud computing? Okay, just a few of you. And how many of you have heard of serverless or event-driven architectures? Okay, just a few as well. I just wanted to get a bit of background on the audience. So, moving forward, here's a brief overview of what you in the audience will learn today. We're going to start out by talking a bit about cloud computing and how it has evolved over the past few years to make day-to-day development and operations much easier. Then I'll talk about serverless and event-driven architectures: I'll explain the basics of these architectures and exactly why there's been so much excitement and growth around them over the past few years. I'll also talk about how these architectures can help developers create cloud and IoT applications more quickly and efficiently. Finally, I'll finish up by talking about OpenWhisk. OpenWhisk is a new open-source project that we've recently released, so it's out on GitHub. Essentially, it allows users to consume some of these event-driven architectures: they can either deploy OpenWhisk locally, so on-prem or on their laptop, or they can use our hosted offering at bluemix.net. Okay, so moving forward. First, I'll set the background to explain exactly how these event-driven frameworks can be so beneficial.
So if we trace back and figure out what the developer's ultimate goal is, it's essentially to write some code and business logic and get it deployed to a production environment as quickly as possible. If we go back about a decade, the developer had many more tasks to deal with on the operational side. For example, they would have to provision and configure hardware, determine hardware capacity and availability, and so on. To address this, Amazon came out with their web services, which allowed developers to provision virtual machines and block storage on the fly, on demand, as an alternative to bare-metal machines. Next, container technology came along as a result of Docker being released. This allowed developers to package their code as separate microservices, and to package and distribute their code along with its dependencies in an image as needed. This microservices approach allowed larger monolithic applications to be broken down into smaller modules. Since these microservices are independent units, each can be tested, iterated upon, deployed, and even removed without really affecting the other units, so it was great for agile development and collaboration. The approach has become very popular in recent years, especially due to the rise of agile development processes: instead of developers all having to work on one monolithic code base, they can break up the application and just work on each isolated microservice. Microservices don't have to run in a container, they will work in a VM, but containers are easier to isolate and package.
So the main goal in each of these advances in cloud computing was, again, to decrease the amount of operational work required of the developer and let them get their code into production more quickly. Now I'll talk a bit about functions as a service, or serverless architectures. To set the stage, let's say you're a developer and you come up with the perfect app idea, say the next Uber or Snapchat. After solidifying the idea, you might spend a lot of time writing the app code and getting a prototype running. You'll probably have that prototype on a local development machine, your laptop or something like that, so you'll then need to get a server up and running for your application. Once you've got the server running, you put the code on it and send some friends a link to the beta. After the first week, something is probably going to go wrong: maybe your hard drive fails, maybe your Linux image has a new security vulnerability. There are just so many things that can go wrong. So you as the developer end up spending all your time firefighting and dealing with these day-to-day operational issues instead of focusing on the actual application code and the user experience. What the serverless model allows is for the user not to worry about maintaining these day-to-day operations: they don't have to worry about hardware crashes, scaling, updates, patching security vulnerabilities, and so on. It again lets the developer cut out all these operational distractions and focus purely on their code base. There is a bit of controversy around the term serverless, because obviously the servers still exist, but the idea is that the developer just doesn't have to worry about maintaining them.
Many of my colleagues actually prefer the term functions as a service. The idea is that the developer can write a stateless, decoupled, independent function and just upload it to a serverless engine. Once it's uploaded, the function can be manually called through an HTTP request, or it can be triggered off of a change in a service like a database or a message bus. There are similar offerings that have been out for a few years, called platforms as a service; some of the more popular ones are Heroku and Cloud Foundry. These essentially allow for the same thing: you write some application and just send your entire code base off to the service, which then handles dependencies, scaling, hosting, and so on. The primary difference here is that a platform as a service deploys the application in such a way that it's always up, idling and waiting for a request. The serverless model is different because it spins up portions of the application code on demand: these portions of code are executed in temporary containers, and those containers run the code, send off a response, and then get deleted afterwards. This serverless model isn't an IBM-specific thing; it's very popular in the community. Amazon, Google, and Microsoft Azure have all released their own solutions. The primary difference is that our particular solution is the only open-source one, and it can actually be run locally and on-prem. Okay, moving forward. With the rise of cloud-native applications, we've seen many types of workloads emerging that are natural fits for event-driven architecture, especially those that monitor and react to rapidly changing data sets. For example, there can be applications that take action every time a new order gets added to a database, so that would be a more traditional web app, like a web store or something.
In an IoT context, applications can also depend on methods that respond to changes in sensor data, so this can be something like a motion sensor, or a temperature passing a certain threshold. Other applications are built to analyze trends: these can be trends in something like social media, but they can also be in aggregated sensor data. You can run analytics against this aggregated data to detect anomalies and respond as needed. And finally, one of the more basic cases is tasks that only need to run on some set schedule, such as a cron job. As these newer types of workloads are being brought to the cloud, developers will find that there are quite a large number of challenges in moving their application from a local development machine out to a cloud production environment, and in making it fault tolerant, highly scalable, and so on. Many READMEs on GitHub or developerWorks articles tend to show a user how to get a single monolithic instance of their application up and running. But when they actually try to push the application into production, they'll need to start thinking about breaking their application down into containers or VMs. They'll also have to start thinking about load balancing and making their services highly available, so that they can effectively scale and recover from outages with minimal downtime. For example, if a particular region or data center goes down, their application should be able to recover from that quickly. There are products such as Kubernetes and Mesos that are supposed to address many of these concerns, but there's no one-size-fits-all method; it's a case-by-case basis, and it really depends on the nature of the application. Some of the more traditional cloud architectures, where things are always on and running, aren't always the right fit for every workload.
For example, you might have certain workloads that run on some set schedule, and other workloads that run in response to events that might not happen very frequently. In these cases, having your application on a virtual machine, a container, or a bare-metal system that's always up and running and consuming resources can end up being very wasteful and expensive. Rather than having these applications always actively polling and sending requests to see whether certain methods should be triggered and executed, an event-driven architecture can allow resources to be consumed more efficiently: essentially, they'll be consumed only as needed. Also, a side note: in this diagram, CF stands for Cloud Foundry, one of the more popular open-source platforms as a service. It's actually the base product behind our cloud offering, Bluemix. Alright, so since this event-driven model allows resources to be consumed more efficiently, by consuming them only as needed, it enables developers to take advantage of a much more accurate billing model. The idea here is that this cost model is built to charge the consumer only for the exact amount of time and resources their code is using, down to the millisecond, so they're really getting what they pay for. Now, we're not saying that the serverless model is the perfect solution in all use cases. For example, if part of your application is something like a UI, then a platform as a service might be better, because you want the UI always up and accepting requests. But serverless may be a better fit for some of the newer workloads. So if we look back at these emerging trends, the newer types of workloads, efficient resource consumption, and a more accurate billing model, we can see why these serverless and event-driven architectures are becoming so popular.
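To make that per-millisecond billing point concrete, here's a rough sketch of how such metering could be computed. The rate constant is an invented illustrative number, not actual Bluemix or OpenWhisk pricing.

```python
def execution_cost(duration_ms, memory_mb, rate_per_gb_second=0.000017):
    """Bill only for the exact time and memory a function actually runs.

    rate_per_gb_second is a hypothetical figure for illustration,
    not real pricing.
    """
    gb_seconds = (memory_mb / 1024.0) * (duration_ms / 1000.0)
    return gb_seconds * rate_per_gb_second

# A 100 ms run at 256 MB is metered as 0.025 GB-seconds; an always-on VM
# would be billed for every idle hour regardless of actual use.
```

The contrast with a VM is the point: a function that fires a few times a day costs close to nothing, while the VM hosting the same logic bills around the clock.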
And since there's such an increasing demand for these particular architectures, new solutions are coming out very frequently, and that's something IBM is very interested in being involved in as well. So a few months ago, in February actually, IBM finally pushed a prototype of OpenWhisk to GitHub. OpenWhisk is a platform that allows users to take advantage of event-driven architectures. The idea is that the user can write some snippets of code and just upload them to OpenWhisk. These pieces of code can be configured in such a way that they'll only be executed, or invoked, in response to certain events. For those of you who have been experimenting with AWS Lambda or Azure Functions, you should already be familiar with this concept; it's essentially the same model, but the primary difference here is that our solution is completely open source and can be run locally. Also, we're always looking for new features and feedback from the community. So let's talk in a bit more detail about the programming model. There are four main entities in OpenWhisk: triggers, actions, rules, and packages. A trigger essentially defines exactly which events OpenWhisk should pay attention to. It can be essentially anything: a database change, an incoming tweet, data coming in from various IoT devices, messages coming from a certain message bus, and so on. Custom triggers can also be created as needed. The developer will ultimately want to focus on implementing the logic to respond to these triggers by creating actions. Actions respond to triggers, or events; they're essentially functions or snippets of code that can be uploaded to the OpenWhisk action pool, like so. So this is an example of what a very simple hello world action looks like.
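The slide code itself isn't captured in this transcript, but a hello world action in OpenWhisk's Python convention looks roughly like this sketch; the greeting logic is just an illustration.

```python
# Minimal OpenWhisk-style Python action: a stateless main() that takes a
# dict of parameters and returns a dict. The platform runs it in a fresh
# container per invocation and discards the container afterwards.
def main(params):
    name = params.get("name", "stranger")  # parameters arrive as a dict
    return {"greeting": "Hello, " + name + "!"}
```

Uploading it would be something like `wsk action create hello hello.py` with the CLI, after which it can be invoked over HTTP or wired to a trigger.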
Essentially, it's just some code that's placed in a file, or in the OpenWhisk UI, and then uploaded to the OpenWhisk action pool. As you can see, there are quite a few languages that we support so far, and we're in the process of adding more as needed. Swift is actually a newer open-source language that was released by Apple, I'd say about a year and some months ago. Swift is becoming very popular; it's very easy to pick up, and it can be used as a front-end or a back-end language, or both. If there's a particular language we don't support yet, then you can package your code in a Docker image: you create a Docker image and upload it to OpenWhisk, and as long as that image follows a certain set of API rules and guidelines, it can be called like any other action. Actions can also be chained together and executed in a particular sequence, so you can reuse pieces of code and combine them according to the needs of your application. This allows developers to create applications in a loosely coupled fashion, which fits the popular microservices design, in which we have a lot of small pieces of code running as independent actions. Next, we have rules. Rules essentially tie everything together: they define the relationship between a trigger and an action. Given the right set of rules, you can either have a single trigger kicking off multiple actions in parallel, or you can have the same action being triggered by multiple events, so it's very flexible. And finally, actions and triggers can be bundled up into packages that can be shared and distributed using the OpenWhisk catalog. You can either use these bundles internally, or you can choose to publish them to the public catalog to share with other OpenWhisk users.
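The chaining idea above can be sketched in plain Python. The `sequence` helper and the two toy actions here are my own illustration of the model, not OpenWhisk's actual API: each action is an independent function taking and returning a dict, and a sequence pipes one action's result into the next.

```python
def split_words(params):
    # First action: break the input text into words.
    return {"words": params["text"].split()}

def count_words(params):
    # Second action: count the words produced by the previous action.
    return {"count": len(params["words"])}

def sequence(*actions):
    # Compose actions the way an OpenWhisk sequence does:
    # each action's result dict becomes the next action's parameter dict.
    def run(params):
        for action in actions:
            params = action(params)
        return params
    return run

pipeline = sequence(split_words, count_words)
```

Because each stage only sees a dict, either action can be swapped out or reused in another pipeline without touching the rest, which is the loose coupling the talk describes.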
Okay, so now that we have these basic concepts down, let's see how everything is tied together. The execution model starts with some event being picked up by the system, so that would be a trigger. The trigger should have some set of rules associated with it, and those rules dictate exactly which actions should be kicked off based on that trigger. Based on the relevant rules, some pieces of code are pulled out of the internal OpenWhisk action pool and called in response to that request. The actions are called and executed in temporary, ephemeral containers: the idea is that the code just runs, returns a response, and then the container gets deleted, so it's supposed to be very quick and temporary. So, how does OpenWhisk work? Each function that is submitted to the system gets associated with a particular REST endpoint, so you can actually invoke a function directly just by making an HTTP request. Such a request can be emitted by essentially any device that has internet connectivity: your laptop, your phone, a sensor on your IoT network, and so on. As an alternative to HTTP REST requests, we can use something we call feeds. A feed essentially monitors some service, such as a database or a message bus like MQTT. In these cases, every time a message comes in on a certain topic on your MQTT broker, or a record gets added to or removed from a database, an action can be triggered to respond to those changes. Now, going over the high-level implementation architecture: this is a simplified version to show exactly how OpenWhisk works internally. When a user sends out a request, which might come from the UI, the CLI, or an automated feed, they'll be targeting an edge machine, which essentially acts as a proxy. This is the primary endpoint for all of these requests.
So it acts as a proxy, and if the user calls an action, the request is forwarded to the controller, which is responsible for managing these API calls. When the request for an action comes in, the controller ensures that the request is valid: it checks the credentials and makes sure that the action actually exists. Once the incoming request is validated, the controller sends a message over the Kafka message bus, looks for which invoker machine is available, and calls that invoker; the invoker machine is where the action will actually be executed. The code associated with the action is pulled from the database, in this case a CouchDB database, which is open source. It's pulled out of the database, pulled into a Docker container, and executed within that container. Once the process is completed, a response is sent back to the user and the container is deleted. There is quite a bit of optimization applied to the invoker to ensure that the activation time, essentially the time between the event being triggered and the start of execution, is as short as possible. So just in summary, I want to stop here and reiterate the main benefits of a serverless or event-driven framework such as OpenWhisk. OpenWhisk allows for writing applications in a completely modular fashion. This flexible model is great for IoT because it doesn't require any registration from devices: as long as they have internet connectivity, they can trigger and respond to events. Generally we've found that the most efficient, or at least the easiest, way to integrate IoT devices into our OpenWhisk applications is to use a message bus like MQTT or RabbitMQ.
Each component of the application can be written in a different language, depending on what language is best for the task at hand, and furthermore these components can easily be shared, or pulled down from either public or private OpenWhisk catalogs. So these components can just be used off the shelf; they might be offered either in the OpenWhisk catalog or by the open-source community. Also, these computational tasks are outsourced to the cloud, so it's great for mobile applications: it makes them much more energy efficient, and it won't drain your battery as much. And finally, the developer doesn't have to worry about scaling or configuring servers, and they only have to pay for the resources they're actually using. So it's very granular, and you can essentially rip and replace modules without breaking a call stack. Okay, so that actually concludes my presentation; does anybody have any questions? So if you just want to play around with some hello world actions, the easiest way to get set up is to run it on your own laptop: you'll use Vagrant, and Vagrant will spin up a virtual machine, install Docker and all the prerequisites, and then set up the OpenWhisk components. Each component actually runs in a Docker container, so it's pretty easy. If it turns out that you like OpenWhisk and you want to scale up, then what you'll need is something like OpenStack, VMware, or AWS: the deployment will spin up a series of virtual machines, install Docker, communicate with Docker over its TCP port, and then go through and configure them.
So these are actually flexible scripts that are still in the works, but essentially you can run them either from your laptop, as long as you have internet connectivity to each of the virtual machine IPs, or you can stand up a bootstrapper VM within that IaaS and run it from there. Are there any other questions? So if you're making an HTTP request, the device will specify the action name in the URL, and it can also pass data using a JSON object. Does that pretty much answer your question? Yes? Yes. Yeah, so the way that will work: by default we have a few built-in actions, and if you're using OpenWhisk then you also have a Bluemix account associated with it, and this Bluemix account has a full catalog of different Watson services, so this might be speech to text, this might be language analytics, and so on. So basically you would set that up when you're creating the action.
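As a concrete sketch of that HTTP invocation path: the request carries the action name in the URL and the parameters as a JSON body. The code below builds such a request with Python's standard library; the host, namespace, and token values are placeholders rather than real endpoints, and the URL shape follows the OpenWhisk REST convention.

```python
import json
import urllib.request

def build_invoke_request(host, namespace, action, params, auth_token):
    # POST /api/v1/namespaces/{ns}/actions/{action} with a JSON body,
    # mirroring the OpenWhisk REST convention. Host, namespace, and
    # auth_token are placeholder values for illustration.
    url = "https://%s/api/v1/namespaces/%s/actions/%s" % (host, namespace, action)
    req = urllib.request.Request(
        url,
        data=json.dumps(params).encode("utf-8"),
        method="POST",
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", "Basic " + auth_token)
    return req  # send with urllib.request.urlopen(req) against a live deployment
```

Any device that can open an HTTPS connection, a phone, a laptop, or a gateway in front of a sensor network, can issue exactly this kind of request, which is the point the answer above is making.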
So this is essentially what our UI looks like: you would create an action, you would just write some code, and then from there you can specify exactly how that action would be triggered. Oh yeah, okay, so here's the catalog; these are all the various triggers that you can set up with your action. And no, you'll actually get everything except for the UI. Oh, if you spin up your own, then yeah, IBM doesn't get anything. If you're running it yourself, we would essentially get paid through the Watson or other catalog calls, but yeah, if you're running on your own hardware, you don't pay for that. Yes, actually, I believe so: there's James Thomas in the UK, and he talks a bit about how to create your own OpenWhisk feed, and he also has a variety of packages and various components in his own catalog. I'm not sure if I have a link here. Oh yeah, and this is essentially how, if you stand up your own instance, you would interact with the OpenWhisk deployment using the CLI. I'm trying to see if I have a link to his page. Oh yeah, okay, so that's how you would create a package. Any other questions? So, not yet, actually; that's going to be the next issue. I'm still talking with the OpenWhisk team to figure out how to go forward with that. At the moment, if you're setting up your own OpenWhisk deployment, you specify in the Ansible scripts exactly how many invoker VMs you want, so it wouldn't go out and just scale up and create additional virtual machines on its own. But we're in the process of setting up the ELK stack (Elasticsearch, Logstash, and Kibana), and based on the output there, you can monitor the amount of resources being used across your invoker virtual machines.

So it's half-baked; we're still thinking it out. But our hosted service on bluemix.net does that for you. So, I'm not entirely sure, because from what I know, the authorization information actually lives in CouchDB; that's what you communicate with if you want to create new users or change a password, namespace, and so on. Any other questions? So, from my understanding, Consul is a key-value service, and it keeps a log of the incoming requests, and it also keeps the status of the invoker machines. Oh, I see. Wait, so let me make sure I have your question correct: you're saying, essentially, if you create your own... I see, I see. So that, I'm not too sure about; I would have to talk to my manager, because there are a couple of things, like CouchDB, for example, which is one of the newer services we've added, and it's actually a service in Bluemix as well, so we would just have to work that out. I'm sorry, that doesn't really answer the question, but I'm not too sure. Did we ever consider deploying this model on an embedded device, not in the cloud? So, we have thought about that. In theory, it may work: if your embedded device is able to run Docker, something like the Raspberry Pi, then ideally it should work, because each of these components actually runs in a Docker container. The tricky part would probably be the invoker, just figuring that out. So it would essentially be the same model as the single-machine deployment, where you spin up a Vagrant VM and deploy all the components there. Ideally it should work; I haven't tested it personally, though.

Oh, okay, so it's entirely asynchronous. I'm actually in the middle of testing to see exactly how many actions you can run in parallel, but yes, you can run actions in parallel. Oh yes, it's stable, yes. Yes, that would be JSON. You don't necessarily have to pass a body, but if there are actual parameters that you want, let me see if I still have it up: essentially they would be passed as an argument here, and you can pull things out of the JSON object, so it would be that params object there. Also, if you wanted to play around with the OpenWhisk hosted version, you can just go to bluemix.net; you can set up your own account, and it will be free up to a certain usage, I believe. For example, with the Watson services, I think after the first thousand or two thousand requests, you start getting charged. So there are free tiers out there initially, but when you have an application that's actually running all the time and chewing up resources, that's when payment kicks in. Okay, so if there are no further questions, then that concludes my presentation.