My name is Andreas, and I'm working for IBM. I'm one of the technical product managers responsible for OpenWhisk. First of all, thanks for being here, thanks for having me, and thanks for listening to this talk, which I will give together with my colleague Andrei. At the beginning, Andrei will provide some motivation and talk about what he calls the industrial internet. Afterwards, I will explain the main concepts of serverless computing in general, and OpenWhisk in particular, to show you how the problems that Andrei motivates can be solved with these technologies. So with this short introduction of myself, I hand over directly to Andrei. It's all yours.

Yeah, greetings, everybody. I'm excited to be here, and I hope you are as well. It's my pleasure to open this breakout session, and the breakout sessions at the summit. You will probably see a lot of these slides about Altoros; we're a professional services company around Cloud Foundry. The reason Andreas and I decided to present at the summit is that Cloud Foundry functionality alone is not enough: you really take the most advantage of a cloud-native platform when it has not only a runtime, but also services, different infrastructures, and the possibility to run different kinds of applications. We will tell you why we need event-driven infrastructure. I will give you some very simple cases that demonstrate when we can get the most benefit out of it. We will talk about the requirements, what we expect from an event-driven infrastructure. Andreas will then introduce OpenWhisk and explain to what extent it addresses those requirements. We will describe how OpenWhisk can work with Cloud Foundry, and we will run a short, very simple OpenWhisk demo.

To start with: we often speak now about the industrial internet, the Internet of Things, Industry 4.0. What is this about? It's about the different devices that are connected to the cloud. And the devices can be different: some devices send their information to the cloud, some get information from the cloud, and some are bidirectional. But what they all have in common is that they don't have to be connected to the server all the time. In most cases, they just send a signal, get a signal, send a signal, get a signal. It might happen once a minute, maybe once an hour, maybe once a year. But quite often these signals, both outgoing and incoming, are important for business, for the security of people, for people's lives. So we need a reliable connection; however, we don't need those devices connected to the cloud all the time.

In general, how do we implement industrial internet connected devices? We have devices, and they connect to some compute infrastructure. In a very simple case, we have one device. What do you think it is? Any ideas? Yes, it's a physical Bitcoin; just imagine a physical implementation of Bitcoin connected to the cloud. So we have a device connected to a virtual machine or a container, and in some cases we need high availability for this device. How would we implement this in AWS or in Cloud Foundry? We would provision an application, and we would have a daemon running in this application, listening for incoming connections all the time.
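As a sketch of what that always-listening daemon looks like in code (a minimal illustration with a made-up port and protocol, not something shown in the talk):

```python
import socketserver

# The "always-on daemon" model: one process that must stay up and listening,
# even if a device only ever sends one signal a day.
class DeviceHandler(socketserver.BaseRequestHandler):
    def handle(self):
        signal = self.request.recv(1024)   # the device's rare signal
        print("received:", signal)
        self.request.sendall(b"ack")       # acknowledge back to the device

if __name__ == "__main__":
    # serve_forever() blocks here, consuming a VM or container 24/7,
    # whether or not any device ever connects.
    with socketserver.TCPServer(("0.0.0.0", 9000), DeviceHandler) as server:
        server.serve_forever()
```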
Again, I was talking about devices that don't have to stay connected and don't send information all the time. But in order to implement this functionality in AWS or Cloud Foundry, or using other traditional PaaSes or infrastructure services, you have to have a process that is listening all the time, because, again, sometimes our lives depend on this signal. And in some cases it should be highly available. So we have two instances of the application sitting there, waiting for a first signal, which may come today, maybe tomorrow, maybe once a day, maybe never.

Now let's imagine that we have two devices. The question is: do we have to grow our infrastructure to multiple containers to connect the devices? Do we really need an instance of a container or virtual machine per device, and also have it highly available? Then we have this big infrastructure just for two devices. Let's assume that we're smart and don't dedicate one machine per device: we can have multiple daemons sitting in one virtual machine, listening for incoming devices. This way, we can optimize the utilization of our infrastructure. So we cut down our infrastructure: we have two virtual machines or two containers, with multiple daemons running on them, waiting for connections.

Now let's imagine that we get an unexpected spike in traffic, with many, many devices, and we have to scale out to cope with this load. So we scaled out. At some point in time, traffic goes down, and we have this many devices; then this many. And now it's a real mess: we don't know which servers to shut down. Maybe these two servers, maybe those two. The utilization of the cluster goes down, but we don't know which device is connected to which server, and there will be several connections on this server, several on that one, several on that one. So if we just shut down the underutilized servers, we lose some connections. We have to keep this infrastructure in place. Traffic goes down and down and disappears, and only now can we destroy anything.

So we come to the conclusion that, in the ideal case, we need one container per device: if we have many devices, we have many containers; if we have few devices connected to the cloud, we have fewer containers. And if we don't address the issue of fast scalability, we need to keep this whole infrastructure provisioned all the time, waiting for incoming connections, sized for the maximum load, for the maximum number of devices that could be connected. That can be a quite costly endeavor. In the ideal case, we want zero infrastructure for zero connections, with compute following the traffic.

So now we come to the point where we need to specify the requirements for the event-driven infrastructure we want. As we have seen, we need to invoke and scale as fast as possible, ideally in a fraction of a second, so that when we have no infrastructure at all and a signal arrives, the infrastructure is provisioned to process the signal, and it happens fast. Then we want to terminate this infrastructure after the job is completed. Ideally, we want to be charged per compute used; we don't want to be charged for daemons waiting for connections. And we need reliability of the service: it needs to be highly available and self-healing.
Ideally, we want to be able to use different technologies to develop for this infrastructure, to launch microservices, and to connect to different data sources. It would also be good to have developers abstracted from the infrastructure, so that they don't have to manage this whole system: provisioning containers, taking them down, auto-scaling, load balancing, routing. Abstract them from that complexity. So now Andreas will tell you about OpenWhisk and how it helps to address all these questions.

Yep, thanks, Andrei. Before diving into the technology, I would like to show you how we started at IBM, because I think that conveys a very important message as well. We dived into the field of serverless computing in early 2015. OpenWhisk was initially born as a research project that we started at the T.J. Watson Research Center in Yorktown. But meanwhile, we have teams working on the OpenWhisk technology in development locations around the globe: we are developing OpenWhisk in Böblingen, Germany, in Raleigh, North Carolina, in Austin, Texas, and in many more locations. I think that conveys an important message, because it demonstrates that this is a very important effort for us, and that we regard this technology as having the potential to become a game changer for the future way of developing cloud-native applications. I think it's important to point that out.

But what the hell is OpenWhisk? Very crisp and to the point, a short one-sentence definition: OpenWhisk is an event-action platform that allows you, as a developer, to execute code in response to an event. With respect to what Andrei said, that could mean, for instance, that you can execute custom logic just because an IoT device has emitted an event. This event then acts as a trigger that kicks off application logic that you, as a developer, have written. That's what it does.

OpenWhisk is offered in two ways. You can get access to OpenWhisk on the Bluemix platform; that's where our commercial offering is running. As of today, you can just access the Bluemix platform, go to OpenWhisk, and play around with it directly. We have the CLI, we have the UI; just play around with it. But we also made it available as an open source project, hosted on GitHub. And of course, I would like to take the opportunity here to encourage you to really do that: please go to our GitHub site, have a look at what we are doing there, provide feedback, and feel invited, which would be even better, to participate and contribute, to accelerate the development of this open technology.

So what is OpenWhisk in a little more detail? OpenWhisk propagates a serverless deployment and operation model, which means it hides any kind of infrastructure and operational complexity, allowing you, as a developer, to focus on what you really want to do: quickly developing value-adding code. That's your main focus. If I had to put it in a slogan, it would be something like: you provide us code, we execute it for you, and you don't have to worry anymore about all these low-level details.
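As a minimal sketch of that slogan in practice, here is roughly the smallest possible OpenWhisk action, written in Python (one of the supported languages); the action and parameter names are invented for illustration:

```python
# process_reading.py -- a minimal OpenWhisk action. "main" is the entry point
# OpenWhisk expects: it receives the event payload as a dict and returns a dict.
def main(params):
    device = params.get("deviceId", "unknown")
    value = params.get("value")
    return {"message": "processed reading %s from device %s" % (value, device)}

# Registered and tested with the wsk CLI (names are illustrative):
#   wsk action create processReading process_reading.py
#   wsk action invoke processReading --blocking --param deviceId sensor-42 --param value 17
```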
We also guarantee you optimal utilization, where you do not have to pay for resources just idling around, as would have been the case in the old world if you went with VMs and so forth. And it inherently scales on a per-request basis, because at any point in time we provide you with the exact amount of resources, compute power, storage, and memory that you need to operate your application efficiently. OpenWhisk also provides you with a flexible programming model, where developers can develop in quite different languages, like Swift, Java, Python, or JavaScript, and they can even execute custom logic by running Docker containers in response to these events. We even support interweaving, or interconnecting, the little puzzle pieces that you have developed, in a declarative fashion, by doing things like chaining. All these parts of this flexible programming model allow your developers to reuse existing skills, so they do not have to learn new languages, for example, and to develop in a fit-for-purpose fashion, because they can tackle each problem they have been assigned using the best-suited technology.

But the best part is that the entire technology is open. The engine itself is open, and it's built on open technologies as well; we leverage things like Docker, Kafka, Consul, and so forth. Even the entire ecosystem around it is open. This ecosystem is comprised of event emitters, meaning services emitting the events that are supposed to kick off an action, and event consumers. All these event consumers and event emitters can even be provided by different vendors, which is what makes the ecosystem open. We also provide you with an open interface for event providers, which makes it even better, because it means that everyone, including you, can enable any service that has not been enabled before. So it's not only the engine that is open, it's also the ecosystem, so that we can end up with a lot of event emitters. And OpenWhisk itself has been implemented, purely for performance reasons, in Scala.

The question that remains, of course, is: how can this be better than a traditional model? I would like to explain that with a simple example. Assume you want to execute logic just because something has changed in a database service like Cloudant. How would you have done that in the past? You probably would have written a little application containing code that connects to your database and checks whether there was a change. This little application would then run on a VM or as part of a container; maybe you implemented it as a Cloud Foundry application, something like that. But then, due to the absence of a real event programming model, where the service itself can tell you that something has changed, you would have to do something like polling: you would have to ask the database over and over again, hey, has something changed? From a utilization perspective this is very poor, because it means the application is mostly just waiting for the next request to come in, while at the same time the underlying VM, for example, is still up and running, and you have to pay for all of that.
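For contrast, here is a sketch of that polling model against Cloudant's _changes feed (the account URL, credentials, and interval are placeholders; assumes the Python requests library):

```python
import time
import requests

def handle_change(change):
    print("change detected:", change)   # your business logic would go here

last_seq = "0"
while True:
    # Ask the database over and over again: "has something changed?"
    resp = requests.get(
        "https://my-account.cloudant.com/mydb/_changes",
        params={"since": last_seq},
        auth=("user", "password"),
    )
    body = resp.json()
    for change in body.get("results", []):
        handle_change(change)
    last_seq = body.get("last_seq", last_seq)
    time.sleep(10)   # idle most of the time, yet the VM is billed 24/7
```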
It's even worse from a scalability perspective: if you went for a VM, for example, then at the point in time you ordered your VM, you were bound to the capacity you ordered. But what if load increases? Then you need to answer the question: how, when, and how fast do I have to scale out? So what you would probably have to do, which again is not what a developer is interested in, is set up very complex auto-scaling rules that define when to scale out, for example because memory is running low or response times are degrading, all that stuff you as a developer are actually not interested in. So this is really about radically simplifying the development process by making it unnecessary to worry about these low-level things. Even worse, you also have to think about resiliency: if you want to achieve high availability, you need at least two processes, which is a kind of redundancy, and of course that costs money. And you probably also want multi-region deployments, which cost money as well. And keeping all of that running and healthy costs money again.

So OpenWhisk helps you to overcome these drawbacks. What we have is this little trigger here, which might be the event emitted by a service like Cloudant. This little event arrives at the OpenWhisk engine, and in the OpenWhisk engine we determine the right action to be executed. The action, by the way, is the little piece that encapsulates the application logic you, as a developer, have written, in any of the languages we support. And then the magic happens: we are able to deploy these little actions very quickly, in milliseconds. We run the action, take the response, send it back, and free up the resources again. That means nothing is idling around anymore: we have 100% utilization. And we have a real event programming model, because the trigger is telling us that something has changed; we are not polling anymore. Even better, we can scale inherently, because we can parallelize the deployment of these little actions. If the load increases, we just deploy more of these actions; if the load decreases, we get rid of some of them. So we always have exactly the number of actions that we really need. And of course, you don't have to worry about resiliency anymore, because that entirely becomes our business.

So how does OpenWhisk work behind the scenes? The events that cause the actions to be kicked off are emitted by what we call event providers. A typical event provider can be Cloudant, which I've just talked about, or a push notification service, something like that. These can be services running on Bluemix, but of course they can also be services running outside of Bluemix. And as I've already told you, if you have a service that has not yet been enabled, that is not yet emitting events that OpenWhisk understands, you can do that on your own, because we have that open event provider interface. If you are a service provider, please do that; that's exactly what I would like to encourage you to do here today. Anyway, when the event arrives at the OpenWhisk engine, we have something that we call a rule, and the rule tells the system: if this event comes in, then please execute this little action, implemented in this particular language. What you can also do is invoke an action in a more direct fashion, by just making an API call, namely a REST API call.
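Wiring a trigger to an action through a rule is essentially a couple of those CLI one-liners; a minimal sketch with hypothetical names:

```python
# handle_event.py -- the action body; the trigger's payload arrives as params.
def main(params):
    return {"handled": True, "payload": params}

# Wiring with the wsk CLI (all names are illustrative):
#   wsk action create handleEvent handle_event.py        # register the action
#   wsk trigger create deviceEvent                       # declare a trigger
#   wsk rule create deviceRule deviceEvent handleEvent   # on trigger, run action
#   wsk trigger fire deviceEvent --param value 17        # simulate an event
#   wsk action invoke handleEvent --blocking             # or invoke directly,
#                                                        # which also works via REST
```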
So imagine you have a little web application or mobile application that is supposed to list a set of customers. What you would probably do is have a little button in the web or mobile application. Once it is clicked, an API call is made, and it ends up at the OpenWhisk engine. We determine, once again, the right action to be executed, and that action contains the code that connects to the database, fetches the right subset of customers to be displayed, and hands it back to the web or mobile application.

Coming to the programming model, before I hand back to Andrei, who will then demonstrate this with an example: it's a very, very simple programming model, because we want a very low entry barrier. On the one hand, we have the services that emit the events as triggers. But the only thing the developer has to take care of is really implementing these little actions in the languages we support; that's all he has to do. And then there's one additional one-liner in our CLI: he has to associate these little triggers with these actions, so that the system knows: if this trigger comes in, please execute this action.

Triggers are actually nothing other than classes of events that can happen. We have already looked at one: events can be emitted by database-centric services just because something has changed in the database; maybe data has been updated, deleted, or something like that. With respect to what Andrei explained, it can also be an IoT service that emits an event just because an IoT device has sent some particular data. It can be an analytics service: maybe there is a service that is continuously scanning a Twitter stream and has just detected a trend, and because it has detected a trend, it emits an event, and that fires off an action containing the logic that is supposed to be kicked off. Or it can be a simple service like Git emitting an event just because there was a change in the Git repository.

Actions are just event handlers containing code; that's what I already mentioned. To foster reuse and make it possible to change behavior quickly, we also support higher programming constructs, for example sequencing; there is a small sketch of this below. What you can do is define one action that, when invoked, invokes a concatenation of other, already existing actions. For example, you can have an action AA that, when invoked, invokes A1, A2, and A3. You can have a similar action AB that invokes the same actions, but in a different order. And of course, you can create many more actions by using that sequencing concept to interconnect these little puzzle pieces in a different order, with an additional step, with a step removed, and so on. And then you have, as I've already said, the rules that simply associate triggers and actions.
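To illustrate the sequencing construct with a sketch: using the CLI's --sequence flag, the AA and AB actions from the example can be composed out of three existing actions (each action lives in its own file; the names mirror the talk):

```python
# a1.py -- each action appends its name to a "steps" list and passes it on,
# so the same pieces can be chained in any order.
def main(params):
    return {"steps": params.get("steps", []) + ["A1"]}

# a2.py
def main(params):
    return {"steps": params.get("steps", []) + ["A2"]}

# a3.py
def main(params):
    return {"steps": params.get("steps", []) + ["A3"]}

# Composed with the wsk CLI's sequencing support:
#   wsk action create AA --sequence A1,A2,A3   # yields {"steps": ["A1","A2","A3"]}
#   wsk action create AB --sequence A2,A1,A3   # same actions, different order
```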
One last comment before I hand back to Andrei, who will then demonstrate this technology: we also provide you with what we call packages, which are shared collections of triggers and actions. Just to give you two examples: there is the Cloudant package, which we have already talked about. There we have, for example, the trigger called changes, and you can configure that package against your Cloudant database so that this trigger informs you when something has changed in your database. Then you can say: okay, if this trigger fires, please execute this particular action, and this action contains the application logic that you have written and does whatever you want it to do. Or there's the IBM Watson package, where you see there are no triggers, only actions. What you can do here is, for example, invoke the translate action, hand over some text, and translate it without writing any code. You don't need to understand anything about Watson; you just hand over some text and can translate it from English to French, for example. Of course, we once again would like to encourage you to write these kinds of packages for other services as well, to make integrating those services easier.
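As a sketch of how these packages are used (the package paths and parameter names follow the early Bluemix catalog and may differ in later versions; host and credentials are placeholders):

```python
import requests

# Invoke the Watson package's translate action over the REST API,
# without writing any Watson-specific code.
APIHOST = "https://openwhisk.ng.bluemix.net"
AUTH = ("user", "password")   # your OpenWhisk auth key, split at the colon

resp = requests.post(
    APIHOST + "/api/v1/namespaces/whisk.system/actions/watson/translate",
    params={"blocking": "true", "result": "true"},
    json={"translateFrom": "en", "translateTo": "fr", "payload": "Hello"},
    auth=AUTH,
)
print(resp.json())

# The Cloudant package's "changes" feed is wired up with the CLI, e.g.:
#   wsk trigger create dbChanged --feed /whisk.system/cloudant/changes \
#       --param username USER --param password PASS \
#       --param host HOST --param dbname mydb
#   wsk rule create onDbChange dbChanged myAction
```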
So I hope I could motivate you a little bit to at least have a look at this technology. If you want to try it out on your own, and I would be very, very interested in your feedback, don't hesitate to contact me via Twitter or via mail; just don't call me, please. Please try it out: go to Bluemix, where you will find OpenWhisk very easily, and go with the UI first, because that's the simplest way to enter this area. Also go to our developer center, where you'll find our open source offering and a lot of resources. There you'll find all the events we have attended in the past and the events we will attend in the future, you'll have access to recorded sessions we did before, and you'll find our YouTube channel, where we have a lot of samples, all these things. There's a lot of information there, including my contact data, so if you go there, just get in touch with me if you have additional questions. We are really, really interested, because this is still a beta program; we are still learning from you guys, from our customers, from our partners. So do not hesitate to reach out. I will be around for the rest of the day; if you see me out there, just get in touch. With that being said, and after having stolen too much time from Andrei already, I hand back to you.

Yeah, okay, enough with the theory, let's get to practice. I need volunteers, those of you who have Android devices, Android phones. What I would ask you to do is download an application from here. Nothing too scary; it will just steal your Facebook password. Maybe Twitter too, I'm not sure. But yeah, nothing dramatic. Please download the app and install it; I will pause here for ten seconds. Once you have installed it, you will see something like this here. Yeah, and the demo is actually boring because, right, no, no, no, no, no, what was that? Yeah, that's a spoiler; somebody just started doing something before it was officially announced. So in that sense the demo is boring, because you have one device, you have another device, and something happens between those two devices. So what I will do is shake my phone, and you already know what will happen: we'll have somebody else clapping. So what do we have here? On the right side, we have the device, my phone, or other phones. And here we have a mainframe; every IBM technology requires a mainframe, so this is an endpoint mainframe made from a Raspberry Pi, not from IBM. And here is a speaker, with the volume set up as instructed.

And what we see here: the phone, the device, and OpenWhisk. Now, in order to use OpenWhisk, we used the Bluemix service, and I will show you that. How long does it take to invoke this action? I switch to the Bluemix dashboard, where you can find OpenWhisk, and I refresh the results. You see that some actions took 700 milliseconds and some took 6 milliseconds, so at maximum we have 700 milliseconds. And that is the time it really takes to invoke an action, process it, and terminate. So when IBM puts OpenWhisk into production, they will charge you for at most those 700 milliseconds per invocation of this application; for the rest of the time, no payment is required. As I said at the beginning of the talk, the signal may come once a day, maybe once a minute, but it's still much more efficient than paying for infrastructure sitting there and waiting for the signal all the time.

Okay, coming back to our presentation: I could spend the next 20 minutes explaining what is displayed on this diagram, but what I would actually suggest is to read this article; there is a detailed explanation of what's happening and how OpenWhisk works under the hood. Please take a picture, or note that the presentation will be available in some time on SlideShare and on the Cloud Foundry Summit website. There's another diagram that you will find in this article.

And now you will ask me: okay, this is the Cloud Foundry Summit, not the OpenWhisk Summit, so where's Cloud Foundry? Here is how OpenWhisk can work with Cloud Foundry. I really think that OpenWhisk can be a valuable component in a cloud-native platform, because it addresses a very important use case, and it addresses it quite efficiently. In this diagram, you see the applications sitting in the OpenWhisk infrastructure, and here are the applications that are deployed in Cloud Foundry. We absolutely love Cloud Foundry: it's perfect for deploying applications and making them scalable and highly available, with authentication, authorization, and policies. OpenWhisk can be deployed alongside Cloud Foundry with BOSH. It can accept connections from external devices, like a phone or a Raspberry Pi, and it can also receive signals, triggers, from applications running in Cloud Foundry, like the examples Andreas gave with Twitter streams or GitHub notifications; those applications can send triggers to OpenWhisk. OpenWhisk, in its turn, can take advantage of the service farm of Cloud Foundry: in this example, it may work with a non-relational database that doesn't support ACID transactions, through a message queue like RabbitMQ or Kafka, and it may have adapters for a relational database like MariaDB. It can also be integrated with the UAA component of Cloud Foundry, so that there is a transparent authentication process across all the components of the platform.

However, maybe you want to avoid vendor lock-in, or you have your data in a local data center, or you implement a hybrid strategy and don't keep your information in Cloudant and don't use Bluemix at all; or you need to have your compute close to your devices at the network edge, again not in the cloud; or you are concerned about security and think you can implement your infrastructure better than IBM. Then you don't have to use the IBM service or Bluemix.
The good news is that IBM has open sourced OpenWhisk, and the source code is freely available on GitHub; please join, collaborate, and download it. You can deploy it on your own infrastructure, on OpenStack or VMware, and you don't have to be tied to Bluemix or to IBM. We're done. We'll be happy to hear your questions. Thank you.