We're going to be talking about a new Apache project called OpenWhisk. Is this a new project to you? Did anyone hear about OpenWhisk? Cool. All right. So it's not really, really new. OpenWhisk is an Apache incubating project. It's a serverless, open-source cloud platform that executes functions in response to events. So why is it that we're at a Mesos conference, managing servers, and yet we're talking about serverless? I have three factors that I think are important when we think about serverless. I would say the first driving factor for serverless is user experience. We're probably still familiar with the old times. Once upon a time we had these big desktop machines, and the user experience was limited to that desktop machine. Then we had modems, a little bit of connectivity, and big applications in the back end. Then mobile apps came into the picture. Really cool. We ported some of the desktop capabilities to mobile, and on the server side it also made sense to split the monoliths into smaller pieces. I believe that starting this year we're seeing something new coming, driven mainly by voice, AR, and VR. I think these three are probably going to dominate the next generation of user experiences. One particular thing about voice is that the interactions are shorter. You could be in your hotel room and speak with a digital assistant: check on the weather, check on local events, see which restaurants are popular, make a reservation, maybe call the reception desk. Short interactions. But in those few examples I was able to touch so many systems with my voice: weather, reservations, events, hotel services. So voice is pretty magical. But not all experiences are real time. Imagine we're at an event and we take pictures with our phones. Those pictures are asynchronously copied into the cloud and backed up.
And once they're in the cloud there's a whole process happening. We have workers that can look at the pictures. The events are queued, and from the queue a microservice consumes them, processes them, and maybe analyzes the pictures: where were they taken, face recognition, friends, and so on. So the point I'm trying to make is that the richer and more diverse the user experiences, the more complex the services in the back end become. And I'm really building a case for serverless here, if you bear with me. The first factor: user experiences. The second one, and we're familiar with this one: the time it takes to provision compute capacity. We went from days, if not more, to provision bare-metal servers, to minutes for VMs, probably seconds in some cases, and then to containers. And using Mesos, I'm sure we're all very familiar and excited with containers. Serverless is just one more step forward in how long it takes to provision resources. And we're going to look at some real demos this evening. The last point I want to bring up is product extensibility. This is not generally talked about with the general-purpose serverless platforms like AWS Lambda or Azure Functions. But there are cases, and Adobe falls under this case, where we have a platform, we have services in the cloud, and we want to make it easy for developers to extend them. We started with APIs, like everybody else. APIs are great. Then we got into the real-time aspect. We realized that there are events: creatives put pictures into the Creative Cloud, and those generate events. So we said we're going to offer not just APIs but webhooks, and they're great too. But webhooks, even though they're simple, still require you to care about how you handle the webhook and the event.
And serverless provides an answer to this. Serverless gives not just Adobe, but whoever wants to install such a platform, the capability for developers to write a piece of code, extend the platform, and integrate it with any other SaaS systems they're interested in. After these three factors, I'll say that modern apps have a richer UX, are smaller and composable, and have this real-time aspect. And without further ado, I'm going to jump into the first of the four demos we're going to run today. I'm going to show an example with Adobe Analytics, and I'm going to use my voice to get some data out. Of course, I'm at the mercy of the internet, so I hope it works. I have an Alexa client on my phone, an application called Reverb, and I'm going to have a dialogue with Alexa. I won't ask whether you're using Adobe Analytics; let's just say somebody happened to install Adobe Analytics on my web page, and now I'm curious how many page views or visitors I got last week. Without being skilled in the user interface, I'm just going to ask Alexa. "Alexa, ask Adobe Analytics." "Welcome to Adobe Analytics. Which report suite would you like to use? Summit Demo 2017, template report suite?" "Demo." "OK, using the Summit Demo 2017 report suite. How can I help you?" "How many page views last week?" "The total number of page views is 1,658." "How many visitors last month?" "The total number of visitors last month was 11,799." "Thank you." All right. As you can see, I just showcased some very simple things we can do with serverless. And this demo runs on OpenWhisk, by the way. I demoed a very simple interaction, and in real time I was able to get an answer back from a very complex system. This is what happened: I gave a voice command, it went to the Amazon Echo, and it circled back through Adobe I/O, and I got a response back. Now, let's look a little bit inside Apache OpenWhisk.
And I'm going to hand the microphone over to Tyson, my colleague. We both work on a team at Adobe called Adobe I/O. We're the interface between Adobe services and third-party developers. Thank you. So I want to talk a little bit about what OpenWhisk is, how it works, and the steps we're taking to leverage Mesos underneath it. Some of this is marketing-speak for OpenWhisk, but: Apache OpenWhisk is a serverless, open-source cloud platform that executes functions in response to events, in Docker containers. Part of what it provides is a command line and an API for function management. So really, it gives you the vehicle to deploy pieces of code, whether they're small or large, into Docker containers in a generic way. I want to talk about some concepts and terminology used in OpenWhisk, to outline how we've started leveraging Mesos within it. This is a simplistic architecture diagram of how OpenWhisk operates without Mesos in the picture. Coming in from an internet connection, the first entry point into the OpenWhisk system is the controller. This is the execution workflow, not the function-administration workflow. For execution, events come in and are queued in Kafka. Eventually they're processed by a component that OpenWhisk calls the invoker. The invoker natively speaks to the Docker client and launches containers, executes functions based on the request, and submits the response back to Kafka. The response is picked up by the controller and sent back to the client if somebody is waiting, or just kept if nobody is waiting for it. So this is fine, but let's talk about OpenWhisk scaling.
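Before moving on to scaling, here's a toy model of the execution path just described: the controller enqueues an activation, the invoker picks it up, runs the function, and posts the result back on a response queue. This is only an illustrative sketch; the real system uses Kafka topics and Docker containers, not in-memory arrays, and all the names here are made up.

```javascript
// Minimal stand-in for a Kafka topic.
class Queue {
  constructor() { this.items = []; }
  publish(msg) { this.items.push(msg); }
  poll() { return this.items.shift(); }
}

// "Controller": accepts a request and queues an activation record.
function controllerSubmit(activations, fn, params) {
  const id = `activation-${activations.items.length + 1}`;
  activations.publish({ id, fn, params });
  return id;
}

// "Invoker": consumes one activation, runs the function,
// and publishes the result for the controller to pick up.
// (In the real invoker this is where a Docker container runs the code.)
function invokerStep(activations, responses, actions) {
  const msg = activations.poll();
  if (!msg) return;
  const result = actions[msg.fn](msg.params);
  responses.publish({ id: msg.id, result });
}

const activations = new Queue();
const responses = new Queue();
const actions = { hello: (p) => ({ greeting: `Hello, ${p.name}` }) };

controllerSubmit(activations, 'hello', { name: 'MesosCon' });
invokerStep(activations, responses, actions);
console.log(responses.poll());
// → { id: 'activation-1', result: { greeting: 'Hello, MesosCon' } }
```

The decoupling through the queue is what lets invokers be added independently of controllers, which is exactly where the scaling discussion goes next.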
In order to add an invoker, this whole invoker-host block in the diagram gets duplicated, another topic gets created on the Kafka broker, and the controller has to keep track of how many invokers are in the system and what actions have been executed where, so that there can be some amount of optimization in routing. It's not easy. As I was just saying, when you add an invoker, it advertises itself in Kafka, and that's how the controller discovers it and monitors its health, via Kafka. But we're all here because we're a Mesos shop, and we really don't want competing cluster resource managers. In OpenWhisk, the invoker is effectively a resource manager that considers itself the owner of all Docker containers on any particular host. That sounds very familiar to Mesos operators, who know that when a Mesos agent is on a host, it typically operates in the same fashion, considering itself the owner of all the Docker containers on that host. So we don't want competing cluster managers; we don't want competing container managers; we want to use Mesos to manage the cluster. And yes, we can. We just have to make some minor changes in OpenWhisk to do this. I want to go into some details on how we're doing that. First of all, it's important to know that the OpenWhisk components are written as Akka applications. This is helpful because it makes it easier to decompose the application in a way where we can use messaging to drive the interactions between the components. So the step we're taking is: we want OpenWhisk to launch the Docker containers, but do it via Mesos. What we're going to do is use a Mesos actor inside the Akka applications that OpenWhisk runs for the controller and invoker components. The first question is: where do we find a Mesos actor?
After looking at some of the existing clients, dealing with things like managing libmesos and revving the client with Mesos versions, it becomes a little disheartening to try to build one. It's the classic it's-difficult-to-build-a-Mesos-framework story. So what we did was start from scratch on a Mesos actor that behaves according to the Mesos scheduler HTTP API. What we end up with is a single actor that encapsulates the interactions with the Mesos HTTP API. We can drop that actor into an Akka application and have that application behave as a Mesos framework with very little work. Let me go through some details on how we built the Mesos actor. This is a diagram, outside of OpenWhisk concepts, of how the Mesos actor behaves. Some of these concepts will look very similar to the scheduler Java API and the scheduler HTTP API. In general, the lifecycle of the actor begins with a subscription: you send the actor a subscribe message. Once the subscription is complete, you can send the actor a task-submission message. You receive task-state messages as tasks change state within the Mesos system. And then you can submit a delete-task message and tear down the framework. So you can see how interacting with the Mesos scheduler API really becomes an exercise in Akka messaging at this point. I'm going to go through a short demo of the Mesos actor. Thank you for holding the mic. What we have over here is my Mesos cluster, and you can see I don't have any active tasks. Over here in IntelliJ, can everybody see this OK? I have a simple application; I just called it "sample framework". My code for starting up just instantiates the actor, gives it a framework ID, points out where the Mesos master resides, and gives it a failover timeout. Very simple. Once it subscribes, I want to receive a subscribe-complete message, and after I subscribe, I just launch some tasks here.
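The subscribe and task-submission messages just described map to JSON calls against the Mesos v1 scheduler HTTP API (POSTs to `/api/v1/scheduler`). Here's a sketch of the two payloads; the field shapes follow the v1 API as I recall it, so verify them against the Mesos documentation for your version before relying on them.

```javascript
// SUBSCRIBE: registers the framework with the Mesos master.
function subscribeCall(name, user, failoverTimeoutSeconds) {
  return {
    type: 'SUBSCRIBE',
    subscribe: {
      framework_info: { user, name, failover_timeout: failoverTimeoutSeconds },
    },
  };
}

// ACCEPT an offer with a single LAUNCH operation for one Docker task.
// Resource values (0.1 cpus, 128 MB) are illustrative defaults.
function launchCall(frameworkId, offerId, agentId, taskId, image) {
  return {
    framework_id: { value: frameworkId },
    type: 'ACCEPT',
    accept: {
      offer_ids: [{ value: offerId }],
      operations: [{
        type: 'LAUNCH',
        launch: {
          task_infos: [{
            name: taskId,
            task_id: { value: taskId },
            agent_id: { value: agentId },
            resources: [
              { name: 'cpus', type: 'SCALAR', scalar: { value: 0.1 } },
              { name: 'mem', type: 'SCALAR', scalar: { value: 128 } },
            ],
            container: { type: 'DOCKER', docker: { image, network: 'BRIDGE' } },
          }],
        },
      }],
    },
  };
}
```

The actor's job is essentially to translate Akka messages into these calls and to turn the event stream coming back from the master into task-state messages.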
And this is the piece that's just not that easy to do with the scheduler API in a lot of ways, but we've encapsulated the HTTP stream interaction behind this other Mesos client actor here. When I run this, I'm just running it locally, and what will happen is that over in my Mesos cluster, my application, behaving as a framework, will start launching tasks. The way the application is written, it simply launches some tasks, waits a few seconds, and then kills some tasks. But you can see how far we've come in consuming the Mesos scheduler API in a few short steps. Now my tasks just got killed, my application completed, and it tore itself down as a framework inside of Mesos. So that's our really simple example. The next thing we want to go through is: it's great that we can launch tasks in Mesos now, but that's not really a reliable framework yet. Some of the things we're adding deal with highly available frameworks. We know that when the framework is running and it crashes, disconnects, or goes through a partition, we need to fix that and have a new instance become the manager of the tasks this framework has launched. So the next thing I want to show is a different version of the sample that deploys an Akka cluster application. We're going to use Akka clustering to establish who the leader is and who should be managing the tasks at any given time. As a cluster, these instances work together so that if the leader becomes disconnected, another member of the cluster takes over, giving some continuity in the task management for this particular framework. So when I launch this application... now my Akka cluster is coming up. And I can see that my cluster has started. If I look at these tasks, I can see that one of them will be the leader and is subscribing. This one has the most log messages.
We happen to know that that's a good indicator it's now receiving offers from Mesos. So that's great; it now has the ability to launch tasks. But what if it crashes? If we go back, this is 11014. So we go over to Marathon, choose it, and say: let's just kill that guy. Let's see. OK, now Marathon is taking over, relaunching one of our instances. And now if we go back to the Mesos UI... oh, now we have four instances running. I'm not sure how we ended up with four. But we can come back here and look for another one that has taken over as the leader. This guy was the leader momentarily. Sorry, we don't have a great way to pick out which one is the leader without looking through the logs at this point. You can see that one of the tasks running now has taken over the responsibilities of being the leader, and you can see in the logs where the Akka cluster listener is announcing that the leader changed. What's happening here is that each instance, if it becomes the leader, will determine the framework ID that was used during the last subscription, via Akka Distributed Data, and will re-subscribe itself as the framework with that same ID. Therefore, according to Mesos, it will reattach and be treated as a failover. And if I go to frameworks, I still have my sample framework running, so the tasks it might have launched can be reconciled with it. The main idea here is that we're using Akka clustering to resolve failovers within the cluster, so that we can reattach an instance as the new leader for this particular framework instance. Just recapping the features we talked about: there's leader election based on Akka clustering, and we re-subscribe the new leader that has been determined after a failover.
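The failover path just described can be sketched roughly like this: the new leader reads the framework ID recorded by the previous leader (in the real system that record lives in Akka Distributed Data) and re-subscribes with the same ID, so Mesos treats it as a framework failover rather than a brand-new framework. All names here are illustrative, and the payload shape follows the v1 scheduler HTTP API as best I recall it.

```javascript
// Stand-in for the replicated store (Akka Distributed Data in the real system).
const sharedStore = new Map();

// The current leader records the framework ID it subscribed with.
function recordSubscription(frameworkName, frameworkId) {
  sharedStore.set(frameworkName, frameworkId);
}

// A newly elected leader builds its SUBSCRIBE call. Re-using the
// previous framework ID is what makes Mesos reattach the existing
// tasks instead of registering a new framework.
function resubscribeCall(frameworkName, user, failoverTimeoutSeconds) {
  const previousId = sharedStore.get(frameworkName);
  const frameworkInfo = {
    user,
    name: frameworkName,
    failover_timeout: failoverTimeoutSeconds,
  };
  if (previousId) frameworkInfo.id = { value: previousId };
  return { type: 'SUBSCRIBE', subscribe: { framework_info: frameworkInfo } };
}

recordSubscription('sample-framework', 'fw-123');
const call = resubscribeCall('sample-framework', 'root', 60);
// call.subscribe.framework_info.id.value is 'fw-123', so Mesos
// treats this subscription as a failover of the existing framework.
```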
The framework ID has to be consistent between a leader failover and a leader election so that the same tasks can be re-associated with the new framework instance. There's also support for Mesos roles, in case people are allocating resources based on role and having a framework identify itself as operating in a particular role. Some things that are not done yet in this actor implementation: the reconcile process. In the case where an instance of a clustered framework has failed, its tasks are not currently reconciled when the new instance re-subscribes. And also sharing task state, so that reconciliation can happen. So, getting back to OpenWhisk, because that's what we're really talking about. Oh, we have a typo here. So: a serverless platform on Mesos, at massive scale, where operators expand and contract a Mesos cluster. What alterations do we actually need to make in OpenWhisk to support this type of Mesos actor controlling our containers? In the controller, if you recall the diagram with the OpenWhisk components, the facade for incoming requests is the controller, and there are currently no container interactions in that component. The containers are all interacted with in the invoker. So what we've done in OpenWhisk is define an interface in the invoker that encapsulates what a container factory is. The out-of-the-box OpenWhisk system uses the Docker CLI as its container factory: it just assumes that it has access to Docker and can run docker and runc commands on Docker containers. In our case, we're going to use the Mesos actor to create and kill containers within the system. Here's the diagram we had before: within OpenWhisk, the invoker communicates directly with Docker. After we integrate with Mesos, the invoker is not going to talk to Docker directly at all anymore. We're going to have the invoker communicate with Mesos, and the invoker is going to be elevated to the status of a Mesos framework.
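A simplified sketch of that container-factory idea: the invoker depends only on a small factory shape, and either a Docker-CLI-backed or a Mesos-backed implementation can be plugged in. The real OpenWhisk interface is Scala code in the invoker; the class and method names here are illustrative, not the actual API.

```javascript
// Out-of-the-box behavior: containers are managed via the local Docker CLI.
class DockerCliContainerFactory {
  createContainer(image) {
    // In the real factory this shells out to `docker run`.
    return { id: `docker:${image}`, backend: 'docker' };
  }
  removeContainer(container) {
    // ... and `docker rm -f <id>` here.
  }
}

// Mesos-backed behavior: container placement is delegated to the
// Mesos actor, which speaks the scheduler HTTP API.
class MesosContainerFactory {
  constructor(mesosActor) {
    this.mesosActor = mesosActor;
  }
  createContainer(image) {
    const taskId = this.mesosActor.submitTask(image);
    return { id: taskId, backend: 'mesos' };
  }
  removeContainer(container) {
    this.mesosActor.deleteTask(container.id);
  }
}

// The rest of the invoker never sees which backend is in use.
const fakeActor = {
  submitTask: (image) => `task-for-${image}`,
  deleteTask: () => {},
};
const factory = new MesosContainerFactory(fakeActor);
const c = factory.createContainer('nodejs:6');
// c.id === 'task-for-nodejs:6', c.backend === 'mesos'
```

This is the same seam that lets a Kubernetes or Swarm backend slot in, which comes up again in the Q&A.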
And if we compare these, this is just pointing out, to be clear, that the invoker is no longer leveraging the Docker-specific CLI; it delegates to Mesos. So now I'm going to show a demo of exactly what happens in OpenWhisk when we do this. The way OpenWhisk operates, from a developer perspective, is around a concept of actions. If you look over in my terminal here, I have this JavaScript snippet, and this is what I want to run as a function in OpenWhisk. The first thing I need to do is start my invoker, which was already running as a container; I'm just going to restart it. Remember, this invoker is now going to operate as a Mesos framework, so you'll see it here in the framework section of the Mesos UI. That's good. Now if I go back to my list of tasks, you can see there are some tasks created by my OpenWhisk framework, prefixed with this whisk load-balancer ID. The reason these tasks are created is that OpenWhisk does some optimization to speed up code initialization by starting some containers ahead of time so that they're pre-warmed. Once code arrives to be executed, it doesn't have to start a container if one is available for running; it just injects the code into it and runs it. So now what I'll do is create my action. I'm listing my actions; currently there are some tests there. Now I'm going to say: action create hello. OK. If I list my actions again, you'll see I have a hello action up here. And now if I invoke my action, what you'll see is that in my action I returned a string, actually a JSON result, which indicates what task it was running on. Incidentally, if I go over to my Mesos UI and look at the logs for that task, I'll see the log message that was generated in that task, as well as a marker that OpenWhisk uses to find the end of the log file.
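The JavaScript snippet in the demo is a minimal OpenWhisk action: a `main` function that takes a params object and returns a JSON result. The `task` field echoing the container's hostname is my guess at how the demo reported which task it ran on, not necessarily what the original snippet did.

```javascript
// A minimal OpenWhisk action: `main` receives the invocation
// parameters and returns a JSON-serializable result object.
function main(params) {
  const name = params.name || 'stranger';
  return {
    greeting: `Hello, ${name}!`,
    // Illustrative: report which container/task served this invocation.
    task: process.env.HOSTNAME || 'unknown',
  };
}
```

It would be deployed and invoked with the wsk CLI, roughly `wsk action create hello hello.js` followed by `wsk action invoke hello --result -p name Mesos`.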
And if I go back, you'll notice that instead of four tasks I now have five, because once I start consuming containers inside of OpenWhisk, it will try to launch additional containers so that it has a steady pool of pre-warmed containers available for new actions that might come in and need to be executed. It also does some optimization to try to reuse containers for the same action, so that the same code is not re-initialized more often than necessary. But this is our demo for OpenWhisk. And we'll see. Thank you for coming. If anybody has any questions, I think we can go to the questions now. Sorry, this might come from me not being completely familiar with Akka cluster. How do you manage cluster membership? And is your invoker framework using the Mesos actor that you just presented? By "how" I mean: do you coordinate through ZooKeeper (I think Akka cluster can do that), or multicast, or what? How does this all come together? That's a great point, actually, and it was on the tip of my tongue; I wanted to mention it. I think the reason I personally got so excited about working with Akka again this year is that we didn't need to use ZooKeeper at all. Akka clustering has a nice mechanism: it uses gossip to learn about the nodes in the cluster, and it has algorithms to pick a leader. So what we do is that once Akka picks a leader, that's the node that is going to act as the Mesos framework. And I'm personally very excited about this because I don't have to depend on ZooKeeper and do all that orchestration. As a developer, when I work with Akka, I feel I have so much power now that somebody else has implemented these algorithms for me. Now I just look like I'm very smart, but all I do is use Akka. The one thing I would point out, and I've highlighted a very specific piece of code here, is that Akka does operate off the notion of seed nodes to define how a cluster is initialized.
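To make the seed-node idea concrete, here's a sketch of deriving Akka seed-node addresses from Marathon's task state. The endpoint is Marathon's `/v2/apps/{appId}/tasks`, and the field names (`host`, `ports`, `healthCheckResults`) follow its response shape as I recall it; the `akka.tcp://` address format matches Akka remoting of that era. Treat all of these as assumptions to verify against your Marathon and Akka versions.

```javascript
// Build Akka seed-node addresses from a Marathon tasks response,
// keeping only tasks whose health checks are all passing.
function seedNodesFromMarathon(tasksResponse, actorSystemName) {
  return tasksResponse.tasks
    .filter((t) => (t.healthCheckResults || []).every((h) => h.alive))
    .map((t) => `akka.tcp://${actorSystemName}@${t.host}:${t.ports[0]}`);
}

// Illustrative Marathon /v2/apps/{appId}/tasks payload.
const sample = {
  tasks: [
    { host: '10.0.0.11', ports: [31001], healthCheckResults: [{ alive: true }] },
    { host: '10.0.0.12', ports: [31017], healthCheckResults: [{ alive: true }] },
  ],
};

const seeds = seedNodesFromMarathon(sample, 'sample-framework');
console.log(seeds);
// → ['akka.tcp://sample-framework@10.0.0.11:31001',
//    'akka.tcp://sample-framework@10.0.0.12:31017']
```

Every instance computing the same seed list from the same Marathon state is what lets the cluster bootstrap without any external coordinator like ZooKeeper.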
And what we're doing here is leveraging Marathon's application state to determine what the seed nodes are. Once we determine that there are some seed nodes and that they're healthy, we initialize the clustering: everybody gets the same set of seed nodes, and Akka's own gossip protocol establishes who the leader is at that point. Yes? How do you plan on keeping up with currency? So when OpenWhisk changes version or Mesos changes version, is there much work to upgrade your code? This is a complicated answer, because the way OpenWhisk is operated today will probably be different in the future. But the way it operates today, there's the controller component and there's the invoker component, and strictly speaking we can rev those independently. So while the incoming facade, the controller, is at one version, we can do a rolling upgrade of the invokers, and then we do a rolling upgrade of the controllers. But the short answer is that we will have to do coordinated rolling upgrades of each component. So, if I understand correctly, the invoker natively manages the Docker containers to perform actions, and you modified it to talk to Mesos and leverage Mesos's capability to manage containers. I would imagine it would be nice to abstract that adapter layer away, so the invoker could talk not only to the Docker daemon but also to Mesos, Kubernetes, or Swarm, and you'd have these interfaces so it could leverage any of them to manage the containers. Then that API work could be upstreamed back to the OpenWhisk community. It is; you're right. We're working with the OpenWhisk community right now; there's an open pull request nearing completion for the container factory. And we're in close contact with a team at Red Hat who are doing the same for Kubernetes.
And you're right: it's a generic API for launching containers with OpenWhisk semantics, but there's no exposure of Mesos dependencies upward from that API. Okay, great, thanks. Cool, that was a great talk, thank you. A couple of quick questions. One: the invoker is still the one that's reading the data from Kafka and invoking the actions, right? That's correct. So after it got integrated... I mean, the invoker itself is becoming a framework now. Yes. And all of the actions get deployed in their own containers, I assume. So how are you scaling the invoker, and how do the actions from the invoker get called on the actual containers? So, the way the invoker is scaled today: if you're constrained by the number of containers, based on the number of hosts that run an invoker, your choice today is to deploy a new host, which also runs another invoker. When we operate the invoker as a Mesos framework, there's no real need for multiple invokers anymore; the invoker now has visibility into the entire Mesos cluster. The only reason to operate multiple invokers at that point is to have a highly available system, so that if one invoker fails, the controller routes all of its requests to the other invokers. But the ratio of invokers to hosts is no longer relevant, specifically for scaling out containers. Got it. So that means each of these individual containers could also subscribe to Kafka directly to receive their own messages? The invoker is still in the middle, getting the messages. The invoker is still in the middle, processing the messages. Okay. Incidentally, we are working on some other throughput optimizations that are specifically related to how function invocations are passed through Kafka and processed at the invoker layer.
Effectively, what we want to do is share the container-pool state of the entire cluster, from the OpenWhisk perspective, all the way back to the controller, so that we can really optimize how throughput is managed in the system. If I can just add, because I like your question: this is the beauty of Mesos, that it takes away the container management in the cluster and lets us focus on what is the core of serverless. What we want to get to is even using machine learning to spin up containers ahead of time, pre-warm them, do all these other things, and offload the complexity of managing the containers to Mesos. Cool, thank you. So thank you for the question. I like this. Just one last one, and then we'll wrap up. If we're in an environment with the Universal Container Runtime, UCR, and no Docker, would this work? Well, I don't know. I was going to say no, but that's not really true. We're not operating that way, but in theory it should work, because as long as the container that's surfaced as a Mesos task has a routable IP and a port, it's in theory fine. A related question: if we're doing IP-per-container with Calico, no overlay, just layer 3, is there any network implication? It should work, right? Yeah. This is actually one really nice thing about OpenWhisk currently: the interface required of a container, from an OpenWhisk perspective, is strictly defined as just an HTTP interface with a /run endpoint on it. As long as your container is built with that contract in mind, technically speaking there's no real reason OpenWhisk can't execute it. On the point about Calico: using Calico would be a great addition to OpenWhisk, to learn how to isolate containers from each other. This is yet another thing a serverless environment needs to look at. So thank you for the question. Thank you, guys. All right, thank you very much for your time. Thank you.