Well, let's get started. My name is Matt Rakowski. I work at IBM in the cloud area, specifically in open technologies, which involves both open source and open standards. My goal is to make sure we create healthy open source ecosystems around open standards where possible, and OpenWhisk is ripe for collaboration and involvement to improve and advocate for serverless computing and event-driven programming. If you saw Carlos's last session, you understand the workings of OpenWhisk to some degree. My job today is to take you through the many repositories that make up what I call the OpenWhisk project ecosystem and entice you to connect the work you're doing, or your interest areas, to some aspect of the project: to get you to contribute, if nothing else, your ideas, hopefully some code and integrations, towards becoming active committers who can carry forward and improve serverless computing for everybody.

To level set, for people who did not attend the last session: OpenWhisk is a serverless platform. "Whisk" is a cooking term, and I have a little graphic I created around that idea: we want to bring together the functional components of applications from many different sources. Wherever you have events, wherever they come from, we want to be able to bring them together through serverless, whether from IoT, mobile, traditional data storage, stateful data, or cognitive analytics. It makes sense, it's highly efficient, and a lot of the complexity you see in each of these areas is removed if you can approach it from a serverless, functional standpoint. While we're taking the tour, note that we've got a lot of repos, 28 plus or minus; luckily I said plus or minus, because I added two repos yesterday. I'll touch upon those where appropriate. Again, the goal is to spark your interest.
Some aspects of the platform are rather imposing; there might be languages you don't want to learn. But there are many areas for integration on the periphery, for tooling and integrations. I want to explain where the code lives in each of the repos relative to the architecture, highlight the functional aspects of what's going on in the code, and provide some calls to action. I'll give you some ideas that Carlos, I, and others have talked about, places where we would like to expand to new feature sets, to see if they resonate with everyone here.

This is my ecosystem view of OpenWhisk. We have the core, which Carlos took you through in the last session; that largely lives in one repository I'll cover. We have a command line family of tools that interact through RESTful APIs, and tooling, which I'll cover as well. We have a catalog, which we split out from OpenWhisk last year, containing what are called system actions: shared actions, basic connectors. Anything that should be intrinsic in the catalog to compose with when you load OpenWhisk, we want to put into that common catalog. We have external resources: education materials, playgrounds, hackathon materials, workshops, things like that. And we have our website. Here I should thank Daniel Grunow and also Carlos for working over the last two weeks to move all of the repos I'll be referencing from the OpenWhisk.org organization into the Apache organization. The only standout is the website, which is still in the old org; we need help there to take the Jekyll compilation and get it moved and pushed over to its home in Apache. Once we get that automation through Jenkins or something similar, we'd love to have that moved over as well.
We have packages. The catalog holds some actions, some smaller getting-started packages, and some helper things, but for full-blown integrations we have a package concept: when you have the actions, feeds, and triggers that allow you to connect to external data sources and event sources, we want them to have their own top-level packages. The questions at the end of the last session were about MQTT and other things; well, we want packages that we curate as top-level repos and that encourage integrations. And we have samples. As we bring some of these workflows together, if you come to our booth you'll see things like part ordering or check processing. We'd love to have samples that show people how to deploy with one click using one of our tools: meaningful sample applications that perhaps speak to a workflow they have in their company, which they can use directly or take as a basic example and customize to their needs. So those are the areas I'll go into.

First, the whisk core. Most of it, the controller specifically, is written in Scala if you look into the repository. Besides the controller, which manages all the whisk entities (the actions, the triggers, the feeds) with the stateful databases and keeps the configuration it uses in the Consul store, I view the core primarily as a big build-and-deployment repo. All the things you need to run the core and get going are there: Ansible scripts, Gradle build files, and things of that nature, all geared toward containerizing each of these components. I'm not a Scala person, I'll admit it; I only know basic things about Ansible, enough to get things going and understand how things deploy, so I spend most of my time on the periphery, not in this area myself.
But maybe this is what you want to work on: things like Carlos mentioned, an integration we're trying to do with Elasticsearch, creating a generic Lumberjack-style output from all of the containers and componentry so you can gather that data and analyze it however you want. We need help there. Documentation is always key. We have lots of different deployment target environments; we'd love to add more, as Carlos was saying, and to improve the documentation and testing for them. We've talked about things like replacing our key-value store. Somebody came by (it may have been Peter, I can't remember) and asked: if we run this in Kubernetes, how can we plug in etcd as a source of truth? These are things we'd love help with in the core repository. Also message queuing: can we replace Kafka with a different queuing mechanism? Is there a need to? Question the system. We'd like people to come to us, question the system, engage us in our Slack channels and on our dev lists, get answers, document those answers, and come to a consensus on what we might want to do in terms of architecture.

I've started talking ahead of myself. In terms of what we have in Ansible: we have local, distributed, and Mac deployments, and for local we support Windows and Ubuntu native installations. CouchDB is what we ship in the open source offering as the store for all your objects, your entities, your state; replace that if you want, and let's see if we can have some generic DB connector to store in your database of choice. Recent activities: we added support for web actions with different HTTP header variants and MIME types, and we separated out the command line. That's still ongoing work; I hoped it would have been completed by now, but we're still working on it.
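Backing up to the web actions mentioned a moment ago: a web action is an ordinary action whose result is shaped like an HTTP response, which is what lets it control headers and MIME types. A minimal sketch (the greeting logic is purely illustrative):

```javascript
// Sketch of a web action: instead of a plain JSON result, it returns an
// HTTP-shaped object (statusCode, headers, body), so the platform can
// serve MIME types other than application/json. Illustrative only.
function main(params) {
  const name = params.name || 'stranger';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/html' },   // choose the MIME type
    body: `<html><body><h3>Hello ${name}</h3></body></html>`
  };
}

const res = main({ name: 'world' });
console.log(res.headers['Content-Type']); // → text/html
```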
We want to make the command line more configurable and pluggable, and maybe look at other command line frameworks for the future. I mentioned the logging: Logmet is a target we want to work towards, but since the mechanism for getting data to Elasticsearch goes through some open source projects, hooking in there opens the possibility of other targets. And there's general help wanted. I mentioned documentation. We'd love to have performance testing: we have a set of performance test suites inside IBM that we'd love to move out into the open, but we need to figure out how to stage and automate those tests in the Apache infrastructure. If people have experience with that, we'd love help. The UI was mentioned from the audience: we have the command line, and that's great, but when you look at commercial offerings from Azure or even Amazon, they have a public web UI. We'd love to have somebody who's excited about taking our command line experience and things like our debugger and making them available in an open web UI that people can use and customize, with drag-and-drop composition from our catalog. And, as always, it's about performance and hardening of the code. Improved security: how do we make it easier to take our namespacing model and apply other security models to it? Maybe our namespacing model for putting entities into the system needs to be challenged; who knows? People who are dealing with security, or applying this to different target platforms, need to come engage us and help us improve those things.

The CLI itself is pretty simple: HTTP REST calls for managing all the whisk entities. And we have SDKs in multiple language variants, which we use for integrating with different tooling, different IDEs, whatever it might be: Swift, Python, JavaScript.
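All of those SDK variants ultimately compose the same RESTful calls against the platform API. A minimal sketch of what such a client layer builds; the API host, namespace, and credentials are hypothetical placeholders, and the request is constructed but not sent:

```javascript
// Sketch: how a client SDK might compose the REST call the CLI makes.
// OpenWhisk exposes entities under /api/v1/namespaces/{ns}/..., with
// credentials sent as HTTP Basic auth. Host and key below are made up.
function buildInvokeRequest(apihost, namespace, action, authKey, params) {
  const url = `https://${apihost}/api/v1/namespaces/` +
    `${encodeURIComponent(namespace)}/actions/` +
    `${encodeURIComponent(action)}?blocking=true`;
  return {
    method: 'POST',
    url: url,
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Basic ' + Buffer.from(authKey).toString('base64')
    },
    body: JSON.stringify(params)
  };
}

const req = buildInvokeRequest(
  'openwhisk.example.com', '_', 'hello', 'user:key', { name: 'world' });
console.log(req.url);
// → https://openwhisk.example.com/api/v1/namespaces/_/actions/hello?blocking=true
```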
The language we primarily look to support is Go, in terms of being the starting point for adding new functionality and new APIs; the other repos follow after the Go work. API gateway support is an example of recent activity: Adobe and IBM worked in collaboration with others to create an API gateway project under the OpenWhisk family, and Carlos demonstrated some of that. We'll get into specific needs there, but that functionality was experimental in the APIs until a few weeks ago, and it took a lot of effort to go through the code to make sure the naming and the experimental tag left the system and it became a fully integrated function.

That reminds me of something Carlos and I have been talking about this week: releases. A lot of our focus the last few days attending sessions here at Apache has been that we need an automated release mechanism for OpenWhisk. We were debating how we do it, which things we include in a release, and how we version them. These are things we'd love help with, especially from people in Apache who have done this for larger projects. When we took the experimental tag off, we'd love to have created a point version, pushed a binary out someplace, and notified people. We also want more pluggability in the APIs; right now we use the Cobra framework. We'd love more SDKs, and to figure out how we make a generic SDK more pluggable so we write less code when we adopt a new one. Is there a need for other languages? What's the value? Should we reduce the languages, for that matter? Who knows; we want people to work where it's important. Is anyone using Python? Who's using it, and should we support it? Those are questions we want the community to answer. If there's need for other tooling, we want to add it. Again, the Cobra framework is the basis.
Most of the language-specific pieces compose the actual format of the HTTP REST call, while the actual command line you see, with the help, the feedback, and the interactive prompting, is in the base CLI component: the OpenWhisk CLI repository. Again, we're doing the separation, and we're authoring lots of integration tests. It's been interesting, because it was very convenient for our automated integration testing when the CLI and OpenWhisk lived together: once the CLI was installed in Travis or Jenkins or whatever, we could run lots of cool integration tests. Now that they're separated, it complicates things in terms of cloning the repos, installation order, and so on. Documentation is always a need for any project, and for this one specifically as our APIs change. Or interrogate the help we output when you enter a wrong command; maybe our help needs to be better. We'd love better interactive prompting, but the big thing for me is an interactive debugger. We have a debugger project that we'd love to become part of our existing command line, and to work not just locally but remotely. That would be a significant step forward, something I've seen from Microsoft Azure's cloud functions project: they now have integrated debugging from the client to their back-end platform. If these things interest you, think about it from the outside in: what do we need to enable on the back end to support debugging?

The catalog, I mentioned. We have a catalog repo, which is basically a list of curated actions and basic feeds for some services: things like forwarders, retry, some Git stuff, Slack. They're very basic. Do these belong in a generic catalog? Do they need more curation, more attention? Take the weather one: that API set is quite large.
We only support a small set of those APIs, and the APIs change; maybe it needs to be elevated to its own package, its own top-level repo. Which things should be built into the whisk system? Are there other types of basic things, security checks perhaps, that should be deployed into the system catalog, if you will? There's not been a lot of activity here. We have a lot of samples, and a lot of them are in JavaScript only, so people might want these things written in a different language. For anything in the catalog, we'd love to have an equivalent for each of the languages, not just JavaScript, so people can choose Python or whatever language; we have a subdirectory structure that supports that. And of course documentation for each of the individual actions and packages. I'd also love to have the catalog use wskdeploy: we have a new packaging spec that takes a manifest, so all of these could be enabled for single-click and conditional deployment. Maybe make installation interactive: when people go to install the catalog, give them a way to say which packages they want and which they don't for a target installation. Those are some ideas even for this repository.

The API gateway: what's inside? When you ask the API gateway guys what it is, it's basically a giant build project, much like the view of OpenWhisk itself: it builds all the modules that comprise a framework for an API gateway set of services, leveraging existing technologies. In terms of recent activities: OAuth 2 support, CORS (cross-origin) support, and profiling tools and performance measurements were added recently. Help wanted: the focus is performance.
The tooling for that was just added, but we really need a focus on tooling and instrumentation to help figure out where we can improve performance. We want more OAuth providers supported. And a lot of deployments, like Kubernetes and Mesos, don't include the API gateway currently, so we need support for adding the dockerization and containerization of the API gateway service to those deployment scripts, those Ansible scripts. I also forgot configurable caching for web actions: adding intelligent caching support at the edge.

Now wskdeploy. I'd love to spend a lot of time on this, because it's what I have a small team working on. Basically, we want people to be able to incorporate serverless computing and this programming model into larger applications. If you went to other sessions, there's Terraform; if you work in Amazon, there's CloudFormation. We'd love to have the equivalent, and we have a manifest that describes OpenWhisk entities as resources, along with our compositions: how we create sequences, and how we take the outputs of one action and feed them into another. All the properties and configurations can be described in a YAML format, and I can show you a little snippet so you can get an idea. This is how we automate. The command line is not great for this; again, a UI would be great, but for automation, for large-scale deployments to different target platforms, a YAML-based manifest is desirable, because you can plug these things into Terraform or into something like CloudFormation. So it's essential.
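A hello-world wskdeploy manifest is roughly along these lines. This is a sketch: the field names follow the evolving wskdeploy manifest specification and may differ from the current spec, and the source path is hypothetical.

```yaml
# Sketch of a wskdeploy-style manifest; field names are illustrative and
# should be checked against the openwhisk-wskdeploy specification.
package:
  name: hello_world_package
  version: 1.0
  license: Apache-2.0
  actions:
    hello_world:
      function: src/hello.js      # hypothetical source path
      runtime: nodejs:6
      inputs:
        name:
          type: string
          description: name of a person to greet
      outputs:
        greeting:
          type: string
          description: the composed greeting message
```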
Also automation around packaging. I'll touch upon this a little: I'd love to work towards a registry where people just have repos of whisk-enabled packages, and we can point at a repo, find the manifest, and zip the package up: determine the dependencies that package has for whatever target languages (which JavaScript packages are needed, which Python packages are needed), zip them all up, and send them in one fell swoop to be deployed to an OpenWhisk platform server. Help wanted on toolchains: as we zip these things up, we want to figure out what else we need to do for different language constructs. We want a generic toolchain that we invoke, so if we know we're working in JavaScript, we run npm install, and we have to consider binary support: we may need to run these things in Docker containers for the target binary. What tools do we need to bring in all the dependencies, compile them if necessary, pull the correct packages for the correct target environment, and create a zip file, without the user having to do anything? I'm asking personally for that help, since my team is working on it. Integration with the whisk CLI is a goal for us in the next month or so. We'd love help in that area too: determining how we make the CLI more pluggable, how we integrate things like wskdeploy into it, and perhaps how we make the CLI and wskdeploy available to things like Terraform or other orchestration tools, even the Apache AriaTosca project, which got incubated last September.

How are we doing on time? Not bad. So, wskdeploy again: we have a specification, and we'd love to collaborate on the specification as well. I think there's a lot of discussion happening around standardization in serverless.
I know at the Cloud Native Computing Foundation, with Google, there have been other serverless project proposals. I think standardization can happen at many levels. OpenWhisk has a programming model around actions, triggers, and rules, and I think it's a very robust model. A starting point would be creating a specification for triggers and events. Talking to people like StackStorm and others who have similar serverless concepts, or Adobe, our partners in Apache, we'd love to standardize on what an event description is, what a trigger description is, and how we formulate rules from those two things. If we can define at the edge how we consume events, with a basic packaging format around that, this packaging format or standard could be used cross-platform, even by those who don't use OpenWhisk as their runtime target.

I put a little sample manifest up here: hello world. It's also about versioning. The tooling supports undeployment, so we want to be able to update and undeploy over time. We actually traverse all your actions and all your packages and create a graph (that code is underway right now), and when you want to undeploy, we can reverse the graph and undeploy your event-based application. A lot of people ask me about the complexity of applications; the complexity is removed through descriptions like this package manifest, and could be visualized in a UI somehow. And again, how do we compose? This is the only way right now to describe your inputs and outputs. If you want to compose the output of one action into another action, this is how we make sure we do validation: the tool will be able to tell you that this action can be composed with this other action, because the outputs and inputs match each other in data types and so on.
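To make that input/output matching concrete, here is a toy sketch of sequence-style composition; the action names and fields are invented. The declared outputs of the first action must cover the declared inputs of the second, which is exactly what the manifest lets the tooling validate before deployment.

```javascript
// Toy sketch of sequence-style composition. Each "action" is a plain
// function; a sequence pipes the output of one into the next.
function fetchOrder(params) {
  // pretend lookup; its declared outputs are { orderId, amount }
  return { orderId: params.id, amount: 42.5 };
}

function applyTax(params) {
  // declared inputs: { orderId, amount } -- the composition is valid
  // because fetchOrder's outputs include every input applyTax requires
  return { orderId: params.orderId, total: params.amount * 1.1 };
}

// A sequence invokes the actions in order, piping results along.
const sequence = [fetchOrder, applyTax];
const result = sequence.reduce((acc, action) => action(acc), { id: 'A-1' });
console.log(result.total); // ≈ 46.75
```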
I think this is an exciting direction for how we build out an application ecosystem and become able to manage more complex systems based upon this event-driven model.

The debugger: help wanted. It was a great... "was", I said "was". It is a great tool, but at some point in the recent past it stopped working, and neither I nor anyone on our team has had time to look at it. If debugging is your thing, this is all about local debugging: debugging an action locally before you deploy to a server. There's probably something very small that needs to be fixed, but there is no documentation. A few very smart people created the debugger; they're off doing other things and other projects, and we'd love to have somebody pick it up and, as I said, integrate it into our command line. If you know Node.js and can go through the code and help us fix it, we'd love that; I think there's some small thing that's missing.

What's inside DevTools? Another person on my team has been working on DevTools. Basically, if you have an idea for a deployment, some tool, some integration with an IDE or some other client, this is where you get started: we'll create a subdirectory for it under this repository. Currently it primarily houses the Docker Compose and Kubernetes work we've been doing. I said I created two new repos yesterday; one of them was for Kubernetes. Our Kubernetes work under DevTools has reached a point where we believe we want to promote it to a top-level repository, so people who want to find Kubernetes will see a repository named for it, know that's where the work is being done, and find dedicated documentation for that effort. Docker Compose needs some help. Adobe has been leading the charge on that work, but I know the
developers there would love some help as well, and better documentation; we'd love to promote that, like Kubernetes, to a top-level repo in the future.

Playgrounds and Xcode. These are, again, tooling integrations with different developer client tools. The Xcode source editor extension lets you code, with interaction with the command line and familiarity with the formats and syntax, in your favorite IDE or client-side development tool. It's still experimental code: we had some very good people develop it and get it working, but as far as keeping it current with OpenWhisk as we add new things (new parameters, new APIs), we need somebody to keep these maintained. These target both VS Code and Xcode; if there's a developer tool you feel strongly about supporting, we'd love your help keeping this stuff current, plus general advocacy for this tooling. VS Code: I touched on that already, same thinking but for Visual Studio Code. The docs need updating, and we need a developer owner to keep it going, if VS Code is your thing. And integration testing, which holds true for both Xcode and all these toolings: we don't really have integration testing, so when things don't work, fall out of scope, or lack support for something, we'd love integration tests to tell us. We'd love Travis or Jenkins to tell us, "this no longer works with this change in OpenWhisk." If you're familiar with testing and automation, you could help us greatly by letting us know: hey guys, this needs attention, it's not current and no longer works.

Now, packages. When packages get discussion on the dev list or in Slack, and have different people using them for different purposes, we want them elevated to top-level packages. Again, every package I talk about, we don't
have a manifest for yet. We have the wskdeploy tool, and we'd love to add manifests for all these packages, describing everything that gets installed as part of the package. Right now they all have bash install scripts that call the command line; we'd love to have them deployed with one click, straight out of the repo.

Alarms: cron jobs, all the scheduled stuff, batch jobs. We've been talking about making this part of the default install with the catalog. Again, my hope is that the catalog becomes what I'm working towards: a distributed registry, much like npm, where people submit packages, the packages can live elsewhere, and when they submit to our registry with a wskdeploy manifest, we run integration tests against the package, automate those, version them, and make sure they work with the current version of OpenWhisk. We'd love registries of packages provided by other companies for different target applications or environments, and I see this as a key growth area in general. Instead of worrying about promoting or demoting packages in the catalog, we'd just have a distributed registry and not worry about that at all; through testing and tooling we'd know which packages are well tested and which OpenWhisk platform releases they're versioned against.

Kafka: Carlos touched on that. Of all the things I would ask for from people at this conference (and I heard MQTT), I want support from what we call multipliers: people who have data sources, data stores, different types of databases. We need dedicated packages that integrate with more and more data sources and queues. That's the future. Even the data sources I've talked to in serverless platform
environments, being offered in the cloud: what they're doing is dumping all their events into some queue or another, and they're basically saying, you subscribe to the events. They dump every single event into the queue on every database change, every document change, every field change, every index change, and you figure it out. So I'd love even more intelligent packages, built perhaps on top of queueing packages, that can filter events. When I compose my applications and am getting literally millions of events per second, how do I filter out just the events I need, for my database, for my specific action? We'd love generic packages for that, so before we fire we can find the correct action, make sure it gets just the data it needs, and not waste time sending events to actions that can't use them.

Push notifications: this is cool, another example of a common multiplier. Chat bots are popular, of course; we have Slack support in the catalog repository, but here's a package for push notifications that supports all the target platforms. But again, integration tests: how do we integration-test these things without the devices? Do we have simulators we can use? I don't know, but we need integration tests. Are there other targets for push? If Google or Apple change their toolkits, how do we test for that and make sure we're current?

JIRA: documentation is lacking, integration testing as always, and it needs a developer owner. But I'd love to create some samples around this. With wskdeploy we can create some cool integrations, and Carlos will be talking about getting the dev list working with a Slack channel. What about JIRA for Apache? Can't we create an event-driven thing where, if a JIRA ticket comes in, we notify people on Slack, or notify the specific person who owns a given area, based upon the ticket that comes in and the information
tied to the ticket: we route it someplace and send it to their favorite notification system. Creating some reusable, configurable samples would be really cool in this area.

RSS: we had a guy who started playing with this. He was an intern at IBM last year, not even in any of our groups; he said, hey, I found OpenWhisk and thought it was really cool, and I created this RSS package. So we accepted it, and it does have some integration tests. We'd love people to expand it: it's not very configurable, and I'd love better configuration options for filtering feeds based upon the values in the feed data.

Cloudant: if you don't know what Cloudant is, it's basically the commercial version of CouchDB. I'd love a generic CouchDB package; again, this speaks to multipliers. Wherever there's a data store, I'd love a package, and it's very simple: you have examples already, and all you need is something that knows how to listen for events, either a queue integration for the events coming off a specific database that leverages Kafka or what we already have, and that can create a feed able to configure the data coming in from that event source. Basically, create hooks for us if hooks don't exist.

General wishes, if I haven't covered them already: more packages. We had a Twilio package that somebody started and for some reason abandoned, so we deleted the repo. I'd love more packages; we can create repos, no problem. We'll create a repo for you to work in, or we might start you in DevTools. If you want your own GitHub repo, maybe you can help us work on the registry so we can bring it in remotely. The registry I talked about is just an idea getting started; I hope to begin working on it sometime next month, when we finish our current work in the wskdeploy tool, which is the foundation for the registry. If you have experience with the npm registry, I'd love
to have you work with us. npm uses CouchDB as a backing store; how do we create a distributed registry? What's the correct API set? Do we just reuse something? Do we take the npm HTTP registry protocol and adapt it for our use? What's the correct approach?

More compositions: I know people have been talking on our mailing list about how we do things other than sequences. How do we do different programming constructs, like if-then-else blocks? Instead of embedding tests of the data inside actions, can we move them external, maybe create a generic test action that does the test for you? A switch statement? How do we codify these things, make them part of our standard catalog, and represent them in the manifest: how do we create basically a pseudo-connector in our manifest format to conditionally execute actions based upon some input value?

Web actions: I'd love more examples of web actions; we have no out-of-the-box samples, and web actions are so cool. How do we support more MIME types and different headers? Maybe there are variants on web actions for different protocols; I'm just thinking about that now.

Node-RED was mentioned. There's Node-RED work, and people have approached me about it, but we've had no conversations on the dev list. If you use Node-RED, there's a natural fit with serverless and OpenWhisk. The idea that was floated to me (and I don't want to start the conversation myself; I'm trying to get people to talk about it on the dev list) is that you already have sequences and graphical compositions in Node-RED, and since it's Node.js, we could create a container that runs an entire Node-RED sequence or set of jobs inside OpenWhisk as an invoker container. If you're working in Node-RED, come work with us.

Jupyter Notebooks: ironically, the people next to us in the IBM booth are doing Jupyter Notebook work, so I'd love
Basically, if we can map-reduce a set of data and run functional code on that data, that's serverless: you don't have to worry about configuring all the servers and all the things you currently do with these Notebooks, because OpenWhisk does it for you. All you need to do is point us to the data and create an event saying "this data is ready for your function to compute on," and then you're basically done; you notify some aggregator that the data is available once your function has been applied to it. You could do the same with other Apache projects.

Those are just some ideas to stimulate thinking about where I'd like to go; there are many, many more. In fact, Carlos and I have been talking about starting lists for these ideas, a "low-hanging fruit" collection for people who want to get started with our projects, pointing them to open issues or feature sets matched to their skill sets. We hope to do that in a few weeks, perhaps on our CWiki and linked from our website.

That's about all I had for today. On vital statistics: I grabbed that star count on Monday, I think, and we grow by 10 to 20 stars per week. The forks have gone up too, so we have around 300. You can use the contribution graphs to figure out the number of contributors across all the repos, and we'd love to grow that. With this ecosystem mentality we should have many, many more contributors, especially in the integration space: databases, queues, and package implementations for other services. I'd love to see that grow; that's why this whole session was given today.

For more information, all of this is on our .org site. If you want to know how to contact us, there's obviously the dev list, and we have a public Slack channel: you click on it from the org site and you'll be auto-invited through an action. We try to use what we promote, so we were excited about figuring out how to host an OpenWhisk platform in Apache to do the things we've talked about.
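To make the map-reduce-over-events idea mentioned earlier a bit more concrete, here is a hedged sketch. `wordCount` and `aggregate` are illustrative names rather than anything in the OpenWhisk catalog, and the local loop stands in for the trigger firings that would invoke the action once per data partition.

```javascript
// Map step: the stateless action each "data is ready" event would
// invoke, computing over a single partition of the data.
function wordCount(params) {
  const words = params.text.split(/\s+/).filter(Boolean);
  return { count: words.length };
}

// Reduce step: the aggregator notified once the partial results are in.
function aggregate(partials) {
  return partials.reduce((sum, p) => sum + p.count, 0);
}

// Locally simulated events, one per partition; on the platform each of
// these would be a trigger firing rather than a loop iteration.
const partitions = ["to be or not to be", "that is the question"];
const partials = partitions.map(text => wordCount({ text }));
const total = aggregate(partials);
```

The point of the sketch is that neither function knows anything about servers or scheduling; the platform fans the map step out in response to events, which is exactly the property that makes Notebook-style workloads a good fit.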
We want to actually use serverless actions and maximize Apache infrastructure to connect these things together: connect our processes, workflows, and tooling so we can better communicate and work together. We try to add blogs; if you do things with OpenWhisk, we'd love to know about it so we can add it to our Medium blog site. Videos: we have a video category for OpenWhisk. Stack Overflow: of course we respond to you there. And we monitor all of these channels through Slack, so the notifications come in there. If you submit a pull request, we get notified in Slack that the pull request has been issued, and if it hasn't been reviewed for several days, we get another alert saying the pull request is not being reviewed. So we try to use OpenWhisk in our own DevOps on a day-to-day basis, to showcase these as sample applications other people can use.

Any questions?

Yes. Since I haven't refreshed myself on Node-RED for a couple of months, I'll probably misuse the semantics, but in Node-RED there's a graphical interface where you can create compositions of their Node-RED nodes. So you can actually encapsulate those nodes and their data flows, package them up, and run them as a single invoker under OpenWhisk. That idea comes straight from the creator of Node-RED, within IBM in the UK, and that would be our starting point.

Why? As was shouted from the audience: they have a graphical UI, and we don't. That's why. I think the different consumers of this technology, the ideas, and the community will only be generated once we have not just a command line but a UI as well. So good question, thanks.

Any other questions? Thank you very much.