Hello everyone. Hey, how is everybody doing? First day of ApacheCon. All right. Well, it's time to do Shark Tank, which is basically a way of introducing a whole bunch of podlings and subjecting them to the wrath of the panel we're about to introduce. So first on the panel is Sally. Sally, stand up and sit on one of those chairs. Unfortunately, we have a little bit of an issue with AV, so I will let the panel members introduce themselves once I'm done calling them out. Jim is the next one on the panel. Jim, stand up and get on the panel. And last but not least, Justin. We will start with a small introduction from every single one of them. I am Sally Khudairi. I am the Vice President of Marketing and Publicity at the Apache Software Foundation. I know a lot of podlings who have graduated to top-level projects. Okay. Hello. So we've worked together, and whoever is coming up, watch out. Jim Jagielski, greybeard, curmudgeon, lots of historical knowledge of absolutely no worth whatsoever. Hi, Justin McLean. I tend to hang out on the incubator list, where I've reviewed a couple of releases. Awesome. Thank you guys so much. We have four exciting podlings for you to judge today, so be gentle on them. But if you spot anybody who is not worthy of the Apache Incubator, just let them know. I mean, that's what they're here for. And with that, I will explain just a few rules. Each podling gets exactly ten minutes. I will be showing five minutes left, one minute left, stop. And this is not one of those fluffy talks where you get to run over. When I say stop, you stop, because we have a lot of you to go through. And then the panel gets five minutes to ask any kind of questions they feel are interesting, necessary, useful, or just entertaining. You have to answer all of them. And you have to convince them and the rest of us that you are worthy. So with that, let's go with Apache Mynewt, our first podling on the stand. So Aditi, please.
Well, I hope I have a loud enough voice. Can you hear me at the back? Okay. My name is Aditi Hilbert. I'm a product manager at Runtime, and we come up with solutions to manage devices remotely. And these are small devices, 32-bit microcontroller devices, so nothing that can run Linux or Linux-like operating systems. What we contribute heavily to is the Apache Mynewt project, and that's what I'm very excited about and here to talk about. So Apache Mynewt is an open source operating system for these 32-bit microcontrollers. It is a full-fledged operating system: there is a real-time operating system kernel that is preemptive and multi-threaded, with the scheduler and everything. But it also has a lot of middleware and utilities that bring the Linux-like functionality that we love so much to these small devices. So what do I mean by this middleware? I mean things like a secure bootloader, something that can verify the image that is running on your small devices. And by small devices, I mean things like wearables, smart jewelry, or maybe some industrial IoT devices monitoring power somewhere on a power line. So there is the secure bootloader that can verify the image that is running on these devices. There are hardware abstraction layers that make it easy for application developers to write applications without worrying about all the underlying hardware details. There are stats and logging that you can collect remotely, and that's what device management people love. We want to see how the device is functioning, because otherwise you cannot trust the data that you're getting from these devices. You have configuration options, because you want to change things while the devices are working in the network. And once you have field deployments, sending people out there is very expensive, so you want to be able to remotely manage all these things.
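The secure-boot check Aditi describes is small enough to sketch. Mynewt's actual bootloader is written in C and verifies a public-key signature over the firmware; this illustrative Python sketch stands in an HMAC tag for the same idea, and the key and image names are hypothetical.

```python
import hashlib
import hmac

# Illustrative sketch only: a real secure bootloader (Mynewt's is C) verifies
# a public-key signature over the image; here an HMAC tag plays that role.
# Boot proceeds only if the image authenticates.
DEVICE_KEY = b"factory-provisioned-secret"  # assumption: key provisioned at manufacture

def sign_image(image: bytes) -> bytes:
    """What the build tool would append to a firmware image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, tag: bytes) -> str:
    """Bootloader check: run the image only if its tag verifies."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    if hmac.compare_digest(expected, tag):
        return "booting"
    return "halt: unauthenticated image"

firmware = b"\x7fMYNEWT-app-v1.0"
tag = sign_image(firmware)
print(boot(firmware, tag))         # verifies and boots
print(boot(firmware + b"!", tag))  # a tampered image is rejected
```

The point is the last line: a device in the field refuses to run firmware it cannot authenticate, which is what makes remote upgrades safe to offer at all.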
So of course, to do all these things remotely, you need network connectivity, and so we provide open source network stacks. We started with a Bluetooth stack, and we are also now investing in LoRa and 802.15.4g mesh. We also already have IPv4 and IPv6 support. So there's a choice of network protocols for you. The Mynewt project comes with not just the operating system, but also a very cool build and package management tool, and that is called newt. So we have a family of newts, basically. It is a very clever build and package management tool. It brings sanity and organization to your project, especially when you're collaborating and managing large code bases, because it allows you to have components, mix and match them, optimize them, and test them as separate modules. And then, when you have the hardware in front of you, which is called the target, you want to build an image that is exactly right for that target. newt can smartly combine and stitch together these different components and build you the image that you want, the one that will enable the application that you want. It helps you debug, and it helps you transition from your prototype to manufacturing, because newt can, for example, create images that are signed, images that carry manufacturing information. Because after 20 years, you want to find out: oh, what was the default image that came with this device? When was it built? What packages did it have? newt keeps track of all that information and builds it into a manufacturing image. And finally, because newt is smart, keeps track of all the dependencies between packages, and allows you to version all the packages, it actually enables collaboration.
And you can work in different repositories, and then you can test and release them and connect to them as you need. So think of it as connecting all these libraries while working on them separately, and newt is the intelligent tool that helps you do that. And we are not done yet: there is something else, newtmgr. That's the device management protocol that is also part of the Mynewt project. It is an application protocol that allows you to remotely connect to the device and configure it or perform operations on it. The basic implementation was in Go, but we have had contributors in the community, one of whom actually implemented it in JavaScript with a Node.js runtime for it. So you can actually upgrade a device over BLE from your browser, for example. And that's powerful. So again, we try to incorporate as many implementations as we can, and we want to offer choices to the product manufacturer. Besides newtmgr, you could also go for standard protocols such as the RESTful CoRE (Constrained RESTful Environments) link format, or you could just go for CoAP. Essentially, the device advertises, announces, publishes the URIs for all the resources that it has. So for example, if it has a light resource, or if it has a sensor, it can have a URI for that particular sensor. And then a backend service can discover that URI, query the device, collect data, delete, basically manage the device. So these are the different options that you have for device management. I also wanted to highlight some of the community contributions that we have had, because it shows you the range of things that are possible in this project. This operating system is hardware agnostic. Although we work with the ARM Cortex-M architecture, we support the MIPS architecture, and we are also going to support the RISC-V architecture.
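To make the discovery flow from the device-management options above concrete: in CoRE-style discovery, a device answers a request to `/.well-known/core` with a link-format payload (RFC 6690) listing its resource URIs. A minimal Python sketch of the backend's side follows; the parsing is simplified (it ignores commas inside quoted values) and the example payload is hypothetical, since a real client would use a CoAP library.

```python
# Illustrative sketch: parse a CoRE Link Format (RFC 6690 style) payload that
# a device would serve from /.well-known/core over CoAP. Simplified parsing;
# the payload below is a made-up example of a light plus a temperature sensor.
def parse_link_format(payload: str) -> dict:
    """Map each advertised URI to its attribute dictionary."""
    resources = {}
    for link in payload.split(","):
        parts = link.split(";")
        uri = parts[0].strip().strip("<>")
        attrs = {}
        for part in parts[1:]:
            key, _, value = part.partition("=")
            attrs[key.strip()] = value.strip('"')
        resources[uri] = attrs
    return resources

payload = '</light>;rt="oic.r.light",</sensors/temp>;rt="temperature";if="sensor"'
links = parse_link_format(payload)
print(links["/sensors/temp"]["rt"])  # -> temperature
```

Once the backend has this map, it knows which URIs to query, which is exactly the "discover, query, collect, manage" loop described above.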
So somebody in the community has already done a MIPS port; for example, the Microchip PIC32 board is supported. Bluetooth 5: I talked about a Bluetooth stack; we started with Bluetooth LE 4.2, and now we have BLE 5 support. People worked on a sensor framework to make life easier. If you want to integrate some standard sensors, a temperature sensor, an accelerometer, or a humidity sensor, it is easy to plug in new sensors and plug in the drivers, because there is a hardware abstraction layer. And so writing applications that configure and collect data from these sensors is much easier. People also worked on console and shell improvements. Somebody added an I2C protocol, and that's how you connect on-board hardware: you can have a serial connection, an SPI connection, and now I2C. And as I mentioned before, somebody wrote newtmgr in Node.js. So we've had different kinds of contributions, and we're hoping for more and more. At ApacheCon I'm here to pitch this project and hope to generate some interest, networking, and collaboration. Here are some interesting use cases that people have used Mynewt for. The first one that I became aware of was a quack slider, a little duck which is basically a conference badge as well as a clicker to advance slides, and this was used at a security conference more than a year back; there's a picture of it. Then people optimized the code and got it into a really small footprint: 128 kilobytes, a whole OS with a Bluetooth stack as well as a management protocol, in 128 kilobytes. There are BLE peripheral connections of many types, many concurrent connections, and so on. And finally, I just wanted to highlight that there has been a lot of activity in this project: we have 20 committers with affiliations to different companies, 39 contributors so far, and over 5,000 commits.
Just earlier today, the Traffic Control folks alerted me that we were number 13 in the number of commits so far, so yeah. We are doing great as far as activity is concerned, and we have a lot of forks, lots of stars and likes, email activity, and lots of discussion. We have really focused on putting ideas forth and discussing them; people come up with proposals, and there's a lot of healthy community discussion, I think. We've created lots of tickets and addressed most of them, so we are going full steam ahead. And that is time's up, and my talk's up too. Thank you. Thank you so much. Don't go anywhere yet, not just yet. That was a great pitch; as far as I'm concerned, the project is ready to graduate, but what do the judges think? See, I was right about the curmudgeon part, and give it to me first. Okay, assuming I know nothing at all about real-time operating systems, what is it about Mynewt that makes it so special? You told me a lot about what it is, but what makes it different from other options out there? Why should I spend my contributor talents on Mynewt rather than something else? Sure, there are other operating systems out there, but this is the one with the most permissive license, so thank you, ASF. That's one reason. And the other reason is that we have actually thought about the commercial aspects, meaning that this OS is meant to be in production. It's not just a development OS, not like FreeRTOS or some other RTOSes out there which are more academic in nature. This is meant to enable product manufacturers to not spend upfront and yet build a product that can be taken to market.
And as you heard a lot about device management as well, we are thinking long-term and making sure that a device built on this operating system is actually something that will be successful and will be able to function long-term in an actual deployment. It seems like Sally has a question. Okay, as the resident non-technologist, I'm going to ask questions that may seem odd, but I'm curious. Tell me about the security regarding this. If anyone can use it, what's going on security-wise? So security-wise, we are building security into all the different components that constitute Mynewt. You heard me talk about the secure bootloader, which makes sure that firmware that is authorized and authenticated boots up the device. So that's the first thing. The second thing is, when we are talking about communicating with the device, you want to make sure that the communication channels are protected. So we have implemented all the standard protocols: for example, for UDP we have support for DTLS, and TLS for TCP connections. BLE comes with security; we have made sure that it is not optional, it is mandatory. So we support all the security profiles and the security manager in Bluetooth. So protocol-wise we are also covered. And as far as the code is concerned, we are doing regular checks. One of the good things about open source is that there are so many eyes on it, and people are looking at it, so security-wise it's one of the better options out there. For example, we run Coverity scans, and we are running vulnerability checks regularly to make sure that the code quality is good. So we are taking steps to make sure that security is very much a part of the project. Another question that comes up a lot, especially when you are ready to graduate, is: who uses you? I see some use cases, but are there actually organizations that use you? Yes, there are organizations that use us.
Without naming any particular organization, I can say that there are things like lock manufacturers, for example. Presence detectors: in a conference room, how many people are there? There is athletic wear, where they monitor your heart rate and other characteristics. There are rehabilitation services that actually monitor how you are moving. So yes, there are several; I could go on. Do you want more? And yes, some of them are in production and some of them are being tested. I'll keep it short. Why haven't you graduated yet? We took time to understand the governance and make sure that multiple people in the community knew it, not just one person. So for example, when we did our releases, we took turns doing the releases and going through the licenses, because we don't want to depend on just one or two people. That takes time, but I think it is time well spent. That's one reason. And the second is, we are trying to document each and every thing that we do, including all the maturity steps that we have taken. That is taking a little time, but I think, again, it will help us in the future as well. So we're ready to graduate; I think we want to send out the resolution charter this week or next week. Ship it. Keep up the good work. Sometimes after graduation, podlings get lackadaisical and lazy; keep as much effort going forward, and keep attracting new committers. Your presentation was great, so make sure there are more people in your community that can do that too. Once again, thank you so much. Next, Will will be telling us about Edgent. Okay. Hi, everyone. My name is Will Marshall. I am a committer and contributor to Apache Edgent. Apache Edgent has been incubating at the Apache Software Foundation for the last year, but the name was changed from Quarks about eight months ago. So for those who knew it as Quarks, it is now Edgent.
To explain Edgent, I think it might first help to talk a little bit about data streaming. Data streaming, I find, is best described as the way data is consumed from sources that might run forever. For example, a connected temperature sensor, which produces temperature readings: you can't wait until you have all of the readings to process them, since it might never turn off. So the data needs to be processed as soon as it is ingested into the system, and any application which is processing that data needs to be written with that principle in mind. And so there needs to be a framework which is also created with that principle in mind. The way this is solved nowadays with a number of different streaming technologies is that, let's say you have some sensors at the edge. You have the GPS device on a phone, or you have a temperature sensor, like I said, something measuring the temperature of the fluid intake of your car's engine, or the humidity of your house; you could have many, many different sensors. This is a common pattern that we see. Typically, the data is sent to a cluster where it is analyzed by a system, for example Spark or Flink or any number of different data processing systems. And the issue with this is that this could potentially be a lot of data, which is problematic for two reasons. One, a lot of edge devices and external systems use 3G or 4G network connectivity to communicate their data, and you are paying for every kilobyte that you send. This can be very expensive, and there might be bandwidth caps in addition to that. There are also latency issues. If the backend system determines that some action needs to be taken, then the action has round-trip latency associated with it, which might prove to be too much for some applications. And the takeaway from this is that we need to send less information from edge devices. We need to make sure that only interesting data is sent.
So back to the example of the temperature of an engine. If you know that the standard operating temperature is between 50 and 70 degrees Celsius, then maybe your application doesn't need to send those values. It's only when it starts to go outside that range that you want to start monitoring it more closely from the backend. And as such, these streaming operations that process data on the edge need to happen on the device, so as to perform data reduction. This is exactly the problem that Edgent is solving: it is doing streaming analytics at the edge, and Edgent is a community to promote that. It is written in Java, and we chose Java because it struck a good balance between how quickly we could get something into the hands of developers and the actual performance of the system. But additionally, because it runs on a JVM, any system which runs a JVM can also likely support Edgent: Android phones, Raspberry Pis, things of that nature. Edgent is modular in the sense that its core runtime and its features are composed of a number of different jar files. If you are using MQTT to communicate to your backend from a phone or other device, you might not need a Kafka connector using up extra space, and for constrained devices especially, this might be very beneficial. The runtime is also extensible, in the sense that while we were writing it, we had in mind that someone might come along and say, this is great, but it doesn't quite suit our needs. So we tried to make it easy for another developer to come along and improve upon it and extend our interfaces, hopefully relatively easily. Oh, just to back up for a second: we chose Java, but ultimately the decision of what language to choose should come from the community, because it's possible that a different language should be prioritized. So here are a few applications.
The first two are ones which have actually been written: monitoring remote temperature sensors, and, if you were at our earlier talk today, you will have seen a face detection application which only sent frames to a backend if it detected faces in the image, which drastically reduced the amount of data that was sent. Another: listening to a microphone and only sending the sound intervals which contain somebody speaking, if the decibel level is high enough. So these are the types of applications which might play well with Edgent. We've been incubating for about a year, and there are many things that have gone well for our community. One is that we do have a fair amount of functionality. The runtime is well tested, we've put emphasis on that, and it is relatively mature for how young it is, in my opinion. And we have a good release cadence. We released the first version, which was made public, about a year ago; four months ago we released another version; and about two months ago we had yet another. So we are continuing to release and provide improvements to Edgent. And lastly, we're taking big steps to integrate with other frameworks. It's very important that Edgent is easy to integrate with whatever system needs to communicate with an edge device, and this can be databases, Kafka, as I've mentioned, but also REST and WebSockets; there are so many ways of talking to applications on the edge. As for cons, or maybe I should say ways of improving: we need more contributors. We need to garner more interest. We've focused a lot on the actual tech, but not necessarily on publicity, and that is something that needs to be focused on. Also diversity among committers: right now there are eight committers, and seven of them work for IBM. So we're focusing on that aspect as well, but hopefully that should come along with publicity and getting more community interest. But in general, I think Edgent is very well positioned.
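The engine-temperature case described earlier is the essence of all of these applications: run a filter on the device and forward only interesting readings. Edgent itself expresses this as a filter operator on a Java stream topology; the idea is small enough to sketch generically in Python, with all names and sample values here purely illustrative.

```python
# Illustrative sketch of edge data reduction (Edgent itself is Java):
# forward only readings outside the normal band, so the backend sees
# interesting data instead of every sample.
NORMAL_RANGE = (50.0, 70.0)  # assumed standard operating temperature, Celsius

def interesting(reading: float) -> bool:
    """True if the reading falls outside the normal operating band."""
    low, high = NORMAL_RANGE
    return not (low <= reading <= high)

def edge_filter(readings):
    """What a filter operator in an on-device streaming topology would do."""
    return [r for r in readings if interesting(r)]

samples = [62.1, 64.0, 71.5, 66.3, 49.2, 88.0]
print(edge_filter(samples))  # -> [71.5, 49.2, 88.0]
```

Six samples in, three out: that ratio, applied to a metered 3G or 4G link, is where the cost and bandwidth savings come from.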
The Internet of Things is something which gathers interest right now, and I see a lot of applications for it. So hopefully this served as a good introduction to Edgent and our community. Take it away, judges. So another exciting project, another great pitch. But what do the judges think? Hey, Will. So IoT is hot and sexy, and everyone's talking about it, and that's fantastic. And great presentation. Why so few committers? Like I mentioned, we focused more on the actual implementation of the releases and getting the functionality out there, and haven't done as much marketing or outreach as we could. We're starting to do meetups in places like San Francisco and Boston, and hopefully that will get some more interest. That's really the biggest problem that we see right now. So that's also part of life here. Your initial design driver was, if I understood correctly, basically that communication is expensive or slow, or a combination of both. If history has shown us anything, that usually is a problem that doesn't last for a long period of time. So I'm wondering, do you have a second option or a pivot, considering that those restrictions, which really were the design driver behind it, may no longer be restrictions at some point? I mentioned that there were two main draws to Edgent. One of them is data reduction. The other one is that if you have a streaming service running on the device, then you don't incur round-trip latency going to a backend. And so while you wouldn't use this for a control system in a car, for certain applications you might want to write it in Java and just not have to worry about 200-millisecond round trips. So that's one aspect of it; that's sort of a technical thing. But other than that, this was sort of brought up earlier today: imagine you just have streaming applications running on a lot of edge devices and a publish-subscribe network between them.
Yeah, I actually don't know where I'm going with that, but I would say the first answer. Are you thinking of a multiplexing kind of implementation, or something like that? Yeah. So do you think you might be able to attract more committers if you targeted more constrained devices, rather than just working on the JVM? To target more constrained devices, we would likely have to reimplement Edgent in something which is not Java, which is something that we have talked about. But like I said, that decision is something that ideally would come from the community. Before we take the time to completely reimplement it in a different language, we want someone to point at an application and say, hey, this is really cool, but it would be better if it were in Swift or Go or something else. So I think part of that comes from having a larger discussion about it, and from that might come: oh yes, actually we want this to run on a microprocessor, we need it to be at a lower level. And that would be a great discussion. Who uses you? Are you deployed in real life anywhere? No. Well, we did have one case where it was used. There was a festival in Germany, and people had little RFID chips in their badges, I think, and every room, every place in the festival had a sensor, so you could track where people were going in real time. I didn't work on that application myself, but that was an actual application. We haven't heard from them since, so I guess it went well. Yeah, just keep up the good work and you'll get there, I think. Yeah, I'm also with Sally; I don't understand why you're not pulling in more people, because it really is an incredibly cool project. The space itself is very, very interesting, but also the technology behind it is the kind of stuff that people really, really like playing around with.
So take advantage of whatever opportunities the ASF has to promote the project, because I have a feeling it's going to be like a dam breaking, and all of a sudden you're going to have a crowd of people coming in. What Jim said. This is amazing. So next up is OpenWhisk. This is my first time at ApacheCon, and I might not want to give my presentation now that I've seen those two. My frame of reference was a YouTube video with a horse; I thought this was a different type of presentation. Mine should be quick. My name is Carlos Santana. No guitar jokes; I don't play the guitar. I only have 10 minutes, so I'm going to be talking about OpenWhisk. If we don't cover something here, we have two talks on Thursday, so you can learn more there. OpenWhisk is a serverless platform: you can run serverless, also known as Function as a Service, an event-driven programming model. Competitors, also known as Amazon Lambda, do the same kind of thing: you write functions. But let's get to the meat of it and convince these three people that we are awesome. Anyway, with OpenWhisk, since we are serverless, we use less energy, right? So that's how we're going to save the planet. My pitch today is: when you go OpenWhisk, you go green. We go green, we save the planet. Fewer idle servers, less energy wasted. So the trick with OpenWhisk, as an implementation of this multi-tenant model, is to get as many functions as possible running concurrently on a single VM. Now, you're not in charge of the VM as a programmer; we, the platform owners, are in charge of the VMs. And those are not infinite. Hear me again: those are not infinite. So the better we utilize servers, the more functions we can run. The challenge, if you're joining OpenWhisk as a committer, is that you think every day about how we can fit more functions into a single VM, in an efficient way, in a sandboxed way, in a secure way. So that's my pitch to committers.
But for programmers who don't want to deal with infrastructure, or localhost:8080, or how many VMs they need: we just tell them, write functions, not servers. As a programmer, you go back to basics, basics like the browser. Remember your first JavaScript web application? Maybe you started with HTML5 and JavaScript, where you handle an event. This is the same pattern. We bring that simplicity of abstraction to the cloud. You handle an event; there's a function behind it. Like I said, functions that handle events. Remember onclick? Who doesn't remember onclick, right? It's the same thing in the cloud. In the cloud, there are things happening in the IoT space, mobile applications, backend databases. It's an event, and an HTTP request is an event. So you handle it with a function. So the idea of serverless is: get with functions. Functions as a Service, if you want to use that term. You're dealing with functions, and yes, there will be a lot of functions that you have to manage, so DevOps is not going away; it's getting more fun. How am I doing with time? This is a simple scenario. As committers of the core OpenWhisk platform, we're concentrating on always maximizing that utilization. So instead of having servers just sitting idle there, wasting energy, not saving the planet, you get all those functions working together on a single VM, so we maximize utilization. The idea is, if you have an application that is already maximizing utilization on a VM, that's okay; leave it as a non-serverless application. You don't have to use the hot thing just because everybody's using it, right? But you may have use cases where you need to maximize that VM with multiple functions. This is a graph, because every good pitch has a graph. It shows you that as we go forward, the planet gets better, right? Things that we can build with serverless:
Mobile applications; message queues with Kafka, those types of events; web applications with HTTP, single-page apps, REST APIs: we can build them with serverless. IoT, we talked about IoT, doing data analytics. What else? Database processing: something gets inserted into a database and you want to react to it. Maybe you want to run things in parallel; maybe you have a burst of data entries into a database and you need to process them, and then for two weeks nothing happens and you don't pay for it. That's also one of the benefits of serverless: you pay as you go. In open source, you don't pay, but if OpenWhisk is hosted on a platform, then you pay as you go. Talking about mobile applications: while I'm here, I'm looking for investments, right? We're going to build a mobile app, and we're going to put it in the app store, and we're going to call it Whiskers Save the Planet. Whiskers is the term we call ourselves, the committers working on OpenWhisk. Whisk is not about cooking, right? Or whiskers. Five minutes. So the idea is that we're going to build a mobile app. Like any hot app out there, you're going to play a game. You earn points. And the challenge will be fun, because if you become a committer of OpenWhisk, your challenge is: how can I fit more functions concurrently into a single VM? So your challenge is that game. We came up with that game actually last night. You get points, and like any good mobile app, you use points to brag to your friends, or buy clothes, or build rooms, or things like that. You can donate to a foundation to serve the earth; I don't know who added that last one. There's a guy in the infra chat room always saying: hey, infra, stop giving VMs to top-level projects, right? They're wasteful. Let's take a few VMs and create an OpenWhisk serverless platform and just give people accounts, so they can write functions to build a website, to build anything they need.
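A function in this model is just an event handler. OpenWhisk's convention for Python actions is a `main` function that receives the event's parameters as a dict and returns a JSON-serializable dict; the greeting logic below is purely a made-up illustration of that shape.

```python
# Sketch of a serverless action: the platform invokes `main` with the event's
# parameters (an HTTP request, a database trigger, an IoT message...) and
# expects a dict back. The greeting logic is just an illustration.
def main(params):
    name = params.get("name", "world")  # event payload, e.g. from an HTTP request
    return {"greeting": "Hello " + name}

# Locally we can call it the way the platform would on an incoming event:
print(main({"name": "whiskers"}))  # -> {'greeting': 'Hello whiskers'}
```

That is the whole programming model from the developer's side: no server, no port, no VM, just a function that maps one event to one result, exactly like an onclick handler in the browser.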
I've been a PMC member for Apache Cordova, and I really hate managing my VM. I don't like patching kernels. I don't like getting that email from infra saying: hey, you have a security hole, somebody hacked you last night. Just give me something where I can put in a Node.js or Go snippet of code that handles a website and serves it whenever somebody uses that application. Or DevOps: if we have to do CI or CD, just write functions. So that's the idea. If we can convince infra, you get bonus points. My design, and this is what I'm looking for investment for: this is not Tetris at all. This is not Tetris. I didn't copy this from Tetris. The challenge of the game is whiskers save the planet. This is a single VM that you need to feed all these functions into. We have Java functions, we have JavaScript, we have Python, we have Go, and all of them are running at the same time. Everybody wants to run their functions, and we need to manage how we can run all these Docker containers at the same time, while pausing some and running others, and making it secure. So if you feel adrenaline playing this game, you can be an awesome committer on our project, because that's what we live every day. Funding. We have two ways of funding. You can give us, I calculated, like $5,000 for one day of a designer, because we need a designer, and then a one-week developer is enough to build the mobile app. The other option, which I like better, is you can be a contributor to our project. You can start by using it; any committer working on a project is a user first. I think that's the thing that I learned first. And then you can be a contributor, just helping with the docs or opening an issue. I started like that before I was a PMC member on Cordova; I started just opening issues.
I was a user and I started opening issues, and then the community was very welcoming, and I started fixing things. I fixed docs and I helped with the blog until I got into the code, and then I was the one that opened the Slack, and then I answered the questions. So you start small. So yeah, contribution is another way you can give us funding. That's okay also. We need a designer. I'm not very good at design. And going back to incubating: so OpenWhisk is incubating. We just started; I hear people say many months, but I think it was November, December when we did the proposal. Adobe and IBM are mostly the committers. We have our mailing list. So we have been doing our infra duties, right? Getting our website up. Getting the PMC. Voting in some committers. What else? I've been working with infra to move our 28 GitHub repositories to GitBox. It's growing; we talked yesterday, and there are a couple more to do. In terms of community, I put a green dot there, but I think it's more yellow. What we're missing for graduating is attracting independent committers or other companies to help. So I think that's where we're missing. And the other stuff is starting to do releases and getting that cadence and automation. We're big on automation. So I heard somebody say document the process; I prefer to document it with code, so anyone can do a release and get into that cadence of what it means to do it in an Apache way. I think folks that are doing Apache know what that means. Folks that are not just need to be helped, and to be one team. I think that's tough. So I think that's it. Thank you so much. That was awesome. And I don't know about the rest of you, but if you need anybody on the IPMC to ever review your releases, you just got the guy. That was awesome. But back to the judges. So I've got a great idea: what we do is we remove containers, we remove VMs, and instead we write everything as one large monolithic program on a single server.
That would seem to alleviate all the kinds of problems that OpenWhisk is trying to solve. How would you respond to someone with that sort of backward-thinking mentality? I think if you have one server and one VM, and that VM is big enough to serve all your users and all the data that comes in, that's a good choice. If, on the opposite side, you have a lot of data that comes in and you don't know when, and you don't have that big a server, or the budget, or the complexity and the learning of doing DevOps, maybe OpenWhisk is a starting point. Great presentation, by the way. I certainly like the game that's not Tetris. I was just thinking along Jim's lines: if you're just writing applications that are a whole series of functions, how do you make something that's more complex than just a whole lot of separate functions? How do you modularize it? How do you put it all together? Any thoughts on that? That's why we decided to take it into the open; it's been on GitHub for a year, so that's not open source, everybody knows that. That's just source open, I guess. That's why we're building in the open source community, to get feedback from the users. Right now, what OpenWhisk has is, I would say, a basic programming model where you can declare a sequence. You can create a sequence of actions, and you don't get penalized for the think time. You can stitch together a chain of actions, but we're working to see if we can create a step-function type of programming model where we can create a DSL, so you can build that application. But on the flip side, since we're talking about monoliths: you're going from monoliths to microservices, and this is more nano-services, so what happens when you have a lot of things that are doing stuff and you don't know where they are and whether they worked or didn't work, because of monitoring and logging?
You're not getting away from that, so we're trying to build things there. We have a programming model where you have triggers and rules, so you can define triggers and rules, what the actions are, and those actions can be sequences. But as you can see, it gets kind of complex, so we want to build it with the community, so the community can come up with the answers and build a solution together as we try things and build things. That's where we are, kind of early in the process of this project. Okay, so you had a fun presentation. It's exciting, and again, we like the games, and all of that's great. And you mentioned simplicity, and I think a lot of people agree that simple, elegant design is definitely the way to go. How are you finding the project in terms of traction? Users, I know there's some media about it, but that's not an Apache thing, and since we're very closely associated, for a lot of people psychologically, with servers, are people understanding your concept easily, and are they able to gravitate towards it, and what's happening in terms of how people are able to use it? Yeah, so I think we are getting good traction with developers building apps, simple apps: write a function, it runs; write a trigger, it runs. Where we're missing traction is committers. Parts of the system are kind of complex. Actually, they're complex because they're solving hard problems. We're using a lot of components. We have Docker, we have Kafka, we have a Scala system with Akka, things that may not be familiar to a lot of people who would be contributing and helping with the source code. What we're seeing is an adoption of users that are starting to use it and, at the same time, helping other users by answering questions. So in that respect, we see that, but how do we turn around and turn those users into committers? Like I was saying, every committer starts as a user.
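The sequence model described above, where actions are chained so that one action's output becomes the next one's input, can be sketched locally. OpenWhisk does this server-side (roughly `wsk action create mySeq --sequence a,b,c`); the action names and shapes below are illustrative only.

```javascript
// Local sketch of the sequence model: each "action" is a function from
// a params object to a result object, and a sequence feeds one action's
// output into the next, left to right.
const validate = (params) => {
  if (typeof params.value !== 'number') throw new Error('value must be a number');
  return params;
};
const double = (params) => ({ value: params.value * 2 });
const label = (params) => ({ result: `value is ${params.value}` });

// Compose actions into a sequence.
const sequence = (...actions) => (params) =>
  actions.reduce((acc, action) => action(acc), params);

const pipeline = sequence(validate, double, label);
module.exports = { pipeline, sequence };
```

Triggers and rules then sit on top of this: a rule binds a trigger (an event source) to an action or sequence like `pipeline`, which is where the complexity the speaker mentions starts to accumulate.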
So we need to provide more documentation, or steps on how to do your first PR, how to test it, how to set up your environment. Because right now, it's optimized for people like Adobe and IBM that are deploying this at scale, multi-tenant, enterprise-ready, but not committer-ready, I would say. How many committers do you have? We have around 15 committers; with the PMC, more than 15. But most of them are IBM. So we're trying to attract people that are independent, that can sustain this. We're also in production, so we have a vested interest, but that's not the Apache way. I want to see a project where, if one of the big companies leaves the project, that project consistently goes forward as a community. So I don't know if people have heard, but I wear two hats, my company hat and my Apache hat. Sometimes people get confused about which hat I'm wearing, so, with my Apache hat: keep at it. Good work so far, but keep going. As a developer, I was never really super excited about Docker containers, VMs, stuff like that. They never really made my job as a developer easier. This does. Very cool. Thank you. If you can only find a way to turn those users into committers, I think you'll have no problem at all. And now last, but definitely not least, Traffic Control. So Mark, take it away. All right. All right. Thank you. I'm Mark Torlumpke. I'm here to talk about Traffic Control. Your first question is no doubt, what is Traffic Control? And the answer to that is: it's a CDN control plane. Open source, of course; we're in the Apache Software Foundation. Your second question is probably, what is a CDN control plane? The answer to that is the set of software, everything that is needed besides a caching proxy, besides a cache, to build a CDN. CDN can still have several different meanings, or several different versions, depending on who you are and what your specific use cases are.
Traffic Control and a cache, like Traffic Server, make up a classic CDN, like Akamai, Level 3, Limelight, CloudFront, Cloudflare. So, yes, using a cache, using Traffic Server and Traffic Control, you can build a CDN, a world-class CDN, actually, that rivals some of these vendors that you can buy a CDN as a service from. This is the obligatory up-and-to-the-right graph, number one. This shows the total internet traffic delivered in a month. This graph ends in 2014, but the trend has certainly continued. And CDNs are really the technology that enables this traffic to keep growing. As we push media, as we push content closer and closer to the user, we don't have to infinitely scale our backbone networks. This is obligatory up-and-to-the-right graph, number two. The green line is an average hour, the blue line is a busy hour, and certainly as we consume more and more media, the focus on the content per user becomes a lot higher. And again, the need for CDNs accelerates. A quick history of Traffic Control and the work on it. January 2012, we started this at Comcast. Nine months later, we did our first production deployment. It was very beta-y, but we got the job done. Two and a half years later, we cleaned up the code, we wrote a lot of amazing documentation, and we got all the legal approvals to open-source it, which was a really big step for Comcast. And then a year and some change later, we were accepted into the Apache incubator. And then February of this year, we got our first release through. Thanks, JDA, thanks Justin, others. Back to the project a little bit. We don't have a ton of time, but potentially we can have a hallway conversation or something about these things. We see a CDN as having these five components. We've lifted out caches a little bit because, again, those are typically separate projects, but we have the other four pieces.
We have software, what we sort of consider to be top-level components, to cover the other four pieces: analytics, configuration management, a health protocol, and a content router, a traffic router. And a quick note about the breakdown of the code in the repo. I think it's a little bit important to highlight the languages that we use. We certainly have a good chunk of web-y sort of languages. We have a good chunk of Perl, as any good CDN control plane does. We run a massive, highly concurrent CDN, so we have a good chunk of Go, and Java is also sort of our workhorse on a lot of stuff. But also, do note the investment we've made in our documentation: 22,000 lines of RST files is certainly nothing to be frowned upon. Anyway, around 200,000 lines of stuff in total. For reference, Traffic Server, the caching proxy that we wrap, is around 500,000 lines of C and C++. Find us on Slack. We feel like we're very good about bringing new community members into our ecosystem. If you are beginning, if you want to just get Traffic Control up and going, you will probably have a set of questions, and typically Slack is the place to find us. We're typically very reliable and helpful there. However, if we have design discussions, if you want to talk about roadmap, any of those things, they must happen on the mailing list. If it doesn't happen on the mailing list, it doesn't happen. We know that. And a couple of quick notes about the activity of the project. This is the number of commits per week for every Apache repo over the last six months. And the orange line is Traffic Control. Find me later if you want some of the details here. And yes, the Mynewt folks were 13th on this total list.
Summing the area under those curves, or summing all of those commits over the last six months, puts us like 15th, I think, which, we totally understand, commits is not a way to measure how active your project is. And in fact, it really highlights one of our struggles, which is we have lots of people, lots of interest in solving our technical problems, but we don't have a lot of interest so far in solving our non-technical problems: independence, community involvement, things like that. Said another way, those are the hard problems for us. The easy problems, we feel, are the technology. And again, we understand commits is not really that accurate a measure. And that is all I have. Questions? Thanks. Thank you so much. Well, I guess this time we'll start with Jim. Yeah, there seems to be a tight, almost, dependency on Apache Traffic Server. I was just wondering basically two things. Are the hooks and scars in there enough that you can use basically almost anything as a cache? That's question number one. And question number two is: how much cross-pollination have you seen between Traffic Control and Traffic Server as far as commits and things like that, or contributions? Two fantastic questions. The tight coupling with Traffic Server is obvious. First, we would love to support a number of caches. Certainly, there's a lot of effort, a lot of engagement with NGINX, Varnish, some of the other caching proxies. We always keep that in mind when we make decisions. It's always top of mind for us. If you look at our logo, we didn't want a copy of the Traffic Server logo, because we didn't want that tight coupling. If you look at a lot of our diagrams, our documentation, those things, they're not similar. We did not model them after Traffic Server. So our heart is there. It's always top of mind. We just haven't gotten there yet. The second question. So, yeah, another great question.
Another thing that's very much top of mind: on Sunday and Monday this week, Mother's Day and the Monday after, we had a two-day summit right next door to the Traffic Server folks. And it was good. A lot of the Traffic Control folks were able to spend an hour or so in the sessions for Traffic Server. And certainly, the other way around happened a sufficient amount; I'm getting some nods. Yeah. It's top of mind. These are big systems that we're building. They do take some effort, but I think our heart is there. So why should I use this CDN over some of the others you just mentioned? Right. Yeah. Why DIY, versus when you can fairly economically buy from a vendor? That's a great question. If you work for a large enterprise, or if you have a large enterprise such as Apache, we think it's worth it. You always need to run the numbers: is our investment in the people and the documentation and the monitoring worth it, versus just writing a check? I don't know, folks. Other Traffic Control folks, do you have anything else to add there? There we go. So why Apache? Why did you guys come here? You've been in existence for a while. What made you come to us? And how are you going to expand your group? Yeah, I'll probably lean on Jan again for this one. Apache has a lot of good facilities for teaching us the right way to do open source. And we're learning a lot. We're leveraging those facilities. Yep. I love to see it when one ASF project is able to incubate or create a space for other open source projects. Fantastic. CDNs are like mojo magic that people really don't appreciate as much as they should. So kudos. Fantastic. And I realize it must have been a major effort for Comcast to actually take this and open-source it, and kudos to that as well. That's major. Thank you. What Jim said. So how far do you guys think that you'll be continuing on the incubation path? Because we believe that there's a way to go, right?
We do think there's a way to go. Like I said, we are learning a lot. Even though it's been almost nine months, we sort of feel like we're just getting started. Again, our mentors are helpful, certainly Justin and JDA, helpful on the IPMC. I would be surprised if we graduated inside of the next year, but maybe, similar to Mynewt, a year from now is probably a good goal. And that was it for today. So four amazing presentations, three brilliant judges. And let's thank our judges once again. Thank you for coming. And thank you all for coming.