Welcome, everybody. My name's Phil Robb. I'm a director at the OpenDaylight project, which is an open-source project building a software-defined networking platform. OpenDaylight started about two years ago, and what we're really trying to do is create a common platform for building an SDN controller that the industry can use as a whole. How that relates to OpenStack we'll get into in a few slides, but really OpenStack and OpenDaylight are two very important pieces of a variety of solution stacks that different industries need. So as our communities continue to grow, our ability to collaborate is very, very important. And I'll probably say this a few times throughout this talk: OpenStack and OpenDaylight are very similar in the sense that when an industry comes through a disruptive technology and looks to build something for its customers, if you don't do it on a common platform and all the different vendors create something independently on their own, it slows down innovation, it makes things confusing for users, and everything moves much more slowly, right? Open source is all about being able to collaborate and get to innovation faster, as well as get to it cheaper. So as the industry moves from very disparate, very different pieces of networking gear to some kind of agile way of controlling a network, having a common platform that everybody can leverage is really important. The role of OpenDaylight, then, is to create that platform, have that environment, have a strong, robust community, and, similar to OpenStack, create a level playing field where everybody in the industry can come and build whatever they need to solve their problems.
So again, we've been around for two years; OpenStack's been around for five. We're just now to the point where vendors have started creating solutions based on OpenDaylight as a platform and bringing those to market. These are some of the organizations that have made announcements or already introduced products, and if you think back to 2012, when OpenStack was two years old, it was the same type of thing, right? Vendors were starting to realize what they could do with OpenStack and starting to bring products to market, but there weren't a whole lot of users yet. OpenStack at that 2012 mark was really trying to get users, and that's where OpenDaylight is right now as well. We've just started to get traction with a variety of users talking about what they want to do with OpenDaylight, and we chose this slide from AT&T as an example of where, again, OpenDaylight and OpenStack create a strong solution for an industry. In particular, this is about network function virtualization, certainly a term you've heard throughout the show over the last couple of days, and this particular slide was presented by an AT&T representative about a week or two ago, on May 6th at NFV World. It's a good example: they need a service orchestrator. They're trying to create an environment where what used to be delivered as hardware appliances inside their data centers is actually virtualized. They've seen what virtualization and cloud technology have done for typical enterprises, and they want to leverage that in their environment. They want to take their network functions, things like WAN optimization, intrusion detection, firewalls, load balancers, parental control systems, and so forth.
They want to put those out in a virtualized environment, have VMs spun up to execute those functions, steer the traffic from their network through those different functions as needed, flex up, flex down, procure new customers, and deploy all of this in a very flexible way. Not only that, but the other side is network management: being able to react to what's going on in the network, be it congestion someplace, or time of day, where different services go down and you shift bandwidth across your network, or a network element or a link goes down. Being able to react and continue to provide service to those VMs is all very, very important to the telco industry and carriers at large. So being prescriptive with an orchestration engine, a service orchestration engine at the top and then OpenStack as the actual infrastructure or cloud orchestrator, being able to provision all of that in their solution environment, and then also being able to prescriptively push that down into the network, set the network up as necessary, but also react as things happen in the network so you can continue to manage those services: this is where a separate SDN controller is really helpful for an environment where, again, they're trying to do virtualized functions. In their environment they have a variety of controllers as well. As you can see, the middle green boxes are the OpenDaylight controllers, what they're calling their SDN global controller. They also have local and nodal controllers that may or may not be OpenDaylight, or some other type of controller best suited for their networks in those places. This is what AT&T is prototyping currently and looking to build out as part of what they call their Domain 2.0 infrastructure. And again, OpenDaylight combined with OpenStack is a key solution for them.
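The traffic-steering idea described above, passing a flow through an ordered set of virtualized network functions, can be sketched in a few lines. This is a toy illustration under invented names and a dict-based "packet"; it is not how a real NFV platform implements chaining:

```python
# Minimal sketch of service function chaining: a packet is steered
# through an ordered list of virtual network functions (VNFs).
# All function names, addresses, and the packet format are hypothetical.

def firewall(packet):
    # Drop packets from a blocked source; otherwise pass through.
    if packet.get("src") == "10.0.0.99":
        return None
    return packet

def wan_optimizer(packet):
    # Pretend to compress the payload down to 64 bytes.
    packet["payload"] = packet["payload"][:64]
    return packet

def load_balancer(packet):
    # Pick a backend VM by hashing the source address.
    backends = ["vm-a", "vm-b"]
    packet["dst"] = backends[hash(packet["src"]) % len(backends)]
    return packet

def steer(packet, chain):
    """Pass a packet through each function in the chain, in order.

    A function returning None means the packet was dropped.
    """
    for fn in chain:
        packet = fn(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, wan_optimizer, load_balancer]
result = steer({"src": "10.0.0.5", "payload": "x" * 200}, chain)
dropped = steer({"src": "10.0.0.99", "payload": "y"}, chain)
```

The point of the SDN controller in this picture is that the chain is data, not wiring: the orchestrator can reorder, insert, or remove functions and the controller reprograms the network to match.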
Comcast is another customer that has started to talk about what they're trying to do and how they see SDN fitting into their world. For them, it's about being able to manage an intelligent network. One of the projects that came to OpenDaylight early on actually came out of CableLabs and was somewhat of a predecessor to the work being done by Comcast, where not only did they want to manage their network, they also wanted to manage the endpoint devices in their network. Since the controller was touching all of those devices anyway, they added a protocol set to manage not only the network and the traffic on the network but also the endpoint devices that were sitting on that network, particularly through the COPS protocol and PacketCable Multimedia. So you'll see that as an example of one way to leverage an SDN controller beyond just managing the network. Now I'm going to talk a little bit about community and collaboration, important themes both within an open-source project and across open-source projects. From our community standpoint, these are some of our numbers. Again, OpenDaylight's been around for two years. We've had two releases, Hydrogen and Helium, and yes, we are following the periodic table. Lithium is our next release, and it's due on June 25th, so we're just now coming up on code freeze and entering our release-candidate phase. For Hydrogen and Helium you can see the number of projects, 13 in the first and 23 in the second. Lithium actually has about 40 projects, though we're also just to the point where some of those projects may end up getting shaken out; it depends on whether they're going to make their cutoff and so forth. From a contributor standpoint, we're up well over 300 active contributors with a lot of commits and a code base sitting around 2.2 million lines of code.
We also do, and want to continue to improve on, collaborating with other projects. If you think about how all of us are using open source, collectively putting open source together, aggregating it, and building the glue code necessary to make these things ultimately form a solution, either for ourselves or for our customers, then the relationship we have with OVS, and certainly the relationship we have with OpenStack, are very important. ETSI, the European Telecommunications Standards Institute, has industry specification groups, one of which is NFV. The whole coined term of network function virtualization came out of that particular standards body, and last year that activity ultimately led to the formation of the Open Platform for Network Function Virtualization, or OPNFV, which is the software-creation portion of that entire activity. So again, this comes back to the telcos, back to folks like Comcast and AT&T, who really are trying to figure out how to move to network function virtualization leveraging open source. The important thing there is that they need OpenStack and OpenDaylight to work together, and to work together well. OPNFV is an organization that is actively trying to figure out what the gaps are in these upstream projects, how they work together, and how we can make them work better together. As we look at open source in general, be it the evolution of OpenStack at the five-year mark or OpenDaylight at the two-year mark, our ability to work together to solve these problems is something we've never done before. Cross-community collaboration is an area where we can all improve, and it's something I look forward to continuing to work on with members of the OpenStack community. Another organization that's very important for the software-defined networking space at large is the Open Networking Foundation. They are the folks who govern the OpenFlow protocol.
And then there are also a variety of protocols that continue to be managed out of the IETF that are important to us. This is the architecture diagram of the version we currently have released, codenamed Helium. Just to give you a flavor for the componentry that goes into OpenDaylight: there are a variety of applications that sit along the top, the ones in light brown. You can see OpenStack Neutron, which we consider a northbound application that leverages us. We have a set of APIs that expose the functionality of OpenDaylight, and then a core set of services, all identified in green. There are base services like topology management, statistics management, the switch manager, the host tracker, the forwarding rules manager (FRM in the diagram), and so forth, as well as a variety of other features, be it service function chaining, which is specifically important to the NFV folks, or authentication, authorization, and accounting, and again a variety of other services. An important box here is the service abstraction layer. One of the unique things about OpenDaylight is that it isn't just an OpenFlow controller. OpenFlow is very important to us, and it's certainly very important to SDN, but it's not the only protocol out there. To be able to leverage more than just OpenFlow from a southbound-protocol standpoint, there's a service abstraction layer that separates those southbound protocol implementations from the services we expose to the northbound applications. Having that service abstraction layer allows us to do that in a sane manner. Then, along the bottom, you see a variety of blue boxes. Those are all the different protocols that are currently supported in Helium. There are more that will be supported in Lithium, and I'll cover those in a little bit.
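To make the northbound-API idea concrete, here is a sketch of reading the controller's topology view over RESTCONF and pulling out the node identifiers. The endpoint path and default credentials shown in the comments reflect common Helium-era defaults, but treat them, and the trimmed sample document, as assumptions rather than an authoritative API reference:

```python
import json

# A real client would fetch the document with something like:
#   requests.get(TOPOLOGY_URL, auth=("admin", "admin"))
# Port and path are typical Helium-era defaults (an assumption here).
TOPOLOGY_URL = ("http://localhost:8181/restconf/operational/"
                "network-topology:network-topology")

def node_ids(topology_doc):
    """Extract node identifiers from a RESTCONF topology document."""
    ids = []
    for topo in topology_doc.get("network-topology", {}).get("topology", []):
        for node in topo.get("node", []):
            ids.append(node["node-id"])
    return ids

# A trimmed, illustrative example of the JSON shape RESTCONF returns.
sample = json.loads("""
{
  "network-topology": {
    "topology": [
      {"topology-id": "flow:1",
       "node": [{"node-id": "openflow:1"}, {"node-id": "openflow:2"}]}
    ]
  }
}
""")

print(node_ids(sample))
```

The key architectural point is that an application never speaks OpenFlow or any other southbound protocol directly; it reads and writes this kind of model through the APIs, and the service abstraction layer maps it down to whatever protocols the devices actually speak.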
From a scope standpoint, anything on that page is within scope for the OpenDaylight project, with the exception of network elements. We never intend to build a virtual switch, for example. We may very well have applications that are part of the project, but we're never going to build anything below that southbound protocol set. Some of the things we worked on for Helium, again the current release, were better OVSDB integration, security groups, distributed virtual routing, load balancing as a service, and an ARP responder. Hopefully some of you caught Flavio, Colin, and Kyle Mestery's discussion earlier today; they went into more detail on those. Helium was also the first place we introduced policy-based, or intent-based, interfaces for northbound applications. This is a really interesting space, one that gets a lot of attention within the OpenDaylight community and is also very important to folks from OpenStack: being able to state just what OpenStack as an orchestrator wants the network to do, without specifically having to tell it how to do it. Give me a network; I have a certain set of hosts with a certain set of characteristics; I want their traffic to go someplace with a certain level of service. Being able to define what you want as an orchestrator and have something else magically make that happen for you, given whatever network elements would accomplish it, that's the idea in trying to abstract out a northbound interface that can be leveraged by OpenStack and other applications but implemented within a controller such as OpenDaylight. So again, Lithium is our third release, targeted for June 25th.
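The intent idea above can be sketched as a tiny declarative model: the orchestrator states which groups of hosts may talk, and a compiler expands that into concrete rules. The data model here is invented for illustration; OpenDaylight's group-based policy and network intent composition projects define much richer models:

```python
# Sketch of an intent-based interface: declare *what* connectivity you
# want and let the controller work out the per-device rules.
# All group names and addresses below are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    src_group: str      # e.g. "web-tier"
    dst_group: str      # e.g. "db-tier"
    action: str         # "allow" or "block"

def compile_intents(intents, group_members):
    """Naively expand group-level intents into (src, dst, action) rules.

    A real controller would render these into flow entries for whatever
    network elements happen to implement the path.
    """
    rules = []
    for it in intents:
        for s in group_members[it.src_group]:
            for d in group_members[it.dst_group]:
                rules.append((s, d, it.action))
    return rules

groups = {"web-tier": ["10.0.1.1", "10.0.1.2"], "db-tier": ["10.0.2.1"]}
rules = compile_intents([Intent("web-tier", "db-tier", "allow")], groups)
```

The value of the abstraction is exactly that the orchestrator's statement ("web tier may reach the db tier") survives unchanged as hosts join groups or the underlying switches are swapped out; only the compiled rules change.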
Specifically, we continue to make improvements inside the core, and you'll see this if you get close to the project: with every release we iterate to improve the functionality and robustness of the core set of controller features as well as bringing new features on, and we continue to do that with Lithium. There's significant improvement in integration and testing with OpenStack, improvements in monitoring and debugging, and improvements in clustering and performance. And like I said, Lithium is sitting right around 40 projects. When we started Lithium and Kilo was beginning within the OpenStack community, these were some of the things we were looking for: we really wanted to improve our ability to do continuous integration testing with OpenStack; clustering and integration were also targets, along with improving the integration of the different types of network-management features inside OpenDaylight that can be leveraged from OpenStack. There are a variety of projects and a variety of ways of managing your tenant networks through OpenDaylight. The OVSDB project is a direct connection to OVS. The Virtual Tenant Network project, or VTN, is a way of doing multi-tenancy. Group-based policy, which we already mentioned, is another way of managing your network through that intent-based API. Service function chaining, again very important to the network function virtualization folks, is the key piece for setting up the traffic steering that goes through the different network functions as necessary. And then, in general, Neutron: continuing to make it so that we work well with Neutron. Those were the areas on our wish list when we started, along with some other tactical improvements within the project. I can tell you that our continuous integration environment got much, much better; we spent a lot of time making it so.
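As a concrete picture of the Neutron integration mentioned above: OpenStack is typically pointed at OpenDaylight through an ML2 mechanism driver. The fragment below shows the general shape of such a configuration in this era; the exact section names, port, and URL path varied across networking-odl releases, so treat the specifics as illustrative:

```ini
; ml2_conf.ini (illustrative; check your networking-odl release docs)
[ml2]
mechanism_drivers = opendaylight
tenant_network_types = vxlan

[ml2_odl]
; Helium-era northbound Neutron endpoint; host, port, and credentials
; here are placeholders.
url = http://odl-controller:8080/controller/nb/v2/neutron
username = admin
password = admin
```

With this in place, Neutron API calls (create network, create port, and so on) are forwarded to the controller, which then programs the underlying switches via OVSDB and OpenFlow.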
For anybody who's worked on a couple of different open-source projects at the same time that are both moving at relatively rapid paces, getting them to the point where you know what you're testing, and when something breaks, where to find it, how it broke, and what it was, and then continuing to move on, that's been a significant challenge. It's one of those areas where, again, I'd love to continue to work with the OpenStack community to figure out how we can do better. As we work on cross-project collaboration, building environments that allow both sides to test, quickly debug, figure problems out, and then fix them so they never happen again, these are areas we could be working on together, and they're important to OpenDaylight and, I think, ultimately very important to OpenStack as we continue to build features that end up as solutions for entire industries. Some of the new projects in the service category for Lithium include a Unified Secure Channel and the Time Series Data Repository (TSDR). One of the interesting things about having a centralized controller is all the statistics you can be pulling from your virtual machines and from your network elements, be they virtual or physical. Having some way to store that off and then do big analytics against it, or some level of data science, so that you can understand what your network is doing and improve it: that's what TSDR is all about. Being able to identify devices in your network that aren't OpenFlow is what the device identification and driver management set of projects is about. Persistence is about maintaining, across starts and stops of a controller, a consistent set of data that isn't already built into the HA environment of the service abstraction layer. And then there's topology processing.
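Before moving on, here is a sketch of the TSDR idea: collect timestamped counters from the network and run simple analytics over them. The metric names and in-memory store are invented for illustration; TSDR itself persists to pluggable backends:

```python
# Toy time-series store for network counters, illustrating the kind of
# analytics a centralized controller enables. Metric names are made up.

from collections import defaultdict
from statistics import mean

class TimeSeriesStore:
    def __init__(self):
        self._series = defaultdict(list)  # metric -> [(timestamp, value)]

    def record(self, metric, timestamp, value):
        self._series[metric].append((timestamp, value))

    def average(self, metric):
        return mean(v for _, v in self._series[metric])

    def rate(self, metric):
        """Average change per second between first and last sample."""
        pts = sorted(self._series[metric])
        (t0, v0), (t1, v1) = pts[0], pts[-1]
        return (v1 - v0) / (t1 - t0)

store = TimeSeriesStore()
# Hypothetical rx-byte counter samples from one switch port.
for t, rx in [(0, 1000), (10, 2500), (20, 4000)]:
    store.record("openflow:1:port2:rx-bytes", t, rx)
```

Derived signals like the byte rate here are exactly what feeds the reactive behavior discussed earlier: spotting congestion, shifting bandwidth by time of day, and so on.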
In our environments now, we have virtual environments, so you've got virtual networks and virtual topologies. You've got physical topologies, because we support a variety of protocols: you've got a topology understood through the exchange of BGP information, or you have what you know from the flows you've programmed through OpenFlow. So you have a variety of different ways of looking at your network, and bringing topology out of the core controller as its own project, so it can continue to grow and evolve to manage all the different types of topology views we might want, is what topology processing is about. Then there are new protocols. There's SXP, and LACP for link aggregation. IoT data management is another one of those cases where it's not really a networking protocol, but rather managing Internet of Things devices through their protocol set, because a controller happens to be able to see everything; it's an interesting project. SNMP goes along with device identification and driver management in being able to understand and inventory the devices you have that aren't OpenFlow, and there's also CAPWAP and distributed LLDP with Auto Attach. On northbound APIs, as I mentioned, we continue to explore how best to represent a network abstraction. Network intent composition is another project, working closely with the northbound interface group out of the Open Networking Foundation; they're also working with the group-based policy project to help implement what they're exposing as an intent-based API for applications. The ALTO project is similar, abstracting the network out as another method of doing this. Again, this is new, so we have a variety of implementation techniques, and we're basically learning which one is going to work best.
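The multiple topology views described above (one learned via OpenFlow, another via BGP, virtual overlays on top) ultimately need to be combined into a coherent picture, which is roughly the problem topology processing addresses. A minimal sketch, with invented node names and a simple adjacency-map representation:

```python
# Sketch of merging several topology views into one adjacency map.
# Node names ("sw1", "r1", ...) and the views themselves are invented.

def merge_topologies(*views):
    """Union several {node: set(neighbors)} views into one map."""
    merged = {}
    for view in views:
        for node, neighbors in view.items():
            merged.setdefault(node, set()).update(neighbors)
    return merged

# One view learned from OpenFlow switches, one from BGP peers.
openflow_view = {"sw1": {"sw2"}, "sw2": {"sw1"}}
bgp_view = {"sw2": {"r1"}, "r1": {"sw2"}}

merged = merge_topologies(openflow_view, bgp_view)
```

A real implementation also has to handle the harder parts this sketch skips, such as recognizing when two views name the same device differently and layering virtual topologies over physical ones, which is why it makes sense as its own evolving project.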
And then, for us, we're always continuing work on our Neutron northbound so that as Neutron evolves, we have the best possible way of tracking Neutron changes and facilitating the different services within OpenDaylight that implement what's exposed to OpenStack Neutron. Finally, we've done some release-engineering work to make our own release process better. We have a very nice continuous integration environment leveraging a lot of OpenStack tools, including Nodepool, which we're currently moving to, and we have aspirations to get to Zuul and Gearman as well. So thank you, OpenStack community, for all the great work you've been doing. From a timeline perspective, Lithium is just about to come out here in mid-2015. Beryllium will land somewhere at the end of 2015 or the beginning of 2016; we're still in discussions. And then Boron will be somewhere in 2016, roughly on six-month schedules. In closing, since I'm wrapping up this theater for today: tonight we have a dinner going on at 6:15, and you're very welcome; we have cards right back here on our desk to help you get there. We have an OpenStack Summit panel on Thursday, so please come to that, as well as our own summit at the end of July out in Santa Clara. We have a variety of user groups that continue to spring up globally; get involved if you're interested. And of course, opendaylight.org and wiki.opendaylight.org have all the information about us. And with that, I think I'll close. Thanks, folks.