Hello, is this on? Yes, now it's on. All right. Hello, Austin. I'm Greg Stigler, AVP Cloud. Ryan Van Wick will be up here speaking as well; he's the Executive Director of Cloud Delivery. And Amit Tank, Senior Principal Cloud Architect, AIC Cloud and SDN Architecture. I had to read that one myself, Amit. So we're going to have three speakers today, and then we will have a panel. And when we have the panel, we want all the questions you can bring us. There are mics, I think on both sides; no, only one side. Yeah, there's a mic over there. With those hot lights it's hard to see. So let's get started. Anyway, I've had a great time here in Austin. Last night was a good time. To be honest, I made a new friend here in Austin, and this guy is an animal, to tell you the truth. I couldn't shake him all night, but I told him I'd introduce you all to him today. You might have seen him yesterday. So I'd like to invite the bear to the stage. Thank you for an excellent night last night. I actually cut that out of my speech, so I don't think that was supposed to be there. Next slide, please. Okay. For those of you who didn't see the keynote yesterday, Sorab Saxena is our Senior VP for Software Development and Engineering at AT&T. He gave a really good keynote yesterday. If you didn't get to see it, I highly recommend you catch it on YouTube. One of these numbers is his: 150,000 percent data traffic growth, which is staggering. I'm only going to go into one thing on here, and that's agility. I want to speak about agile. The Agile Manifesto, when I took this job at AT&T, was something we lived by. That's okay. I love agile. But I want to be honest about something: my favorite Google search is also "agile" space "excuse." That's because agile can be used as an excuse for a lack of planning, a lack of scoping. So in the beginning, we were excellent contributors to the community, but we did not produce much for our business. Around 2014, we changed that mode. We actually pulled back a little bit from the community contributions, and we got our house in order to deliver for our company. Now we're turning around and coming back to the community, and we're going to contribute at a great rate. We announced this in Tokyo. Mr. Van Wick will come up here and speak about some of those community actions that we'll be taking. Okay. Next one, please. All right. So we've been on a journey. There's really a 0.1 here: we were on Diablo as well in the beginning. Very painful. So anyway, in 2014, this is when we began to get our act together to combine our legacy clouds and to go with OpenStack. And we did that with Juno, or Icehouse, sorry. The reason I mention Juno is because 2.0 is the one that really got us on the map. 2.0 is exactly what we needed with Juno, and it built a global foundation for us. The last thing I want to mention is we're headed to Kilo this year, and we're headed to Mitaka next year. We want to keep as current as possible, and we're going to keep two versions in production. And I have a vision. This vision started in Tokyo, for the Enhanced Hybrid Cloud, and it is this: with an OpenStack-powered cloud, which means you get that little logo that says you have used the core code the way you should use the core code, I would like to be able to share my workloads, to run my workloads on other OpenStack clouds. So let's say in another country I do not have a facility, but there is a service provider in that country with an OpenStack cloud.
I would like to run my workloads on that cloud. I would also like to offer those service providers the ability to come run their workloads on my cloud. This is something I want to see happen in the community as well. In addition, Amit is going to talk about our multi-hypervisor approach, which is very important, and he will speak about that later. All right. My last slide. From January to October 2015, we deployed 20 zones, and it was painful, extremely painful, a very manual process. So during that same timeframe we spent a massive amount of time automating our deployments. The result of that was 54 zones deployed in two months. Pretty cool. There are going to be hundreds more in 2016. So we're going to step it up from there. We've got the automation to go; we need to work on the upgrade part of this. That's a very important part, and we want the community's help too. So without further ado, I'd like to invite my little brother, Ryan Van Wick, up to the stage. I really am fairly large, so it's amazing. If you watched the keynote on Monday, I was in the video. I was the dude with the really big head and the loose shirt on, and it looks worse when it's on a 500-foot screen. So Greg mentioned in the last slide the rapid acceleration we went through in terms of building out our infrastructure, and there were some critical enablers to make that happen. Greg touched on where we were with respect to leveraging agile and how we had to evolve that. When you have a program spanning 13 to 15 scrum teams just in one organization, and then integrating across other organizations, it's a pretty big task. So we've had to adopt and build our own scaled agile model to achieve that. We laid that foundation and leveraged a unified process, which we called our Agile Guide Group, to manage the intake of scope, prioritize, and sync up our backlogs across teams, so that when we get to release points at the end of iterations of work, we're aligned on a deliverable we can actually ship, a minimum viable product. So that was a key enabler for us. And then CI/CD was a foundation in terms of our development infrastructure, right? So Andrew Lisak will come up on stage a little bit later, and we're happy to dig into a lot in this space, but essentially we had to build that automation to enable our development teams to ship our code as fast as possible. And then finally DevOps, and this is a term everybody throws around these days, but really it's a cultural transformation as much as it is a process, right? One of the things that we've embedded or adopted within our deployment is that we live that life cycle. So the development team under me, we own the whole end to end, right? We build the code, we test the code, we push it to production, and we manage the build-out of the infrastructure. So we're leveraging the automation that we're building. And so the feedback loop to our developers is rapid, because we're the ones feeling the pain when something doesn't go well, right? It's not the old model of throw it over the wall to your operations team and then get the feedback later, right? And we don't only live that in terms of our production life cycle, we live that in terms of our dev and test life cycle as well.
We have our development teams working on our automation frameworks in live chat rooms during the day as we're pushing stuff into test, so that as we run into issues building out tests in dev labs, there's a feedback loop immediately to address those problems. So we just get faster and faster and better and better. Next slide. So as part of that, it was one thing to lay the foundation with some of those key enablers, but we had to deliver some innovations in order to get to scale, and we couldn't start with automation just on the back end. It's one thing to automate deployment of OpenStack, but you also have to automate the design and the delivery of designs for these sites. Our traditional model was that we had folks using spreadsheets to capture rack elevations and cabling diagrams and IP mappings. All of that was slow, and it was from a model where you used to build just one large data center and you had months and months to do the planning and the build-out. We didn't have that. We needed to be able to go fast and we wanted to drive consistency. So when you get to software as code and you're driving consistency into your sites in terms of your software deployment, you want to be able to do the same in terms of the actual deployments themselves, the physical side. You want that same consistency. Automation demands it; automation needs to know what's there. So we started with a thing called AIC Formation, which is a tool we've built that enables us to templatize the design of our sites, whether those sites are large-scale data centers or medium-sized network centers that we're deploying around the world. Then we had to automate the build itself. Our first version, the one that Greg mentioned, was totally bare metal. I think it was on the order of about 30 servers to a control plane. So when you have 30 servers controlling 150 nodes, that's expensive, really expensive. We had to shrink that control plane down and fully utilize the assets that we had. That was one driver, but it wasn't the only driver. The other driver was that we wanted flexibility for the future. We recognized we had to start building a foundation that allowed us to get to some of the capabilities that our infrastructure would demand of us. We wanted to be able to upgrade in place. We wanted to be able to do rolling upgrades, do what we call an A to B. You saw it this morning in the keynote, where they showed you with containers how they stood up the new version alongside an existing container. We wanted to be able to do the same. And so in Vancouver, actually, we sat around a table outside and decided on our virtualization strategy, and we proceeded to go out and build that. Ops Simple is the automation framework that we use. We call it Ops Simple because it was intended to deliver something where any operator could deploy a site without having to have an intimate knowledge of how to do it. We don't want to have to rely on DevOps ninjas running around fixing everything. That's a bad model; it doesn't work at scale. So that's why we built Ops Simple. Ops Simple leverages a lot of common open source frameworks. We didn't go off and invent something new. We just took what was available out there and integrated it together to make that work. And then finally, when you build something at scale, you need to manage it.
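To make the templated-site idea a bit more concrete, here is a minimal sketch. The schema and names below are hypothetical, not AIC Formation's actual format; the point is expanding one compact site template into the per-host data that deployment automation consumes:

```python
# Purely illustrative sketch of templated site design; field names are made up.
import ipaddress

SITE_TEMPLATE = {
    "site": "example-zone-01",
    "oam_subnet": "10.0.0.0/24",   # management network for the site
    "racks": [
        {"name": "rack1", "controllers": 3, "computes": 10},
        {"name": "rack2", "controllers": 0, "computes": 12},
    ],
}

def expand_site(template):
    """Turn the compact site template into one record per host."""
    hosts = []
    ips = ipaddress.ip_network(template["oam_subnet"]).hosts()
    next(ips)  # skip the first address (assumption: it is the gateway)
    for rack in template["racks"]:
        for role, count in (("controller", rack["controllers"]),
                            ("compute", rack["computes"])):
            for index in range(count):
                hosts.append({
                    "site": template["site"],
                    "rack": rack["name"],
                    "role": role,
                    "hostname": f'{template["site"]}-{rack["name"]}-{role}{index:02d}',
                    "oam_ip": str(next(ips)),
                })
    return hosts

for host in expand_site(SITE_TEMPLATE):
    print(host["hostname"], host["rack"], host["role"], host["oam_ip"])
```

The real tool obviously covers rack elevations, cabling, and many more networks, but the principle is the same: one declarative template per class of site, expanded the same way everywhere.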
And so we're working on a project, and our first release is getting ready to ship, called OpenStack Resource Manager, to allow us to manage these hundreds of zones that we're building, hundreds of OpenStack control planes, and be able to manage that at scale: to drive consistency with respect to making sure we've got the right images in the right places or the right flavors set up, and making sure that we're provisioning tenants in the right place based on their needs. So that's it. I think if we move to the next slide, I've touched on some of the innovations that we had to drive, but we're not stopping there, and there are other things that we have to do. So I'm going to bring Amit up to talk a little bit about where we're going next and what we're looking at. Thank you, Ryan. So, pretty amazing stuff. Greg and Ryan touched upon some very interesting areas. A quick show of hands: how many in the audience here happen to have virtualized infrastructure that is not OpenStack? There you go. Generally, you'll see a lot of traditional workloads where you already have an existing environment and now you're trying to introduce OpenStack into it. And it's really interesting, because the probability of success of your OpenStack deployment really depends upon how you integrate it with your existing workloads. The good thing is that my executive management, Michael is here, Sunil is here, and we'll actually hear from them in the panel as well, they compelled the AT&T architecture group and the entertainment group to ask questions like: can you make things like VMware work with OpenStack? Can you make it work without NSX? Can you overcome the limitation of having a single VLAN created and managed by Neutron? Can you do A/B testing? What you see here on the screen is pretty much a pictorial depiction of some of those questions and the answers we arrived at. The idea here is that you have a unified OpenStack control plane, like Ryan mentioned, and a unified template, so your teams only have to deal with a very specific set of processes, but your workloads get automatically scheduled onto the desired target hypervisor. We actually innovated in a bold way to achieve this: basically, we engineered an ML2 driver that connects your controller nodes, essentially it connects your Nova to expose your existing vCenter as another hypervisor right next to KVM. And the interesting thing this allows you to do is manage a fixed set of images and then essentially have your Heat templates orchestrate any of your workloads wherever you need them. And once you're looking at the networking layer, it allows you to do A/B testing, it allows you to do a lot of migration-related strategy planning, and it allows you to achieve fault-contained availability zones, which I think is a pretty important goal for any enterprise environment. So I'm really proud, basically, that AT&T is not only putting its weight behind this open source community, but is actually actively innovating on the point that Ryan touched upon, virtualizing the control plane. It's a very similar idea to what we saw today in the demo. The fact that AT&T is able to innovate on this and is ready to give back the learnings to the community is really interesting. So this is something that we wanted to share with you. From here, we'll move on to the next community-related thing, which Ryan will also cover. Thank you, Ryan. Thanks a lot.
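AT&T's actual mechanism is the ML2/Nova driver work Amit describes above. As a rough illustration of what a single control plane scheduling onto different hypervisor pools looks like from the tenant side, here is a sketch using a stock OpenStack construct (one availability zone per pool) and the standard openstacksdk cloud layer; the cloud profile, zone, image, flavor, and network names are hypothetical:

```python
# Sketch only: steer workloads from one control plane to either a KVM pool
# or a vCenter-backed pool exposed as separate availability zones.
import openstack

conn = openstack.connect(cloud="aic-example")   # hypothetical clouds.yaml entry

def boot(name, zone):
    # The openstacksdk "cloud" layer resolves image/flavor/network names to IDs.
    return conn.create_server(
        name=name,
        image="ubuntu-16.04",
        flavor="m1.medium",
        network="tenant-net",
        availability_zone=zone,   # e.g. the KVM pool or the VMware pool
        wait=True,
    )

# Same image catalog, same flavor, different target hypervisor pool:
boot("app-kvm-01", "kvm-az")
boot("app-esx-01", "vmware-az")
```

Another stock route is image metadata: setting the img_hv_type property on images lets Nova's ImagePropertiesFilter send each boot to a matching hypervisor without the caller choosing a zone at all.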
So I touched on some of the things that we did to accelerate our deployments and the enablers, but along the lines of what Amit was talking about, it never stops. Some of the things we had to solve for last year, for example the multi-rack issue: we were having an issue with Fuel where we couldn't deploy our OpenStack nodes across multiple racks and multiple network segments. We quickly hit a wall, right? I mean, we're large, and that's not going to work in a large-scale enterprise. So we had to solve for that pretty quickly and push that back upstream. You know, again, in the enterprise, security is key. We didn't want TLS only at the public endpoints. We needed it the whole way, on the private endpoints as well. So we implemented that. Where our focus is right now is all around management. I touched on ORM, but it's more than that, right? We need to be able to do day-two operations. We want full life cycle management of our cloud. We want to be able to make simple configuration changes and drive them fully consistently across our infrastructure. We're driving those enhancements up into Fuel to be able to fully integrate with a Puppet master and be able to maintain not only the OpenStack components but the non-OpenStack components, because that's a key point: when you build a large-scale enterprise cloud, it's not all about OpenStack. There's a lot of other stuff that has to live there, right? And it all has to be managed together. So we're not only leveraging Fuel and extending Fuel to do some of that, and we're doing that through plugins so that it's not proprietary, but we're also integrating Fuel with other management tools, right? Integrating with Ansible, for example. And then finally, upgradability is key. We hit something pretty early on: we could deploy something with Fuel, but once you did that, that was the Fuel you had to live with. There was no way to upgrade the Fuel master itself. That's something we've been working on, and we've cracked that nut and will be leveraging it in Fuel 9 this year. So, next slide. I want to just close before we go into the panel discussion. Greg mentioned at the beginning how we had started at one extreme, where we essentially had a team that was pretty much only working in the community, and there was no mapping of that back to the business. So there was no reaping of the business benefit of that exercise. We ended up going through a transformation where we pulled back and started focusing on how we could leverage what the community had done to deliver something of value to AT&T and start to achieve our goals. We kind of had to do that to understand our priorities, to understand what was important to our company, and also to build a track record of success. And now that we've done that, we've created what we're calling our community team to serve a key function as the bridge between the OpenStack community and our AIC delivery teams. Because one of the issues we have is that we have a lot of demands on us in terms of features that our tenants need now. We have business cases and very aggressive goals that we have to meet. Sometimes there's not a community answer for that in the time frame that's needed, and so we have to close that gap, and we have to close it very, very fast. But that said, we want to make sure that when we do that, we're working on a solution upstream so that we can help others benefit from us solving that problem.
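On the TLS-everywhere point above, here is the kind of small check that goes with it: walk the Keystone catalog and flag any registered endpoint, public, internal, or admin, that is not HTTPS. This is an illustrative sketch, not AT&T tooling, and it assumes an openstacksdk clouds.yaml entry (here called "aic-example") with admin rights:

```python
# Flag any Keystone-registered endpoint that is not served over TLS.
from urllib.parse import urlparse

import openstack

conn = openstack.connect(cloud="aic-example")     # hypothetical cloud profile

insecure = []
for endpoint in conn.identity.endpoints():        # requires admin rights
    if urlparse(endpoint.url).scheme != "https":
        insecure.append((endpoint.interface, endpoint.url))

for interface, url in sorted(insecure):
    print(f"non-TLS {interface} endpoint: {url}")
```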
At the same time, we don't want to carry that gap forward. We want to close the technical debt gap that gets created, move that stuff back upstream, and commit to supporting it upstream. That's why we've created that team: to help us not only look forward and work on themes that are important to large-scale providers, but also manage our technical debt from release to release. So that said, Greg, we want to move on to the panel. Are we on? Yes, we are. Very good job. Okay, our first panelist will be Sunil Jethwani. He's a director in our AT&T Entertainment Group. Our second panelist, Andrew Lisak. He's a director of cloud automation, cloud platform development. He makes Ryan look good. In more ways than one. Yes. The next one is in the same category. That's a very interesting picture, I noticed. And I love his name: Rodolfo Pacheco, lead principal technical architect, software development and engineering. And we'll bring Amit back up as well. Thank you, Amit. If you would line up for questions, I'm going to go ahead and start with one that came in early, and then feel free to step up to the mic and we'll go from there. We'd like this to be very interactive. I'm going to address the first question to Sunil. Sunil, what strategic role do you see OpenStack playing in the media and video delivery landscape in the future? Good question. Before I do that, though, I'll just briefly introduce myself. Sunil Jethwani; I joined AT&T in Q2 last year as part of the DirecTV acquisition. Before that, as part of DirecTV, my focus was on the video backhaul and cloud architecture, supporting all the services that DirecTV offers today. So with that, going back to Greg's question, I think we can draw some parallels here between what the networking industry is going through in terms of network function virtualization and what the video industry is also going through right now in terms of video function virtualization, if you will. By that, what I mean is, if you really look at a typical media data center, or what we call a broadcast center in DirecTV today, you see a lot of video functions still being carried out on dedicated hardware and, in a lot of cases, on dedicated, non-IP interfaces that are more video-centric. But what we're starting to see right now is that some of those functions are actually ready to be virtualized and to be transformed into the next-generation architecture, because now we can derive all the efficiencies out of common compute that we probably didn't have five years ago. So as we start to evolve our broadcast center into our next-generation architecture, I think we can readily adopt some of the best practices that the software-defined data center is bringing in, in terms of leveraging OpenStack, leveraging open source programmability, and adapting that to meet the video workloads. And one last comment on that, which we've all heard in the last couple of days: it's not just about technology, right? You also have to take into account people and processes when you do these kinds of transformations. So I think we have to keep that in mind as we evolve in the video world. Thank you, Sunil, and I'm grateful for our DirecTV acquisition, because you have brought many forward-thinking individuals into the cloud space along with us, so it's going to be a great partnership. Okay, let's start over here on the left. Hold on. Microphone.
Are we picking me up? There we go. That sounds important. A couple of years ago, John Donovan, AT&T's Chief Strategy Officer, challenged the open source community to contribute to AT&T's ECOMP platform, which is part of, and a foundation of, the telco's software-defined infrastructure. I'm imagining that the open source community has probably come forth in droves with suggestions on how to proceed. I'm wondering about the quality of those responses. Has AT&T received the type and nature of contributions from the open source community that it needs to be able to implement ECOMP in the specific situations that AT&T, and only AT&T, could possibly be facing? Good question. I think I'm going to field a little bit of this one. I think we had 1,700 downloads of the ECOMP white paper in the first day, so that was pretty cool; thanks to all of you who did that. And this is just the beginning. The responses, the feedback that we've gotten from the community, from the folks that downloaded the white paper, have been really strong. I actually believe it applies across industries. I don't believe it's just our industry that this applies to. It applies to service providers, it applies to large enterprises; your Walmart, your eBay, it could apply to anybody. Anyone else like to comment on ECOMP? Ryan? I don't think we have the stats to really answer your question. We can probably close that loop offline, if you want to hang around, to get you connected with the right folks. Good. Next question. Hi, so I'm... Mike? Mike, too. So I'm assuming that when you talk about zones, you're talking about a disparate, independent control plane. Is that... That's correct. True. I'm curious why you architected it that way, with hundreds of those, as opposed to something more centralized on the control plane side. Great. Rodolfo, do you want to jump in? I'll start and I'll hand it to Rodolfo. I think one of the key reasons for us was latency, right? Being able to get sub-second response times on things. Our workloads sit right next to where the customer is, right? That's what the customer expects. And we're actually doing a lot of work in this space. Our technical architecture group is looking at creative ways we can solve for this problem as we continue to scale going forward, because the model that we have today, although it solves today's problem, isn't going to scale indefinitely. So we're looking at ideas around regional control planes and other similar types of solutions. We did start with some level of centralization. We centralized Keystone at first, and we centralized Horizon, to provide some of what instinctively you think you want, right? That central function that will grow. But you quickly run into problems with Keystone at scale, right? There are all kinds of issues. There are ways to solve some of them. And we spoke with people in Vancouver who were, I remember a call with Symantec, they were solving the same issue. But now, and I don't know if there was a talk yesterday on it, we call it a shared-nothing architecture. We've moved away from that centralization. Now we're putting even Keystone in the local control plane, the local LCP, and we're moving towards this idea that you can have something on top, what we're calling ORM, that can manage some resources across all the regions. We shrank the control plane, and we'll probably try to shrink it further in the future. And the notion is that you can bypass some of the limitations that a control plane has off the bat, right?
In terms of the number of compute nodes, by just adding more control planes. If you make it small enough, you can have many of them. And if you solve the problem of managing many of them, then it's not a problem to do it that way. I'm going to give you two more points. Security is one, isolation. The second one is latency, and how close we can get to our customers with our smaller clouds in our network centers. It's a differentiator. Normally, fault isolation is a big factor when you centralize everything without federation. If that goes down, you pretty much take out all your workloads in one shot. So having smaller, distinct sites federated with each other gives you a tremendous amount of flexibility along with a lot of resiliency. Let me add one more, and that's the security aspect, right? If somebody attacks one region, one zone, that's the only thing that's going to be affected. You don't have this across-the-board impact on the cloud. Thanks. Thank you for the question. Thank you. We're going to go back to the right side, because I think you were up next. This is Rakesh from Comcast. The question is, when you're building your cloud, are you building it for applications or NFVs? Are there two different clouds? Are you combining them? How are you handling that part? We're actually in the process of trying to normalize. Conceptually, we think of this idea that we should build a cloud where I could run anything anywhere. Now, we know that in practice that doesn't always work. There's some level of features and requirements that you're going to have for running a network function versus an enterprise-type application. Nevertheless, in order to manage the cloud across the board, you do try to normalize what you can, doing things like making sure that the hardware is configured the same way and that the cabling is consistent throughout. We're using the AIC Formation design to try to drive this consistent design across the board through all the data centers, but you're still going to face the realities of slight differences. One of the things that we ran into early on, Greg had it on that slide where he showed multiple different versions of the cloud: we had lots of different OpenStack versions running all over the place. The challenge there is that without a superset of functionality and a consistent code base that you can deploy everywhere, where you just turn on the things you need in that location, you just can't scale. You rapidly hit a wall in terms of, I can't meet the demand, I need too many dev and test environments, too many production fix environments; it just doesn't work. We had to get to a consistent single code base that could satisfy all workload requirements, and then just choose, through our deployment automation, the feature sets to turn on at that location. Very good. I will also add one way we're going to attack this going forward: if last year was the year of deployment automation for us, this year is the year of introducing new features. Amazon would call them instance types, Google would call them machine types, and I think Azure goes with A and P, something like that. So we will have different instance types across zones, with different performance profiles, that's the best word for it, right? You know, graphics processor intensive, IO intensive, memory intensive, et cetera, et cetera. Now, we want to limit that, but we know we have to have them to make the right workloads run. Left side.
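As a rough sketch of how that instance type idea and the ORM-style consistency mentioned earlier could fit together, here is an illustrative script that connects to each small region and creates any missing performance-profile flavor. The region, cloud, and flavor names are hypothetical, and this uses the stock openstacksdk cloud layer rather than AT&T's actual ORM:

```python
# Keep a set of "performance profile" flavors consistent across many regions.
import openstack

REGIONS = ["zone-east-01", "zone-west-01"]          # hypothetical region names

# name: (vcpus, ram_mb, disk_gb, extra specs tying the flavor to an aggregate)
FLAVORS = {
    "mem.large": (8, 65536, 100, {"aggregate_instance_extra_specs:profile": "memory"}),
    "io.large":  (8, 32768, 400, {"aggregate_instance_extra_specs:profile": "io"}),
}

for region in REGIONS:
    conn = openstack.connect(cloud="aic-example", region_name=region)
    existing = {flavor.name for flavor in conn.list_flavors()}
    for name, (vcpus, ram, disk, specs) in FLAVORS.items():
        if name in existing:
            continue
        flavor = conn.create_flavor(name, ram, vcpus, disk)
        # Nova's AggregateInstanceExtraSpecsFilter matches these extra specs
        # against host aggregate metadata, pinning the flavor to that pool.
        conn.set_flavor_specs(flavor.id, specs)
        print(f"{region}: created flavor {name}")
```

The same pattern extends to images and tenant placement, which is the consistency role described for ORM earlier in the talk.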
Hi, regarding the network side of what you implemented with OpenStack: as a service provider, a telco service provider, one of the key pieces of DNA inherited from the old world is the VLAN, the usage of VLANs, right? But unfortunately, this doesn't match the nature of the cloud as it comes from OpenStack. There have been developments, but still, using VLANs, I bet you have been struggling with the limitations of VLANs and so on, and with deciding whether or not to go with overlay networks. I would like to hear your vision about using pure flat layer three or going with the overlay. Thank you. Bring it on that one. Huh? We're gonna phone a friend, Alan Meadows. No, you gotta get a microphone, my man. So we went through a couple of different phases, right? In the very beginning, there was a lot of emphasis on overlay networks using traditional out-of-the-box approaches with OpenStack. Those, again, didn't really scale for our needs. We ended up in a sort of second phase where we had a hybrid approach: we had high-performance tenants that would end up leveraging VLANs off one arm of our cloud, while we continued to leverage the out-of-the-box overlay networks in OpenStack. Where we've arrived at today is not necessarily a full layer three approach in OpenStack, although our data centers are fully layer three. We've chosen OpenContrail to take us to that next level in our evolution. Thank you, Alan. A quick thing to add there. Normally, traditionally, when you look at overlay solutions, they give you a lot more flexibility in a data center network. So there are definite use cases that they can solve. There are specific use cases where an overlay network adds latency, and those are the pockets where people consider options other than overlay networks. Thank you. I have to comment again. There's a difference to me between pure open source and commercial open source. I think what we just talked about in the layer three overlay category is one of the banes for OpenStack right now. And I would really like to see a pure open source play versus a commercial open source play, which means we're buying a license. When you look at a couple of players like Mirantis and Ubuntu that are pure, they're doing pretty well. It's not a bad model. I think some people should consider it. And we're looking forward to working with the service providers and large enterprises on that type of planning. Okay, let's go to the right side. Have you guys attempted to virtualize the user plane part of the solution? Or are you just doing the control plane? And if so, are there any challenges that you have faced in virtualizing the user plane versus just the control plane? What do you mean by the user plane? So for example, the video data or anything like that that would run over the network. The data plane, that's what we call it; I think that's what you're referring to. Now, we're not talking about virtualizing the data plane at this point. We're focused on optimizing the way we deploy it and how we manage it. On the data plane, the compute hosts are different; like Greg mentioned, we have different host profiles, and those host profiles get related to, we use availability zones and we use host aggregates. And there is a need to manage those in terms of capacity.
How much you put in each aggregate and availability zone, and how many of each compute host you build for the different features that you need to support. So we're trying to tie that all the way up, from AIC Formation to Ops Simple to the tools that actually spawn the compute nodes, to the point that our capacity management teams can go up to the top and see if a certain type of compute host needs more capacity, and things like that. It can be automated; that's where we're going. But I'm not sure if that answers your question or if that's what you're asking. That's fine. Okay. Okay, left side please. So you guys mentioned that your Ops Simple tool leverages existing open source projects. Can you give us more details on that? Yeah, good question. So we had this concept of Ops Simple and decided to start leveraging our distribution's installer out of the box, which was the MOS 6.1 installer. We quickly found some challenges, as Ryan spoke about. It's not just installing the OpenStack bits; it's also managing the other components that make up an AT&T cloud. So what we did is we developed something called Workflow UIR, which is essentially a user interface on top of an Ansible framework that orchestrates the deployment of Fuel, so no human is ever actually using Fuel to deploy our sites. It's all defined declaratively in YAML that's taken from our AIC Formation tool, which allows a designer to fully define what a site looks like. Then we extract that data, and that data is either installed through the Ops Simple workflow or handed off to the Fuel UI after that YAML is digested. Yeah, so we have virtual bootstrapping as well that the entire site starts off with, and what we've used for that is Metal as a Service. We actually use that to orchestrate the PXE booting of our servers and the preparation to set up our Fuel master and our Ops Simple master, and that happens, again, with very little human provisioning. What we're trying to get to is zero-touch provisioning. That's our goal for AIC 3.0 as we advance from Fuel 6 to Fuel 9. In the future, we're looking to continue to reduce the tool set and use some of the internal capabilities of Fuel to do the bootstrapping as well. Hey, Andrew, I want to hit you up real quick. In Tokyo, I made a pledge... Yes, you did. ...that we were going to get involved with the community, and at that point we were not very involved, right? That's right. So we assigned you. So how are we investing in the community starting now, and what are we doing differently? That's a great question. So, very exciting. Probably the most exciting part of any job I've had at AT&T has been this challenge. As Ryan mentioned, we have business objectives that are aggressive and have to be met, and we're accountable for delivering those. But in that time of realigning towards meeting our business objectives, we got caught up in delivery and started to move away from the community a bit. It was very clear to us that we were swimming against the current as we continued to fork our projects and build up the sort of maintenance debt where, as he said, with one or two releases a year, we have to repeat the process of managing forks.
And so very quickly we said, okay, let's dedicate people; let's look at the different dimensions of how we continue to deliver at scale and also be a good community member, influence projects, and help advance projects with our internal engineering. What we decided to do, after listening to a lot of folks that understand the community, was to go through an initial training program on working in the community, to understand how to use the tools appropriately and to really control our brand, so we didn't come stumbling into the community. We wanted to be assets right off the bat. So once we identified those individuals in the training, we worked with an internal hiring program that's focused on hiring STEM graduates from major universities, and through the first part of the year we've dedicated 30 individuals to 100 percent pure community development at AT&T. That is an enormous community shift and cultural change in how we develop software, and a mindset shift from leadership down, and that's where it had to start. These are jobs that people are eager to get into, and we're looking to double the size of this team throughout the year. Hey, amen. Okay, guys, back there. Do we have time for a question or two more? Thumbs up, thumbs down. Thumbs are way down. I'm sorry, guys. Catch us afterwards. Hey, we'll stay up here or we'll walk out into the hall with you to answer your questions. I want to thank everybody for coming. We look forward to working with you in the future to make this the best ecosystem on the planet, and look for me with the bear tonight.