Okay. Hello everybody. This is an introduction to AT&T's agile journey. I am Andrew Lisek, the director of development at AIC. What that entails: we have a community development program; we have a product development program, which is AIC, the AT&T Integrated Cloud; and we have an automation program. Those three areas fall under me. A little background on myself: I have about 15 years of experience leading emerging technologies and agile implementations. At AT&T, I've been focused on emerging technologies and on leading our agile transformation. To my right here is Jared Stein, and I'll let you introduce yourself.

Hi, thanks. Thank you very much, everybody, for showing up. It's late on a Thursday, so it's good to have you here. My name is Jared Stein. I have over nine years of experience, and I started at AT&T about three years ago. I've been focused on CI/CD, and within the last year we've made a lot of progress, so I'm very excited to talk to everybody about that today.

Hello everyone. My name is Salim Beg. I've been working in the IT field for the last 13 years. I've worn many different hats, from developer to tester to operations to infrastructure, but for the past four years I've been focused heavily on the CI/CD space, working on the CI/CD project at AT&T.

All right. We thought it was important to share with the community where we came from, where we are today, and where we're going. We're into the third iteration of the AIC cloud. The AT&T Integrated Cloud is an enterprise cloud that's not solely focused on telco; it's really focused on advancing networking to enable our VNFs within our telco space across the world. We're interoperable, we're continually improving with each release, and there are areas of our platform that we reinvent. We've been forced to reinvent as we've matured along with the platform. We've been able to prove some things out, and we believe that getting past the point of inertia is really important for innovation. So instead of trying to jump to the end state right away, we've adopted the fail-fast methodology: get something out there, do it at scale, learn at scale, and as we do that, adjust and adapt for the business. As you can see, our focus is around the areas we've laid out in these boxes. To speak to some of them: SDN enablement, which I'm sure everybody knows AT&T is a big proponent of, and normalized architecture, which speaks to our evolution. We tried doing snowflakes for a while, very client-specific deployments, and as we've matured from one to two to three, we've continued to move toward a normalized, shared-nothing architecture. AIC 2.0 had distributed Keystone and Horizon, for instance; AIC 3.0 has re-localized those components. Those are some examples of how we've made significant shifts. And the open source space is my other program, as I mentioned earlier. Our agile evolution has been really interesting here.
I've been an XP guy since 2001, adopting those principles and practices and teaching them since my first program, where we delivered DSL capabilities for AT&T in 2001 and put them out to the enterprise with an incredibly small, lean team, very effectively, with low defects, and with hardly any of the tools, since everything was very immature at the time. I think about those days, how difficult it was, and what a tiny scale we did it at. As I continued to deliver agile through AT&T, I looked at this as a great opportunity to put some of my practices to the test and prove they can be done at scale, on something as structured as OpenStack. Within AT&T we have a lot of different silos, so there are a lot of challenges, from financials to doing top-down and bottom-up agile at the same time. If you want some understanding of that: at AT&T we went bottom-up first, teaching the processes and practices, Scrum and XP principles, and then top-down, looking from the finances on down. Eventually the two will meet and it will be a complete agile life cycle. Those are the different angles we're taking, and I can attest that you can deliver agile software at scale following the Agile Manifesto's principles, going bottom-up and letting the top catch up, which usually takes a little longer.

When I took over the program in late 2014, December of 2014, I stepped into a program that was evolving from Silver Lining into its first iteration of AIC. What we had were poorly structured scrums; no real agile principles were being laid down, and everybody was off doing their own thing. We had a bunch of ninjas on the team, working and fixing without any boundaries, and we needed to bring structure to every aspect of it. Agile doesn't waste any space or time, so we lay down only the principles that are needed. Once I got everybody to culturally shift to that mindset, we could say: look, we need a source of record; we need to be able to pull and measure our velocities; we need to account for our work all the time, whether it's lab administration, a deployment engineering job, defect resolution for a production environment, or new scope and new capabilities; and we need to track that consistently across the program. Once I got hold of that and laid the structure down, we still iterated continuously, every release, really every mid-cycle for AIC. We've gone from monolithic AIC releases to what we now call 2.5, and we've structured ourselves around one major uplift a year. That means an OpenStack version change and significant reference-architecture changes: a 2 to a 3 to a 4. Within those releases we have a .5, which is a targeted feature release where we introduce new capabilities, like adding PaaS-layer capabilities to the platform and rolling that out. And then we have incremental releases within the .0s and .5s, such as 3.0.1 or 3.5.1, where we harden the platform in real time. That's given us a lot of flexibility, and it's helped the top-down side of the house accept those principles.
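To make that cadence concrete (a major uplift like 3.0, a .5 feature release, and .0.1-style incremental hardening releases), here is a minimal, hypothetical sketch in Python that classifies an AIC-style version string. The function and its rules are our own illustration of the scheme described above, not AT&T tooling.

# Hypothetical helper: classify an AIC-style release string by cadence tier.
def classify_release(version: str) -> str:
    parts = [int(p) for p in version.split(".")]
    feature = parts[1] if len(parts) > 1 else 0
    increment = parts[2] if len(parts) > 2 else 0
    if increment:
        return "incremental release (hardening)"          # e.g. 3.0.1, 3.5.1
    if feature == 5:
        return "feature release"                          # e.g. 2.5, 3.5
    return "major uplift (OpenStack version change)"      # e.g. 3.0, 4.0

for v in ("3.0", "3.5", "3.0.1", "3.5.1"):
    print(v, "->", classify_release(v))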
I think at an enterprise it's just as hard to get your funding side of the house and your project management side of the house to start adopting these things. That comes from trust, which is obviously a major principle of agile. Where we're going next is CI/CD squared. It's something we have in our shop, and we'll talk about it in a little more detail, but what we mean by it is this: we have continuous integration and continuous delivery, which is what we focused on initially, instituting automation, getting everything working, and enabling the scrum teams to be more effective iteration over iteration. The other two layers, which we introduced once we got much better at that, are continuous inspection and continuous deployment. That's what we're working on and enabling now, and we're making significant strides toward it. I think those are natural progressions of your typical CI/CD.

Okay, so here's the money shot, I guess: some of the things we've been able to do in a fairly short time, about 18 months from when I jumped into the program. There was a Jenkins platform laid out, but not a lot of usage around it; we had four manually triggered jobs per month. We're at over 550 now, because we support a Juno release at 75 deployed production sites, and we're currently deploying 100 sites on Kilo. We'll talk about that more soon. Automated tests: we have over 50,000 being executed, up from nothing. That's everything between unit testing, Tempest testing, and the other frameworks we use for SDN. We've dockerized most of those, so we can do testing-as-a-service in our deployed sites in a consistent manner, including SDN testing and a lot of Tempest scenarios. Merged Gerrit changes: we've introduced Gerrit, so the starting number is obviously zero, but the 11,000 shows how the team has continued to adopt these practices; the developers are very active. Scrum team improvement: this is a conservative number, and it's a ten-times growth in velocity. We use what we call a normalized story point, a scaled metric that lets us normalize across teams and gives us a pretty good view of how our teams are growing and doing. And we've gone from manual to everything automated, which is extremely important to us. Finally, moving beyond DevStack, we've used CI/CD to introduce our walled-garden lab deployments, and we've introduced continuous inspection, which lays out what I mentioned before about going from CI/CD to CD squared. And I'm going to let Jared, or actually Salim, come up and handle continuous integration to continuous delivery: what we're doing today.

Hello again. Thank you, Andrew. You should now have a decent idea of what our CI/CD process is and where it fits in our AIC program. We have come a long way from where we began. Over the next few slides, I'm going to talk about our existing CI/CD process and how AIC's approach to CI/CD has been so unique in achieving the results we have achieved so far. So this is our main objective.
We wanted to combine a very stable enterprise CI/CD with flexibility for the community team. Our CI/CD scrum team has blended what we have from the upstream community with community development tools to provide a stable enterprise CI/CD. We maintain and manage multiple main branches, which allows us to have parallel implementations of different versions of OpenStack. We have also integrated vendor plugins and SDN solutions, even though much of that software, open source included, isn't regularly supported for a specific version of OpenStack.

Moving on to the next slide: this is the thousand-foot view of what we actually do in our CI/CD process. We have four key areas: code review, code merge, something called the nightly AVT process, and integrated system test. First, we fully implement continuous integration by running all the syntax tests and unit tests on all components, and we create artifacts on every commit. Then we incrementally build out our continuous delivery to automatically deploy our reference architecture. We have made a lot of improvements, and we continue to do so; in the interest of time, I'll just list a few. We have successfully incorporated package dependency verification, we've automated our release process so it can lock down artifact versioning, and we've unified the hardware-integrated component pipeline for OpenStack.

This is our AIC verification testing slide, so I'll touch on the key points. As you can see, we have four different types of AVTs, but the most important one is the AVT nightly, which runs on every release and trunk branch. It takes anywhere between six and twelve hours, and it deploys a whole new OpenStack environment, starting from building the ISO all the way through running the automated tests. We frequently deploy and verify the latest stable packages by running our AIC verification tests, AVTs for short. We have enabled on-commit integrated deployment testing for all components, and we recently expanded to include bare-metal compute with SR-IOV and DPDK support to reinforce our delivery gates. We've found a lot of benefits in using AIC AVT over DevStack. The main one is the flexibility to create production-like AIC environments, and it also lets us converge the SDK for both the development teams and the DevOps teams, so they are able to see their running code with production configs. With that being said, I'll hand this over to Jared Stein.

Thanks, Salim. Thanks, Andrew. I'm going to leave this slide up for a second. How many people here are on the dev side? Okay. How many people from the ops side? Oh, okay, so I've got some DevOps people. How many people from release management? Okay, cool. Think about this: one, we don't use DevStack. I know. Crazy. AVT allows us to deploy actual AIC environments on top of a very limited set of hardware. AT&T is running much larger zones than what we're doing here, but we're able to do it every single time a developer commits. We've accepted some limitations, obviously, and we've got a couple of different types of AVTs written up on the screen, but the thing to keep in mind is that we focus our testing where it's needed. We don't run extra test cases where we don't need them.
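What Salim described above, syntax and unit tests on every component with an artifact produced on every commit, maps onto a fairly standard commit-stage script. The following is only a minimal, hypothetical sketch of such a gate in Python; the specific tools (flake8, pytest, a wheel build) are our illustration, not necessarily what AT&T's pipeline runs.

import subprocess
import sys

# Hypothetical commit-stage gate: lint, unit-test, then build an artifact.
# Every step must pass before the commit is eligible for promotion.
STAGES = [
    (["flake8", "."], "syntax / style checks"),
    (["pytest", "-q", "tests/"], "unit tests"),
    ([sys.executable, "-m", "build", "--wheel"], "artifact build"),
]

def run_commit_stage() -> bool:
    for cmd, label in STAGES:
        print(f"==> {label}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {label}; commit not promoted")
            return False
    print("All gates passed; artifact is ready for the delivery pipeline")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_commit_stage() else 1)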
Okay, so we've come a really long way. So how does AVT succeed? At the end of AVT we declare a pass or a fail, and we promote the code. How does that happen? Pretty simple: we have a couple of different test scenarios that we run through, and we actually use a few different tools, just because of the nature of our infrastructure; our reference architecture requires it. So: Tempest; OSTF, the OpenStack Testing Framework, which comes from Fuel; and the Contrail Test Framework. Obviously, I just wrote these up here, and they're not that unique. So what's special? Our Tempest container is a portable Tempest container. We can take that portable Docker test container and run it in our pipeline, we can hand it to the developers to run in their labs, and we can hand it to the test teams to run in their test labs. They have everything they need to run all the Tempest test cases that exist within AT&T. What's that? The RBAC extension. If anybody is interested in that, you shouldn't be here; you should be over in the RBAC meeting right now. But don't leave.

OSTF is a Fuel-provided tool, and what we've done is extend and expand it. That's actually where AVT came from: we took the OSTF framework and adjusted it for AT&T's needs. We deploy 50 different tests across 60 of what are called test groups, covering deployment, HA, and destructive scenarios. So we're literally standing up AIC, testing it, tearing parts down, and testing it again, making sure that across all the scenarios we're expected to verify, AIC is still working as expected. It's functional test coverage. And then, as part of the normalized reference architecture that Andy mentioned, a big thing came in, and that's Contrail. Contrail isn't easy to test. We didn't have any way to test it through OSTF, and we had some thoughts about how we might do it with Tempest, but thankfully we were able to work with the OpenContrail community, and there's now a Contrail test framework available. That framework is expanding, and as you can see there are a couple of highlights on the slide of what it covers, but ultimately we've integrated it into the pipeline, and it allows us to verify that the Contrail that's been deployed is successfully deployed.

Okay, so we just talked about AVT and CI/CD, and we're heading somewhere. We've finished continuous delivery: we are delivering production-ready artifacts every single day. Now there's a next logical jump, and that is getting those into production. Well, first, let's get them into a lab. That's a key part of the progression of CI/CD, and that's why we're really tackling CI/CD squared here. For our deployments, we had to really think about where we can automate, and which parts of the business are going to have to make adjustments to support what we're trying to do.

Okay, so this is our mainline branching strategy, up on the screen right here. We have the CI/CD pipeline, fully automated. Now let's gain efficiencies beyond the pipeline; let's gain them in the release process. Every step you see here is mechanized or automated. There are no unintended delays added in this process: I am not waiting on somebody to show up and do something. We have tools that take us from each point to the next. As you can see, we follow a mainline branching strategy, but in order to ensure the enterprise stability that's required, we do have branches that occur at the time of deployment. That way, if there's ever a situation where I get told, 'Hey, we need a hot fix,' I say: go ahead, cherry-pick it into the 3.0.1 branch.
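As a concrete illustration of that hot-fix flow (a release branch cut at deployment time, with the fix cherry-picked across from mainline), here is a minimal, hypothetical sketch driving git from Python. The branch name and commit hash are invented for the example.

import subprocess

def git(*args: str) -> None:
    # Thin wrapper that stops the flow if any git step fails.
    subprocess.run(["git", *args], check=True)

# Hypothetical names: "3.0.1" is the release branch cut at deployment
# time, and the fix landed on the mainline as commit abc1234.
git("checkout", "3.0.1")
git("cherry-pick", "-x", "abc1234")   # -x records the source commit in the message
git("push", "origin", "3.0.1")        # the release-branch pipeline takes over from here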
This is the basics of how we get our changes into production. We have pipelines that monitor all the production release branches, and then our main focus is obviously on the mainline: the master, the trunk branch. One of the huge challenges in 2.5 was day-two operations. So, hey, I've got a cloud, now I need to manage it. Oh, we got an update? Okay. Well, it wasn't so easy. With AIC 3.x, using Fuel 9.0, we introduced the LCM Fuel plugin, and what that does is ease the configuration management problem. How does it do that? It deploys an HA Puppet master in the control plane. So we now have the ability, through a configuration management pipeline that's been set up, to go in, tweak Fuel, and have it redeploy all of the appropriate components. Another area that caused some problems and really challenged the team was provisioning the undercloud. There are obviously a lot of tools out there, but provisioning our undercloud was still a challenge, so what we've done, using a somewhat interesting approach, is use Fuel. NOS is a tool, actually a plugin inside Fuel, that can perform the actions to stand up the undercloud. This is the basic architecture of how it does that, and it's going to be replacing what we do today. I can go ahead and take it from there. Okay, thanks, Jared.

That last slide, let's just go back here, is something I'm proud of the team for pulling together; it's something we've been working on for a while. For AIC 3.0 and 3.5 we are trying to simplify the deployment tool set. We want to do something like 'AIC in a box.' When you work in a large program or enterprise like ours, you have a lot of people who want a version of your cloud and want to stand it up in their own labs. We need to be able to make a distributable package that deploys our reference architecture, because as soon as we're asked to host an environment, people say: well, there are differences between a stock OpenStack reference architecture and your reference architecture; can we have your exact bits, deployed in precisely the same manner that you deploy them? Right now we use a tool called Apollo to do the early provisioning; Apollo interfaces with MAAS, Metal as a Service. To simplify that, and to keep moving us toward a single-tool deployment model, we introduced project Nitrous, which is just a play on the word 'fuel,' and that moves the provisioning into the Fuel space and gets us to one deployment tool. The orchestrator of all this is an Ansible-based orchestration framework called the workflow orchestrator. And, as Jared mentioned, there was a difficulty with the 2.5 sites deployed with Fuel 6.1, which did not account for consistency management in the cloud. So we developed a centralized management controller that lives in a central location and can do distributed consistency management, leveraging configuration management tooling, across the 74 sites we deployed last year. This will be open sourced; throughout this talk we'll mention the things we will be open sourcing. It will be found and distributed in the fuel-library, if anybody is interested in using it. And it's Fuel 9 compatible, if anybody's curious, which is the Mitaka release.
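The workflow orchestrator is described above only as an Ansible-based orchestration framework; its internals weren't shown in the talk. Purely as a hypothetical sketch of that pattern, here is one way Python could drive an ordered set of site playbooks. The playbook and inventory names are invented.

import subprocess

# Hypothetical wrapper in the spirit of an Ansible-based workflow
# orchestrator: run an ordered set of playbooks against one site.
PLAYBOOKS = [
    "provision_undercloud.yml",   # e.g. the NOS/Fuel-driven step
    "deploy_control_plane.yml",
    "verify_site.yml",            # post-deploy checks, AVT-style
]

def deploy_site(inventory: str) -> None:
    for playbook in PLAYBOOKS:
        subprocess.run(
            ["ansible-playbook", "-i", inventory, playbook],
            check=True,  # stop the workflow on the first failure
        )

deploy_site("inventories/site-example.ini")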
Continuous inspection. We talked about this earlier. It's near and dear to my heart; it's what we've done on all my old projects, so we introduced it into this project as well, and it found some really interesting results. One of them is more than just checking a box: as we've used Fuel and continued to evolve Fuel, you have to make sure that the deployment tool itself is scaling. When we first rolled out AIC 3.0, we found that some of the bottlenecks were within the Fuel infrastructure, and to expose that early in the development process we emulated a 200-node environment. We're working on adding a third bare-metal node to emulate 300 nodes and deploy against that, and I actually look for areas within the installer that may flex, and then adjust and tune those. That lets us be a bit more proactive rather than reactive, which caught us in an unfortunate situation recently. The other two are SonarQube and Fortify SCA, two things we use quite a bit as an enterprise. SonarQube is something we've introduced across every one of the OpenStack projects we leverage within our space. As you know, we use the MOS distribution, as I've hinted. So we actually run SonarQube analysis on every one of the MOS packages nightly and get the results. There's no reason we can't do the same with the community, right on the mainline, which we've actually done for my community program as well, to expose critical severities in the analysis. It can also give you code coverage results and the like. For a large program, or even a small one, SonarQube is really nice because you get a very nice graphical representation of each project, all the way down to the developer and how well they're committing unit tests when they're committing lines of code. With the right plugins you can get really any data point you want, and some exposure into how well your team is applying agile practices.
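SonarQube's standard web API makes it straightforward to pull the kind of nightly results described above. A minimal sketch, assuming a reachable SonarQube server and a project key (both placeholders here):

import requests

# Hypothetical server and project key; substitute your own.
SONAR = "https://sonarqube.example.com"
PROJECT = "mos-nova"

# Count unresolved blocker/critical issues for the project.
resp = requests.get(
    f"{SONAR}/api/issues/search",
    params={
        "componentKeys": PROJECT,
        "severities": "BLOCKER,CRITICAL",
        "resolved": "false",
        "ps": 1,  # page size 1: we only need the total count
    },
    timeout=30,
)
resp.raise_for_status()
print("open blocker/critical issues:", resp.json()["total"])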
And then the Fortify Source Code Analyzer. It's been about a two-year quest of mine, working with our CSO department, which has let go of some of its space to allow us to refine and tune it. What's the name of the centralized platform we use? Fortify CloudScan. That's where we finally figured out we could run 68 repositories in about six hours; typically, when you run one of the OpenStack projects with the out-of-the-box configurations, it takes around 14 hours to analyze just one. And what we found were some fairly significant challenges, and opportunities, to fix vulnerabilities in the code. We're planning on using the AIC community program to run the report and account for those. Not picking on anyone here; I don't know if the slide actually calls it out. It does. This is Cinder, and we did this on mainline. The run gave us visibility into where some of the opportunities are. For a team of people who are just learning, or doing some analysis on a project and wanting to get their feet wet, or doing some housecleaning that's going to add real value to a project, they could come right in, see the export of this on a mainline, start opening bugs against the project, and start delivering. While we've been here at this summit we've been promoting this idea to a few of the PTLs, and they've liked it, so we should see it introduced very soon to a handful of projects as we prove this out, and ultimately we want to do it across any project that wants us to.

Finally, this is it, and we'll open up to Q&A. Jared, you wanted to mention what we're getting upstream. One of the things we've been working with is the AT&T community team; I'm not sure if anybody got to that session, but we've been partnering with them, and we will be making as much of this pipeline available to the community as possible. That will all be coming through the community team initially, and it's going to be pretty interesting. All the jobs that we use are fully automated, and they can be loaded right into Jenkins using Jenkins Job Builder. Nothing we use is really any different from what the community has, so you'll just get to see how we've done it. So, any questions?
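Jenkins Job Builder, mentioned just above, defines Jenkins jobs as YAML and compiles them into the XML Jenkins consumes. Purely as a hypothetical illustration of that round trip, this sketch writes a minimal job definition and asks the jenkins-jobs CLI to render it locally, with no Jenkins server involved; the job name and shell step are invented.

import pathlib
import subprocess

# Hypothetical JJB definition: one freestyle job with a single shell step.
JOB_YAML = """\
- job:
    name: avt-nightly-example
    description: 'Illustrative only, not an actual AT&T job'
    builders:
      - shell: './run_avt.sh --nightly'
"""

jobs_dir = pathlib.Path("jobs")
jobs_dir.mkdir(exist_ok=True)
(jobs_dir / "example.yaml").write_text(JOB_YAML)

# 'jenkins-jobs test' renders the job XML without touching a live Jenkins.
subprocess.run(["jenkins-jobs", "test", str(jobs_dir), "-o", "out"], check=True)
print((pathlib.Path("out") / "avt-nightly-example").read_text()[:200])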
Go ahead, yes. Yes, I can. So in Fuel, in 2.5, we didn't have a lot of flexibility to introduce these iterative releases the way we wanted to, so we worked together with our partners to develop the ability, in code, to configure which capabilities we wanted to toggle on, using a config DB. Fuel 9 introduced the idea of a modular plugin, so you can actually add plugins that are hot-deployable, if you code them to be hot-deployable and flag them as such. That enables Fuel to redeploy something as long as you've made all your underlying Puppet manifests idempotent: as the task graph rolls out, it reapplies, and we can define new roles in the cloud as we introduce new features. LMA would be a good example of that, or StackLight, if you're following it. We did 3.0 without it, then introduced StackLight in the 3.0.1 release, essentially enabling Logstash and the LMA plugin, and we were able to do that just through a toggle in the database, on or off. Down in my OpenStack development teams, they've introduced the same thing at the OpenStack component layer, the project layer, with a simple framework as well. I'd actually need one of my developers here to go deeper, but they just use flags. We have a naming convention for the flags; people set them to off by default, and the old functionality sticks around while the new functionality is added. They can deploy without making any changes, then turn a flag on when they need it. Any more questions? Try us.

Yes. Was it a vendor that told you this? Well, I think there is some truth to both of those things. We deployed 74 clouds last year, starting in March and running through the end of the year, using a distribution that was not ready for prime time, and we made it ready for prime time. We have business-critical applications running on there every single day, and we are upgrading and delivering new capabilities into that cloud every day. We have great partnerships helping us with that distribution. You don't have to go far to understand that we work alongside, we collaborate with, many vendors. So we're not a wholly independent shop doing everything ourselves with AT&T badges delivering every line of code; we depend on collaborating with other people. And as we move into our next journey, which will be AIC 5.0 and a containerized control plane, which is extremely exciting, we'll be working much more tightly within the community, and maybe not be as reliant on closed distribution solutions. We want to partner with the community. We're trying to drive the change back to the community. Nothing that we do is special to us; our goal is to get this back into the community, keep fostering the community itself, and not be locked into a specific vendor. The vendors should be focused on the community too. We have a commitment to giving whatever we have back to the community; we are not looking to build too much secret sauce. You've heard my senior vice president give this presentation in Austin, and my AVP is never shy about saying we are committed to driving this community to help the telcos and the large operators, and to closing some of the gaps we've identified across five themes. That goes all the way down to our lowest processes, which we've talked about here: it's helping out, it's doing housekeeping, it's delivering robust, broader solutions.
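The flag convention in that first answer (a naming convention, flags off by default, old behavior preserved until a flag is flipped) is a standard feature-toggle pattern. Here is a minimal, hypothetical sketch of it; the flag names and the in-memory stand-in for the config DB are invented.

# Hypothetical feature-toggle helper in the spirit described above:
# flags follow a naming convention, default to off, and gate new code
# paths so that a deploy changes nothing until a flag is flipped on.
CONFIG_DB = {"feature.stacklight.enabled": "true"}  # stand-in for a real config DB

def flag_enabled(name: str) -> bool:
    # Unknown flags default to off, so the old behavior is preserved.
    return CONFIG_DB.get(f"feature.{name}.enabled", "false") == "true"

def deploy_monitoring() -> None:
    if flag_enabled("stacklight"):
        print("deploying StackLight / LMA monitoring")  # new path
    else:
        print("keeping the existing monitoring stack")  # old path

deploy_monitoring()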
I think you asked whether we also have the expertise to choose the right vendors. Was that one of the questions? Okay, I can answer that. I think we have quite a bit of expertise in-house. We are a massive corporation doing things at massive scale across the world, across many regions that have different rule sets for every one of the production environments you deploy. And we're also learning on the fly, along with everybody else in the community, along with every other corporation that's deploying an OpenStack cloud. If anybody tells you they can come in with it all figured out on day one, deploy an effective cloud in production, and then start doing that at scale using some of the technologies we're using: these things are difficult. I feel very confident that we have the competency within our company, within our tech arc, our research, our tech dev programs, and our operations programs, to choose the right products and the right vendors to help us achieve these massive goals. But we're not afraid of failing fast as well. We do that all the time.

Thank you very much, everybody, and thank you everyone for your time. We really appreciate it. It was great to have you all here. And thank you very much to the team back home, because it wasn't just us; obviously a lot of people were involved in getting us to where we are. If anybody wants to have a closer conversation, or anything more in depth about how we did this, the deck will be shared, and we'd be happy to talk about it. Thank you.