I guess... is it okay now? So it's 3:40, which is, I believe, a slightly better time than 2:40. Does anyone follow the Ig Nobel awards at all? Evidently, they started as a joke. These are actual awards given a week before the Nobel award ceremony, and they're given to real scientific research done on something totally irrelevant, that no one would ever use. It happens in Cambridge, Massachusetts, and it's a pretty lengthy process. In fact, the Nobel laureates in the Massachusetts area come to deliver the awards; I think 38 or 39 of them came this year. And one of the awards was given to research on the worst time of day for a talk like this one, when people are at their sleepiest. Evidently, that's right around 90 minutes after you've had your lunch. So this is hopefully a better time than 2:40 or 2:30.

My name is Tariq Khan, and I'm part of HPE's NFV business unit, focused on cloud and SDN technologies and on figuring out how we can apply them to solve what's called the NFV business case, for network functions virtualization. And I'm here with my colleague.

My name is Arun Tulasi, and I'm responsible for NFV platform solutions, which effectively distill what technology makes possible into products that we can deliver to our customers.

As you can see, our discussion for today is DevOps for NFV. And if you're here, I obviously don't need to explain what DevOps is trying to do. It's essentially about two conflicting forces. There are the development teams and the business teams under pressure to bring new capabilities, and they want to do it as fast as possible; and if you look at what NFV is about, it's about bringing IT-style agility to networks. But on the other side you have network operations, whose job and whose metrics are really about how stable the environment is. For them, making changes and bringing in new things is another avenue for downtime, so they resist it. So, as you can see, there's a wall between the two.

Most of you here are linked with telcos in some shape or form, either as an operator or as someone providing solutions to them. We all know that for the longest time, life cycles within telcos, within production networks, have been measured in years, not months or quarters. With NFV, that started to change. Earlier, everything came as a monolithic system, which made things comparatively simple: when a monolithic system comes in, you apply the principles of introducing new technology (they weren't DevOps principles, but principles nonetheless) to a contained environment. It was still complicated, which is why it took so long; there were a lot of manual steps. But you applied it to one monolithic system. With network functions virtualization, at a minimum you have three layers. You have the underlying hardware, the compute, network, and storage infrastructure. Then you have a layer of virtualization and virtualization control, which is obviously why we're here: OpenStack provides that very well and has become the de facto VIM, or virtualized infrastructure manager. And then you have the functions running on top of it.
So now, instead of one monolithic thing with its own life cycle, there are at least these three layers, and each layer has multiple components. You have to worry about all of that. And when we start looking at introducing new capabilities, one of the differences for telcos today (and I know there are a lot of forward-looking telcos trying to move to the next level beyond NFV) is that most telcos consume solutions that someone else has developed: a vendor, a partner. Telcos traditionally don't develop their own software. So you have a provider with their own development, and before a solution gets to the telco, it goes through the provider's life cycle. Beyond that, you have a solution integrator, which could be external or internal, and whatever new capabilities are coming in need to go through that as well. And then you have the production deployment. I know it's just one block up here, but production deployment isn't just one step either; it goes through multiple stages as well. There is a life cycle.

If you're going to apply DevOps principles, they're going to be applied to all of these. For the most part, people are transitioning from the normal waterfall methodology to agile development, and as part of agile development, putting in automated gates and testing at those gates. So quite likely the provider you're using will have some kind of life cycle management. But once they hand off to your solution integrator, the integrator will have their own life cycle, and when you receive it as the operator, of course you'll have your own life cycle. The tasks will be slightly different at each stage, but there will be a life cycle you have to go through.

So can DevOps solve this? One of our customers, one of the telcos in the U.S., the person assigned to do DevOps for them, she put it, I think, the best way anyone has in this context: we as telcos don't develop our software, and DevOps is development and operations, so how do you do DevOps for someone who doesn't develop? That is essentially the problem telcos are trying to solve. But the construct really comes down to this, and you can read everything written out there: you want repeatability, because without being able to do something over and over again, you cannot do it faster. And once you have repeatability, you get the process of incremental improvements. We know that once you automate something, it may not be the most optimal way of doing it, but you have set a baseline, and you can keep improving from there. You can come up with your own metrics, and there are some very good metrics available from the IT development side that could be applied. The whole idea is this: how can you get benefits, even very small ones, to the end consumer as soon as possible?
So to be able to do that, essentially what we're talking about is moving from infrastructure as art, which is what's done at most places today, to infrastructure as code. We'll talk more about infrastructure as code in a couple of slides, but essentially it leads to not having big releases that bring a lot of capabilities at once, but small releases that you can keep bringing to a subset of your user population, and then roll out everywhere. Ultimately it boils down to those time vampires out there, and how you can minimize them. To be able to do that, we have to do a lot of testing, because when you test, you find faults, and then you can go back and work through those faults or errors. There are a number of test types out there, and quite likely you'll be doing all of them at different parts of the cycle: some by the provider, some by the solution integration team, and some you'll absolutely have to do yourself before putting anything into production. So before we go into the options available out there: we're going to close with the solutions we have at HPE, how we're using these DevOps principles to build them, and how, once these solutions are deployed in your environment, you can leverage the same principles and the same tooling to go beyond them.

Again, before going too far: there are a lot of terms related to DevOps that get used interchangeably. Continuous X: CI, continuous integration, continuous deployment, continuous whatever. So we thought we'd define them a little. Some of you might still say this is overly simplistic, that in your environment you don't have these five stages, you have seven or nine, but the principles are similar. Continuous integration is essentially a development and QA efficiency improvement process. What continuous integration does is let a developer make a change to the code and submit it to a shared source code repository, and once that change is committed, it triggers some automated tests, which become the first QA pass on the code. So before a human being ever looks at the code, it has gone through a process and run through some tests. If it doesn't pass, it's essentially rolled back and the developer needs to keep working, and the QA team doesn't need to be engaged; once it passes, the QA team can be engaged. Beyond continuous integration comes continuous testing, which takes the same process but extends it from development through QA and into staging as well. The staging environment is somewhat similar to the production environment, so here you're able to catch issues that would otherwise show up on the production floor. The next term is continuous delivery, which essentially takes the same process all the way to production. Some people say continuous delivery is arguably the nirvana land, where you can take these changes and move them into production, doing things like adding new capabilities for a subset of users.
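To make that continuous integration gate concrete: here is a minimal sketch of a commit-triggered check job, written as Jenkins Job Builder YAML in the style the OpenStack community uses. The project name, branch, and test command here are hypothetical, not taken from any specific deployment.

```yaml
# Minimal sketch of a commit-triggered CI gate (Jenkins Job Builder YAML).
# The project name (nfv/platform) and the test command are hypothetical.
- job:
    name: nfv-platform-check
    description: Run automated tests on every proposed change, before any human review.
    triggers:
      - gerrit:
          trigger-on:
            - patchset-created-event
          projects:
            - project-compare-type: PLAIN
              project-pattern: nfv/platform
              branches:
                - branch-compare-type: PLAIN
                  branch-pattern: master
    builders:
      - shell: |
          # First QA pass: if this fails, the change is voted down
          # and never reaches a human reviewer.
          tox -e pep8,unit
    publishers:
      - archive:
          artifacts: 'logs/*.log'
```

The key property is that the trigger is the patchset itself: nothing reaches a human reviewer until this job has run and voted.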
You have to put policies in place so that you can actually do that and take it to production. But we would argue there's another step beyond that: continuous assessment. As part of your QA and staging process, you use testing tools to validate performance or functionality once something is deployed. What we would argue is that you should use a subset of those test cases to characterize and validate your production environment as well. And when the variance between what you tested and what's on the floor becomes more than acceptable, you take that back to your planning and architecture cycle and close the loop.

The nirvana land of this, if there's an IT or cloud example, in my humble opinion, has been Netflix. The great thing about Netflix is that they've put a lot of their IP out there in open source; you can go read up on it. And I think one of the most wonderful jobs any geek could have is running Netflix's Chaos Monkey. Have you heard of it? They have built their environment such that Chaos Monkey, a piece of software they let loose against their production workloads, goes and kills stuff, and if you actually manage to impact the end service, you get a bonus. If that is the nirvana land we can get to with networks, or with any production environment, then we know we're doing DevOps and CI/CD the way it was meant to be done.

Now, coming back to the reality of what we have. Most of you will have seen this, and we talked about those three layers, but within each layer there are other layers as well, and perhaps Arun can walk us through it.

So one of the differences from the legacy environment Tariq mentioned earlier, where you buy a single package from a vendor that provides the services, is that there you have a very predictable cycle for when you're going to get an upgrade and you can roll with it. But when you move to the NFV world, where you want to pick the right components you like and build your own solution, there are a number of things you have to contend with. OpenStack has its own release cycle. OpenDaylight has its own release cycle. As a developer, you have your own configuration management software with its own release cycle. And the solutions you source from your vendors continue to run on their own release trains. In essence, what used to be "just click a button and buy a big box" (even if it took nine months, you still had one big box to contend with) has changed: you now have to do your own integration. So how do we bring together these various elements that run on completely different tracks? That's the problem DevOps is trying to address.

Before we start with what a DevOps environment needs, we have to remember there's a set of principles we need to follow to stay aligned with what the community and the industry are putting out there, and then choose the right set of tools to get us there. Looking through the processes: number one, have your infrastructure as code. It's 11 p.m., you get a call, three of your servers are down. How do you recreate those servers? Do you need to find the person who actually built them to recreate them from scratch?
Or would you like your infrastructure to be built as code, where you have a set of software packages that anyone could deploy, just as you deploy a software application, to bring the servers back up again? I'm fairly certain you'd choose option B, because it makes life simpler. So you need to start treating your infrastructure just as you treat your software components. Move from an imperative mechanism to a declarative mechanism: instead of telling the server, or the resource, what it needs to do, tell the resource what it needs to become. In the old days, when you wanted to set up a server, there were ten different steps to perform for each configuration element you wanted to push in. With the various configuration management technologies and the evolution in that space, you can now specify the end state of your environment and not worry about how exactly it's done; you essentially offload that to the configuration management technology. So be aware of what's available out there so you can extend it. And lastly, a combination of test-driven development and agile development: have shorter development cycles, and test, test, and retest before you move to the next phase. Break your development cycle into much smaller units, in the sense that you don't have one big bang test at release time; instead, every smaller phase has its own test cycle that validates the functionality you've just built.

So what are the tools you need to consider when moving to a DevOps environment? The most elemental thing you build is your code, so you need a source code management system that fits your development model. Do you have a geographically distributed team, or is your team in one location? Is your product made up of ten different streams that all need to work independently? All of this factors into your choice of source code management tool. Secondly, not all SCM tools are created the same way, and not all your deliverables are built the same way either. You have code in your source code repository, but what happens when you have a large image? If you pick a tool like Git, it's not an ideal fit for storing a 4 GB image, so you need a completely different kind of repository, an artifact repository, for elements that don't fit that model. So you have your code and you have your artifacts. How do you make sure the integrity of the code is always intact? You have a review management system that holds the gate, making sure your code is properly vetted before it can ever get into a release cycle; you have multiple developers and multiple reviewers, and you need to keep track of those changes. Then, once your code is built and reviewed, how do you validate it automatically? When someone submits code and that code gets reviewed, how does it automatically trigger your test process so the entire cycle is hands-off? Someone submits code, it gets automatically tested, passes review, and gets merged back into your master branch. You need an integration engine to run your DevOps cycle completely hands-off. We talked about infrastructure as code, and a configuration management system is one way you can drive your infrastructure as code.
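To make the imperative-versus-declarative point concrete, here is a minimal sketch assuming Ansible and a hypothetical group of web servers: you describe the end state (package present, service running) and leave the steps to the engine.

```yaml
# Declarative end state, not steps: Ansible converges the hosts to this
# description regardless of what state they start in. The "web" group
# name is hypothetical.
- hosts: web
  become: true
  tasks:
    - name: nginx is installed
      package:
        name: nginx
        state: present
    - name: nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Rerunning the same playbook is safe; hosts already in the described state are left alone, which is exactly the repeatability property we keep coming back to.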
And again, there are a variety of tools available, some of which we'll talk about in the coming slides, along with how we actually use them. Test harnesses: today, open-source communities are coming up with their own test harnesses, such as Rally and Tempest from the OpenStack community, and QTIP, Yardstick, and similar tools from the OPNFV community. These are tools the community has built to validate either the functionality or the scale of the environment, so be aware of what's available out there so you can leverage them. And lastly, flexible deployment engines. In your production environment or your integration environment, you probably need to deploy bare metal nodes; but if you're a developer, you should be able to test your code on your own laptop. So how can you provide a flexible deployment model without having to commit hardware resources? There are tools available for that as well. Be aware of what's out there and decide which one is right for your environment.

And Arun, before you go to the next slide: I know we've said "code" so many times, and folks in operations may be thinking, "code? I don't do coding." But in this case we're talking about infrastructure as code, and if you look at what NFV is doing today, there's a whole platform management component to handle. How do you deploy and manage your infrastructure, your OpenStack platform: the OpenStack controllers, the host operating systems, everything that provides the NFVI and the VIM? And then when you go up to the VMs, the VNFs: today, 90 to 95 percent of VNFs are deployed as VMs. How do we deploy them? Yes, some people might go and do a nova boot, a neutron net-create, and all of those things by hand, but most people are moving towards some sort of Heat-based deployment. What is Heat? It's a bunch of templates. What are those templates? They're actually code. What we mean by infrastructure as code is that all the artifacts needed to deploy something should be treated as source code: version-controlled, reviewed, all of those things. So this isn't just about the actual development of applications; it's about everything we do in our solution, even when we're deploying things from others as VMs, where you wouldn't normally associate development or code.

He's just one slide ahead. So, again, the idea of infrastructure as code is to give you the option to recreate an environment exactly the way it was before it failed, without having to rely on any resources other than the recipes, playbooks, or manifests you've already built. So what is infrastructure as code? As Tariq mentioned, in his active fashion, it's configuration management on steroids. Whatever terminology and technologies you applied to building your application code now extend out to your infrastructure. Treat it exactly the same way you would your source code: that means using version control, using the review mechanism, using an integration engine that will deploy it.
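Since Heat templates came up as the canonical "infrastructure as code" artifact, here is a minimal sketch of one: a single VNF instance on its own network. The image and flavor names are hypothetical; the point is that this file lives in Git and goes through review and gating like any other source file.

```yaml
heat_template_version: 2015-10-15

description: Minimal VNF deployment sketch; image and flavor names are hypothetical.

parameters:
  vnf_image:
    type: string
    default: vnf-router-1.2   # hypothetical image in Glance

resources:
  vnf_net:
    type: OS::Neutron::Net
  vnf_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: vnf_net }
      cidr: 10.0.0.0/24
  vnf_port:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: vnf_net }
  vnf_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: vnf_image }
      flavor: m1.medium       # hypothetical flavor
      networks:
        - port: { get_resource: vnf_port }
```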
Moving forward, we wanted to talk about how this translates into an actual environment: how it takes you from, for instance, your source repository all the way out to your production deployment. Thank you.

So, as we touched on: the declarative topology model. I know software developers keep coming up with new terms. At the last OpenStack Summit, if some of you made it across the world to Tokyo, there was a discussion on IBN, intent-based networking, and there are a lot of other discussions going on; you must have heard intent-based this, intent-based orchestration. This declarative topology model is the same idea: it's all declarative. The difference is that instead of saying "go build this by doing this, this, this, and this," you say what you want it to be, what the end state needs to look like, and you delegate the actual steps to some kind of intelligent system. There are a number of those out there: Chef, Puppet, CFEngine, and Ansible all work on a declarative model. And there are really two things you're specifying: what you need at the end of it, and some kind of configuration you provide at some point for how to get to that state.

Another example of this: you come up with a topology. In this case, imagine a CPE, some kind of end-device orchestration, where you have a router, you need some WAN optimization, and perhaps some firewalling or ACLs to be put in place. You create a model where you deploy the router and then put these functions in the path of the actual user, so that when they access the service, they go through them. In the design, you decide how you want to instantiate it: do you want the router to be an HA-enabled router, meaning two instances of it? And, well, I don't know why you wouldn't want HA elsewhere, but imagine you don't; you go that way, and once you orchestrate, you get it the way you need it to be. You don't have to go and say "I need this kind of server for it, this kind of virtualization layer"; that decision could have been made by a different team. You assign some characteristics to it and deploy it to the appropriate place, so the specification of what needs to be done is disaggregated from the actual doing.

This enables a slightly different workflow. You may have the operations folks managing your configuration management engine, while the providers supplying the VNFs, the actual VNFs coming in, provide all the metadata related to them: to run this VNF I need XYZ, I need to deploy it on this kind of platform, these are my requirements, I need this kind of bandwidth or this kind of networking. You put all of those things into that single source repository, your artifacts go into the artifact repository Arun touched on earlier, and then the integration engine decides, based on which gate you're in, where you're deploying.
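As a purely illustrative sketch, that provider-supplied metadata might look something like the following. The field names are made up for the example, not taken from any particular standard; a real system would carry the same information in something like a TOSCA- or Heat-based descriptor.

```yaml
# Hypothetical VNF descriptor: the provider declares what the VNF needs;
# operations policy decides where it lands at each gate (dev, QA, production).
vnf: edge-router
ha: true                       # two instances, per the design decision
requirements:
  vcpus: 4
  ram_mb: 8192
  network:
    bandwidth_gbps: 10
service_chain:                 # functions placed in the user's path
  - wan-optimizer
  - firewall
placement:
  characteristics: [network-io-intensive]
```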
On the basis of the policies you've put in place: in dev I deploy this way, in QA that way, and so on. So operations is involved in the decision-making process, and operations controls this part, the different targets you have. And the provider (or in your case it could be a developer, or the folks defining the service) has a part in it too, defining some of the required characteristics.

Since this is a telco crowd, we thought we'd take an example, and being at an OpenStack Summit, why not an open-source one. There's a project out there called Clearwater; our friends at Metaswitch have put it out in open source. It's basically a full IMS, an IP Multimedia Subsystem, essentially the system that lets you make phone calls, watch videos, and so on. This is their actual architecture. If you look at the components: there are these two, which provide some telco functions but really use a Cassandra database, a big-data-type workload. Then you have these two workloads that very heavily use memcached, which is a memory-intensive workload. And then you have this one, which provides the CSCF, essentially the WebRTC piece, the actual call, the actual communication that's happening. So there are different characteristics here. When you apply the processes we talked about, you define the topology, and when you define the topology, you don't say where things need to be deployed. You just define the required characteristics, which may be these, and the required capabilities of wherever you're going to deploy. Then, when you go ahead and deploy, depending on where it's going, it could be deployed as a combination of containers and KVM, or in your production environment, where you may have very structured resource pools providing storage-intensive capacity. You may choose to combine the memory- and network-I/O-intensive workloads, or keep them separate. But this is how you disaggregate your topology from your actual deployment environment.

And now Lance is very politely telling us there are only 10 minutes left, so we have to hurry so there's some time for questions. Arun touched on OPNFV a little. By the way, OPNFV is using a CI/CD process, and that is a real-life, open-source, community-based deployment; they're using it very much like everything we've talked about. Now Arun is going to walk us through what we're doing with some of the HPE products that hopefully some of you will end up using, or are already using.

So until now we looked at the possibilities, recommendations, and best practices. Have we actually put these things to use? Has HPE? Has a community project? That's what we wanted to capture in the rest of the section. The earlier slide showed the reference architecture for OPNFV's Octopus project, which is their CI/CD project. Jumping back to it quickly, it's very similar to what we've talked about.
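Staying with the Clearwater example for a moment before we get into Octopus: the same illustrative descriptor style might capture it like this (the component names follow Clearwater; everything else is hypothetical).

```yaml
# Clearwater topology sketch: components declare workload characteristics,
# never hosts. The deployment engine maps each characteristic to whatever
# is available per gate.
topology: clearwater-ims
components:
  - name: homestead            # Cassandra-backed subscriber store
    characteristics: [storage-intensive]
  - name: sprout               # memcached-heavy SIP router
    characteristics: [memory-intensive]
  - name: bono                 # edge proxy handling the actual calls
    characteristics: [network-io-intensive]
```

In dev that mapping might land on containers or KVM on a laptop; in production, on dedicated resource pools. Now, back to the Octopus flow.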
Developers push their code to Git, it automatically kicks off a Jenkins process, everything gets validated, the results go back, and then the code is merged into the OPNFV repository. And as you can see, there are external artifacts, like OpenStack or OpenDaylight or ONOS or any open-source component you want to bring in, and Jenkins ensures they get pulled in as well. So we have an example of a very relevant, telco-friendly community already using this process.

We also have HPE's own Helion OpenStack using a similar process. We have Ansible playbooks that effectively take in the right tree: from the Git repo you see down there, they're able to locate what version you're trying to build and build the appropriate system. And this approach has been extended to non-OpenStack products in the Helion portfolio as well. Overall, the CI/CD mechanism we're talking about isn't vastly different from what OpenStack itself does, and OpenStack is one of the most complex projects out there: so many different streams, so many different developers, so many different countries. So it's easier for us to take a proven mechanism, one OpenStack already validates and OPNFV already uses, and take it into a platform we can build on our own, which is the NFV platform. As you can see, it has components that span from the physical infrastructure, what we call the NFV infrastructure layer, right up to a production-ready VNF. And for us to bring together the infrastructure, the infrastructure management tools, OpenStack, an SDN controller, a bunch of VNFs along with their VNF manager, and a global orchestrator, it can't be built without what we talked about earlier: a true CI/CD-based mechanism.

The goal for the platform is for it to be easy to deploy; that's where the infrastructure-as-code principle comes in. All the components you see below, the servers, storage, and networking, are built in a predictable, repeatable way because the platform uses infrastructure-as-code principles. It also has to be easy for us to support and maintain the environment, and the only way we can do that is with a configuration management tool that can provide converged upgrades for the whole platform instead of upgrading one component at a time. So there are actual requirements of the platform that are met by the CI/CD approach. And this is just an example of the various tools we've used: our sources use Git as a repository; we use Maven for our artifact repository; we talked about test-driven development, so we have a three-week sprint cycle that lets us test every feature at every release gate instead of testing only when we get to a general availability or lab release gate. Code components are managed under Git, which ties into a Gerrit system and a Jenkins system. What we saw on the Project Octopus slide earlier is something we've tried to replicate on our own, because it's telco-friendly. And essentially, our goal is to build this entire platform so that, one, it's simpler to buy, and it's easier to deploy.
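A hedged sketch of that "locate the version, build the system" step, assuming an Ansible play with a hypothetical repository URL, tag, and build command:

```yaml
# Reproducible build from a pinned tree: any build node checks out the
# exact tagged version and builds it the same way. Repo URL, tag, and
# build command are hypothetical.
- hosts: build
  vars:
    platform_version: "2.1.0"
  tasks:
    - name: Check out the exact release tree
      git:
        repo: https://git.example.com/nfv/platform.git
        dest: /opt/platform
        version: "{{ platform_version }}"
    - name: Build the platform from that tree
      command: make release
      args:
        chdir: /opt/platform
```

Because the tree is pinned by tag, every build node reproduces the same system, which is what makes the release gates meaningful.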
Again, one of the challenges we have in the telco world is that when you order a system, it goes anywhere between 12 and 18 months from ideation to implementation. But with a system built using CI/CD principles, using infrastructure as code, having all these release gates, and incorporating test-driven development, we're able to go from zero to deployed in 36 days from the time your order has been placed. And it's managed as a single system: because we use a configuration management tool, it's easier for us to look at the entire platform as one platform instead of just a collection of multiple products. With that said, I think we have a few minutes for questions. If there are any questions, there are two mics.

Hello. Hello. It's a beautiful presentation. Thank you. So one of the key things that telco and service provider operations teams want, apart from delivering stuff, is good documentation. They love documentation. So how do you produce high-quality documentation as an output of your sprints? Are you converting your user stories into documentation, or do you assign another user story, a continuous user story, for documentation? Thank you.

Every user story, as part of its success criteria, needs to have the relevant documentation. And if you look at how our external-facing documentation is produced, we effectively take the documentation attached to a user story before we close it and pull it into our web page directly; that's exactly what our documentation team does. Without someone submitting a test plan that can be automated, and without the documentation for the feature you've built, you cannot close a user story. It does not leave that sprint. So that's one way we address this. Any other questions or comments?

Yeah. No, we're not there yet. If we go back to the second slide, which talked about how different parties have their own CI/CD: the providers, the solution integrators, and the operators. Each of these could have their own tooling. It doesn't have to be the same tooling, but if it is, and it's open-source tooling used by a large community, it makes things easier. What we'd love to be able to do (and we'd love to have any operators here give us their thoughts) is capture the events from the running system and pull them back into our development systems to close the loop. Right now we're only closing the loop between our testing and our architecture cycle. Well, folks, if no other... there's one more question.

Is there a certified bill of materials for your packaged solution?

Yes, there is. Just Google "HPE NFV System" and you should be able to find information on what the bill of materials is, with multiple options like small, medium, and large; that detail comes from the larger NFV System presentation. So yes, you can grow from, say, a four-compute-node version up to a larger compute-node version. And to the question: if you have bought HPE equipment, which we hope you have, our bills of materials, like anyone else's, are not the easiest; they're pretty long, they're pretty intensive. What we've done with the NFV System is create building blocks, and those building blocks are a limited number of SKUs.
But of course, should you want to delve deeper, should you say, "I don't care about just this starter system, four nodes, one enclosure; I want to know what's actually in there," that information is available as well. Folks, thank you very much. Thank you and we are absolutely