All right, good morning, everyone. We're going to get started with some introductions as people trickle in, but we have quite a few slides to cover. So, good morning. Ting and I are here to talk about orchestrating an OpenStack DevOps cloud to achieve continuous delivery, something we worked on together in a collaboration a few months ago. I'll introduce myself first, hand it off to Ting, and then come back later. I am Tanay Nagy, Solutions Engineer at Electric Cloud, a software delivery acceleration and automation company, which I'll introduce a little more later, and a former engineer on the Electric Commander product, which we'll be talking about today. All right, thank you. Good morning, everybody. My name is Ting Dao. Currently I'm working as the director of the R&D Cloud Data Center at Huawei's USA branch, where I am responsible for bringing the latest R&D engineering methodologies and best practices, such as cloud computing infrastructure and DevOps environments, back to the company to keep improving our engineering efficiency. Before Huawei, I was a principal solution architect at a company called VCE, delivering the Vblock 300 converged infrastructure product. I also worked as a senior product manager and solution architect at networking vendors such as Cisco and Juniper, responsible for data center switch product families and enterprise solution architecture design. Before we get started, I would like to give you some background about Huawei and the challenges we tried to solve together with our partner, Electric Cloud. Overall, Huawei is a privately owned, multi-billion-dollar international technology company headquartered in Shenzhen, China. Huawei has been an innovative industry leader in the telco and IT domains for the last decade.
Huawei has very broad product and solution portfolios, and its customers cover all corners of the world. Internally, Huawei has three major business groups: the service provider business group, the enterprise business group, and the consumer business group. Their products cover service provider routers, LTE wireless base stations, data center switches, servers, storage arrays, and IP telephony, and even consumer products such as MediaPads and smartphones. Even though the global economy has been weak in the past few years, Huawei has still successfully maintained very rapid revenue growth year over year. In 2012, we achieved around 35.4 billion US dollars in revenue, and we are expecting around 10% year-over-year growth in 2013. Huawei is truly a global company: more than 65% of its revenue comes from regions outside of China, including EMEA, APAC, and North and South America. Huawei has been very committed to R&D investment since its early founding stage. Globally, we have around 150,000 employees and 16 R&D centers. Around 50% of our total employees, which is 70,000 people, are R&D engineers. In 2012, we put around 13% of our revenue back into R&D investment, which is around five billion US dollars. That's a lot of money, and we would like to spend it wisely. In other words, the R&D investment is so big that a small percentage of savings in R&D cost, or a little bit of improvement in engineering efficiency, can bring hundreds of millions of dollars back to the company. So let's talk about some challenges we are facing these days at Huawei. As we discussed, Huawei offers hundreds of product families every year, and the internal R&D environment is very complicated. A lot of the time, during peak business hours, the development tools require hundreds of CPUs' worth of computing power to complete or accelerate certain R&D activities.
For example, instead of provisioning more local physical build servers in the lab, we would like to reduce software build time from hours to minutes, and certain technologies we use in this acceleration solution require a huge amount of distributed CPU computing power on demand, whenever it is needed. Most of us come from an engineering background, so we all know how time-consuming and tedious it is, the traditional way, to set up the compile or build environment for the development team, or to set up the software testing environment for the QA team. On average, let's assume 10% of a company's engineering workforce gets involved in that environment setup every day. If we can shave 30 minutes off that provisioning process, then for a large company like Huawei with 70,000 R&D engineers, we can save the equivalent cost of about 450 engineers per year: 7,000 engineers saving half an hour each per day, divided by an eight-hour workday, is roughly 437 engineer-days recovered every day. That's a huge return in OPEX for any large company. These days, the data produced by R&D activities is growing very rapidly, and it turns out to be more and more valuable to store it somewhere so that we can use it to analyze patterns in the R&D process, or to trace back product issues, and so on. So the R&D environment requires that we be able to expand our R&D storage capacity dynamically, scaling out without impacting the existing running applications or environment. In order to achieve more engineering productivity, the integration of roles between development and operations engineering has become a hot topic these days. The progress in virtualization technology, cloud computing, and converged infrastructure in recent years, along with automatic provisioning and configuration management for devices, makes it possible for an R&D organization to provide a multi-tier, multi-platform infrastructure in an easy way.
DevOps gives engineers more control over their development environment and even the production environment. It bridges the gap between development and operations activities, and it also helps automate as much as possible to avoid unnecessary human errors. Dynamic infrastructure especially helps giant companies like Huawei support globally distributed R&D teams. So obviously, virtualization and the cloud are the way to go for us. But keep in mind, the internal R&D data center is always a cost center for most companies; we spend money without bringing in revenue directly. Let's take commercial software as an example: to build an R&D data center with a few thousand CPUs' worth of servers, the company most likely needs to spend millions of dollars every year, depending on what type of license or support contract it chooses. Meanwhile, the open source community, especially around cloud technology, has become more and more active in the past two years. The graph shows that the open source cloud software communities have grown dramatically in the past few years. Among the top four open source cloud software projects, OpenStack has definitely gained the most attention from individual developers, users, and even some big cloud vendors. The development and maturity of OpenStack is evolving day by day, and that's why we are here this week. So for the project my team did with our partner Electric Cloud, we chose OpenStack as our underlying cloud infrastructure. Huawei RH2285 rack-mount servers are used, and our OpenStack is deployed and managed by Huawei ILCM, the Intelligent Lab Configuration Manager, which is our in-house lab configuration tool framework.
Huawei ILCM deploys and manages OpenStack using modularized scripts, and it also leverages the OpenStack community's existing best-practice deployments by integrating open source tools such as Chef, Puppet, and Cobbler in the back end. We are considering contributing back to the community once the solution matures. In our system, we also created a portal for cloud admins and cloud users, with different levels of privileges, to log in and manage cloud resources. With several clicks, virtual machines pre-installed with different tools or applications can be provisioned very quickly and easily. So up to this point, we literally achieved a low-cost cloud infrastructure for our R&D data centers. But can we go further and reduce the cost by leveraging open source as much as possible? Looking at the open source tools in each R&D process area, the answer is definitely yes, and that's exactly the idea on top of which we built this dynamic DevOps system solution. So in the project we did with Electric Cloud, as we discussed, we built what we call HUDS, the Huawei Unified DevOps System. We chose OpenStack as our cloud infrastructure, provisioned and managed by Huawei ILCM. On top of the cloud, the user also has the option to create a couple of virtual machines provisioned with open source tools as part of the initial cloud provisioning process. For example, we use OpenLDAP for user identity management, Redmine for bug tracking, and Trac for project management. Review Board helps with code review, and Jenkins provides local builds and compiles as well as continuous integration. We use Git or Subversion for source code management and Graphite for resource monitoring. Workstation virtual machines pre-installed with a customized open source Eclipse can also be created dynamically to provide IDE platforms to the engineering group.
Last but not least, we chose Electric Commander, which is a perfect framework here to integrate all those tools together, including the OpenStack cloud, to give us an intelligent, lean, low-cost DevOps system empowered by the latest cloud technology. To individual users, such as development engineers, test engineers, or release engineers, we also provide a portal or dashboard as the day-to-day front-end user interface, so that users don't need to jump between the GUIs of the different tools. The portal and dashboard provide most of their functionality by integrating with those open source tools in the back end through Electric Commander. Another big benefit is that engineers only need to be trained on this HUDS portal and dashboard GUI, without needing to be aware of the different open source tools or cloud technologies used in the background. This dramatically reduces engineering training cost and improves the efficiency of tool usage. Depending on the user's privileges in a specific project, the portal and dashboard can provide users infrastructure as a service and platform as a service accordingly. Now I'm going to turn this over to Tanay, and let him go through some more details of how we use Electric Commander to integrate those tools and the cloud together to provide an end-to-end automated DevOps system. He will also go through the design, development, and use cases of our HUDS system. Thank you, Ting. So I'll jump in with a little bit of introduction to Electric Cloud. Electric Cloud is a software company headquartered in the Bay Area, and we build software delivery systems. Our products, Electric Accelerator and Electric Commander, enable you to automate and accelerate your entire software delivery process, from build, test, and deploy all the way through to release, which really helps you achieve continuous delivery.
Electric Commander is the product we used at Huawei for the HUDS system. It's a framework with which you can automate, parallelize, resource-manage, and schedule pretty much anything you want to run, wherever and whenever you want to run it, using Electric Commander workflows, which you see in the background and which we'll see a little more of. So, the OpenStack integration: it was not something we had before we started working with Huawei; it was something we created on site, very quickly, and something we're formalizing as we speak. For those of you who are familiar with OpenStack and the REST API or the nova command-line tool, it will look pretty familiar. What we're looking at over here is Electric Commander. This project contains a few procedures: deploy one VM, deploy multiple VMs, undeploy one, undeploy multiple, and update some local information. This is a deploy procedure. The details aren't too important; it's just to show you what the integration looks like. It uses the Electric Commander Perl API to talk to the OpenStack REST API, and it also parses some output from nova. This is a second procedure built on top of the original deploy procedure, which uses dynamic job step creation in Electric Commander to create as many VMs as you want, dynamically. So I'll walk through the solution with a couple of scenarios showing what we achieved using the HUDS system, and after that I'll turn it back over to Ting for a summary, and then you can ask questions if you like. This is the development scenario. We have a developer, Joe, on one side, a reviewer, Mike, on the other side, and the HUDS system in the center. The developer Joe modifies his code in Eclipse and launches a pre-flight build.
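As an aside, the shape of the deploy procedure's call into Nova is easy to sketch. The real integration is written against the Electric Commander Perl API, so the following Python fragment is an illustrative sketch only; the helper names, endpoint handling, and token variable are assumptions, but the request body matches Nova's v2 POST /servers API.

```python
# Illustrative sketch of booting a VM through the Nova v2 REST API,
# roughly what a deploy procedure does. The real HUDS integration uses
# the Electric Commander Perl API; helper names here are invented.
import json
import urllib.request

def build_boot_request(name, image_ref, flavor_ref, key_name=None):
    """Build the JSON body for Nova's POST /v2/{tenant_id}/servers."""
    server = {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}
    if key_name:
        server["key_name"] = key_name
    return {"server": server}

def boot_server(nova_url, token, body):
    """Send the boot request to Nova (network call; sketch only)."""
    req = urllib.request.Request(
        nova_url.rstrip("/") + "/servers",
        data=json.dumps(body).encode("utf-8"),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the new server's id
```

The second, fan-out procedure would essentially call something like `boot_server` once per requested VM from dynamically created job steps.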
How many of you in the audience are familiar with the concept of pre-flight, or pre-commit, builds? Not too many? Okay. The concept is, with Electric Commander, you launch a pre-flight build. It takes your local source changes, uploads them to the server, and downloads them onto the agent on top of a clean source snapshot, so you're effectively simulating your check-in. Then it runs whatever procedure or job you would normally run in production against that simulated check-in, and only automatically checks in the code at the end if the pre-flight was successful. So what we'll see here is, once the user modifies the code in Eclipse and launches the pre-flight, if you see that little cloud-lightning icon, that's where it passes over to Electric Commander, which then orchestrates all of the different tools that Ting spoke about earlier. The first thing we do is check out the sources and overlay the deltas from the developer's box to simulate the check-in. Then in Redmine, which is the issue tracking system we use in this case, we mark the issue as 'build and unit test', so you can track each issue's status from your issue tracking system; we added some statuses that reflect what happens as an issue passes through the HUDS system. After that, Commander launches a build and test on Jenkins, passing it the pre-flight source directory it created instead of the normal Subversion repository. Then, depending on whether the build and test succeeded, the Commander job does one of two things. In the case where the build failed on Jenkins, it immediately reports to the developer; the developer's check-in is not allowed, and they're told to go take a look at the build, see what failed, and try again after making a fix. In the case where the build and test succeed, it moves on to Redmine and marks the issue as 'in code review'. This is using a Redmine integration.
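Those Redmine status updates go through Redmine's REST API, which accepts a JSON body on PUT /issues/{id}.json. A minimal sketch, assuming invented numeric IDs for the custom statuses we added (real IDs come from the Redmine instance):

```python
# Sketch of building the Redmine REST API request that moves an issue
# to one of the custom pipeline statuses. The numeric status IDs are
# invented for illustration.
STATUS_IDS = {"build_and_unit_test": 10, "in_code_review": 11, "resolved": 3}

def build_status_update(status_name, notes=""):
    """JSON body for PUT /issues/{id}.json changing the issue status."""
    payload = {"issue": {"status_id": STATUS_IDS[status_name]}}
    if notes:
        payload["issue"]["notes"] = notes
    return payload

def issue_url(base_url, issue_id):
    """e.g. issue_url('http://redmine', 42) -> 'http://redmine/issues/42.json'"""
    return "%s/issues/%d.json" % (base_url.rstrip("/"), issue_id)
```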
Then in Review Board, which is the open source review tool we use in this case, it creates a review request, which goes over to reviewer Mike. Now Mike has a chance to review the modified code. Electric Commander will wait until that review is processed, and whether it was successful or not, it's again going to notify the developer. In the case where the reviewer rejected the code, the developer receives a notification in Eclipse that the code review was rejected, they can go and take a look at it, and their commit is not allowed. In the case where the review is successful, Commander automatically marks the issue as resolved using the Redmine integration, notifies the developer, and automatically commits their code from Eclipse. So this window that you see on the left-hand side, from the time the developer modified their code and launched their pre-flight to the time the code was checked in, is a huge time savings. You don't have to worry about what happened with the automated build and test, about breaking production builds, about closing out your issue, or even about the code review happening. It all happens in a black box, so to speak, for the developer, and they can move on to other tasks. So I'll walk through some screenshots covering the same flow we just spoke about. It'll be pretty quick; I won't spend too much time. This is Redmine, if you're familiar with it. This is the bug that was marked in progress by the developer as they started to work on it. In Eclipse, they modify their code and launch a pre-flight using Electric Commander's Eclipse plugin; over there you can see it's a run configuration. They run their build and unit test. In Electric Commander, the Subversion pre-flight is created.
The Subversion pre-flight source snapshot is created and then passed over to Jenkins, where it launches the pre-flight build using that simulated check-in. It also automatically updates the task in Redmine to the new status we added, 'build and unit test'. In Jenkins, you can see that the build was started by the user 'Electric Commander', which is the user that's blessed by Jenkins for jobs launched from Electric Commander. So this job was launched from Electric Commander, and it's the exact same build you would launch if you wanted to run this in production; the same build is also running under CI, monitoring Subversion changes. In the case where the build fails, Commander picks that up from Jenkins and fails the workflow. It never moves on to code review; it reports the error. On the Eclipse side, it's hard to read, but it says the automated build and test failed and the developer's code was not committed. So the developer modifies their code and launches another pre-flight. This time the build succeeds, and it sits there waiting for review. This is Review Board, showing the automatically submitted review request, and this is the reviewer on the other end. They choose to reject the changes in this case. Commander picks up that the changes were rejected, the workflow fails, and it reports that back to the user in Eclipse: the change was reviewed and rejected. In Eclipse, the developer receives the error message, makes whatever changes they want, and relaunches the pre-flight. Third time's a charm: the reviewer accepts the changes this time after they get through the build and unit test, and the workflow succeeds. At the bottom, the changes have been successfully submitted from within Eclipse. I'll walk through another scenario, a test scenario, which shows a little more of the OpenStack integration.
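Before moving on: the review gate in that development workflow boils down to polling Review Board and classifying the outcome. Review Board's REST API exposes a ship_it flag on each review; a minimal decision helper, with simplified dicts standing in for the API's JSON, might look like:

```python
# Sketch of classifying a Review Board review request's outcome from
# its list of reviews. Review Board reviews carry a "ship_it" flag;
# the dict shape here is a simplified stand-in for the REST API JSON.
def review_outcome(reviews):
    """'pending' if no reviews yet, 'approved' on any ship-it, else 'rejected'."""
    if not reviews:
        return "pending"
    if any(r.get("ship_it") for r in reviews):
        return "approved"
    return "rejected"
```

Commander would poll until this stops returning "pending", then either auto-commit the change (approved) or fail the workflow and notify the developer in Eclipse (rejected).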
The test engineer, Jill, picks an issue to verify, and which test cases she wants to run against that issue, in Electric Commander, using a procedure that was set up for that task. Once again, we use the Redmine integration to mark the issue as 'verifying'. In OpenStack, we provision virtual machines based on the test cases she wants to run. This is completely dynamic and flexible; how and when you deploy machines is really up to the designer of the procedures and workflows within Electric Commander. From Commander, we launch the automated tests on agents on the OpenStack machines that were just deployed, and whether the tests succeed or not determines what happens next. In the case where the tests fail, the test engineer is notified that the VMs are ready to inspect. The VMs are not automatically undeployed, because typically if a test fails, you, as a test engineer, want to go take a look at those VMs and see what happened. So they're notified and given information about how to get to those VMs, based on the floating IP addresses that were assigned and the links to the web VNC via OpenStack. The user then goes in and manually decides when to tear down the virtual machines that were deployed. In the case where the tests succeed, Commander automatically tears down the virtual machines. This is, again, a design decision; whether or not you want to do that is really up to you. But in this case, we decided that when the tests succeed, there's really no point keeping those VMs around; might as well keep it as elastic and dynamic as possible. We mark the issue as closed in Redmine and notify the test engineer that the tests passed and the issue was closed. So once again, there's the same type of time savings here.
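That keep-on-failure, tear-down-on-success policy is a small piece of workflow logic. A sketch, with invented field names for the VM records:

```python
# Sketch of the post-test decision described above: tear the VMs down
# when the tests pass; keep them and hand back floating IPs plus
# web-VNC links when they fail. VM record fields are invented.
def after_tests(passed, vms):
    """Return which VM ids to tear down and what to tell the engineer."""
    if passed:
        return {"teardown": [vm["id"] for vm in vms],
                "notify": "tests passed; issue closed"}
    links = ["%s (VNC: %s)" % (vm["floating_ip"], vm["vnc_url"]) for vm in vms]
    return {"teardown": [],
            "notify": "tests failed; inspect: " + "; ".join(links)}
```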
The test engineer does not have to worry about provisioning or tearing down these virtual machines; that's all taken care of through an abstraction layer, and the tests are automatically run through it. Sorry, did you have a question? So yeah, in this case, this was definitely more of a proof of concept, so it really depends on how you want to do it. What we did here is we had a certain few test cases which run on different platforms, which are represented by OpenStack VMs. But that doesn't have to be the case. Depending on what you want, it could be based on selecting test cases, selecting test directories, or selecting a product and running its whole test suite; it really depends on the designer. So this is the time savings for the test engineer. Once again, she doesn't have to worry about any of the stuff that's automatically taken care of by the HUDS system. I'll go through some screenshots again. You can see over here there are no dynamically deployed VMs; these two are the actual machines on which the Commander servers are running. This is the test workflow. Again, this is just an example; these parameters aren't hard and set, and the behavior of the workflows and procedures in Commander is completely customizable. But in this case, the test engineer selects a Redmine issue, selects which test cases they want to run on which platforms, and automatically provisions the machines from Commander. The VMs are dynamically deployed using the OpenStack integration I showed you earlier; you can see that they're spawning. Once the IP addresses are assigned, those IP addresses are automatically picked up, once again using the OpenStack integration. There are links to the deployed machines, and these are the web VNC links, which are also available through the API.
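The fan-out step shown in those screenshots, from the test cases the engineer selected to the platform VMs that need provisioning, can also be sketched; this version deduplicates platforms so only one VM per platform is booted, and the case and image names are invented for illustration:

```python
# Sketch of mapping the selected test cases to the platform VMs to
# provision, one VM per distinct platform, preserving selection order.
# Test-case and image names are invented.
def plan_test_vms(selected_cases, platform_images):
    """Return (platform, image) pairs to deploy."""
    platforms = []
    for case in selected_cases:
        if case["platform"] not in platforms:
            platforms.append(case["platform"])
    return [(p, platform_images[p]) for p in platforms]
```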
In the case where the tests fail, the workflow sits there in the error state, waiting for the test engineer to come back and decide to tear down those machines. It sends an email out to the test engineer with these same links, and using those web VNC links you can log in directly to the machines, which is really cool. Commander sits there waiting for the manual transition. Once the user decides to take down those machines, they run the transition and click OK; the machines are automatically torn down in OpenStack, and we're back to our same two statically deployed VMs. Now, in the case where you relaunch the test workflow, same issue, same test cases, and something was fixed along the way, if the tests were successful, the VMs were dynamically deployed and undeployed, and the test engineer never really had to run anything or interact with OpenStack at all. So I'll hand it back over to Ting for a summary, and then we'll have some time for questions. Hello. At Huawei, we actually have very aggressive goals for improving engineering efficiency, with very detailed target metrics. For example, in the coming years, we would like to reduce our software build time from hours to minutes for embedded software with tens of millions of lines of code. We would also like to reduce the execution time of the fully automated regression testing cycle from days to hours, and the complicated solution testing cycles from months to weeks. None of these are easy tasks for any organization, but we do believe the next-generation R&D data center, empowered by the HUDS solution, can help us get closer to, or achieve, those aggressive goals.
Thanks to the team from Electric Cloud and the team from Huawei, in the project we did jointly, we designed, built, and validated a live, working system to help us increase resource utilization and productivity, reduce the cost of delivering software and hardware, and shorten product time-to-market with better quality. Meanwhile, our China team has also started work to deploy and roll out R&D data centers across different cities in China by leveraging the HUDS solution we built jointly at the Huawei US branch. We definitely look forward to sharing more exciting stories with you next year from those large-scale, real-world deployments. So this concludes our presentation today. I think we still have some time. Any questions or comments? [Audience question, inaudible.] Yeah, that's the tool that we use. So the question was what kind of tool we are using to orchestrate the whole HUDS system. The answer is that Electric Commander is the tool we use; that's Electric Cloud's software, and I was working with Ting as part of the partnership. With Electric Commander, as you saw, you saw some workflows, some dynamic resource integrations, and the integrations it has with other tools like OpenStack. But the software is really an enterprise-class build, test, deploy automation and orchestration tool. So, the next question: 'It takes some time to establish a new tool with all the development systems. Since you mentioned Eclipse, let's say I'm using Eclipse with Python for my development system, and let's say I have five users who are developers. To do this kind of thing, what do I need: one server, ten servers, fifteen servers? How do you estimate that, and how fast can you do this?' As we mentioned in the first part, we actually use Huawei's in-house tool, Huawei ILCM, to provision the whole system.
It's like one hour, two hours. If you just have a couple of servers, in two hours you can have a fully functional OpenStack along with the integrated R&D tool environment. Depending on what programming language you choose, say you want a Python development environment, you can choose that environment during the provisioning process. As a next step, we would actually like to move up to the PaaS level, platform as a service: depending on what type of programming language or environment a developer needs, we can easily provision those environments with our tool. 'Your software is agentless, right? So how do you provision the stack on the machine? How do you configure that?' Electric Commander does have an agent. You don't have to install an agent on every machine, because you can run through an SSH proxy, but it's not agentless. 'And so your agent coordinates the installation on the virtual machines?' Yeah, whichever agent, whichever machine or host, has access to OpenStack, that's the agent you use to provision and deploy machines, and to provision the software you want to test on top of those machines. You use the Commander agent or an SSH proxy to run commands on the machines that are dynamically deployed on OpenStack, and that's how you deploy your software. 'Okay, thank you. Is anyone using this for financial control? In other words, you seem to have a very good feel for what everyone's doing. Is there any kind of reporting, so you can understand what all the developers are doing, and whether they're actually working on your projects, things like that?' That would require some extended development on those open source tools. For example, we use Trac for project management, right?
We would need some resources to do extended development on that project management software. But at Huawei we also have some legacy project management software, so we need to see which one we are going to choose. That's definitely one of the options. Also, right now Huawei is developing, in partnership with Electric Cloud, the dashboard that Ting spoke of; that's very much under development. There is an administrative view where you can see all the different products and where they are in the life cycle, where in the continuous delivery pipeline each version of each product is. Developers associated with each product can take a look and see where they are. Actually, Tanay is going to be in Shenzhen for the next two weeks, working with us on some further work. 'I want to ask about, and I guess this addresses both of you: I have a software shop with folks all over the U.S., all over the country, and we need to integrate them all. We're using a variety of tools: we have Jira, we have Jenkins, Gerrit. It's the same idea, pretty much. Developers range from Eclipse to vi. Can this address the issues we have? And I'm assuming this isn't just a downloadable product where I click Next; you'd have to help with integrating it with all of this?' Yes. If an integration doesn't exist, that's something Electric Cloud would work with you on; that's what we do for our customers. But to your point about Eclipse to vi: we showed Eclipse as an example, but obviously not everyone in the world uses Eclipse. There are command-line utilities available that do exactly the same thing you saw; the IDE plugin is really a wrapper around the command-line utility.
So the IDE integrations we have are Visual Studio and Eclipse, but for the most part, developers who use vi or Emacs tend to use the command-line utility. 'How do they communicate back and forth?' The communication mechanism, basically, is that the client pre-flight submits a request over HTTP and uploads the changes over, I believe, STOMP; I'm not sure of the exact protocol. Then the Commander server notifies the client on the other end once the job completes, and that's how you receive the feedback. No, it's not an agent; it's just a tool you run on your machine that communicates with the Commander server and waits for the job to complete. 'Just a couple of questions about your OpenStack install. How big was it, how many compute nodes, what version of OpenStack were you using, and did you use any deployment tools like Fuel or Crowbar?' Yeah, you probably came late. We used the Huawei in-house tool called ILCM, Intelligent Lab Configuration Manager, which is the Huawei in-house lab management tool framework. But in the back end, we integrate Chef, Puppet, and Cobbler, to leverage the many best-practice deployment tools in the OpenStack community. So in the back end we use Chef or Puppet, depending on which option you choose. 'Oh, I meant deployment of the OpenStack infrastructure.' Right. In that proof of concept, in our lab, we have eight servers: three servers are controller nodes, to provide high availability, plus three compute nodes and two storage nodes. The idea is that I would like this system to support our U.S. branch, which is an environment of around a couple of hundred people. 'This is looking ahead a little bit, but are there plans to maybe offer this as sort of an SQA, or test, as a service?' That's exactly the second testing scenario. We have those testing scenarios, right? This is a proof of concept in the U.S.
branch, but the current work we are doing is transitioning this work to the headquarters. They have a much more complicated environment, with very large-scale testing environments; that's the work we are doing. Basically, we integrate test management systems, test case management systems, and test execution servers into this solution. Wherever things need to be dynamically created and torn down, we will use OpenStack's elasticity to provide that. Any more questions? We have one more minute. 'I have one question. How do you create the tests for a new feature of your product in the testing phase?' That's a great question. We do not create the tests; the onus is always on the developer or the test engineer, whoever creates tests, to create the tests. Commander is a framework by which you run the tests, so it's not actually a test creation framework, if that's your question. Right, and usually, in a typical enterprise environment, we have a test management system, basically one server or multiple servers to manage all those test cases centrally. So, as I said, we are going to integrate those systems into our cloud system and make them dynamically created. Okay, thank you. All right. Thanks, Tanay. Thank you. Have a good day. Thank you.