Hi. Welcome to our talk. My name is Dash Shukla. I am part of Cisco IT, a cloud design engineer working on OpenStack. And my name is Steve Pierce. I'm the OpenStack Solutions Architect for Cisco IT.

In today's talk we want to cover the enterprise application trends that we see in our enterprise, what our OpenStack implementation journey has been, the lessons we learned along the way, and our next steps and future plans. And if we have time, we'll take some Q&A.

These are some of the common patterns we see in today's application architectures. The applications are elastic: they can dynamically allocate resources and shrink down depending on their requirements. They are flexible: they are built on different operating systems and can run on different platforms. And they need to be deployed faster, because they carry the promise of a shorter time to market. These applications also have resiliency characteristics; they are designed to deal with infrastructure failure.

For such application architectures, and to help promote cloud adoption within our enterprise, we came up with a framework to classify applications on a spectrum from cloud tolerant all the way up to cloud native. The basic differences among these three paradigms are as follows. Legacy applications like ERP maintain state within them, are monolithically built, have no resiliency in their architecture, and rely on the underlying infrastructure; we categorize them as cloud tolerant. At the other end of the spectrum are cloud native applications: they are fully API-based, have stateless interactions among their components, and are built to recover from any failure. Cloud ready applications sit in between. Having these buckets helped us classify which applications are the key targets for migration.
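As a minimal sketch of that bucketing idea (the trait names and the exact rules here are illustrative, not Cisco IT's actual rubric), the classification could look like:

```python
# Illustrative sketch of the three-bucket classification described above.
# Trait names and decision rules are hypothetical, not Cisco IT's actual rubric.

def classify_app(stateless_components, api_driven, handles_infra_failure):
    """Bucket an application as cloud-tolerant, cloud-ready, or cloud-native."""
    if stateless_components and api_driven and handles_infra_failure:
        return "cloud-native"    # fully API-based, recovers from failure itself
    if api_driven or handles_infra_failure:
        return "cloud-ready"     # partially decoupled; some migration work needed
    return "cloud-tolerant"      # stateful monolith relying on infrastructure HA

# A legacy ERP system: stateful, monolithic, relies on the infrastructure.
print(classify_app(stateless_components=False, api_driven=False,
                   handles_infra_failure=False))   # -> cloud-tolerant
```

A screen like this is mainly useful as a conversation starter with application owners; the real classification work is the architecture review behind each trait.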
Obviously, the cloud native applications are geared towards faster adoption on programmable infrastructure, whereas the cloud tolerant ones need additional work. To deploy such a cloud native platform, we needed an infrastructure that was programmable and had these characteristics: it needed to support multi-data-center deployment; provide full infrastructure visibility into how my workloads are running, including the ability to audit them; provide auto-scaling features for the applications running on it; and provide integrations for putting a platform as a service on top of it. The obvious choice we came up with was OpenStack, because it met most of these key requirements.

This is the architecture that we came up with for our OpenStack deployment. It's a pretty vanilla architecture where we run our management nodes virtualized. Our physical compute runs on UCS B-Series hardware. We use Red Hat's OpenStack distribution for our OpenStack deployments. We run Cinder and Swift on a Ceph back end, running on UCS C-Series servers. We run routable networks within our OpenStack, using our traditional Nexus fabric. What makes our architecture a little unique is that we have exposed these OpenStack APIs through service catalog items on our cloud, which helps improve the adoption of OpenStack within the broader Cisco IT.

Here's the journey that we took to get to OpenStack. We started in September of 2013 on the Grizzly release. We did our first full-scale data center deployment using the Havana release, and we will shortly be doing another data center deployment on Havana in August. We intend to move all our data centers onto an ACI fabric by the end of this year, which will give us the flexibility of policy-based application deployment and the modeling characteristics we'll talk more about in the coming slides. Here is an example.
Let's talk about a case study: an application that we deployed on our OpenStack platform and recently went live with. This is a so-called poster child for the modern new-generation application: it has APIs on the northbound side so it can be consumed easily by web applications and mobile platforms, and it also consumes the infrastructure through APIs. This is a large-scale application that consumes almost multiple racks of our cloud infrastructure. It had some strict application requirements: it needed to scale linearly, it required zero downtime across its different components, and it had strict performance requirements of handling up to 2.5 million page hits per day. The application is built from a lot of open-source components. Most of them are stateless, and they coordinate with each other over a RabbitMQ bus, a pretty typical architecture.

Deploying such an application, we learned a lot of key, valuable lessons that we're happy to share. We learned that there is no magic bullet for application migration; good due diligence done ahead of time helps alleviate a lot of pain points. One of the key things we quickly understood was that if we're able to understand the application architecture and its interaction patterns ahead of time, that helps us iterate quickly as we move across the application lifecycle. The application was pretty complex, with too many moving parts and interactions that we needed to track. Had we had a documentation process where we could state that component A talks to component B in a certain fashion, we would have been able to deploy the application much more quickly as we moved from dev to QA to production. As I mentioned earlier, we are running traditional networking, and there were some strict requirements for this application. It was internet-facing, so it came with its own DMZ requirements.
It came with its own hardening requirements, and doing that later as an afterthought was pretty painful. One of the other challenges we ran into was that we should have done performance testing of the application as we went along. Things happened in an agile fashion, and some of these items were taken up as an afterthought. But in a nutshell, we came to the conclusion that application deployment is still difficult in OpenStack cloud environments. The reason is that two different paradigms still exist: developers are still thinking in an application-centric way, in terms of how components need to interact across the application, while infrastructure people are focused on implementation specifics, meaning how is my VLAN working, how do I enable these firewall ports. There is still a distinct disparity when we deploy these applications, and I'll hand over to Steve to talk about how we can do better as we move along.

Thank you, Dash. Like Dash was saying, we're finding that application deployments are still too difficult in the cloud. As Dash said, it's because of the difference between the two groups: the application developers are application-centric, and they don't understand the infrastructure; it's not their job. The infrastructure engineers understand their infrastructure, but they don't understand the application. So there's a big disconnect between these two groups. The question is: what can we do better next time, and how can we get there? We need to separate the two concerns: the application concerns from the operator concerns, the tenants from the administrators. With that separation, the application developers can specify how to deploy their application with a model, and then the infrastructure engineers can deploy via that model. Another good thing is we want to have dependency mappings.
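That dependency-mapping idea can be sketched as a small ordering problem: given "service A requires service B", a topological sort yields a safe startup order, and its reverse a safe shutdown order, which is exactly the planning and no-surprises benefit. The service names below are hypothetical:

```python
# Sketch: dependency mapping for maintenance planning. If service A requires
# service B, then B must stay up until A is down. Names are hypothetical.
from graphlib import TopologicalSorter

requires = {                # service -> services it depends on
    "web": {"app"},
    "app": {"db", "rabbitmq"},
    "db": set(),
    "rabbitmq": set(),
}

# static_order() yields dependencies first, i.e. a safe *startup* order;
# reversing it gives a safe *shutdown* order (dependents go down first).
startup = list(TopologicalSorter(requires).static_order())
shutdown = list(reversed(startup))
print(shutdown)   # 'web' comes down before 'app', 'app' before its back ends
```

With a map like this, an operator taking down `db` can see immediately that `app` and `web` depend on it.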
We understand when service A requires service B, so that we can do planning; the infrastructure operators can operate the infrastructure better, and we eliminate surprises where we take down one service and discover that something important depended on it. The other thing we wanted to do was enable network services. We want to be able to chain things like firewalls and load balancers as part of the model, rather than specifying them in domain-specific details like IP addresses and round-robin characteristics.

With this, we looked at group-based policy. Group-based policy is an incubated project within OpenStack, available in the Juno timeframe. It's a 100% open-source project within OpenStack. The intent of the project is to capture the intent of the application in a model so that the infrastructure engineers can deploy it. It's built by a community of developers across many companies, including Cisco, IBM, and Intel.

So what is group-based policy? Group-based policy is consumed very similarly to the existing Neutron model, where you have the command-line interface, Horizon, and Heat. Instead of interacting with Neutron directly, they interact with the group-based policy APIs. The group-based policy APIs then interact with the network, either through a Neutron driver using classical Neutron constructs like networks and subnets, or through a native driver that interacts with the network directly.

The policy model is very interesting. Instead of having security groups, where you apply permit and deny rules to an existing group, you have an asymmetric model with producers and consumers. With that producer and consumer model, the flow of data becomes obvious. The intent becomes obvious: this group depends on that group. When you add things to one group, you know that the things in the other group will need to consume from, or produce to, that group.
The policy rule set defines the rules for how these two policy groups interact. Those rules can also include chaining: one group could specify that access to it has to go through a load balancer or a firewall. This allows us to do governance, so that we can add firewalls and load balancers for InfoSec and for reliability and resiliency reasons.

Why do developers like group-based policy? This one should be fairly evident. The intent-based model is very similar to how they see their application. They understand the data flows inside their application, or at least they should: from one service to another, what services they need to consume from the outside infrastructure, and what services and APIs they are providing. The automation piece allows operators to deploy an application they have already modeled into multiple data centers, either on a disaster-recovery basis or simply for scalability, to add additional capacity. Service chaining, as I got into before, is a framework for describing network services and how they interact with the different components of the application.

So how do we implement this? We're using a group-based policy driver for OpenStack: a native APIC driver. That native APIC driver communicates from the group-based policy APIs to the APIC directly to configure the fabric based on the model that's being input. This group-based policy driver is supported in Juno. It has a one-to-one mapping between group-based policy constructs and ACI constructs, so when you add your group-based policy constructs to the model, you can see those changes reflected in your APIC and in the network.

So what is our plan for our next OpenStack infrastructure? Well, we're looking to go to Juno and Kilo with the group-based policy model, using the group-based policy driver for APIC. This requires us to be on the FCS plus 12 ACI release.
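As a toy illustration of the asymmetric producer/consumer model described above (this is not the real group-based policy API; the class names, port, and chained service are invented for the sketch):

```python
# Toy model of the producer/consumer policy idea (not the real GBP API).
# A rule set is *provided* by one group and *consumed* by another; a flow is
# allowed only where such a contract exists, so the intent stays explicit.

class PolicyRuleSet:
    def __init__(self, name, rules, chain=()):
        self.name = name
        self.rules = rules      # e.g. ("tcp", 3306) -- hypothetical port
        self.chain = chain      # service chain, e.g. ("firewall",)

class Group:
    def __init__(self, name):
        self.name = name
        self.provides = set()   # rule sets this group produces
        self.consumes = set()   # rule sets this group consumes

def allowed(consumer, producer):
    """Flows allowed from the consumer group to the producer group."""
    return [(rs.name, rs.rules, rs.chain)
            for rs in producer.provides if rs in consumer.consumes]

db_access = PolicyRuleSet("db-access", ("tcp", 3306), chain=("firewall",))
app, db = Group("app"), Group("db")
db.provides.add(db_access)      # db produces the contract
app.consumes.add(db_access)     # app consumes it

print(allowed(app, db))   # the app->db flow, with the firewall chained in
print(allowed(db, app))   # [] -- asymmetric: no contract in this direction
```

In a real group-based policy deployment these contracts are policy rule sets provided and consumed by policy target groups; the sketch only mirrors the shape of that relationship.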
What that's going to allow is the OpFlex agent on the hypervisor to make Open vSwitch a fully participating member of the ACI fabric. That means that when you configure APIC, APIC will then configure Open vSwitch. There's scalability goodness here, because Neutron doesn't scale to hundreds and thousands of Open vSwitch agents, but with the OpFlex agent it will. And of course, we're using native VXLAN from the server directly into the fabric, so we're talking the same language as the fabric, with NIC offload, so we get good performance with that.

So with that, we'd like to open up the floor to any questions. Both Dash and I have been involved with the deployment here, both the traditional networking and the ACI, and we're happy to answer any of your questions. Please use the microphone in the back if you have any.

After the deployment, have you changed the support model overall, comparing before OpenStack and after OpenStack, for the internal environment? I'm sorry, we didn't hear the question. For the internal IT environment where you deployed OpenStack: after your deployment, have you changed the support model? After deploying OpenStack, have we changed our support model? OpenStack is driving a lot of changes in our support model. With the ability for us to give our clients access to the dashboard, they are asking for more and more functionality so they can move faster and faster. Yes, and I think there has been a shift in the way we do our operations. We have started to move towards the whole DevOps model, which fosters the programmable infrastructure option. Yeah. Our goal is to get a programmable and elastic infrastructure in compute, network, and storage, and ACI gives us that on the network side.
Yeah, and one thing I wanted to play back: when I was talking about the application architecture, there were different components; it was a pretty complex application. The way this new architecture with group-based policy will help is that we will now be able to define layers. So we will have a web layer, an app layer, and a DB layer. It doesn't matter how many components you have; we will be able to deploy them as VMs. And an application person will be able to just say: my web layer wants to talk to my application layer on a particular port, and the application layer wants to talk to the DB on a certain port. So there is a clear distinction. And as part of that definition, we are able to capture the documentation as part of the deployment, so when we have to move such a workload from dev to QA, it will be pretty seamless. And with service chaining, we will no longer have the firewall challenges that we had, right? We will be able to dynamically deploy any services in between the app and web layers, or the web and DB layers, in a classic three-tier application. That's the benefit we're aiming for with this architecture and the new group-based policy. And with ACI, we will get the scale that we need to deploy at such large scales.

Use the microphone. Can you talk about when you introduced group-based policy into your phases, where you did your first data center and your second data center, and how much savings that approach got you in those data center deployments? While group-based policy is available with iptables on traditional Neutron networking, we have not deployed group-based policy in our OpenStack deployments as of right now. That is our plan; we are looking to do that by the end of the year. Yes.
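The three-tier contracts described above can also be captured declaratively, so the same definition doubles as the interaction documentation that moves with the workload from dev to QA. The layer names, ports, and chained services below are hypothetical examples, not the application's actual values:

```python
# Sketch: a declarative three-tier model that doubles as deployment
# documentation. Ports and chained services are hypothetical examples.

THREE_TIER = [
    # (consumer, producer, protocol/port, service chain in between)
    ("web", "app", "tcp/8080", ["firewall"]),
    ("app", "db", "tcp/3306", ["firewall"]),
    ("internet", "web", "tcp/443", ["firewall", "load-balancer"]),
]

def describe(model):
    """Render the model as a human-readable interaction document."""
    lines = []
    for consumer, producer, port, chain in model:
        via = " via " + " -> ".join(chain) if chain else ""
        lines.append(f"{consumer} -> {producer} on {port}{via}")
    return "\n".join(lines)

print(describe(THREE_TIER))
# web -> app on tcp/8080 via firewall
# ...
```

Because the spec is data, the same model can be rendered as documentation for review and fed to tooling for deployment, instead of those two drifting apart.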
We have worked in our labs to see whether that architecture works for us, specifically how group-based policy interacts with a tenant network, as opposed to a provider network, and how that architecture works. So we have a design ready, and that is planned as a next step for us. In terms of cost savings, I think these would mostly be operational cost savings, but that's yet to be realized, so we really can't comment there. Just to be clear, group-based policy was only released late last year, in the November timeframe, right? Yes. And with that, we're one of the first adopters really trying to push it out into our development, staging, and production data centers. Yes, we have been working closely with the community to make that deployment successful. Go ahead.

What sort of upgrade strategy do you have in mind to go from Havana to Juno? We were in an earlier session with Todd where we were talking about how to upgrade from Havana to Juno. That's still a matter of some great debate amongst the operations and design staff; it's still under discussion. I think the short answer is we're probably taking the well-documented route, but with the more conservative approach where we go through the hops. I'm curious about the upgrade path in general, because I haven't heard a lot of discussion about that. The safe way is obviously to move up one level at a time: Havana to Icehouse, Icehouse to Juno. And because Icehouse exists on both RHEL 6.5 and 7.0, and we obviously use OpenStack from Red Hat, you can use Icehouse to bridge from 6.5 to 7.0. We brainstormed different options, and I think we settled on going one hop at a time to minimize impact to our service. On existing hardware, though? On existing hardware, exactly.
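That "one hop at a time" plan amounts to computing the intermediate upgrades between two releases; the list below is the public OpenStack release sequence of the era:

```python
# Sketch: "one hop at a time" upgrade planning across OpenStack releases.
RELEASES = ["grizzly", "havana", "icehouse", "juno", "kilo"]

def upgrade_hops(current, target):
    """Return each intermediate upgrade step, dependencies-first."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    if j <= i:
        raise ValueError("target must be a newer release than current")
    return [(RELEASES[k], RELEASES[k + 1]) for k in range(i, j)]

print(upgrade_hops("havana", "juno"))
# [('havana', 'icehouse'), ('icehouse', 'juno')]
```

The Icehouse hop is where the RHEL 6.5-to-7.0 bridge mentioned above would happen, since Icehouse is available on both.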
Now, for the ACI deployment, because the model is so different from the security group model, and because the intent of applications is not captured in the security group model, the existing applications will need some review; we'll be working with the application owners to build those models. So when we move to ACI, we'll be building that greenfield on either Juno or Kilo.

Hi, the title of the presentation was Migrating Virtualized Applications to OpenStack, so I'm assuming that means migrating VMware-virtualized applications to OpenStack. That's question one. And then, within that context, does everything you said apply to any app that's virtualized under VMware, or are there types of apps under VMware that you recommend moving to OpenStack and others you don't? Do you want to take that one? Yes, the title was around migrating applications, and this is the first step. We wanted to give a pre-talk around how we move, and what things we are putting in place as prerequisites to make that happen. The goal we have is that once we have group-based policy implemented, applications which are more cloud-ready or cloud-native, the two and the three in our hierarchy, become a good fit, because they are API-based and have the agility for cloud. They can move regardless of whether they are virtualized on VMware or any other virtualization platform, because you can bundle them together, and as long as you know the interaction patterns across them, you can define a contract across them, which simplifies the architecture. But we are not there yet in terms of implementing this in our production data center; that's the journey we are on, and we will be happy to come back and talk about how we did. We plan to submit a talk about what our outcomes have been from going down this road, modeling applications and working with the application owners. The bullet item that I give people for the 15-second elevator talk is: there is no magic bullet.
Migration into OpenStack requires knowledge of the applications. It requires you to classify each application as cloud-native, cloud-tolerant, or cloud-ready, and take the appropriate steps at that point to migrate that application, or decide not to. The monolithic applications that require infrastructure resiliency, that don't have application resiliency, are not good fits for OpenStack, or will require substantial effort. So you're not recommending that just any virtualized app use your steps to move to OpenStack? If an application has a single point of failure that requires high availability, at this point we do not recommend that the application owner move it into OpenStack. OpenStack does not have a pets mentality; it's cattle, not pets. Now, in the community we see that kind of changing. Part of the community wants to see more HA features, and I believe there are a number of vendors talking about providing those features. When that happens, we would change our guidance along those lines, saying: for those applications that require these kinds of HA feature sets, we will support you in this manner. But at this point we would say no. And just to put a finer point on that one: I think there is merit in why people want to migrate to OpenStack. It's not just, yes, I want to get away from licensing, so I'll put everything on OpenStack. Our philosophy within our enterprise has been: migrate those applications that can truly get the benefits of a programmable infrastructure. So in our view, there has to be a middle ground where you have cloud-ready and cloud-native applications, which are more intelligent and able to use a flexible, programmable infrastructure, migrating there. Yes, a lift and shift can happen, but what's the value there? So that's something that we are promoting within our enterprise.
You talk about programmable infrastructure, but doesn't VMware, with vCenter, also have APIs to instantiate? Isn't some of that already there? The APIs available for VMware are, well, first of all, not RESTful. The orchestration team, a number of whose members are here, can attest to some of the difficulties of orchestrating against VMware. Against OpenStack, it's a RESTful call, which is very easy to make; you can make those calls on the command line, and in fact for testing we do that all the time. So you can say there's an application programming interface to VMware, and I'm sure that by some definition there is, but I would say that the OpenStack APIs are much easier, much more well thought out, and provide much more feedback in terms of success or failure.

So my question is: did you have any customer hosting applications built on proprietary programming technologies, such as Microsoft technologies hosted on Hyper-V? And how did you manage to convince them to migrate over to OpenStack? Okay, so at Cisco IT we're primarily an ESX shop, ESX and bare metal, so there's no appreciable Hyper-V infrastructure for us, and we don't have much experience dealing with Hyper-V and that technology. Sorry, not our thing.

Very simple question: where can I get the slides? I will provide the slides either through the conference or, if not, give me your card and I'll email them to you. Yeah, we can email you, Jim. Any other questions? Other questions? All right, thank you very much.