Good morning. No coffee yet? Okay. Thank you. Thank you. Welcome to the Cisco-sponsored room. My name is Gary and I'm acting as the host for the day. Welcome. This is our third session today. We have two more later in the day: one right after this, and the fifth and final one right after the lunch break. So I hope you can come back and join us then. Just a quick note: when you all came into the room, I think you were probably given a little card. We are doing a drawing at the end of the session for a very, very cool Philips Bluetooth speaker. So if you want to, fill out the card and we'll do the drawing at the end of the session. But without any further ado, welcome to OpenStack and the Cisco Next Generation Data Center. Your panel leader or moderator today is going to be Mike Cohen, one of our senior product managers, and I will turn it over to Mike to introduce our panel.

Very good. Thanks a lot, Gary. So as Gary mentioned, the session is OpenStack and the Cisco Next Generation Data Center. My name is Mike Cohen. I'm a director of product management at Cisco Systems, and I'm joined today by a distinguished panel of our customers. This includes Martin Klein, a principal architect at SAP; Brenda May, the enterprise architect at Standard Bank of South Africa; Cesar Martinez-Segura, the IaaS Solutions Network Architect at BBVA; and Maxime Popov, the head of R&D at KazTransCom. Each of these individuals has played a key role in designing and deploying an OpenStack cloud within their organization. At previous summits, I would have used a talk like this to cover the advantages of our Cisco data center solutions, including our ACI, Nexus, and UCS products. In this session, we actually wanted to take a different approach. We wanted to give you a chance to hear directly from our customers, the architects who are building and designing these solutions, about the challenges they faced, how they resolved them, and how Cisco helped along the way.
So without further ado, I want to introduce Martin Klein and let him start the presentation.

Hello, everybody. As Mike mentioned, I'm working for SAP. We are a software company based in Germany, and actually the market leader in enterprise software. Like most software companies, we are currently undergoing a transformation from delivering our products on-premise to delivering cloud solutions. For that, we need a private cloud optimized for our workloads, which are not yet in a state where we can deploy them in a public cloud environment very easily. We have very special availability constraints, and data security is also very important for our customers. This is why we decided to build our own SAP-optimized cloud powered by OpenStack. We are currently running it in 13 regions — some of them with multiple data centers — spread around the globe to serve all our key markets. We have quite a big portfolio of services that our internal customers, our software lines of business, can consume on our cloud. We offer the standard Nova, Cinder, Neutron, Keystone, Swift stack, but we also offer the somewhat younger projects like Manila, Designate, and Barbican for the more advanced OpenStack use cases. And we have put quite some work on top of OpenStack to provide an automation service for our customers that does the in-machine automation and software installation; we call it Lyra. We also have some more SAP-specific enhancements to OpenStack around billing and the dashboard to make a more fluent experience for our customers. One key focus for our OpenStack deployment is the requirement to have multiple types of workloads working side by side in a single network environment. Our cloud actually needs multiple hypervisor types, bare metal machines, and our NFS-as-a-service in the same L2 network, for performance reasons and also for security reasons.
So we needed a solution where we can scale to a large number of L2 networks, because that's an integral part of our security design, but also have a large number of different devices and different workload types all connected to the same L2 network.

To give you a little overview of our architecture: at the top, facing the customer, we have our OpenStack layer. There we have our API contract, which is stable for our customers, and that itself runs on what we call our control plane — a Kubernetes cluster where we run all the API servers, the databases, and whatever else you need to run an OpenStack deployment. On the operating system layer, we are using CoreOS, which we also automatically bootstrap to run Kubernetes on. For our hypervisors, we run KVM, also on CoreOS, and VMware — obviously not on CoreOS. And we run all of that on top of a UCS fabric, which carries our hypervisors, the bare metal workloads, and the control plane itself. Our entire OpenStack is built in a way that does not leverage a lot of the standard or reference implementations; instead, we aim for a strict separation between the control plane and the data plane. We rely heavily on orchestrating enterprise equipment that we have already built up in the data center, or for which we at least have an operational infrastructure in place, with teams that can operate and monitor it at scale. So we decided not to leverage a lot of the reference implementations at the moment, but to rely on proven technology that we already have in the data center. This is why our Cinder is running on our NetApp boxes, LBaaS is basically controlling our F5s, and Neutron is calling out to Cisco ACI. We are doing this to have the ability to run independent service level objectives and scale them independently from our control plane. Our data plane needs to stay up at five nines.
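The Neutron-to-ACI call-out described above is typically wired up through Neutron's ML2 plugin configuration, where devices keep seeing a plain VLAN model while a mechanism driver maps those VLANs onto the fabric. A minimal sketch of what such a configuration can look like is below; the driver names and VLAN ranges are illustrative assumptions, not SAP's actual settings.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative values)
[ml2]
# Connected devices only see the simple VLAN network model...
type_drivers = vlan
tenant_network_types = vlan
# ...while an ACI mechanism driver (name varies by release) maps
# those VLANs onto the fabric's overlay.
mechanism_drivers = openvswitch,apic_aci

[ml2_type_vlan]
# VLAN pool handed out at the leaf; the fabric does the
# encapsulation/decapsulation work.
network_vlan_ranges = physnet1:1000:2999
```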
Our control plane is not as important. Taking a hit on the API and having a little downtime there is an inconvenience for our customers, but since we are not a public offering but a private one, as long as the solution that our customer builds and presents to the outside is still running, that's the bigger issue we needed to solve — and that's why we decided on a very strict split between control plane and data plane. That way we can also leverage all the fancy new capabilities that Kubernetes gives us in terms of self-healing, supervision, scaling, and an easy way to deploy the same payload to all those data centers we need to manage in parallel. We decided on Cisco ACI for the L2 part because, as you have seen, we need to integrate a lot of different pieces of equipment into one L2 domain, some of them able to speak overlay protocols themselves. The software solutions we need to integrate are, of course, the most advanced in that regard, but we also need to use a lot of equipment — like our storage boxes — that has limited or no SDN capabilities and will most likely never get them. So we invested a lot of work, together with Mike's team, into writing drivers so that Cisco ACI with Neutron hierarchical port binding (HPB) scales to more than 4,000 networks while keeping a VLAN as the base contract between the device and the fabric. We moved a lot of the intelligence that is needed to scale to that large number of networks into the fabric itself rather than pushing it onto the devices. So all our connected devices only see the simplest network model available — the VLAN network model in OpenStack — while all the scaling and the intelligence live inside the fabric, and the fabric also does the hard, performance-critical work like encapsulation and decapsulation of all the overlay protocols. So thank you very much.

Morning, everyone. My name is Brenda Mayant from Standard Bank of South Africa.
I'm the infrastructure architect for Standard Bank and I've been very involved in deploying our private cloud. Just to tell you a little bit about our bank: we're 153 years old, we are the largest bank in Africa, and we operate across 20 countries. We've been listed on our stock exchange since 1970 and we have about 44,000 employees — if you include our insurance subsidiaries, it's about 55,000. And in South African rand, not euros — I suppose it makes a big difference when you bring it down to euros — we're about a two trillion rand operation, which comes down to about 128 billion euros. So I don't even need to tell you the problem we were solving with our private cloud. It's very similar to everything you've heard from everyone this week. We were trying to move faster. We were trying to reduce our costs. We were trying to standardize. We were trying to create a platform to increase innovation for our clients. When I say reduce maintenance and support, it was very much targeted at people viewing automation as that silver bullet that's going to drive down those operational costs for you. And then trying to improve our end user performance — that's that whole mantra that you're going to be able to allow your customer to self-serve, and in so doing you're driving based on their demands. This slide is very much specific to the principles we put together for our private cloud, because we created a perfect storm for ourselves when it came to our data center network. When it came to our private cloud, we wanted to deliver what mattered to customers. We'd been on this journey for a while — unfortunately for longer than we'd wanted to be — but I do believe that as a large organization with a lot of history we had a lot to learn. So we had to re-evaluate the principles we set out for ourselves up front, and then turn around and say we actually have to deliver what matters up front.
At first we wanted to do this one-hit wonder, this absolute nirvana, everything we thought the customer might want, all singing and dancing for them. We had to re-evaluate our approach and make sure it met what mattered most to the communities that were going to use it. At the same time, I might add, our organization was going through a large organizational transformation, doing things like going down the DevOps journey. We had taken on the Scaled Agile Framework; we'd taken on an agile journey. So we'd done all these things inside the organization to reorder how we were working. All that did to our infrastructure services was push the demand higher, because we now had these feature teams who wanted to move quicker and release quicker, and DevOps teams who wanted an API through which to engage with the organization. So we really had to be quite deliberate about those principles. The next line is really just code for "don't assume". We did a lot of that up front. With this big one-hit wonder, as I said, we assumed a lot. We had to fit into a CI/CD approach for our customers. I don't believe that we are yet very good at CI/CD below the line, in our private cloud thinking, but our consumer community above the line is definitely in that space and we need to be able to enable that. Working hard to keep things simple: it's so easy to have magpie syndrome. It's so easy to see the next shiny object fly by and think, oh, that would be nice, and if we added this, and if we just put this in, and if a new layer came in and we just did this, it would be fantastic and our consumers would really love it. We had to stop that behavior. We had to stop and say: keep things simple. Then there's the whole cloud-native versus any-other-workload question — we have a massive legacy environment in our organization. You don't get to be 153 years young without a lot of legacy that you're carrying with you.
Inside that legacy is process legacy, technology legacy, even thinking legacy, human legacy. So we had to try and deal with that. We are a bank, so we had to be secure by design. We are a bank that's gone through some drama recently as it relates to security, so the number one program in our organization is related to security. We had to be secure by design and we could not compromise that at all. Open source becoming a first-class citizen in our bank was quite a pivotal day. We had this very much ISV type of way of approaching things, so becoming a first-class citizen was really important to us. Commitment to engineering: I think at first this was lip service on our part. We kept saying we were going to do it, we kept saying how important it was to us, but we didn't actually uphold it. We had to choose our innovation partners. And when I say rinse and repeat — if I tell you that we've been on this journey for a while, we honestly had to revisit our approaches very often, clean them up, say, well, that didn't work for us, and ask what the next step was. When I say we had this perfect storm, these were the objectives of our private cloud. We had chosen OpenStack. At the very same time, we had reached a lifecycle position where our data center network needed to be refreshed. Five years previously, we had built out a massive, brand-new, modern data center, and now we were facing its refresh. Unfortunately for us, our private cloud journey definitely rushed the goalie on our data center refresh. And what had happened is that we were still thinking about how we were going to fix our network. We had historically allowed our data center network to become very fragmented.
Quite often that was a product of the way we funded projects, not necessarily a product of the technology we were using. But our problem was that if we wanted a private cloud that held automation and self-service provisioning as key, we couldn't work with this fragmented network anymore. So the rushing of the goalie was that we had these slow, gradual plans to transform our data center network and go down the ACI journey together with Cisco, and then suddenly we needed to fix this fragmentation, get our ACI in, break down the security boundaries, and follow our security team's new isolation strategy. So we were really in a lot of drama. We needed all the support we could get. I've got a little note there at the bottom that says thank you, and this is a thank you to Cisco and SUSE. I think we put them under a lot of pressure. I'm not even sure if they did it because I nagged so much, once a week on a Monday afternoon. I don't know why they did it first, but I'm just so grateful. What I'm saying there is that they weren't ready with the drivers. We had chosen SUSE OpenStack Cloud as our OpenStack distribution, and they weren't ready. We needed to use the Liberty release. The reason we needed the Liberty release is that, as you'll see from our architecture, we are a z Systems site and we do use zLinux, and we had a number of workloads and services for our consumer community that needed us to be able to orchestrate to a z/OS outcome — I mean a zLinux outcome. So we'd really put them under pressure for that.
Other challenges we faced — when I say losing the shackles of the past: we're a bank, and I'm sure all the other banks can attest to this, especially if you're not a young bank, that you build processes on processes to manage processes, and each process belongs to a person who's quite passionate about it and becomes quite religious about it. Honestly, we kept trying to fit in instead of turning around and saying that we're changing the way these people are working. We're taking this new digital-native community and servicing them, and the people who are attempting to be digital immigrants really need to go down the journey; we don't need to change this enabling technology to accommodate their current thinking. So that was hard for us. All of us know that if you're talking digital and cloud-native versus anything else in your environment and the legacy, you have to get people to start thinking that this is now about the application's resilience and not about the infrastructure's redundancy. When I say how to eat the elephant: I'll tell you now that we have been on this journey for two years, and this two-year journey has made us change the approach dramatically. In that change of approach, we had to turn around — especially in the position we were in, rushing the goalie on our data center implementation — and ask what we could deliver in small pieces. So it literally became a tale of two data center implementations. We run a dual fabric across those two data centers. We literally had to say: we will deliver this little piece of the fabric in this data center first. We will follow that. We will allow the automation and the cloud implementation to start in that data center for that availability zone, and then we will move on. So we've really had to take agile to the extreme.
One of my comments at the bottom, about our vendors supporting us, is that it's really hard to take an agile approach on a physical deployment — an infrastructure physical deployment — but we've literally had our third parties and vendors supporting us in this agile approach to implementing our network, never mind our private cloud. Third-party vendor readiness was one of the major challenges for us. I'm sure I speak for everyone in an organization that has third-party proprietary vendors that are going to stay there for a while: we choose a release, we need certification, and we really did put a large number of vendors under a great amount of pressure to support us in time. I will tell you that everyone rallied together. We're not unhappy with the position that we're in now; it's really just about making the delivery very real. To that point, both SUSE and Cisco were fantastic. They gave us early releases of drivers. They set up labs purpose-built for our development of the releases, and honestly they've had to play referee and adjudicator for us. I go back to the shackles of the past: we've got people with legacy thinking, we've got the digital natives, we've got these two schools of thought in our organization, and we needed someone to adjudicate and almost mediate for us across these parties — and the vendors have been fantastic, thanks. Thanks very much, Mark, and a whole bunch of the advanced services teams inside Cisco played a massive role in that. And I think we do this all the time — I imagine a lot of organizations do. We use Cisco's advanced services team to work through our data center architectures with us and to help us in the deployment, and they've been with us throughout. I mean, literally, we sat in Las Vegas at Cisco Live this year in a quiet room fighting it out. So I think we have a photo of all the protagonists who couldn't agree for many months.
We have a photo of us with this piece of whiteboard paper and the mediator of our drama. So I couldn't be more grateful to them for that. That's really the journey we've taken. There's probably a lot more — when I was preparing for this, I had to acknowledge that I could probably wax lyrical on every single one of these line items for about two or three hours. So I'm happy for anyone to catch me afterwards to go through how we actually drove this out, but it's been a fantastic journey and we're quite happy with where we are now. It's now just about making our customers happy. Thanks.

Well, good morning, everyone. I have been told that I have to be very brief, so I'm going to try to get to the point. I know most of you know BBVA, but for those who don't: BBVA is a global financial services group with a presence in 35 countries around the world. We are about 130,000 employees and we have about 67 million customers. What we are doing in BBVA is building a global cloud to deploy in the important countries where we have a presence. We were focused on the key principles for installing this cloud. It has to be completely automated, in order to implement the same solutions in these principal countries. It has to be self-service, providing the APIs for the developers who use it. Of course, it has to be open source, and one of the main goals was to reduce costs. It also has to be data-centric, where all the data is in real time inside the infrastructure. It has to be reliable, so the infrastructure can be changed very, very fast without any outage. And of course, it has to be secure, which is very important for us as a bank.
As you can see, we have to combine our cloud with our traditional IT, because most of the services that this cloud is going to consume are in the current IT. Because of that, we have built two separate fabrics, in order to isolate the changes we make in the cloud due to the OpenStack versions we run. And thanks to the solution we have with ACI, we can balance the hardware between these two platforms and take advantage of it. As we grow the cloud and shrink the current IT — that is the objective — it's very simple, because we can move the leaves we have in our CPD (our data center) from one world to the other. We have divided our CPD, as I told you, into two fabrics, and now in Spain we have about 150 leaves, the small switches. So, when we started this new project, we thought, okay, we have to find a new network that could accomplish the goals we have for the new cloud, and in the end we decided to install ACI. We needed a distributed Layer 2 network and distributed Layer 3 anycast gateways for the traditional IT and also for the cloud, and for the cloud we also needed distributed NAT, distributed floating IPs, distributed DHCP, and distributed metadata. We were, as I told you, very focused on security, so we needed an infrastructure and a solution in which you can embed the security, to avoid putting so many security devices around the CPD; thanks to this solution, you can create contracts to isolate the VLANs and the subnets you want in a very, very simple way. We were looking for automation — as I told you, it was a focus point — and we can do that thanks to Ansible, and we even have the possibility to use Chef or Puppet. And of course, the programmability.
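The contracts mentioned above are ordinary managed objects in ACI's policy model, so isolating two groups of workloads comes down to a small JSON document: a contract (`vzBrCP`) with a subject (`vzSubj`) referencing a filter. A hedged sketch follows; all the names are invented for illustration and not BBVA's actual policy.

```json
{
  "vzBrCP": {
    "attributes": {"name": "web-to-db"},
    "children": [{
      "vzSubj": {
        "attributes": {"name": "sql-traffic"},
        "children": [{
          "vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "allow-tcp-1433"}}
        }]
      }
    }]
  }
}
```

Endpoint groups that provide and consume this contract can talk on the filtered ports; everything else stays isolated by default.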
We wanted to configure our infrastructure in a programmatic way, and we can do that using the API, which is accessible via JSON or via XML, or you can even configure the infrastructure using Python. We were looking for integration with third parties. Now we are using OpenStack, and we were looking for the integration with it, but we also have the possibility to use vSphere or Hyper-V, and we can integrate using physical domains as well, because we were very, very focused on how to integrate physical servers, any bare metal servers. Our idea, as I told you, is to move the services that we currently run in the current IT to this new global cloud, so for us it was very important to move bare metal servers very easily to this new world. And of course, you can install external routes in a very easy way, to provide the Layer 3 routes and avoid the traffic going outside our CPD, or even between CPDs. Apart from that, we wanted to optimize the infrastructure and give high performance to our CPD. We can do that because we can extend the VXLAN solution to the top of the rack, and we can even avoid putting a lot of devices in the CPD — for instance firewalls and load balancers — because we can redirect the exact traffic we want to these devices rather than putting every single device in each rack. We can build a kind of service graph to send the specific traffic we want to these devices. We were very focused on the integration between the overlay and the underlay. We can do that because ACI can talk with our UCS hypervisors just by installing an agent, the OpFlex agent, which talks with the OVS via OpenFlow.
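Configuring the fabric "via JSON or via Python", as described above, generally means talking to the APIC REST API: authenticate once, then POST JSON managed objects. The sketch below shows that general shape; the host name, credentials, and tenant name are placeholders, not BBVA's environment.

```python
# Hedged sketch of driving the APIC REST API from Python.
# Host, user, password, and tenant name below are all hypothetical.
import json
import urllib.request

APIC = "https://apic.example.com"  # placeholder APIC address

def login_payload(user, password):
    """Body for POST /api/aaaLogin.json (APIC authentication)."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": password}}}

def tenant_payload(name):
    """Body for POST /api/mo/uni.json -- creates or updates a tenant."""
    return {"fvTenant": {"attributes": {"name": name}}}

def post(path, payload, token=None):
    """Send one JSON object to the APIC and return the parsed response."""
    req = urllib.request.Request(
        APIC + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    if token:  # session token from a previous aaaLogin call
        req.add_header("Cookie", "APIC-cookie=" + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Typical flow (not executed here):
#   session = post("/api/aaaLogin.json", login_payload("admin", "secret"))
#   post("/api/mo/uni.json", tenant_payload("cloud-tenant"), token=...)
```

The same objects can be pushed as XML, or built with Cisco's Python SDK; the JSON-over-HTTP shape above is the lowest common denominator.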
And finally, we were focused on easier troubleshooting and management. Instead of configuring each device individually, we wanted a single management console where you can see the fabric as one enormous virtual switch, and where you can add or delete switches in a very simple way. And that is the summary of the main problems we had to solve. At the beginning it was a very hard road to install this new solution, because it was a new solution for all of our network people. But thanks to the engineers from Cisco, who were very focused on helping us develop it — especially Hector Fernandez, who did a very good job with us, and I hope he will do the same in the future. So thanks a lot to Hector and to all the engineers from Cisco. That's it. Thank you very much.

Hi, colleagues. I want to share my experience of cloud. We are one of the largest telecom companies in Kazakhstan; we maintain thousands of miles of network and hundreds of points of presence — a big network. Today we hardly gain new customers, because the penetration of telecom services in our country is very high, very near to 100%. So we must find new revenue for our business. We investigated our market for new services for our customers — for business customers, for operators — and we came to the conclusion that OpenStack is definitely the way to go. One moment. This is our network on the country map. And now I want to tell you a few words about our company. In our work, we have four main principles, and all these principles are on our standards slide. First is customer focus: we work for our customers. Then responsibility, innovation, and professionalism. When we looked for a solution for our cloud, we wanted to find one that matches these requirements. And in our market, two big companies work: Cisco and Red Hat. These companies match this requirement because they have a relationship and have created a Cisco Validated Design between Cisco and Red Hat. That was very important for us.
For our cloud, we use Cisco equipment at the physical layer. We use compute from Cisco, the UCS; we use the network, the fabric; and storage based on Red Hat Ceph, from the Cisco and Red Hat companies. For virtualization, we use the KVM hypervisor from Red Hat and SDN from Cisco. For the cloud operating system, we use OpenStack from Red Hat. This validated design gives a big advantage, because we created our cloud easily and with minimal risk. That is very important for us — it is a complex system for our customers. We use this system not only for ourselves as a SaaS service; we also use it for creating new services for our business customers and government customers. Now we have Windows protection based on the cloud. We have managed Wi-Fi. We have a DPI service — many different services for our customers on one reliable cloud platform. Now we have an advantage: we can create a test, a proof of concept, create a new service, and launch it to market in a very small period of time. The last service, which we launched a few days ago, is a CDN network for a video-on-demand service. We are now at the beginning of our journey, and I think in the future we will create many services for our clients on our cloud platform. That's all — thank you for your attention.

I think at this point what I wanted to do was give everyone the chance to ask questions, if we have questions for the panel. I know we don't have a ton of time left, but we do have a couple of minutes if folks want to ask a couple of questions of anyone, or of the group here. Feel free to come up to the mics if anyone has questions. Well, otherwise I'm going to ask a quick one myself. You guys actually covered a lot of the topics that I was planning to ask you about, but since we only have limited time, I wanted to see if you have any advice for folks in the audience who are thinking about doing OpenStack the way you already have.
What kind of one or two key learnings have you had that you would advise people to think about as they embark on a journey like yours?

I don't mind — I'm going to tell you. Okay. I didn't focus on this because I focused on the technical part, but it's very important how the engineers from the different departments have to work all together. It's true, when I heard about DevOps and what it means — exactly, it fits very well in these solutions. What I mean is that in the traditional IT, all the departments were very focused on their own job: networking, security, compute, or whatever. And now, in cloud solutions, if you want to deliver cloud services, you have to work all together as just one team. It's very important: in order to reach the goals, to succeed, all the groups have to be working together and not as silos.

I agree with everything that my peers said here, but I think the one thing that we as an organization did is that we got ourselves very confused between services that were below the line and services that were above the line — services that are servicing your consumer. The reason that happened to us is that we are actually our own service provider and our own consumer at the same time. And I made a comment in my presentation that said you've got to drop the shackles of the past, and that's what I meant by that. Firstly, the silos are an issue, but you've also got to stop thinking that you have to treat something the way you've always treated it because that's the way you've operated and that's how you are. You really need to encourage everyone to totally, totally rethink — not based on what you know, but based on the outcome you're trying to achieve.
And very often you have to shake somebody and remind them that they need to think about whether they're answering a question above the line, as a consumer, or below the line, as a service provider to your organization. We didn't do that right up front. As I mentioned, we'd been on the journey for a while, and when we had to re-evaluate our approach, one of the things we did was stop ourselves and say: what happens below the line is our first focus — assuming, of course, that we are meeting the above-the-line consumer demands based on what the client wants.

Well, do you guys have anything to add? — Taking a look back on how our journey went with OpenStack, I would say that limiting scope is a good thing when you begin. OpenStack really gives you a lot of toys, and the customers usually want all the toys, but it's also very important to actually get such a big distributed system rolled out and running stably for your customers, because that's always an expectation on their side. You need to limit yourself a little bit, at least in the beginning. So pick a scope you want to deliver and chase that scope; don't try to broaden it every time a new release comes out, but first get it rolling, and then think about gradually adding things. That's a lesson learned for us. Our team is not infinitely big and we are working very, very hard to get the workload done without stalling our customers on new features, but it's a hard choice you have to make at the beginning: how big your scope should actually be.

Well, everyone, I think at this point we're about to get the nod that we're running low on time, but I wanted to thank everyone for coming today, and to thank the panelists for joining us and sharing their thoughts. I hope you stay for the next session. Thank you, Mike. Thanks, everybody. A round of applause for everybody, please.