Hello, everybody, and welcome to our session, which is enterprise cloud offering models and service strategies. We're here today from Cisco IT to tell you about how we deploy our internal OpenStack clouds for internal applications. And really, if I had to sum up what we're trying to do, we're trying to be the lean OpenStack machine. This is our team's mantra. We're really passionate about it, and we've even created the logo, as you can see here. This is what we try to live by. We're going to go on and cover our different offering models and how we really deliver the lean OpenStack machine. So I'm Rob Douglas. I'm the OpenStack program manager and product owner for the Cisco IT internal clouds. And I'm István Blasco. I'm a business operations lead for data center compute services in Cisco IT; Rob and I are on the same team. So let me introduce the agenda very quickly. As you can see, this session is not very technical. It's more about how we define and deliver the service within IT, how we execute on it, and what it means for us. Rob is going to quickly introduce our OpenStack journey at Cisco, and then we're going to talk about different aspects of the service, starting from the operating model, then the offering, support, and cost models. That's the gist of the presentation. At the end, we're going to talk about how it's all packaged up into the right user experience for the clients, and then we'll do a little bit of a demo of our application center cloud, which I'll explain later. Rob. Thanks, István. So first of all, I'm going to cover the OpenStack deployment within Cisco. I thought a good place to start was when I first joined the compute service in Cisco IT. The first thing I did was go through some training, webinars, and internal documentation about our global cloud strategy, which is the overall strategy for cloud computing from Cisco IT for internal use. Now, there were about 10 hours of training on that, lots of different webinars. Unfortunately, I don't have that kind of time today, so I'm just going to give you a very brief overview. The first and most important trait is that we're really focused on programmability now. We want automation, we want self-service, and we want speed of deployment, and we want this for our infrastructure and our applications. For our applications, we're moving towards cloud-native architectures as a big driver to get those applications developed with those traits. With that, we get the benefit of software-defined intelligence, so we can really take humans out of the loop. We want to get away from IT being a bottleneck, having to do manual things, where people get very frustrated because they have to wait a long period of time. And one of the big benefits we get out of this is we get to really use our capacity effectively. Rather than having to build multiple data centers or increase our existing footprint, we can really focus in on optimizing our current capacity. And underpinning all of this is our security and resiliency. We want that for our infrastructure and our applications. So what does that mean for infrastructure as a service? We have OpenStack clouds deployed, and we want to offer them as infrastructure as a service. There's a quote here on the slide from one of our chief IT architects, and he's defining what programmable infrastructure is. He didn't ask me to put this in, this isn't a plug for him, and fortunately I'm not having to pay him to use it. So I'm going to try and explain exactly what he's getting at. When we interact with the system, we want to remove humans from being involved; we can know about it, but we shouldn't be involved in delivering it. We want immediate change: we don't want to wait the minutes, hours, sometimes days or even weeks, to actually have something available for someone. They should be able to deploy in seconds. And lastly, we really want to focus on utilizing industry-standard APIs and get away from proprietary CLIs and GUIs.
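To make that idea concrete, here is a minimal sketch of what provisioning through the industry-standard OpenStack APIs can look like, using the openstacksdk Python library. The cloud name, image, flavor, and network names are placeholders invented for the example, not Cisco IT's actual environment.

    import openstack  # pip install openstacksdk

    # Connect using credentials from a clouds.yaml entry; "cisco-private"
    # is a hypothetical cloud name, not a real internal endpoint.
    conn = openstack.connect(cloud="cisco-private")

    # Look up an image, flavor, and network by name (all placeholder names).
    image = conn.compute.find_image("ubuntu-16.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("app-team-net")

    # Request a VM; the call returns immediately and the build happens
    # asynchronously, so we wait until the server reaches ACTIVE.
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status, server.id)

No portal, no ticket: the same call an operator could make from a script is the call an application can make for itself.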
So what does that give us? What's our benefit? We're focused on programmable infrastructure, and as we work with our application teams for them to deploy cloud-native applications, we get a cloud-native environment, which gives us benefits like resiliency, real-time analytics, auto-healing, and auto-scaling. So here's an overview of the kind of organization we have within Cisco IT. We're part of global infrastructure services, and data center and platform services is our larger organization. It offers a portfolio of different services: storage, middleware, a containers team, and, most important from my perspective anyway, compute. Our team deploys bare metal, which is the foundation for all of our different virtualized offerings. We have a virtual infrastructure available for traditional applications, for our non-cloudy applications. And then in the middle here, we have our private cloud, which is all based on OpenStack, and that's the focus for today. One thing to add about the future: today we don't have the ability to provision our applications through Cisco IT onto a public cloud, so we don't have the ability to burst. That's something we're working on and something we want to get for our application teams as well. So, to give you just a quick overview of how we've implemented OpenStack within Cisco IT: we first started in 2013, and we really wanted an environment where application teams could try out OpenStack. So we created a POC environment, which we termed our express environment, and it was the proving ground for the applications. They embraced it and adopted it. Our hardware footprint, listed here, started off very small and has grown up to today to be quite large. As we went ahead and deployed OpenStack, we made it available for our production applications. We provided high-priority support, and we put it into multiple data centers to meet the clients' needs. Then, up to last year, we upgraded our clouds; we're up to Juno. And then we stopped. We've put them into maintenance mode now, and we're not deploying any new features on those clouds. Instead, our strategy is to deploy new clouds on a newer code base, really focusing on the software-defined networking we have available at Cisco. We're looking to add new features there and then make those available for our application teams. And then into the future, we'll migrate our existing applications from Juno onto the new clouds, take those clouds onto later versions, and also look to expand to more of a global footprint. So, as you can see here, these are the locations of our current data centers. We have around 3,600 VMs in our Allen, Texas data center. We have around 400 VMs in our Richardson data center, which is also in Texas.
And we have about 1,000 VMs in our Research Triangle Park, North Carolina data center. So that adds up to about 5,000 VMs or so. Obviously that changes; applications grow and shrink their footprint as needed, but we hover around 5,000 today. For the future, we're looking into potentially having a cloud in Amsterdam and in Bangalore. It really depends on what our clients need. So you've seen a kind of potted history of where we are with OpenStack and our deployment. Let's move into how we actually deploy our OpenStack service. I like to term this "models, models, and more models." We're going to cover four of our models today: our operating model, how we internally operate our OpenStack teams; our offering model, what we present to our clients; our support model, how we support our platform and our clients; and lastly, which is very important, our cost model — how do we reclaim the money to be able to pay for this service? So the first is our operating model. We run a DevOps organization. We have an OpenStack virtual team under this umbrella here, with four teams that work together in a DevOps model. So let's take a look at this ops model in more detail. It's obviously a bit of a complex diagram, and I'm going to take you through a few layers of it. First of all, where do our requirements come from? We start at the top, from a client-driven perspective. We have a customer success team and some product managers who meet with clients and stakeholders and take their requirements. They then feed that down into the middle layer, which is our strategy layer, so we can understand what our clients need. We then have a scrum structure in our development layer. These are co-mingled engineers and developers from our development team, our operations team, and our architecture and design team. They assess what technology is available and feed those requirements and that information up to the middle strategy layer as well. Now, to deploy the different features on our clouds, we're currently split into three scrum teams: a scrum team on cloud design, a scrum team on metrics and enhancements, and a scrum team on monitoring and internal custom tooling. These teams, as I said, are co-mingled from the different resources; we don't want any siloing amongst our different groups. And they work towards developing the features. Once a feature is developed, it's released through a release process, it goes into production, and clients can then consume it. But the scrum doesn't just step out at that point and go on to something else. They still stay responsible for supporting that feature until we get through quite a detailed definition of done. Once that is complete, our operations teams can take it on fully and the scrum teams can go back to developing other features. We also have two extra scrums. We run all of our OpenStack work, whether it's process or technology or anything else, through user stories, tasks, features, and epics; we really want to focus on that agile model. So we have a platform break-fix team, which focuses on short-term fixes and short-term changes. They don't interact with customers and clients; instead, they receive requirements from our support organization to do any changes.
If it's going to be a long-term change, then that's pushed across to one of the other scrum teams and they work on it instead. We also have a process scrum, which focuses on making sure we keep our processes up to date and that documentation gets done, whether it's end-user facing or internal. We want to make sure that still gets done and doesn't get left behind. We also have an organization layer for our scrums. We have scrum masters for the ceremonies, and we have a proxy product owner for each of our scrums. We don't want to hold back a scrum while they wait for prioritization or for needed user stories. Instead, we want someone in that team who knows what the priorities are, makes sure the needed user stories are created and prioritized, and then, once they're developed and completed, can accept them as the proxy product owner. They get their direction from the middle layer, where you'll see the chief product owner. That's me, so obviously the most important piece on the slide. I set the high-level strategy; I take all the different feedback and all the different requirements and set that. I can provide roadmaps to clients and stakeholders, I can help prioritize things as needed, but I also make sure I feed back the general strategy and the general roadmap to the internal teams. That was feedback from those teams: they work in a scrum, and they want to know what's going on with the rest of the program. So we've got to make sure that happens, and we keep our external but also our internal groups up to date. So we've got an internal organization that can go forward and develop features on the platform. I want to go through just a brief overview of how we do feature enablement and the process we go through to make sure we can deliver features effectively. We run a growth-hacking kind of funnel approach. We take a lot of time identifying what our clients need and analyzing what they currently use and what we think they're going to use in the future, and then we get our feature list, what we're going to enable. At that point, we don't just say, okay, you want X feature, come back and see us in six months and we'll have it ready for you. Instead, we run something we call minimum viable experiments. That means taking the feature, putting it into a lab environment or an early view of it, and working with the client to take their feedback, make sure it's going in the right direction, and see if there's anything else they need. We can also, at the same time, use it to review our internal impact. If we're releasing a feature, we've got to know that we're staffed and able to support it, and make sure we understand what impact it's going to have. We can do that at that time, and that saves us from getting any surprises when a feature goes live. Once that's done, we move into full development mode and work towards a minimum viable product experience. We want to get something out there and then iteratively add to it. At that point, we can still continue to get feedback and make sure we're going in the right direction. Our clients will already know about this, because we'll have worked with them on the feature. They'll still need some education, but it shouldn't be a very long ramp-up time. That means we start to get our ROI in the last phase. As you can see, the graph goes up quite steeply for client satisfaction, because they know about the feature and can start to use it.
They won't have to spend a long time ramping up or learning about the feature. So, because we spend a lot of time with our clients understanding what they need, I wanted to give an overview of what our customer feedback mechanisms look like. The first thing we do is spend a lot of time, as I said, talking to our clients, understanding what they use and what they need. We have a customer success team that meets very regularly with our clients, on a set cadence. Our largest, most strategic clients we tend to meet more often, but we do meet when they need it and when they want it, and if a client wants to reach out for anything, they have a mechanism to engage us. From a high-level program perspective, we have a quarterly webinar that we give to our clients and to our internal teams. We give them information about new features, about the different things we're working on, and a roadmap update, and we also provide training and overviews of the new features so clients are aware and can potentially start to use them very quickly. We have a twice-yearly customer advisory board where we meet with a select group of clients — we can't meet with them all at the same time — and it's a deeper dive into what they're working on, what they're expecting to use, and what their future plans are, and then we can share our future plans and make sure we're working in the right direction. From an end-user perspective, which is the day-to-day piece, we provide a lot of training and a lot of documentation to our clients, and we also have a regular email communication mechanism. We send updates about what's available, what's coming, and whether there's any maintenance required. Also, unfortunately, we do have to communicate if there are any outages or any impact; we want to make sure they're aware so they can react as needed. But we also make sure we close out issues and let them know when something is fixed. That was a common piece of feedback from the clients: we're very quick to tell them something's broken, but we never tell them we fixed it. So we make sure we do that now. So here are a couple of snapshots of our training information and the documentation we provide our clients. We have an internal community where we publish process information; it's hosted on an internal system. We give them information about how the different processes work, things like funding and support, and we also provide them a lot of training information. Our clients can use a self-service model to set up their OpenStack projects and then provision their infrastructure, so we want to make sure they understand how to use our self-service tooling, and we then give them some basic information about OpenStack. If they're new to OpenStack, we give them brief information about how to set up their first VM or project. At that point, if they want to get more advanced, we don't recreate that information; instead, we link them off to the community and the industry information and they can use that. There's no point spending time and effort recreating that great material. So that's how we staff internally, the things we offer our clients, and the engagement we have with our clients. Ultimately, we're there as an infrastructure provider; we want to provide them the resources they need for their applications. So when they come in, we want to give them different support offering models.
Basically, these models define what happens when they set up their infrastructure and how they can utilize some of the IT processes. This has been a progression. When we first started, we had self-managed as our first offering, which was for our express, POC clients. Then, as production applications came on board, we offered an IT-managed offering, which was more of the traditional thing IT has previously done for infrastructure. But as times have changed and we've moved towards really emphasizing programmability, APIs, and building that intelligence in, we've come up with our new offering model, which we call managed cloud, and this is really focused on programmability. So here's a comparison of what the three options cover, and I'm going to give just a little bit of detail about each one. The least intrusive and the most open is our self-managed. IT just supports the platform and provides the new features. The clients are responsible for supporting their own virtual machines and their own operating systems. They have freedom to do whatever they want; they have full access; they can install any operating system that they choose. But they have some responsibilities: they are responsible for that VM, so they're responsible for security compliance. Then on the left-hand side here, we have our IT-managed offering, which is our most intrusive. This is much more traditional, what we've provided from IT for the non-cloudy applications. IT provides full operational support and has full ownership of the virtual machines. That means we're responsible for all the compliance aspects; we'll support any issues, and we'll own issues. The problem is it's the most intrusive. We don't allow root access to the VMs. We make them use very particular IT-approved operating system packages. We let them create their VMs, but they can't change the VMs after that; they have to engage IT to make any changes. With the push to programmability, that's not working out for our clients anymore. They need to move away from those traditional support options, but they still need IT support. A lot of our application teams are experts on their applications; they're not sysadmins. They do need some help with their virtual machines. So what we do in this model is kind of a hybrid of the other two. We provide operational support, but we don't take ownership; application teams own their virtual machines. They can do certain things themselves, they have higher levels of access, and they can use orchestration tools. And if there's an issue, they can engage support, but we don't actually own that issue. That's a big, important distinction now: we're trying to get our application teams to take some ownership. They're responsible for compliance; it's not IT having to do all the work for them. So that means, as we move forward with our global cloud strategy, we're deprecating our IT-managed offering. We're not going to have that anymore for our clients. We are working towards just having the client teams able to use IT-managed on our more traditional virtualized infrastructure. That means if they want it, they can have it, but they won't get the features they may need and get with OpenStack. And here are some examples of the documentation we provide about our offering models. We want to give our client teams as much information as we can about the offering models and make it very clear where accountability sits and what can be done in each model.
We give them process flows so they can really understand what's going on, what they need to do, and how they can engage IT. We do provide some guidelines about cloud-native architectures. They're only guidelines, we don't enforce them, but we want to try to give our client teams as much information as possible. The largest challenge we're going to have is with our traditional IT clients who have been using our infrastructure for a long time; they're kind of used to the IT-managed support. They really want to use APIs, they really embrace cloud native, but they still have some expectations of IT as well. Now, we're not just going to cut them off and say, okay, you're now managed cloud, we don't have to help you with this or take ownership. It's a journey, a progression we're going on with them. We're giving them a lot of education, but we're finding the more we do that, the more empowered they feel, and they're really starting to adopt it. So with that, I'm going to hand over to István, who's going to take you through the support model. Thanks very much, Rob. Yes, let's look at the support model. Everything that Rob just explained means that we need to fundamentally change how we look at service support and service delivery. There's a shift away from being the middleman between the infrastructure and the clients. Now we want to be simply the enabler, just the infrastructure provider, without the middleman. That means that for the client, all of this should be a service, and we only provide platform support by default. We are abstracting ourselves from whatever is on top of the platform, and we give the responsibility to the client to build their application in the way that works best for them. That means that if they engineer their applications smartly, they will probably be on the cheapest option, because they will set up their applications to automatically self-heal if needed, scale up, scale down, or scale horizontally. It's easiest for both sides if the clients engineer in a smart way instead of over-engineering, because over-engineering is going to cost them money. Also, if they don't adopt the cloud-native application mindset but still want higher-SLA support — if they still want support — then they will have to pay extra for that. So that's the change: now we are simply the infrastructure provider instead of being a middleman who slows things down. Let's look at how exactly the engagement model works for self-managed and managed cloud. First of all, I want to highlight that it's always very important to make your end-user-facing instructions as good as you can make them, because in my experience, in many cases service teams neglect this. They don't put enough effort into it, and the burden ends up on your support teams. If you put in enough effort and make a very good user experience right at the top layer, which is the self-service pages, then you're going to save a lot of time and effort and, most importantly, money when it comes to support. Rob showed a couple of example pages that we have. On the self-managed side, all we do in tier one support is support the provisioning tools, nothing more. On the managed cloud side, there is more support: break-fix issues, OS compliance help, and help with upgrading the operating systems. So you get all of that support if you're on managed cloud.
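As a rough illustration of the self-healing idea mentioned above — an application team automating recovery through the platform APIs rather than paying for higher-touch support — here is a minimal sketch using the openstacksdk Python library. The cloud name, server names, and the recovery policy are all invented for the example; this is not Cisco IT's tooling, just one deliberately simple way such a loop could look.

    import time
    import openstack

    conn = openstack.connect(cloud="cisco-private")  # hypothetical cloud name

    WATCHED = {"web-01", "web-02"}  # placeholder servers owned by the app team

    # A deliberately naive "self-heal" loop: restart servers that have stopped
    # and hard-reboot servers stuck in ERROR, instead of opening a ticket.
    while True:
        for server in conn.compute.servers():
            if server.name not in WATCHED:
                continue
            if server.status == "SHUTOFF":
                print(f"{server.name} is down, starting it")
                conn.compute.start_server(server)
            elif server.status == "ERROR":
                print(f"{server.name} is in ERROR, hard-rebooting it")
                conn.compute.reboot_server(server, "HARD")
        time.sleep(60)  # re-check every minute

A real application would make smarter decisions than this, but the point stands: the recovery logic lives with the application team, not with an IT support queue.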
There are multiple support tiers. Tier one we call the service operations center, and then we have solution groups at tier two. There's also something we call the operations command center, which is there for those applications that are marked P1 in terms of criticality; they oversee and try to accelerate resolution for those P1 applications. And then when we come down, or escalate up, depending on how you look at it, we escalate to platform support at tier three. That's where you have your OpenStack operators and your OpenStack SMEs — the platform-level support. That brings us to the one that is most important to me: the cost models. I'm going to explain four aspects: how we look at and analyze our TCO; from there, the user-facing price list — what the user sees out of all these numbers; then how we actually process charges and do billing; and then I'll call out a couple of challenges that we need to deal with. The number one thing for us is that we are IT. We are a cost center. We are not here to make a profit; we are here to enable the rest of Cisco to create applications and run them on our infrastructure as cheaply as possible. What that means for me when I look at creating our pricing is that I need to understand absolutely 100% of our TCO, all the cost drivers, and that's the data I use to create our prices. I'm now working on a shift from what we have been doing up until now to what we're going to change to for the next fiscal year. But I don't think there's an absolutely right or wrong way to do it. I was at another session earlier this week and saw another approach which I didn't have any problem with, and that's what made me realize there's not really a right or wrong way to do this. I simply went looking at what our fixed costs are. We have the data center, we have hypervisor and other license costs, and we have foundational support and platform support, which I'm going to put in the fixed bucket. Then there is the actual hardware resource, for which we make unit costs for CPU and RAM, and that's going to be variable from the client's perspective, depending on how much infrastructure resource they use. On top of that, there's a cost for the OS, the different license prices. And then I also need to go and see how much support costs us. What the clients see is here on the next slide. This is really the client's perspective of it, which I originally called the compute service menu; you can call it the compute service price list. The point is, it needs to be very simple and consumable from the client's perspective, but at the same time reflect what I find in our TCO and make sure we collect back all the money we need to cover our expenses. So in terms of OpenStack, there's going to be a set of flavors available, and based on those flavors, there are going to be prices. A client, let's say, would go buy, I don't know, two 4-by-4 VMs, and they could look at this menu, which is pretty much like — this is my association — the noodle bars we have in Europe, where you first pick the type of noodle, then the sauce, then the meat, and so on. For me, the noodle is the server here; that's the most important base element in the pricing. On top of that, if we're talking about a new VM, then a compliant operating system on it is not going to be an additional cost.
If the client doesn't need any additional support — completely self-managed — then again, no additional cost; I build all of that cost into the OpenStack flavor price. However, if we're talking about a VM that has already been out there for a while and the operating system might be out of compliance, then we build in a penalty charge for that. Also, if they opt in for managed cloud or even additional support, then it's going to be more money for them, depending on how much we're spending on those support teams. And if we're smart enough, we could build the hidden cost of our Ferrari into that, but that's not something I do yet — I'm not planning to. Storage cost: Rob explained that we are the compute team, so storage isn't us. We just ask the storage team what their prices are and build them into our price list; they have their own TCO and come up with their own prices, and there are three different offerings. How do we process collections and billing? Basically, we mostly just check the active VMs and associate the price. But what's very interesting for us is that we have some central IT funding for the service, which means that we must provide infrastructure to a certain extent for free within Cisco, and only when we cannot fund something that we need in order to fulfill demand, that's when we actually charge. For the centrally funded element, we still do something we call showback. Every time we absorb a cost, we show every service consumer how much we're absorbing on their behalf, and they need to report up on that, because higher-level management and our CIO will look at it and say: I know it's not your cost, but you're causing this much cost to the compute service, so I still hold you responsible for trying to push it down. But that's still different from actually charging the department. And if something is outside the central funding, then we actually do the charges. Which brings us to the first and biggest challenge I have when I look at our collection processes and our pricing and billing: how do I identify and differentiate between what I'm going to offer for free and what is actually going to be charged for? What we did in the past for a long time was just dollarize everything: for every offering, we checked how much it costs, and we know our IT funding; up to that point, it's free to the client, and above that we charge for it. But we had various issues with that, and we started to think that maybe a better idea is something we call a capacity model, which is more like a data plan for your mobile service: up to a certain amount of CPU and RAM usage, you get your infrastructure for free, and you pay above that. You can also differentiate, so the infrastructure element is free but the support is charged on top of it, and so on. So this is still not completely decided for us for next year, how we're going to do the differentiation.
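To make the flavor-plus-capacity-model idea concrete, here is a small sketch of the kind of calculation being described. All the rates, allowances, and flavors below are invented for the example and are not Cisco IT's real unit costs; the point is just how a price could be built from CPU and RAM unit costs plus a fixed overhead, how a "data plan" style free allowance might be applied, and how the result splits into showback versus chargeback.

    # Illustrative only: all rates and allowances are invented for this example.
    CPU_RATE = 8.0            # $ per vCPU per month (variable hardware cost)
    RAM_RATE = 4.0            # $ per GB of RAM per month
    FIXED_OVERHEAD = 5.0      # $ per VM per month (data center, licenses, platform support)
    MANAGED_CLOUD_FEE = 15.0  # $ per VM per month if the client opts into managed cloud

    # "Data plan" style free allowance, absorbed by central IT funding.
    FREE_VCPUS = 8
    FREE_RAM_GB = 16

    def flavor_price(vcpus, ram_gb, managed=False):
        """Monthly price of one VM of a given flavor, e.g. a 4-by-4 (4 vCPU, 4 GB)."""
        price = vcpus * CPU_RATE + ram_gb * RAM_RATE + FIXED_OVERHEAD
        if managed:
            price += MANAGED_CLOUD_FEE
        return price

    def monthly_statement(active_vms):
        """Split a client's usage into showback (absorbed) and chargeback (billed)."""
        total = sum(flavor_price(v["vcpus"], v["ram_gb"], v.get("managed", False))
                    for v in active_vms)
        used_vcpus = sum(v["vcpus"] for v in active_vms)
        used_ram = sum(v["ram_gb"] for v in active_vms)
        # Fraction of the usage that fits inside the free allowance (simplified).
        covered = min(1.0, FREE_VCPUS / max(used_vcpus, 1), FREE_RAM_GB / max(used_ram, 1))
        showback = total * covered     # absorbed by central funding, but still reported
        chargeback = total - showback  # actually billed back to the client department
        return showback, chargeback

    # Example: two self-managed 4-by-4 VMs plus one managed-cloud 8-by-16 VM.
    vms = [{"vcpus": 4, "ram_gb": 4},
           {"vcpus": 4, "ram_gb": 4},
           {"vcpus": 8, "ram_gb": 16, "managed": True}]
    print(monthly_statement(vms))  # roughly half shown back, half charged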
The other factor is that if we really go with simply pricing out the exact cost, we're going to run into an issue. Today OpenStack is new technology for us, and we're investing a lot of money in it; it's simply more expensive for us than running our traditional VMware environment, for example. And if we want to reach economies of scale, then we need to think about incentivization strategies, which are probably going to impact our pricing strategy. Maybe we want to say, okay, let's inflate the price of the old infrastructure and bring the price down on the new one, or say that certain sizes of VMs on OpenStack are going to be free, or that the infrastructure element is free but the support isn't. That's something for which we don't have a Cisco way defined right now, because we're still in discussions. And the third one, which I think is a bit smaller compared to the previous two but is still a challenge, is that today we're running OpenStack on blades and on rack-mount servers as well, and obviously, cost-wise, there's a difference. But I don't want to introduce that complexity into the price list — hey, if you're on a blade, you pay differently than if you're on rack mount — and it may not even be your choice. So that's another thing we somehow need to solve. So that brings us to the end of the models and to ACC, our application center cloud, which I'm going to demo in a second. I just wanted to explain that, again, we're all about self-service in this new IaaS approach. We provide OpenStack Horizon to the clients to create and manage their VMs, and we provide the APIs to the infrastructure so the applications can interact with it directly. But we still need something else that's very specific to us at Cisco: we have rigid service mapping requirements, there are financial approval processes, and we need to make sure that all applications are registered in our application portfolio. So we have ACC, which brings together all these models that Rob and I explained. And I think right now I'm just going to jump into ACC and show how it works for us. So, if all goes well — I'm not a Mac user. Yeah, I'm just going to come over here. All right, there you go. The client can come here, to the landing page, click on register, and simply put in the application name, which I'm going to make OSS 2017 for now. They need to pick their tenant, or in other terms the service category, because, as I said, we make sure that every application is mapped to a service. We do TCO calculations at the service level, so if a cost occurs for this application, it needs to somehow be mapped back to a service. So let's say I pick employee experience and collaboration, and then, say, email and calendaring. What I'm doing here is not going to make any impact on the infrastructure; this is really just registering an application. The go-live date can be as early as today. And what's really important for us is to add the funding requirement for it. Let's say the application team or the developer here says that maybe this will go up to 10,000, and that's the money I'd like to ask for from my financial analyst. This is really just a quota. The financial analyst then approves it, which I can't demo here because we didn't bring our FA, our financial manager, so I'm just going to open up an application that's already approved. Once it's approved — and if we go with the 10,000 example — that's what's available for this application. But if you don't use it, you don't use it. Here there are four profiles defined. I'll pick this one, which I created for the Cisco Live demo in Berlin a couple of months ago; it runs in Richardson and it's self-managed. The profile itself doesn't run anything, but this instance is actually running in that data center — it's a two-by-four VM. If you want to add more, ACC simply tells you to go to Horizon and start adding your VMs, or you can just do it through the APIs. And every change you make to this profile — you create more instances, you resize them, you change them — is going to show up here, and it's going to have an impact on your budget as well.
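ACC is an internal Cisco tool, so the sketch below is purely hypothetical and not how ACC is actually implemented. It just illustrates the quota idea from the demo: the approved funding acts as a cap, and each new VM's projected spend is checked against what is left of it. The class, names, and numbers are all made up for the example.

    # Hypothetical sketch of the funding-quota idea behind ACC, not its real
    # implementation: approved funding is a cap that projected VM spend
    # is checked against before provisioning.

    class ApplicationProfile:
        def __init__(self, name, approved_funding):
            self.name = name
            self.approved_funding = approved_funding  # e.g. the 10,000 from the demo
            self.committed = 0.0                      # projected spend of existing VMs

        def can_provision(self, monthly_cost, months_remaining):
            projected = monthly_cost * months_remaining
            return self.committed + projected <= self.approved_funding

        def record(self, monthly_cost, months_remaining):
            self.committed += monthly_cost * months_remaining

    # Example: an application approved for 10,000 wants a VM costing 148/month
    # (the managed-cloud 8-by-16 flavor from the earlier pricing sketch) for 12 months.
    app = ApplicationProfile("OSS 2017", approved_funding=10_000)
    if app.can_provision(148, months_remaining=12):
        app.record(148, months_remaining=12)
        print(f"Provision allowed; committed so far: {app.committed}")
    else:
        print("Request exceeds approved funding; needs FA re-approval")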
So, I realize we only have four minutes left and we need to give some time to questions, but that's not an issue, because that was the end of the presentation. How do I go back to the slides here? Right, yeah. There's a small conclusion here, which is simply that since the industry and the future are going cloud native, and that's our Cisco global cloud strategy as well, we need to make sure that we embrace it and that our application teams embrace it too. We need to think again about how we do service delivery, because we're fundamentally changing the service. DevOps is critically important to be able to iteratively deploy new versions of the technology. And of course, user experience must be at the center of everything; it needs to be simple, and that's still a challenge for us as well. And with that, I'll open it up to questions. Thanks very much. If you have any questions, please use the microphones on the side.

So are you also leveraging Cisco ACI for the networking piece?

Yes, that's right. The new Metacloud-based clouds that we're working on are going to be running on ACI, so it's OpenStack and Cisco ACI together.

And the user interface that you were showing for the demo — behind the scenes it's talking with the OpenStack APIs?

Yes, we currently use StackStorm to do the interaction with the ACI and OpenStack APIs.

Okay, thanks.

Can you give a ballpark of the headcount you have supporting this service?

Just to clarify, are you asking about platform operators, development — which kind of team are you thinking of?

Kind of everything, to support your models.

So for everything, I would say we have around 20 people assigned to it, and we split them between design, development, and the operators who are focused on the OpenStack platform. So it's around 20 people or so.

Thank you. If there are no more questions, then thanks very much for attending. Thank you, everybody.