[Unintelligible] …at 451 Research we've been diving in to investigate in depth the value of OpenStack against commercial offerings such as VMware and Microsoft, and the public cloud as well. What emerged was not a surprise, in that there is not any clear standout leader. We cannot say OpenStack is always better value in every case. [Unintelligible] …OpenStack engineers are really difficult to find at the moment, although hopefully this will increase with events like this.
And because of that issue, at the moment, if you're running a small-scale cloud and you're not managing it very efficiently, you're probably better off, controversially, using something like VMware or Microsoft. But if you are very efficient at managing it, if you have a high level of utilization, OpenStack gets increasingly better value. And in fact, OpenStack gets better and better value the greater the scale it is operated at. Now, OpenStack distributions are often used as a method for reducing cost. And we actually found that OpenStack distributions pretty much pay for themselves in terms of value. So if you're using an OpenStack distribution, it is always, practically always, going to be better value than using the OpenStack source over a long period of time. So there are loads of different nuances here, and I'm not going to go into all of them, because that would take forever. But what this means for me is that to have just a single cloud platform for everything is a bit of a dangerous strategy. And I believe end users, consumers, need to consider each different workload and work out where the best place to host it is. And obviously, this isn't just in terms of price and TCO; it's also in terms of security, in terms of geography. And increasingly, we're hearing from end users that a cloud-first strategy isn't now about going down a particular cloud route; it's about having the option of choosing different clouds. So the general advice I give to anyone is to consider a multi-cloud strategy rather than getting fully on board with only one cloud provider or one cloud option. And that is the basis of today's panel discussion.

So I'd like to introduce the panel. Starting from the left, please.

All right, yeah. So my name's Scott Sneddon. I'm with Juniper Networks. I'm a senior director at Juniper focused on software-defined networking, virtualisation and cloud technologies.
My team looks across all of the go-to-market and sales efforts at Juniper, and we work with customers, as well as with Juniper people, to help understand the challenges that customers have in moving to cloud and adopting these technologies. We spend a lot of time with telecom operators, helping them look at and work around some of the challenges of NFV and virtualisation and running their own cloud infrastructure. So we spend a lot of time business-modelling a lot of the same sorts of things that you've talked about here and are in your research report, but also around the impact on the network and network operations. And so, you know, being a networking company, we kind of look at it from the view of: how does automating your processes around managing network infrastructure help with that? And then, culturally, how do you evolve your teams so that the cloud and DevOps teams are working with the networking teams and that sort of becomes one consistent workflow? And we think that there's a lot of optimization that can happen there. We'd love to talk to you a little bit more, as you continue your research, about how network operational practices impact these economics that you're calculating around cloud utilization.

Hi, I'm Dan Dumitriu. I'm a co-founder and head of product of Midokura, and what we do is we make an SDN solution for network virtualisation for OpenStack and other platforms, primarily based on Linux. So our customers are actually in the enterprise space, with a few companies in the hosting space, not really any telecom as such. So, being in the network virtualisation area, one of the things that we attempt to do is to reduce the cost of network operations, at least for the cloud. And we actually might be moving into other network areas as well over time. Thank you.

Hi, I'm Kris Lindgren. I'm a senior Linux systems engineer with GoDaddy. I am on our OpenStack team, one of the founding members of our team.
We operate eight public and private clouds in four different regions, and we're pretty much in charge of everything on our OpenStack infrastructure: anything from deploying new services to spinning up new capacity to choosing flavour sizes, qualifying new images and making sure they conform to security standards. Pretty much everything that deals with OpenStack at GoDaddy.

Great, thank you. So I realise it's quite controversial, me coming here to an OpenStack summit and saying OpenStack isn't the lowest-TCO option for everything. Do you guys find that in your work in life? Do you think this multi-cloud ideal I'm talking about is something end users want, or is it just some analyst spouting numerical research?

From my view, what we see at Juniper is exactly that. We'll get called in, especially when we talk about Contrail and the SDN solution that we have, which is quite OpenStack-focused, a lot like Dan and Midokura, but also supports multiple cloud systems. So we'll get in with the customer because they say, hey, we want to go OpenStack, and we want all of these benefits, and how can we make the networking work better there? But as we go down that path of working with them, they start to realise: gee, that VMware environment that they had installed, that they'd love to bail on, maybe they can't, maybe there are some things that still need to live there. And come to find out, there's a development team off in the corner that's using Amazon really, really heavily. And as much as they want to pull that in-house, there are things that exist in Amazon and that ecosystem that they're somewhat tied to. And so the reality is that most customers we talk to, especially in large enterprise, are going to have a multi-cloud strategy for the foreseeable future. And so then what we get into is trying to help them figure out how to operate in these mixed environments a little bit better.
We try to guide them on the best path towards the most economic solution, but the reality is that different applications, different teams, different cultures fit in different ways.

Do you see the same, Dan?

Pretty much, yeah. I don't have a whole lot to add to that. I'm actually curious about GoDaddy.

Yeah, me too.

We do use multiple clouds, so we do OpenStack. We have some teams that we may have acquired that were already on AWS; they still have some workloads on AWS. When we make an investment in a region where we put a data center down, we'll put OpenStack in there, but there may be some regions where we want to put a presence but we don't actually want to put a full data center infrastructure there. So we've been looking at using other public clouds and doing workloads there.

Great. So obviously, if there are any questions, put your hands up. And there's a little microphone there, so if you'd just like to come up and ask, that's great too. Just a little tidbit of information I'll mention now, actually: in our research we found that OpenStack-qualified engineers were earning on average $40,000 more than VMware- or Microsoft-qualified engineers. So I would argue you've made a wise decision coming here. Most of the findings we had were driven by the expensive labour in OpenStack, and now there are certification programs and the like. Hopefully that will get better, but I can't see salaries getting lower soon. So, wise decision. So ultimately we found that private clouds that have a high labour efficiency, so that are very well managed, and private clouds at a high level of utilization, can actually beat public cloud on cost. And I was at a panel yesterday with some enterprise end users, Walmart were there, there were these huge banks, and they were saying they were achieving the level of scale necessary to actually beat public cloud on price.
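The break-even effect described here, where a private cloud only beats public cloud at high labour efficiency and high utilization, can be sketched as a toy model. Every number below (server cost, engineer cost, servers per engineer, the public cloud price) is an illustrative assumption, not a figure from the research being discussed.

```python
# Toy model: cost per *used* VM-hour in a private cloud, versus a public
# cloud list price. All inputs are illustrative assumptions.

def private_cost_per_used_vm_hour(
    server_cost_per_hour=0.60,   # amortised hardware + power, per server
    vms_per_server=20,           # VM slots per server
    utilization=0.5,             # fraction of slots actually in use
    engineer_cost_per_hour=60.0, # fully loaded labour cost
    servers_per_engineer=100,    # labour efficiency: servers one engineer runs
):
    # Hardware and labour costs are both spread over the VMs actually used,
    # so low utilization or poor labour efficiency inflates the unit cost.
    used_vms_per_server = vms_per_server * utilization
    hw = server_cost_per_hour / used_vms_per_server
    labour = engineer_cost_per_hour / (servers_per_engineer * used_vms_per_server)
    return hw + labour

public_price = 0.10  # assumed public cloud price per VM-hour

# A poorly utilised, labour-inefficient private cloud loses badly...
low = private_cost_per_used_vm_hour(utilization=0.2, servers_per_engineer=25)
# ...while a well-run, highly utilised one undercuts the public price.
high = private_cost_per_used_vm_hour(utilization=0.8, servers_per_engineer=400)

print(f"low efficiency:  ${low:.3f} per VM-hour")
print(f"high efficiency: ${high:.3f} per VM-hour")
print(f"public price:    ${public_price:.3f} per VM-hour")
```

With these made-up inputs the inefficient cloud costs several times the public price, and the efficient one comes in well under it, which is the shape of the break-even argument rather than its actual numbers.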
But obviously doing this in practice is hard work, and I would argue you need some kind of tools to actually increase the utilization, improve capacity planning, and to be able to automate things. Is that something you see? Do you think the tools are critical?

So, actually, a relevant question for us. I think that one of the inhibitors of OpenStack, especially among small and medium enterprises that will never have enough OpenStack-skilled engineers, that's a huge factor. And so some companies are attempting to address that. We work with Platform9, for example, which is basically doing much more integrated automation of the whole experience: OpenStack, storage, networking. I think that would be critical to go more broadly into the enterprise space.

Chris, what do you see?

I mean, tools are definitely a problem. From my point of view, it's managing a large fleet of servers. So we have a large number of servers and a bunch of internal network security zones. And mainly our problem is that there are other teams involved, setting up some of the network and some of the other things, and they miss things because they do not have things automated. So we have a bunch of tooling on our end to double-check other people's work. But for our end users, I think the biggest problem is they have legacy applications, and they've been on either bare metal or VMware, where the method of operation at that time was to make the infrastructure the VM runs on, or the physical server itself, highly redundant. And we're moving more towards making the application highly redundant, handling failures and so on in the application: exposing fault domains through availability zones and telling customers to split their app between availability zones so we can do better forms of maintenance. We can do a better job of letting them create applications that are more fault-tolerant to certain issues.
And that requires a large amount of philosophical change on their end. And that, I think, is the hardest part for us: just kind of keeping on preaching that your VM may be very important to you, but from our point of view, it's one of a couple of tens of thousands of them. So if we have a hypervisor that goes down, we're not doing backups on it. It's your job to keep a backup of your data. I don't know what's in your VM, so you need to be the one to make sure that your data is protected, because when we lose that hypervisor due to a disk failure or RAID failure or something, I don't have a backup of it. I can't restore it for you. So it's your job to make sure that you're protecting yourself.

So that's really interesting, just to pick up on that. Every time I see a report in the media about a cloud provider having downtime, and all these enterprises and end users are unhappy with the situation, I constantly think: well, they did tell you that it wasn't 100% available, and all the tools are there for you to build resilience, and there are multiple availability zones, but you didn't actually take that advice. Do you think this is more of a cultural change, then?

Yeah, I mean, absolutely. We tell people we're going to do maintenance on an availability zone a week in advance. And then we'll have at least four or five complaints about taking down a server in the middle of the day or at night or something like that, because they didn't handle the failover. But we gave them a large window to figure out how they want to handle that maintenance. I mean, it's cloud, so you have capacity on demand. We highly recommend that people automate the workflows; I think that's the biggest thing. I'm going to hate on some Windows people here for a second, but most of our Windows teams do not have good automation around spinning up an application or server. They don't do a really good job of that.
So they typically run through documentation or a workbook, or they try to make one server look like another server by going through settings and things like that. And you can't do that; just from a scale and turn-up standpoint, it doesn't work.

This is a really good point. If you attend Amazon's big conference, re:Invent, in Las Vegas next month, what you'll see on their agenda is a handful of classes on how to use Amazon and use their tools and their APIs. But the majority of their sessions are just teaching their customers how to build what will look more like cloud-native applications, and how to build around what's inevitably going to fail in, at best, a two- or three-nines environment. It's not five nines anymore. And I think we as an OpenStack community could probably do a better job teaching users, and teaching at this summit, a little bit more about application architectures for these clouds, because the assumption and the approach that we're just going to be able to bring everything into OpenStack that used to run on VMware or bare metal probably isn't the best approach. And we really need to help the users of these clouds evolve to cloud-native.

So it sounds like you're saying there's a skills gap, not only in OpenStack particularly, but in cloud in general?

Yeah, I would agree with that, most certainly.

And do you think the industry is resolving that?

Slowly but surely. I mean, we've seen more and more providers that are starting to develop that way. One of the side effects that we see of the container buzz is that people are starting to think about refactoring applications to run there, whereas when we moved to VMs, it was just take that thing and package it into another thing. But containers don't really fit that model very cleanly. And so, yeah, the container movement is really, I think, driving a lot of people to rethink application architectures as well.
Yeah, especially people who deploy, in my experience, on Kubernetes, because it forces you to break your application into those chunks already.

Well, just a bit of a deviation then. Do you think containers are the death of virtual machines? Let's say in five years' time.

I keep falling back on this, and these guys from Juniper have heard me say it a bunch of times. I sat on a software-defined data center panel about three years ago at an event in Silicon Valley. And next to me was a guy from UBS, the big banking firm. And all morning we had been talking cloud-native, and the legacy has to go away, and software-defined is the future. And he chimed in on that panel and said: all you guys have been saying legacy this, legacy that; in the banking world we call that shit that works. And frankly, there are a lot of mainframe-type things that haven't gone away, and we all hoped they would, but there's still a guy out there maintaining COBOL code somewhere. And I think VMware continues; virtual machines have a place and will have a place for the foreseeable future. Containers are a really great way forward, and maybe we start to see VMs taper, just like we've seen mainframes taper significantly, but they don't go away. It never dies, is my opinion.

Do you see the same, Dan?

Yes, and it occurs to me that what becomes really complex in that environment is networking. Sorry, self-serving plug here. But it's true.

Containers: worth the hype?

Yes and no. I don't think you're ever going to completely replace VMs. From our side, VMs are supposed to replace bare metal, but we have a huge interest in Ironic, and there are a number of companies out there who have a huge interest in Ironic. I think what people really want out of containers, more than anything, is that from an application standpoint, I just want to care about my code. I don't really want to care about the architecture or the infrastructure.
You give a bunch of software developers Kubernetes, and they don't have to care so much: they define their pods and their services and say, here are the things I care about, here are the front-end things that I want you to keep running. And it takes care of keeping everything running. I think that's what I see from our development teams, that that's what they want; anything else and you're giving them a level of complexity that they own right now but don't want.

Yeah, it's levels of abstraction, and automation is how you start to get there. And Kubernetes and these container ecosystems are another way to automate application deployment in a simpler fashion, where the developer or the user doesn't have to understand the infrastructure. VMs were a level of abstraction that kind of took that away. And, self-serving again, Dan and I are very focused on how the network automates and exists in an environment. So just like a VM lets you not worry so much about the infra, and the container lets you worry less, the network has to be pulled along with that. So the architectures of the network are represented in an abstracted way, so that you don't have to worry about topology so much as just the application framework that gets deployed. And the network follows that appropriately, with security policy and with connectivity models automatically created to go along with it.

I actually have a question for the moderator about containers. Have you done any cost modelling of that relative to the other platforms?

So, funnily enough, I have. The problem with being an economist in cloud is that the answer is never simple, and people like simple answers. But I did actually do an algebraic comparison of using virtualization against containers. And what I actually found was that, in terms of server efficiency, containers are always better than or equal to virtual machines.
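The claim that containers are always at least as efficient as VMs follows from simple arithmetic: each VM repeats the guest operating system footprint per workload, while containers share one host OS. This is a hypothetical sketch of that kind of back-of-envelope algebra; the footprint sizes in GB are made-up assumptions, not the figures from the comparison being described.

```python
# Back-of-envelope: space consumed by N workloads packed as VMs (each
# carrying its own guest OS image) versus as containers (sharing one
# host OS). Sizes in GB are illustrative assumptions.

def vm_footprint(n_workloads, app_size=2.0, os_size=4.0):
    # Each VM repeats the full guest OS footprint alongside the app.
    return n_workloads * (app_size + os_size)

def container_footprint(n_workloads, app_size=2.0, os_size=4.0):
    # Containers share a single OS footprint on the host.
    return os_size + n_workloads * app_size

# os + n*app <= n*(app + os) holds for every n >= 1, so containers are
# never worse; the saving grows with the shared OS footprint and with n.
for n in (1, 8, 16):
    vm, ct = vm_footprint(n), container_footprint(n)
    saving = 1 - ct / vm
    print(f"{n:2d} workloads: VMs {vm:5.1f} GB, containers {ct:5.1f} GB "
          f"({saving:.0%} saved)")
```

With a single workload the two are identical, and the gap widens as the count and the repeated OS share grow, which mirrors the shape of the result described here.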
And the thing that most impacts the benefit of using containers over virtual machines is the size of the operating system footprint. So essentially, let's say you've got a server with eight virtual machines. If you were to replace those eight virtual machines with eight containers, the containers would use less space and would better utilize the server than the virtual machines. But actually, what matters more is how big the repeated operating system footprint is. So let's say our virtual machine was 50% Linux footprint, just for the sake of it. The greater that footprint, the greater the savings achievable. And we did a scenario of, I think it was, 16 containers versus 16 VMs, and it was something like a 50% cost reduction relative to virtual machines. The primary driver was, it's almost like, the sharing of resources. So obviously, in hardware virtualization you're sharing hardware resources, right? But when you have operating system virtualization, which is in many ways what containers are, to be simplistic, then you're sharing the operating system resources, so you don't have to repeat them, as well as the hardware resources. It was a very theoretical approach, but I generally think that theory is where the practicality begins. And if we see that opportunity in theory, then in practice I think we're going to see savings of at least something.

Great, great. What's the time? That gentleman put his hand up very quickly.

People who are looking at it, you'd expect them to understand the economics, but somehow they can't grasp that a disk that costs $10 might be 20 times better than a disk that costs $1; they never do the computation, and they always go for the $1 disk. So do you have a view on quality versus quantity, and where the cut-over is, perhaps?

Yes, so the question is essentially: at what point do you see end users choosing value over cost, right?
What's been interesting in the Cloud Price Index for public cloud is that, because we have market share data and price data, we're able to measure commoditisation of the market. So we're able to see: does having a cheaper virtual machine drive greater market share? And actually, what we found is that being cheaper doesn't always mean you're going to win greater market share. So in public cloud at the moment (I'll start on public cloud), I don't think end users are particularly price-sensitive. And I think the reason is this: if I was a CIO and I had been on premises, I hadn't taken any risks, I was fairly secure in my job, I'm not going to pick up and move everything that my job relies on to something that's cheaper, for a 10% saving, and put my job on the line. I think if you're a CIO, you obviously want to make cost savings, but you're not going to give up your security and reliability and all the responsibilities that come with that, because something might blow up. With private cloud, I think you're right; I think it's a lot more difficult, because the complexity of the solutions means that it is possible to pay more and get greater value, but then how do you measure how that value is going to contribute to the overall application? And this is the whole issue of price-performance. It's easy to understand how much something costs, but working out whether you're getting value for that price is a different challenge. And I agree, it's not easy. Do you guys have the same views on that?

So, going back to what he said about quality: I think there's an assumption that if you go to a high-end storage architecture and stuff like that, you get much more reliability, and I've had the exact opposite experience. What you effectively do is increase your blast radius for any sort of issue that you have. So you have a switch issue, you have a storage issue: instead of impacting 25 or 30 VMs, now suddenly you've impacted 1,000, 2,000, 3,000.
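The blast-radius trade-off being described can be put into a toy calculation: with the same fleet size and the same per-device failure rate, the expected number of VMs affected per month is identical either way, but the worst single incident is vastly larger with one big shared domain. All counts and rates below are made-up assumptions for illustration.

```python
# Toy blast-radius comparison: one big shared failure domain versus many
# small ones. A shared storage array failure takes out every VM behind it;
# a single commodity server failure takes out only the VMs on that host.
# All counts and rates are illustrative assumptions.

total_vms = 2000

# Shared storage: one array backing all VMs.
shared_domains = 1
vms_per_shared_domain = total_vms

# Local disk: each server is its own fault domain.
servers = 80
vms_per_server = total_vms // servers  # 25 VMs per host

failure_rate = 0.01  # assumed chance that any one domain fails in a month

# Expected VMs affected per month comes out the same either way...
expected_shared = shared_domains * failure_rate * vms_per_shared_domain
expected_local = servers * failure_rate * vms_per_server

# ...but the worst single incident differs by nearly two orders of magnitude.
print(f"expected VMs hit per month: shared={expected_shared:.0f}, "
      f"local={expected_local:.0f}")
print(f"worst single incident:      shared={vms_per_shared_domain}, "
      f"local={vms_per_server}")
```

This is why an application split across several small commodity servers can ride out a single failure, while the same failure probability concentrated in one gold-plated array turns every incident into a mass outage.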
So you take a small problem and you make it really large. Also, with network-attached storage, in my experience, when you have a network issue specifically, you have a very short period of time, typically shorter than the time it takes for the monitoring system to tell you there's a problem, to actually fix the problem. And when you have a bunch of VMs that are booted from volume, and their root disk is on the SAN, and the SAN network goes down, you've got about a minute and a half before that root volume becomes read-only. And once you fix the networking problem and everything comes back up, now you have this VM sitting out there: it's got a corrupt disk, it's going to have to be rebooted and file-system-checked, someone's going to have to log in to the console with a password. The recovery time for that suddenly becomes hours. Whereas if you had exposed fault domains and you had built your app across three different commodity servers, then if we had an issue with one server, you didn't just take down the entire application.

Does that mean you don't use the SAN, you use local storage and rely on the application to take care of it?

Right. We had a public cloud product that was built on five-nines NetApps, so we had gold-plated NetApps backing it up. And aside from reliable messaging, the second most unreliable thing in that infrastructure was the network connectivity to the NAS share, or the NAS itself. So we specifically went from shared storage to local disk.

So, back to the gentleman. Do you feel this is almost a culture issue: that the client is still getting used to the cloud way, where the hardware is in some way temporary and it can die? Do you think clients where you are working are actually a bit confused that it's almost the application that needs to be architected? So maybe, from what Chris was saying, what really matters is how the application is going to be redundant, rather than the expensive hardware.
Yes, we get that a lot. I've been talking to telecom providers about this NFV sort of thing in telco cloud for three or four years now. Three years ago there was a big push saying: I'm going to virtualize this thing that I used to run on a piece of network hardware, but I still need five nines and QoS and high-touch MPLS all the way down to that hypervisor; how do I get there, and how do I manage queuing on the network within the data center? The conversations were always back and forth about: if you're going to build a data center and you're really going to virtualize these things, you kind of don't want to do that. I mean, we'd be happy to sell you a really high-end router with all those features to attach all your servers to, and, you know, jeez, our stock would go through the roof, but you'd probably go out of business doing it that way. So there has to be a compromise. What we've seen more and more over the last couple of years, especially with all of the telcos getting involved in OpenStack, is a realization that maybe that's not the right approach for networking things. That's been trickling out into more areas, and people are really coming on board. We in the networking industry are doing a lot to develop virtual network functions that follow this cloud-native model a little bit more. DPDK and SR-IOV and the things you have to do to get high-speed I/O sort of break some of those models, and so I think the industry has to find the right balance between high-speed I/O and packet throughput versus distribution and cloud-native architecture. So it's an evolving thing, for sure, but I think it's moving in the right direction.

Great, so there were some other questions. Yes. Sorry. You're close to the mic. Thank you. Two seconds.

My name is Nathan. So my question was around the container bit that you gentlemen touched upon at the beginning, and we talk a lot about the multi-cloud approach.
With containers, and I'm using Docker and Kubernetes specifically, it's become more about architecting your application. Do you think that we are moving in a direction where we are abstracting away the underlying cloud, where it's not really multi-cloud, it's more like a distributed container strategy, and it can be on any cloud, irrespective? Are containers truly the holy grail, or is that something you feel we are moving in a direction towards? Thank you.

I'm going to chuck this over to you guys while I think about it.

I don't believe so. I think the architecture is very complex beyond just the compute, which is really what the container is. The storage, I don't think, can really be abstracted like that. In your example, the application had to be rewritten to do its own distributed storage modelling, whatever it needs. And then you have a bunch of other external services: you have external network access, you have DNS, you have all kinds of things. And the geographic distribution, I don't think, can be hidden away from an application. Do you think it can?

It needs to be today, to work in even a triple-nines kind of setup, but I'm hoping that eventually that is what it would be: that the underlying cloud becomes immaterial, whether you're on AWS or Azure or maybe a Rackspace cloud. Essentially, what you're doing is architecting your application and abstracting the underlying cloud away.

In theory, that's what Cloud Foundry is, to some degree, right?

Right, I was just going to say. In theory, you can do the exact same thing with VMs without having to go to containers; just don't tie yourself to a certain provider's APIs. That's one thing we've seen. You can do it with Cloud Foundry, or at some point you can also use shade to handle the differences between OpenStack clouds. But that's what we tell our users; we try not to have people go directly. We have some people who want to write bash scripts to go hit the OpenStack API.
And it's like: please don't do that. Go use a client that's native to whatever you're using.

The underlying factor is cost. I've seen a lot of customers complain that even the smallest VM that companies are offering right now, which makes financial sense for them from a VM perspective, is still nowhere close to the utilization they're looking for, and that's why containers provide them more elasticity in terms of their usage and make it more cost-effective for them. That is why my question was about whether containers truly form that abstraction layer that allows you to pull back and forth, grow or contract, and just pay by usage. I have a small setup with just 20 nodes, but I hardly use 10 at any given point in time. So why am I paying for the other 10?

The ultimate version of that granularity is so-called serverless, Lambda, right? That's also the ultimate lock-in. So in that sense, I think you've got to pay for it one way or another, in terms of complexity or poor utilization. You know what I mean?

So I had a conversation with Google last week about their cloud platform, and Google are always talking about how they're cheap and they're going to kick Amazon's backside and become the cheapest. And I've never really seen evidence to back that up, but last week they described to me that their infrastructure is essentially all based on containers, right? So they argue that because all of their infrastructure services are based on containers, they can do exactly that: they can grow and shrink the containers, and because of that, they can pack more in. So all of Google's services for cloud are on these containers, and what that means is they can not only put more on a server, but they can also spin up temporary services that only last a few seconds, for things like indexing. So they are using containers in this highly utilized fashion.
Whether it will give them an edge in being cheaper in the future, I can't be sure, because I'm sure Amazon have got little tweaks like that as well that they don't discuss. But it was interesting that they fundamentally see containers as key to achieving their optimization of cost. So maybe some hyperscalers are already doing that. And there was a company called ElasticHosts, and they were structuring their virtual machines in a container fashion. So they were arguing: well, don't choose your size of virtual machine; just start consuming, and we'll bill you on the CPU utilization. Which I thought sounded like a great concept.

It sounded like they were just oversubscribing, right?

I agree, yeah. I think that is the risk, because, great, they're metering on CPU and storage, but that doesn't necessarily mean the server is going to be more densely packed, does it? Right. Any more questions? There were a few hands. This gent.

Yeah, I was curious about your container comparison.

Fair question.

So, was manpower cost included in that calculation?

No, it wasn't. So it was a very direct X-to-X comparison, and it was on the back of an envelope. Do you think the skills in handling containers would be harder to get, and do you think the maintenance would be harder to deal with?

So, I live in San Francisco, and that's not immaterial to why I was curious, but I feel like there is probably a noticeable difference in the amount of administrative overhead, as well as the amount of just sheer talent, that you need to successfully deploy containers.

So it might be interesting in the future for me to investigate that. I think that's kind of why I was asking. Do you have a follow-up there?

I think you're dealing with the shiny-object syndrome as well.

Really?

Yeah.

About containers in general?
Coming along, yes. But one thing I don't think was discussed as part of the factors is not just the pure economics: it's the CIO or IT management looking to change the way applications are built, trying to move towards DevOps, where they can compress the amount of time between their release cycles. If they're trying to do that, they're not really worried as much about their ideal infrastructure costs as they are about: how am I going to move fast so the business leaders are off my back? I think another challenge as part of this consideration, when you talk about multi-cloud, is the size of the organization, because if the organization is big enough, you're going to get a discount on your VMware or Microsoft licensing just by starting to talk, or by having your OpenStack cup on the desk when the... So you'll get part of your economic benefit just from pulling the coat open and showing the revolver, if you have a big enough organization to make that investment. Multi-cloud, again, you're probably doing it for a number of reasons beyond the economics point.

So I've got some interesting tidbits on the economics of multi-cloud. So essentially... what's the time? Oh wow. Right, well, I'm going to have to wrap up, but my tidbit will be this: we found that if you were to build a complicated application, database, compute, storage, blah blah blah, and you used multiple cloud providers to deliver it, which I agree would be a nightmare for that complicated application, you could make a 74% saving on the cost of deploying on a single provider by using multiple providers to do it. But if the application is really simple, just compute and storage, you're better off shopping around for a cheaper provider rather than getting into the messing around, or the complexity, of multiple things. Which is why most websites are on shared hosting providers, where we pack them on to…
We pack thousands of websites onto a single server, because most people have some PHP app with a database back end, and they just need it online for some amount of time. And you don't need a cloud provider; you don't need containers. We might want to put your stuff in containers to provide better resource isolation from bad actors, but 99% of the web is going to run on some shared server somewhere.

Well, we've reached time, I'm afraid, although I feel this could go on for a while yet. So can we get a round of applause for the panelists, please? And thank you very much. I think one of the key takeaways was the definition of legacy IT architecture: it works.

Yeah, yes. And no one wants to change too quickly if it's not going to work. Thank you very much.