Good afternoon. I'm Simon, I'm from Bandanovo. This afternoon I'm here to present to you the ultimate showdown in hosting. First of all, I'd like to say that this is a panel session and it's on the business and strategy track. It is a non-technical presentation. If you expect deep technical insight into a hosting solution, this isn't where you should be, but if you want to learn about making hosting decisions and the right place to host your projects, then this will give you some good insight into that. I'd like to explain the fight rules this afternoon: this is not a debate, but it is going to be a point-by-point review of the key hosting decision drivers, okay? So this afternoon we're going to present points for these three options: traditional hosting, kind of DIY build-your-own; managed hosting service; and platform as a service, okay?

So I'd like to introduce to you, from the white corner, representing platform as a service, Aaron from Aberdeen Cloud, a PaaS provider; in the black corner, representing bare metal in-house, Rupert from Arrow, a B2B publisher; and in the blue corner, representing managed hosting, it's Greg from Code Enigma, a development and hosting company. I'd like to ask these three guys to briefly explain their platform, then after that we're going to go through the points, looking at the benefits and disadvantages that we've got. So, thank you, Aaron.

All right, so thanks everybody for coming, nice to see the turnout. Aberdeen Cloud: we provide a platform as a service. It's a cloud hosting and development solution, it's out of the box, ready to go, everything from very small sites to enterprise-level solutions. Oh wow, my microphone's really loud, sorry about that.
I'm Greg from Code Enigma. We tend to be more on the managed service side, so we help people manage their own infrastructure, potentially, or manage infrastructure that we've helped them to set up. Sometimes it's in the cloud, sometimes it's bare metal, but it's not a platform like Aaron's.

I'm a web developer for a business-to-business publisher called Arock. We run five-ish sites; we sell basically niche information for specific industries like automotive, food, drink and style, and yeah, that's kind of it.

Okay, so the first thing, round one: capital expenditure, aka capex.

So, we recently relaunched one of our biggest sites, and to do that we built a new infrastructure for it, and we had a fairly low capital expenditure for that because we had a lot of what we needed already. We have an office with about 50 people in it, and when you have that many people in an office, you have a pretty big pipe into it. So it's pretty easy just to slip another server into the rack and set up our infrastructure on that. You're probably thinking a new server, that's money, but all our servers come off eBay. That might sound insane, but the reason we do that is one of the guys in the office is really good at sourcing hardware, making sure it works, setting it up right, and sticking it in the rack so us developers can take it over. So, for a pretty low expenditure, we have a crazy overpowered server, it's like an eight-core Xeon with 32 gigs of RAM, and we have an identical one sitting next to it, which we can either switch to or cannibalise as appropriate.

All right, so our take on capital expenditure is that it's not just buying the bare metal, the actual servers and everything to make everything work, which is how it's often thought of. With upfront costs and capex, we're also talking about the biggest part of that expenditure, and that is the people and the salaries it takes to set up those systems.
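The capex-versus-pay-as-you-go trade-off being argued here reduces to a simple break-even comparison. Here is a minimal sketch; the function name and every figure in it are hypothetical illustrations, not anyone's real pricing:

```python
# Hypothetical break-even sketch: a self-managed cluster has a roughly
# fixed monthly cost (hardware amortisation plus sysadmin time), while
# a per-site platform charges grow with the number of sites hosted.

def cheaper_option(n_sites, cluster_monthly=400.0, paas_per_site=30.0):
    """Return which option is cheaper for hosting n_sites,
    given a flat cluster cost and a per-site platform fee."""
    paas_cost = n_sites * paas_per_site
    return "cluster" if cluster_monthly < paas_cost else "paas"

# A handful of small sites favours pay-as-you-go;
# many sites favour the fixed-cost cluster.
few = cheaper_option(5)     # 5 * 30 = 150, less than 400
many = cheaper_option(40)   # 40 * 30 = 1200, more than 400
```

With these made-up numbers the crossover sits at about 14 sites, which is the "sliding scale" point: the right answer depends on how many sites you host and how much resource each one needs.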
So, the people who design the systems, purchase the parts, put everything together, test the stuff, install the software, configure, tune and tweak: that's all money that has to come out of pocket when you're not using cloud. With cloud services or platform services, you don't have to come up with a sizeable investment to pay for the hardware, or the salaries of the people to design the systems and implement the architecture. It's actually pay-as-you-go: from the moment you sign up, you just pay for what you use, and most companies today don't even require a minimum contract.

Okay, well, I mean, I can see both sides of that argument. From our perspective, I think the whole capex argument shifts depending on what you're trying to do. For example, if you're an agency that's got lots of websites and you could potentially run them all on a small cluster of servers or something like that, then it's going to be more cost-effective to do it that way than it will be to run them on a platform where you have to pay per site. So if you're in a reseller situation, if you're in a situation where you've got a lot of websites to host but each website doesn't actually need very much resource, you're probably better off with your own machines, whether that's managed or you're managing them yourselves, it doesn't really matter. So, yeah, it's a sliding scale, depending on how many sites you've got, what kind of traffic they're handling, whether you've got internal resource already like Rupert does or not; as Aaron says, some people don't, or all the training up would be expensive, so it varies.

Round two: flexibility.

So, do I carry on at this point? I think in terms of flexibility, what an organisation like ours offers is pretty much second to none, because essentially what we do is manage Linux servers for people.
So, if you've got a development team that just want to kind of get in there and use the software, what we allow you to do is basically forget about all of the running of Linux and all of that stuff. It doesn't matter whether it's your own servers or whether it's something that's been put in the cloud on virtualised hardware, it makes no difference. The point is that they're your servers, managed for you, and you can do whatever you like with them. All we do is manage the backups, keep things up to date, handle the security and everything else, and let you worry about running Drupal on top of that.

So, yeah, we can run pretty much whatever we like, and we do. Our current platform is Debian 6, Varnish, Nginx, Memcached, Percona, Solr, Tika and PrinceXML. In case you haven't heard of the last two, because they're probably the oddballs there: Tika is a thing that extracts meaning from documents, and PrinceXML is a tool for producing PDFs. They're kind of the oddballs in the stack; they are what a platform couldn't provide. So we have all those things set up exactly as our application needs, and if we want to add a new service in the future to support a new feature, we absolutely can. So I think this is where self-hosting beats anything else.

Okay, so, flexibility. Yeah, the cloud platform loses this one. We don't really give people full access to things; some of it is our special sauce in the background. But I would say, to be fair, it would only be by the smallest of margins. Cloud platforms are meant to be closed systems, and the reason for that is they're not servers, right? They're distributed systems. A true distributed system will include specialised nodes that serve one dedicated function and only that function, and custom apps and services run on separate server nodes. So it's much more inflexible as a system.
But, I don't know, I would say it's definitely, in our mind, more secure that way. Because suppose your Ruby-based ticketing system has a bug which a hacker uses to break into the server. Well, there isn't much they can do, since the app runs on its own dedicated server node, and most of your data, code and content will remain secure on their own server nodes.

So, it's me to lead off on this one again. Skill sharing is an interesting one. I guess it matters more to myself and Rupert than it does to Aaron, in so far as, obviously, because we're providing machines, and in Rupert's case there are internal machines that have Linux on them, there's a degree of knowledge required there. Now, I see this as an opportunity, because a lot of our customers will already have very good IT teams in place that are already managing infrastructure for them. It may not be Linux infrastructure, it may be something else, typically it's Microsoft, and there's an opportunity here to actually transfer our knowledge to our customers' teams. We're used to doing on-site training, we're used to mentoring people and taking them through the experience of shifting from other hosting solutions onto managed Linux servers, and I think you grow the value of your team by taking that approach, because they can learn as your business changes technology, and that adds value to them and it adds value to you.

Yeah, so I would actually list this as a trade-off, or I think from our perspective it would be a trade-off, because on the one hand I completely agree with Greg's comments about skill sharing; our perspective would be that it's a plus to not need this, to not need to be able to do skill sharing. The impact of not needing specialised skills, and also specialised systems, is two-fold.
First of all, the absence of system administrators and specialists on your team no longer prevents you from competing at the enterprise level; it puts enterprise contracts within the reach of the average development shop without specialised personnel. So in realistic terms it means that a handful of talented Drupal devs, with nothing more than a credit card and their own talent, could be chasing and winning contracts at the same level as some of the bigger companies. The second part would be, it's sad to say, but considering the economic uncertainty of the time we live in, every dollar, euro, Czech crown, whatever local currency you're using, that you spend on the business is money out of the business's pocket. So the fact is, adding people and trying to improve productivity by having specialists is an expensive approach. Specialists can be very expensive, and the plain and simple fact is, PaaS systems remove the need for sysadmin specialists.

So, like Greg said, skill sharing is pretty much essential to how we operate as a business. We're a relatively small team, you know, we're quite experienced, and we have a lot of overlap, just to reduce the bus factor. Like I said, I'm a developer there, and personally I like operating in an environment where I have to learn things; it's why I do what I do, it's fun. And we're a slowly growing company, and the bosses frequently say they don't want to grow the business too fast, because they want to avoid the tendency to boom and bust as a business. So yeah, Aaron's right, we're not going to be serving thousands of pages tomorrow, we're not going to be getting enterprise stuff tomorrow, but the business is slowly going to grow. And as a team, we can grow with the traffic and the demands on the system and scale up with it.

Okay, next point: physical control.

All right, I'll take the lead on this one. Physical control. There is no argument.
I have no argument there. If you want to tinker with hardware, PaaS is not the right solution for you. Okay, PaaS is built so that you don't have to tinker with hardware. I cannot reiterate that enough: if you like to tinker, PaaS is not the right thing. If you just want to get in and do some Drupal dev stuff, okay, it might be a faster approach. PaaS systems are typically built on top of some sort of cloud offering. You have to remember, and actually I might have skipped a point before, cloud offerings usually come in the form of just infrastructure, then platforms, and then software. So PaaS is built on top of a cloud offering, usually infrastructure like Amazon or Rackspace. Therefore, any time you use PaaS, you're also subject to the availability of the underlying cloud provider. If you use one without multiple data centres, you may experience increased latency, non-compliance with certain government regulations, and you might not have true high availability. It's always good practice to look at these things before signing any sort of contract.

So yeah, self-hosting pretty much wins physical control. If you want to go and give your server a good kicking, you absolutely can. Although, you know, it's obviously not a huge advantage; servers don't really like that. And I work remotely, so for all I know, the servers might actually be in a data centre somewhere, even though they're actually just upstairs in the office. Where there is a big advantage is that there is transparency, which is a sort of control. Because we built these servers and everything around them, we know exactly what's set up there. If you're using someone's PaaS and they say they're highly available, are they highly available at every level of the stack that you want? If they're saying they're doing backups, are they sticking to the backup routine they promise? We know that we are, because we're doing it.
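That last point, knowing your backup routine actually ran because you run it yourself, is something you can check mechanically. A minimal sketch, assuming a purely hypothetical directory of nightly backup files; the function name and the 26-hour window are illustrative:

```python
import os
import time

def backups_fresh(backup_dir, max_age_hours=26):
    """True if the newest file in backup_dir is younger than
    max_age_hours, i.e. last night's backup actually happened."""
    paths = [os.path.join(backup_dir, f) for f in os.listdir(backup_dir)]
    files = [p for p in paths if os.path.isfile(p)]
    if not files:
        return False  # no backups at all is an immediate failure
    newest = max(os.path.getmtime(p) for p in files)
    return (time.time() - newest) < max_age_hours * 3600
```

A check like this, run from cron and wired to alerting, is the self-hosted answer to "are they sticking to the backup routine they promise?"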
One slight downside is, obviously, physical control doesn't necessarily mean doing stuff physically to the machine. On something like a cloud service, you may be able to pick up a machine and move it to a different data centre, which you can't really do if you've got an actual box. You know, it takes a while. You'd need a van.

Yeah, so I'm glad that Aaron brought up the point about the different uses for the cloud, platform being one and infrastructure being another. From our perspective, when we're managing servers for people, it doesn't make a great deal of difference to us where that server is. It could be your existing infrastructure, or it could be that actually you want to leverage some of the power of the cloud but only at an infrastructure level, in which case you might want to commission some virtual machines and have those managed. So we kind of offer a trade-off, and you can go as far as you like. You can either go completely into the cloud and say, okay, we want everything virtualised, or you can say, actually, you know what, we've bought our own servers but we really don't want the hassle of having to deal with these. So we like the fact that they're physically here in our basement, and we like the fact that we're controlling the bandwidth and all this kind of stuff, but we don't want to be bothered with managing Linux. And that's quite a common approach for a lot of companies. In terms of when we do go into the cloud, we have preferred suppliers, of course, but we can really run these things anywhere; I mean, that's the point. It goes back a little bit to the flexibility stuff, but you can have the physical control and still have the kind of management that allows you to maybe have a reduced system administration team. And the other thing I would say is, with our model, you have full access to all the software.
So you can check up on us. Just like Rupert was saying, with a platform you've kind of got to take their word for it that they're doing all the things they said they do, whereas with us you have full access to the monitoring, you have full access to the backups and the backup locations, you can run your own restores if you want. It's all quite transparent. It's almost as though you're hosting yourself.

Okay, round five: separation.

So, this is a point where we have a bit of risk, because we use the same connectivity for our servers that we do for our office stuff. And if all my colleagues decided to fire up YouTube at the same time, I don't know what would happen to the website. Obviously, if you're co-located, that doesn't apply. Once you've got to the servers, because you have total control over them, you can separate things how you like: you can stick everything in one box, you can have one box per service, you can build your own little private cloud if you want. So you can separate things how you like. Kind of up to you.

Oh, is it me? Oh, I'm sorry. Okay, so, yes. On the separation side of things, I think it depends on the situation, because of our flexibility again with the managed service. Sometimes it depends on the way the client has decided to lay things out; sometimes it's down to us. Obviously, if you put stuff in the cloud, even if it's just at infrastructure level, you might have concerns about having potentially multiple VMs on a single physical machine. That's certainly a possibility. Again, you're unlikely to have the level of transparency with an infrastructure cloud provider, of the sort that we might use, to know whether you're actually taking more of a risk than you think, because you've got four servers but actually Rackspace or Linode, whoever it is you've contracted with, have put them all on the same physical machine.
That is possible, you know. So, on the VM side of things, we run some of the same risks that the platform might run, but obviously we don't have to take that route. We can use bare metal servers. And in terms of the actual hardware separation, obviously what we sell and what we manage are individual, separate machines. There's no shared resource on a single machine. You're never sharing your resource with somebody else on that actual machine. It's always complete and utter separation: own IP address, own MAC address on devices, et cetera. From a security perspective, it's completely isolated.

All right. So, yeah, a lot of that's true. I would say, on both points, sometimes there is a lack of separation. A lot of PaaS offerings are set up as a shared system. In these cases, the provider has created a shared architecture that allows the end user to utilise a shared resource pool. So, why would they do that? Well, when people set up PaaS like that, they do it because it drives the cost down, right? You're able to utilise more processing power and pay less for it. So the idea is to offer scalability, and infrastructure scalability, at a better price point. However, an important point about this: I would say that's actually quite an antiquated approach to utilising cloud services. That's so 2011, right? It's a remnant of the era of shared servers and cheap hosting, okay? But most cloud services now are moving beyond this. I would say any decent PaaS platform is not just a bunch of servers: physical resources are virtualised in these cases, so rather than being shared, the resource pools become dedicated. It's a distributed system, so it's a better approach.
And if you're looking at and considering PaaS, that's something that you might want to check, or you may run into these lack-of-separation issues, because there are many providers that are doing it that way; but you can also get distributed systems where you have dedicated resources. And that's really the way forward, at least in my opinion, for these sorts of PaaS and cloud offerings. So, I don't know, I would say if you have no doubts about using Rackspace or Amazon as it is now, or any other cloud infrastructure which provides you with virtualised instances, you shouldn't really be afraid of using cloud platforms either. Just do your homework first if it's that important an issue for you.

Round six: scalability and performance.

Yeah, so this one I'll take the lead on, because, well, again, smart cloud systems are designed to grow as you grow. And this can be both long-term growth and short-term growth. I'll clarify: by long-term growth, I'm talking about sites, just sites themselves, that start out small and, over the course of their business lifetime, as they become more and more popular, need more and more resources. This could also be the same argument for any small development agency: you start out as a couple of people, and over time you need to keep adding resources. Well, that almost goes back to the capex argument, so I won't repeat myself there. But the idea is that you can just grow as needed, instead of trying to anticipate how much growth you're going to need. The second thing is the short term. And in the short term, I would say this is a big one, because if any company here creates a great site and suddenly that site is featured in Wired magazine, and your traffic is about to go from 1,000 hits a day to a million hits a day, you need to be able to scale up to that sort of traffic immediately. Again, a smart cloud system should allow you to provision those resources.
In some cases, you literally just drag the slider bar to where you want it and say how much computing resource you need. By the time it updates into the cloud, you should have enough computing resources to handle those kinds of peak loads.

We're in a pretty unusual position with regard to scalability, because we have total control over our traffic, and the reason for that is our site is subscription-only. So we can predict our traffic levels and we can scale up as we need to. As yet, we actually haven't had to, because, like I said right back at the start, we got a really good deal on our servers and then massively over-specified them. So we know we have room to grow for the foreseeable future. But if we were in a position where we couldn't predict our traffic to that level, I'd probably be looking at a cloud service.

Or us. Well, it comes back to the whole thing of, what was your choice at the outset? Are you running bare metal hardware or are you running on virtualised infrastructure? If you're running on bare metal, there's no way I can sit here and tell you that you can slide a slider and add five virtual machines to your layout. It doesn't work like that. We can scale. We can't scale as rapidly as a platform, but we can scale rapidly using the tools that are available these days: the libcloud API and Puppet and tools like this that allow you to build servers and deploy software onto them. We can scale up a site in a matter of hours; not minutes, hours, but we can do it. I would make the point that nobody provides autoscaling that I'm aware of. Aaron may now say, hey, we do. I don't know. Coming soon. Coming soon. But nobody that I know of provides autoscaling. Somebody's still got to sit there and slide the slider. At least with a managed service, you can be confident that you've got monitoring systems in place and you've got experts watching those machines, watching for load changes and being ready and able to react.
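Whether a human slides the slider or a script eventually does it, the "watching for load changes and reacting" described here reduces to a threshold decision. A minimal sketch; the function name, thresholds and load units are all illustrative assumptions, not any provider's actual policy:

```python
# Hypothetical scaling decision: given the current fleet size and the
# average per-node load utilisation (0.0 to 1.0), decide how many
# nodes we should be running next.

def desired_node_count(current_nodes, avg_load, low=0.3, high=0.7):
    """Scale up before saturation, scale down when idle,
    and never drop below a single node."""
    if avg_load > high:
        return current_nodes + 1   # traffic climbing: add a node
    if avg_load < low and current_nodes > 1:
        return current_nodes - 1   # paying for idle capacity: shed one
    return current_nodes           # within the comfortable band
```

The hysteresis band between `low` and `high` is what stops the fleet flapping up and down on every small load wobble; a real monitoring setup would also average the load over a window before deciding.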
We can provide a level of rapid scaling if the infrastructure choices are right. It's not a one-horse race, but if you need to scale up and down really rapidly on a regular basis, you probably want a platform.

Round seven: UX and productivity.

So, basically, self-hosting has good UX if you think Bash has good UX. I like Bash, but I'm a developer. Obviously, there's loads of other stuff you need to work out yourself, like how to order the right hardware, how to build your environment; you're going to have to learn your tools. It's a long list. There's a lot of stuff that you need to learn. You can't just go in and slide sliders and tick boxes.

All right. Here's the thing. The way we look at it, user experience is the key to productivity. I beat the same drum quite often, but one of my points is that people are costly. In order to increase productivity, is it best to add more human resources, or to add smarter processes and systems? Our opinion, obviously, is that UX is the key to your productivity as a Drupal agency, as an agency that creates new things. So you have all the tools needed for most Drupal projects already configured and available in one simple graphical user interface, or in some cases a command-line interface fully integrated with the system. Everything is there and available: you've got your Drush, you've got Git, you've got Solr, et cetera. All of those things are just at the touch of a button, or a line of code and enter. The advantage of that is an easy-to-use system that is designed for one goal, and that one goal is to maximise your productivity. I'd say it's hard to even compare productivity with the sign-up-and-go approach. If you're using a good system, you should be able to sign up and literally create a high-availability website and push it live onto the net within 15 minutes.
That doesn't mean the site's going to be a beautiful site winning any awards or anything, but you should have that sort of usability in a good platform. If you don't think that's possible, I'll give you a link for a Drush archive right now, anybody that wants to try. The last time I did it, it took seven minutes and 50 seconds, and I'm not a techie, so that says a lot for productivity. Anyway, the last thing I would say is, can I just ask for a show of hands? Has anybody ever actually built a distributed system? Anybody else? Distributed systems? Yeah, so just a handful of people. And I mean, how many hours are lost? I'm not going to shout at everybody, but do you have any estimate of the hours spent setting up a distributed system? Oh yeah, days. Days, and did that even include high availability? Yeah, that would include high availability. So the main point is, every hour that you're paying some specialist to design these systems is an hour that you're losing on Drupal development.

Right, so yeah, I suppose on the UX side of things, starting at the more simplistic level: for sure, we don't have the slick user interface of something like a platform service, and pretty much all the platform providers have pretty sweet kind of built-in version control systems, high-productivity tools. It's excellent. I can't deny it, I use them myself sometimes. It's really, really good. So we can't match that, but I guess what we do is make it as usable as possible. So we try to provide tools.
So, for example, we provide continuous integration with a tool called Jenkins, which means that when your developers have actually finished a piece of code, they can just push it back into the central version control system and it builds automatically, which is kind of the same way it works with the platform stuff, except that ours is much more exposed: you can see the wheels turning and the rods pushing and everything, whereas with the platform it all just happens by magic and there's just a little spinning disc in the corner. So everything's a lot more exposed with us, but I think a lot of the tools are there, and it's more a case of getting used to the differences and different ways of working, and the fact that, for example, you'll have to go and log into your monitoring over here and your CI over there, and they won't all be in one nice dashboard; they're in different places. We try to get around that with a single sign-on solution, so you only have one account and you log into everything with the same account, and all of that stuff. So we do our best, but it's no platform dashboard. But I think the key point there is that what you lose in user experience, you gain in flexibility, to circle back to the flexibility point. Yeah, okay, we don't have a fancy dashboard where you can slide things up and down and that kind of thing, but if you want to run SugarCRM, no problem, just run SugarCRM; it's a Linux server, it doesn't care. So, it depends on what you're doing.

Okay, let's take a look at security.

Okay, am I first on this one? I think it's me, actually. So, yeah, on the security side of things, I guess we straddle a line, because we are never in total control of the hardware that we provide our service with; whether it's a customer's hardware or whether it's in the cloud, it's not ours, they're not our physical servers.
So we have to take a certain amount of physical security for granted. We have to assume, if it's virtualised infrastructure, that the organisations providing that infrastructure are doing things diligently, that the underlying virtualisation layer is well made and the virtual machines are well separated, and that kind of thing. So from that perspective, we're only as secure as the physical security of your server room, if you're Rupert, or the online security of your dashboard or whatever it is you've decided to go with. But having said that, it kind of goes back to the point we were making before about sharing of resources: there is no sharing of resources with other organisations with us. Your organisation, your data, your servers, everything is just yours, and the only people that can access those servers are our privileged system administrator staff and the people you tell us can, and that's it. And we do all of the security updates and patching for you; we do that transparently. I'm fairly confident we've got stricter security and firewall protocols in place than pretty much anyone else, and as far as that's concerned, I think we're transparently and demonstrably more secure as a platform than most other people can be. Even on the physical security side of things, at least if you ask us to commission hardware for you, virtualised or physical, it will be in an ISO 27001 data centre, right? It's not going to be in some basement or anything like that. It's going to be in racks in proper, physically secure data centres that are certified.

So, yeah, okay. Data security. I don't really consider this a con, even though it's a big hot topic around cloud stuff. Oh, it's in the cloud, we don't actually have the servers in our basement. And I mean, it's definitely a con in the mindset of people that are still getting used to cloud ideas, for sure.
And I don't invalidate that. But I would say, I call it a smoke-and-mirrors argument, really, basically because of this. We all live in the digital age, and the fact of living in the digital age is that data security and vulnerability of information is a persistent danger in the world we live in. And I would ask anybody that is worried about cloud security: where do you really think your data is safer? In a system that you and your great team built somewhere, whether it's in a secure location or not, and I mean no offence to Greg or anybody else who does it this way, or in a data centre with somebody like Amazon or Rackspace, who are literally investing billions of dollars every year to increase their security, to add machines, and not only on the physical side, but to hire the best and brightest people they can find to run those systems? Now, I've made that argument before, and somebody responded, yeah, but who's the likelier target, Amazon or my shop? Nobody knows who we are. And I say that if your argument for data security is anonymity, then it's not really a very good argument in my mind. True, Amazon has a higher profile, but they also have the resources to back it up, and I personally would trust a company that is investing that kind of money and hiring those people more than I would trust just a handful of people in a room.

So, anyway, there's one other point about that. When I was preparing for this, specifically this point, I shared it with a friend, and he told me a story, so I'll just tell this little story. His grandfather went to the Toyota plant in the 1970s, right after they had automated production and had very quickly become sort of the envy of the automobile production industry, producing cars faster than anywhere else with fewer defects.
During one tour, my friend's grandfather asked one of the managers how they could deliver with such consistency and speed, and the manager said: the defect lives in the human hand. Remove the hand from the process and you get no defects. A lot of this applies to security as well. Security problems are almost always defects in the system caused by human error. Eliminate the hand, automate the process, and you have superior security. So yeah, obviously when you look after your own servers, security is something you need to keep on top of, and Aaron's right, we're never going to be as good at security as Amazon. We don't need to be that good. Our environment is much simpler than Amazon's, and we're the guys who put it together. Between us we understand all of it, because we built it, and we can therefore control it because we understand it. We know which ports should be open and which shouldn't. We know which inputs we should accept and which we should reject. We know where all our software came from and how to keep it up to date, things like that. And we do have an advantage: if we have an issue, we're not reliant on anyone else, or anyone else's timescales, to get it resolved. Because we put it all together, we know how long it'll take to put it back together. Okay, the last point for this panel is change and adoption. Change for us is work, basically, but luckily as an organisation we don't have a lot of major change. We're not a dev shop. We have a product that we offer, our product sits on these servers, and most of the time, despite the security work we have to do, they just get on with it. This is why self-hosting makes sense for us.
If we were exposed to a lot of change in our production environment, we'd probably do something else. Different order this time. Well, it's okay, because I actually missed this slide; I've just figured out that I didn't write anything, so I'll wing it. Change is a process. I would assume that everybody in this room has been working in the tech industry for a while, has been building sites and doing things, and knows how to do it at some level, and the way you know how to do it works. So every time you talk about changing the process and adopting a new one, that's something that we, as human beings, shy away from, especially when it's being imposed by somebody somewhere who doesn't really know what's going on. But if you have a good system, hopefully it's been built so that the learning curve works to your benefit rather than to your detriment. Okay, yeah. So in terms of change and disruption to your organisation when moving, it depends on the knowledge; it circles back a little to UX and a little to skill sharing as well. For sure, with the services that we offer, it's harder for an organisation with little or no Linux knowledge to adopt them. But obviously we provide training, help and tools to make things as smooth and easy as possible. The whole purpose of what we do is to give you flexibility and freedom of architecture with as little pain as possible in terms of the learning curve for your people.
And in my opinion, this point shouldn't be a barrier if you're umming and ahing about whether to go platform, managed or self-hosted, because at the end of the day the freedom and flexibility to run your own software and your own hardware, call the shots, train up your staff, et cetera, outweigh any training issues you might have, or the cost of getting people up to speed. And with the support of a good management company overseeing that for you, it doesn't need to be something that scares you away from running your own setup. So yeah, I would say for sure the learning curve can't be helped. This is not amateur stuff; this is serious setting up of serious infrastructure. We'll do most of the work for you, but at a certain point you'll probably need, and want, to engage with it and understand it yourselves, and we'll help you do that. Okay, I'm going to allow the guys a brief moment to conclude their arguments, and then we're over to you for any questions. So I'm going to say that most people probably shouldn't self-host. Because, like I've said so far, you need the right team. You need a decent connection somewhere to put your servers. You need the ability to predict your load, or react fast enough to changes in load. You need the skills to build and protect your environment. It's a good fit for us because we have these things, but that may not always be the case, and if you want to find out whether it's the right thing for you, ask your tech team, or if you are your tech team, sit down and have a think. Okay, so I wouldn't thump the tub for any particular approach myself. It's not for no reason that we comfortably partner with a platform provider, at the end of the day. I think that choosing a hosting approach and a hosting product is very individual for every business.
I hope you've got a bit of insight into some of the vectors involved in that. For us, we try very hard to help customers choose the right approach when they come to us, and it might not be a managed service. We've got some customers that start out managed and then end up in-house like Ru, and we've got some people that we just say, look, you're a perfect fit for a platform. So it depends massively. At the end of the day, we just want to support Drupal-based companies in making smart decisions about how and where they do their hosting, and getting the best deal for the people we work with. How did you say that? Thump the tub? I will thump the tub: platform as a service or nothing. No, actually, I'm just joking. There are individual cases, and you should be making the right decision for your clients. I do stand 100% by this: I think platforms are more flexible and a very good value offering, but they're not the right fit for every single use case. I will accept that, and I will even promote it. Can you write that down? Nope. We know this isn't being recorded. Oh wait, it is. I'll conclude on this, and this is the last bit of platform spiel for you. As a Drupal service provider, we see three primary ways to increase your bottom line. You can produce more Drupal, for example by getting more contracts or bigger contracts. You can increase your services, or the scope of your services and what you're offering. The third is to reduce your costs. In my opinion, if you're using a good platform, you should be able to do all three. You should have a competitive advantage that allows you to produce more and win bigger contracts. You shouldn't have to add people, so you should be able to reduce your costs. And if you partner with a good PaaS provider, you can increase your scope.
You can become resellers, or add hosting as an offering, something you're using to generate revenue for your business as well. The last thing I'd say is that the right personnel and a good platform increase the efficiency of the development process, allowing you to create more, faster, and with fewer people. Thank you. We've got a mic over there, if anybody would like to come up and put questions to the panel. Shall I put this one back? I knew when John walked in the room that the first question would be from him. Hi. This is hard to answer, but I'm looking for a finger in the air: CAPEX and OPEX on 100,000 hits a month and a million hits a month. Go. Each one of you. Oh, God. I don't know. What's the size of the site? Anonymous traffic? Authenticated traffic? What are we doing here? This is finger in the air, but it's Drupal. You can assume there's some authenticated traffic, but normally not much. Sorry, just to understand that: essentially, are you asking us what it would cost to take the various approaches to host a small website and a big website? Basically, that's the question. To define it really clearly: a challenge for businesses is that the costs are not well defined. I'm not saying they can be defined very well, because I've tried myself, as a sysadmin, five years ago. What would be good is to see the reasoning behind your finger-in-the-air costs for a small site at 100,000 hits a month and a medium site at a million a month. Think about it in isolation; it's got to be separated from everything else. In isolation, if I come to you and say I've got Drupal and I'm getting 100,000 hits a month, what price do you give me as a fixed cost? That's CAPEX and OPEX. Of course, the first question I'd ask you is, well, is high availability important to you? On a million hits a month, I assume it is. How many hits am I losing in my downtime?
That's probably what I'd ask you, if I'm not technical. I guess what I want to highlight here is that the reasoning is important, and how close you can get to an answer is important to a business: not just what the price is, but having some certainty around it. I think that's the whole difficulty with the question, because you're talking about a 100,000-visits-a-month website, right? But if that's absolutely business critical, we might run it on six servers. If it's just a front-end shop window for a busy corporate site, we might run it on a single VM and it'll be fine. So it's difficult to say; it could be anything from literally 300 pounds a month up to... Why don't we simplify it? Let's say 100,000 on a brochure site, a million on a commerce site. Sounds good to me. Yeah, fair. So, assuming we're providing a server as well, you'd probably be talking about 200 pounds a month for the 100,000 approach. For the million, I don't know, very finger in the air, probably about a thousand pounds a month, something like that. Okay, so I don't know the size of the site, obviously, so just for the sake of argument we'll assume it's not very data intensive and doesn't need a massive database. 100,000 hits, we could handle that for $50 a month. A million hits, with a high-availability system that has multiple data centres and is replicable, as a starting point, and I'm just assuming, I think you could set it up for around $564 a month. Got it. And I would say we're not a hosting company, and you need to go and talk to one.
I think this does raise an interesting point, which is that if you're talking about single individual websites, a platform will probably be more cost effective, because the economies of scale start to make it cheaper per site when you're running multiple sites on one set of infrastructure. Then managed services come into play when you've got a whole lot of separated-out properties for one organisation and you need a high SLA, where they can call in at any time of day against any of those properties. Right. Okay. Great question, thanks John. Anybody else? Do we have any other questions? Who do you think is the winner? Yeah, you don't want to say that. Well, if there aren't any other questions, thank you all very much for coming. If you would like to talk with any of us, I have cards here. There's also a social event tonight; you can find it on the DrupalCon Prague website under social events, walking distance from the conference. I'd love to chat with each and every one of you. Tap us on the shoulder. Thanks for coming. Thanks, everybody.