I'll go ahead and just start. So can everyone hear me? So I'm Sunitha, Sunitha Muthukrishna. I'm a Program Manager at Microsoft, and I work on Microsoft Azure Websites, which is a hosting solution, something similar to, say, Pantheon or Acquia Cloud, something along those lines. So today's talk is basically going to give you an introduction to what Microsoft Azure Websites is, and a couple of other services that we've worked with, to give you a really good, highly scalable Drupal solution, as well as dig a little deeper into how you can actually manage your Drupal sites on our platform. As you might have heard, we had a brand name change, so we're no longer Windows Azure, we are Microsoft Azure. The reason we changed the name is that we no longer have just Windows or .NET support; we have Linux, we have open source applications like Drupal, WordPress, Joomla, and also .NET applications and Python applications like Django, so we have support for all of these frameworks on our platform. Why would you want to use Azure Websites? A couple of reasons. It's the fastest way for you to deploy an application. We support various deployment technologies like Git, GitHub, FTP, Web Deploy, and Visual Studio Online. If you have your content on Dropbox, you can also use Dropbox to deploy to our websites. So we support a wide range of deployment tools that you can use. We also provide scaling. Obviously, if you're looking for a hosting provider, you want to be sure that as your business grows and you get more users, your application can handle that; it should be able to scale and grow as your business grows. So we support auto-scaling, and we also allow you to scale proactively — in the sense that, for example, you know that you get traffic for just six months of the year.
So you can configure your site to handle that amount of traffic for those six months of the year, and for the remaining six months you can actually scale it down. This way it's more cost-effective for you. Now, it's Microsoft, so you should know it's secure and reliable. We really care about customers' data privacy, so we've taken all the effort needed to make the platform as secure as possible and prevent attacks on the platform as well as on customer data. Here you can see an image of how the Azure Websites platform is built. It's an overview, and I can walk you through the workflow when a request comes into our platform. When a client makes a request, it goes to the load balancer. The load balancer is our front end — what the end user actually hits. The load balancer uses a module called Application Request Routing. Application Request Routing is simply a router that figures out where to send the request — it finds the server that's actually running the site. So it goes to the runtime database, makes a call, gets all the information — which machine, where the web server is — and sends the request to the web server. The web server processes the request with the help of the storage controller, which helps the web server talk to the file server. The web server and file server are not together; they are two separate components. The storage controller manages those tasks — any input/output operations. Once the request is processed, the response is sent back to the client. So that's the workflow for any request that hits an Azure website. We have a couple of other components which I have not included in this diagram: the deployment servers and the controllers.
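To make that request flow concrete, here is a rough sketch in Python. All the names here (the runtime table, the worker names) are made up for illustration; this is just the shape of the lookup, not Azure's actual implementation.

```python
# Toy model of the Azure Websites request flow described above:
# load balancer -> ARR lookup against the runtime database -> web server.
# The data and names are illustrative, not the real Azure internals.

runtime_db = {
    # site hostname -> web servers currently running that site
    "mysite.azurewebsites.net": ["worker-01", "worker-07"],
}

def route_request(host):
    """Return the web server that should handle a request for `host`."""
    workers = runtime_db.get(host)
    if not workers:
        return None  # unknown site: nothing to route to
    # ARR picks one of the workers running the site (first one, for the sketch)
    return workers[0]
```

The real router also balances across workers and caches lookups, but the core idea is the same: a request is just a hostname-to-worker lookup followed by a forward.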
A deployment server is the component that actually manages deployment. Say you're using Git. In order to use Git to deploy the site content and maintain continuous integration, you have to have some component to manage that, and that's the deployment server's responsibility. If, for example, a deployment fails for some reason, it rolls back all your changes. This way, whatever site was running before is still running, and the end user doesn't see a broken site because the deployment failed halfway through. The controllers manage the service end of it — for example, billing and quota enforcement. It depends on which kind of plan you're using; we have various plans, and if you're using, say, shared hosting, we have quotas on how much CPU you can use. So that's the controller's responsibility: figure out how much CPU a site is using, and if it's exceeding the quota, stop or suspend the site and make sure the customer upgrades. Now, a couple of terms you'll have heard with cloud computing are high availability and high performance. High availability is nothing but the uptime of your site over a period of time. Microsoft supports a 99.9% SLA, which doesn't mean that you'll never have downtime. Your site will go down. It could be due to various reasons: a natural disaster, the entire data center going down, a service outage — it could be anything. Basically, you should see around 10 minutes of downtime a week with an SLA of 99.9%. If someone's offering you 99.99%, you'd probably see one to two minutes. So, even though Microsoft Azure is giving a 99.9% SLA, how do I get the most out of the platform?
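The downtime numbers above fall straight out of the SLA percentage. A quick sketch of the arithmetic, with a week as the default period:

```python
# Downtime math behind the SLA figures mentioned above.

def allowed_downtime_minutes(sla_percent, period_minutes=7 * 24 * 60):
    """Minutes of downtime per period permitted by an availability SLA."""
    return period_minutes * (1 - sla_percent / 100)

# 99.9% over a week allows about 10 minutes of downtime;
# 99.99% allows about 1 minute.
```

So "three nines" is roughly 10 minutes a week, and each extra nine divides the allowance by ten.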
How do I make sure that my site is available as much as possible, and make it more resilient? So that even if there's a service outage, my end users are still able to access my site — that's what high availability here means. The most important thing for high availability is to make sure there's no single point of failure. As an example of a single point of failure, take your Drupal site itself. A Drupal site is highly dependent on its database. If your database goes down for some reason — maybe the content got corrupted, something happened to your database and it's not available — your entire site goes down and your end users can't reach your site. None of the content is visible. The way to tackle that problem is redundancy. Redundancy is nothing but replication. You have your database in one data center, you replicate it into another data center, and you make sure your application is optimized and written well enough that if it notices one database is not able to serve the content — it can't connect to one of the databases — it can switch over to the second database. This reduces your downtime. So even though Microsoft says you may get 10 minutes of downtime a week, if you set it up and configure it correctly, you can actually get a four-nines SLA. Even though the service provider doesn't promise you that, if you set the system up correctly and make your application as resilient as possible, you can get the most out of the platform. High performance is obviously faster response times. You want to make sure users can load the pages quickly, so they're hooked on the site and stay on it, and don't move away because a page takes 10 or 15 seconds to load. Some ways to get there are caching and, obviously, scalability.
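The database switch-over just described can be sketched as a small helper. This is illustrative Python, not Drupal's actual API; the `connect` callable stands in for whatever driver call your application really makes (mysqli/PDO in Drupal's case).

```python
# Sketch of application-level database failover: try the primary, fall
# back to the replica. `connect` is a stand-in for a real driver call.

def get_connection(hosts, connect):
    """Try each database host in order; return the first that answers."""
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as exc:
            last_error = exc  # this host is unreachable; try the next one
    # every replica failed: surface the last error to the caller
    raise last_error
```

Called as `get_connection(["db-west", "db-east"], connect)`, the application keeps serving from the second data center when the first database is unreachable, which is exactly what cuts the downtime.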
With caching, everyone is aware that Drupal runs really fast if you use some sort of cache; without a cache it can be slow, because if you're using a ton of modules it takes a lot of time to load all the modules when the page loads, and that can impact how long the user actually stays on your page. So you can use various caching servers — you can set up memcached, Varnish, whichever server you prefer — on our platform. Scalability: say, for example, today my site is a small site and I get maybe 100 requests per second, or 100 users. If I get 100 users today, I can set up the configuration in a way that can manage 100 users. Then say there was a big launch and I started getting 500 users; obviously, your site needs to be able to manage that. So we support various techniques you can use to scale and grow your site as the user traffic grows. Now I'll dig a little deeper into Drupal and how we can set it up for Azure. With Azure Websites, you can spin up a simple Drupal website running on a single machine and talking to a single database. A user comes, he can access the site, and this will serve up hundreds of requests per second. That's fine. But here's the biggest problem with this design: say, for example, it's Black Friday and you're running an e-commerce application. It's critical that your application is available on Black Friday so your users can come and purchase — it's critical to your business. There's a service outage, the database is down, and no one can access your site. You get a phone call in the middle of the night saying the site is down, and then you have to call the service provider and figure out what happened. Sometimes the issue is resolved in a few seconds. Sometimes it takes hours. Sometimes it can take days.
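The caching win described above is the classic cache-aside pattern: pay the full bootstrap cost only on a miss. A minimal sketch, with a plain dict standing in for a real memcached or Varnish tier:

```python
# Cache-aside sketch: serve from cache when possible, build only on a miss.
# The dict stands in for memcached/Varnish; purely illustrative.

cache = {}

def render_page(path, build_page):
    """Serve `path` from cache, running the slow build only on a miss."""
    if path in cache:
        return cache[path]    # fast path: no module bootstrap needed
    page = build_page(path)   # slow path: full Drupal bootstrap and render
    cache[path] = page
    return page
```

A real setup also needs invalidation when content changes, but the response-time argument in the talk is exactly this: after the first request, the expensive build never runs again.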
So how do I avoid that whole process and make sure my system can recover itself, irrespective of what happens at the service provider level? Some of the issues you could see are service outages, DDoS attacks, and other malicious attacks. Azure actually takes care of malicious attacks and DDoS attacks. A couple of things you could do for DDoS attacks: if you enable auto-scaling, sometimes auto-scaling handles it, but we also have an internal product running behind the scenes at the platform level that checks for these kinds of issues and restricts the offending IPs when it identifies something as a possible DDoS or malicious attack. So that's something the platform itself takes care of for you. Service outages, again, we can design the system to avoid — that's eventually the goal. So here is a sample architecture. You can tweak it in different ways; this is an easy one to set up. What I did here was set up a replica of my website in two regions, or two data centers. Each website is not running on a single machine — if it were, and that machine went down for some reason, no one could access your site — so you run your site in a single data center on multiple machines. In each region, you basically have a specific database and a cache server attached to it. That's how you would configure one region, one data center, for your Drupal site. Then replicate the same thing in a second region. Also, we give a limited file storage size — it's 10 gig, which is usually good enough. But if you have a site that is media-heavy — lots of images, lots of videos — it's good to store those on Azure Storage. You can store all of them on Azure Storage and add a CDN on top. Azure provides a CDN as well, so you can add a CDN on top of it that caches the content.
And you can use a global traffic manager like Azure Traffic Manager, which routes the traffic. When an end user comes to your site, he actually hits the traffic manager first. The traffic manager figures out, okay, this person is sending me a request from Japan; let me find the data center that is closest to Japan and serve the request from there. So the end user sees a much faster response time, because he's getting the response from the closest possible data center. Now take the same Black Friday scenario I was talking about, and say there was a catastrophe in region one — everything is down: your website, your database, your server. In this case, Azure Traffic Manager will identify that this region is not serving up content, something's really wrong, and route all the traffic to region two. So your system is actually recovering itself. It takes three to five minutes for the traffic manager to flip, which is much better than having a service outage and waiting for the service provider to resolve the issue, which can sometimes take hours. [Audience question: what does CDN mean?] Oh, a CDN is a content delivery network. A content delivery network is nothing but a cache for static content — it could be images, it could be static HTML files. Because the content is cached, it's served up faster, unless and until the content changes, so it's good to use for content that rarely changes. The CDN also manages failover and things like that. For example, here you can see Azure Storage is a single component. So what if Azure Storage is down? The user will still see the content, because the CDN is caching it and sending it as part of the response rather than waiting for Azure Storage to come up again. So even if the storage behind the CDN goes down, the CDN still covers for it.
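The Traffic Manager behavior just described — prefer the nearest region, fail over to the next healthy one — boils down to a short decision function. The region names and the health map below are made up for illustration; the real service does this with DNS and endpoint health probes.

```python
# Sketch of Traffic Manager routing: pick the closest region that is
# still healthy, falling back down the list on an outage.

def pick_region(regions_by_distance, healthy):
    """Return the closest region that is still serving traffic."""
    for region in regions_by_distance:
        if healthy.get(region):
            return region
    return None  # total outage in every region

# A user in Japan with region one down gets routed to region two:
# pick_region(["japan-east", "us-west"], {"japan-east": False, "us-west": True})
```

The three-to-five-minute flip time mentioned above comes from how often those health probes run and how long DNS answers are cached, not from the decision logic itself.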
Any questions, anyone? So I'll give you a quick demo of Microsoft Azure. We have a free trial — it's $200 of resources — and you can play around with the platform. The portal is manage.windowsazure.com. As you can see in the list here, we have multiple services, starting with Websites. Websites is the managed solution: it gives you the entire stack to host your website, and all you need is the application and your data. That's it. With Virtual Machines, you can set up your own VM, you can set up your own cluster, and this way you can manage and control everything yourself. We support Linux and Windows, all flavors, or you can bring your own image. There are Mobile Services and Cloud Services. Cloud Services is a fancy name for load balancing across multiple VMs: you create a cluster, the end user goes to your load-balanced URL, and it figures out where to route the user's request. Then there are databases and a bunch of other services. So here I have a bunch of Drupal sites already created. I created a Drupal site in West US and a Drupal site in East US — similar to the architecture I just described, creating the site in two data centers. It's as simple as going to the New button, Website, Quick Create, and you can create a website here; it takes a few seconds. If you're interested in playing around with the platform, I would recommend going to the gallery and choosing one of the templates we already have — a prepackaged Drupal — and setting it up from there. Now let's dig a little deeper. This is my site that's already running in West US. You can see this is an image page, and this image is actually stored on Azure Storage and not on the file system. Our dashboard is fairly easy to use — let's give it a little while to load.
So our dashboard basically gives you a view of all the operations happening within your site. It gives you the monitoring data you need: CPU time, bandwidth, requests per second, memory usage. You can also add a whole bunch of other metrics once the page loads. I'm not sure why the chart isn't showing up, but if you click on Add Metrics, you can see all the metrics you can add and monitor. Then, if you ever want to edit your site online, we have something called Edit in Visual Studio. This pops up an online editor where you can make tweaks, and we support Git, so you can set up your source control. Here I have already set up a Git repository for my site, so I can edit anything, track all my commits, and push back to my GitHub repository. For example, here you can add whatever you want and it automatically saves — so you need to be really careful if you're editing your production site, which I would recommend never to do. Okay. Then you can add a bunch of additional services to your website. We have third-party services that you can use. We don't support SMTP by default, so you need an email provider like SendGrid; we have SendGrid available as part of the add-ons. It's fairly cheap — less than $10 a month for your email. You also have deployment slots. A deployment slot is a fancy name for staging and dev sites. You can create a new slot — a new slot is nothing but a separate website which you can treat as a staging site. You can create up to five slots, and you can swap between your staging site and your production site, or between your dev site and your production site. The monitoring chart is still not showing up — I think it's because of the resolution. As for deployment history, since I've already set up Git, it tracks all the deployments I've made to my site.
It tells me if the deployments passed or failed, and you can look through the entire history from the dashboard itself. WebJobs: I'm not sure how many of you use Drupal cron. If you're running Drupal cron as part of your application itself, then — depending on what kind of cron job you're running — if it's intensive, it can really slow down your site. And if a user is trying to access the site at the time your cron is running, he's going to see a really long response time; he's going to feel the impact of it. The way to avoid that is WebJobs. WebJobs gives you the ability to create background processes. We support PHP, any EXE you can upload, PowerShell, even C#, so you can create your tools in any framework you're comfortable with. They're basically isolated from your application layer: your Drupal site runs, you disable the cron job on your Drupal site, and you upload your cron.php as a WebJob. Here I have a simple PHP cron job running. I've scheduled it to run at 12 a.m. every day; you can run continuous jobs, on-demand jobs, or scheduled jobs. All it's doing is optimizing a bunch of tables. It's a simple task, but you can create arbitrarily complex tasks that run as WebJobs. Configure basically allows you to change PHP configurations and switch from one version to another. We maintain all the security fixes for PHP, so that's something you should never have to worry about — the platform takes care of all the updates, and we work with the PHP community to make sure all the fixes make it into the platform. As I said, we have Python, and we have Java as well. We support WebSockets. There's the setting to enable editing online. We have SSL, you can have custom domains, and we have site diagnostics.
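The point of the WebJob above is that the maintenance work runs on a schedule, outside any user request. A toy sketch of that separation — the schedule check and a placeholder for the table-optimization work; the real job in the demo is PHP, and these function names are invented:

```python
# Sketch of out-of-band maintenance, like the scheduled WebJob above.
# A scheduler calls should_run(); user requests never execute this code.

import datetime

def should_run(now, hour=0):
    """True once a day at the scheduled hour (12 a.m. by default)."""
    return now.hour == hour and now.minute == 0

def optimize_tables(tables):
    """Placeholder for the cron work: statements the job would run."""
    return ["OPTIMIZE TABLE " + t for t in tables]
```

Because this runs in the WebJob sandbox rather than inside a page request, a slow `OPTIMIZE TABLE` never adds seconds to a visitor's response time.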
Site diagnostics gives you all the web server logging, your PHP logging — all the logs you'd get at the server level. Since we're running IIS in the background, it gives you all the IIS-based logging. If you need to debug, you have remote debugging, so you can dig deeper and investigate what happened. Monitoring — this is like Pingdom. You just set up the URL of your site and select which locations you want to run the checks from; it'll run these tests regularly and give you the response times. So you don't really have to go and choose Pingdom or a third-party service; you can do the same kind of monitoring using our endpoint monitoring itself. App settings are basically exposed as environment variables. Maybe your application needs to use them, or maybe you need to make changes at the server level — you can use app settings to do that. Connection strings allow you to link your site to a database. We recommend using this because we also offer backup and restore, and the backup feature needs to know which database to copy from. It could be a database on a VM, it could be a database from the database service, but it's always good to have it here, so it's easy to work with some of the other features. Scale is where our hosting plans come into the picture. We have various plans: Free, Shared, Basic, and Standard. Free is free — you're not going to be charged for it, but it's really slow; it's more for a development kind of site. Shared is a shared hosting platform, similar to any other: you share the resources with all the other customers. The caveat of shared hosting is that if someone else is using most of the CPU, your site suffers — so knowing that, you should figure out whether you really want to use Shared. Basic, again, is a dedicated machine.
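Since app settings surface as environment variables, the application picks them up the usual way. A hedged sketch of reading a database configuration from the environment — the variable names here are invented for illustration (Azure prefixes real connection strings, so check the actual names your site receives):

```python
# App settings show up as environment variables; read them with defaults
# so the same code runs locally. Variable names are illustrative only.

import os

def db_config(env=None):
    """Build a database config from environment variables."""
    env = os.environ if env is None else env
    return {
        "host": env.get("DB_HOST", "localhost"),
        "name": env.get("DB_NAME", "drupal"),
        "user": env.get("DB_USER", "root"),
    }
```

Keeping credentials in app settings rather than in the repository also means a deployment slot swap can carry its own database settings, which matters for the staging/production swap described above.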
It's slightly lower in cost, and you can select from any of the sizes we offer. We offer three instance sizes: small, medium, and large. For Drupal, I always recommend trying medium or large, because Drupal is really heavy. The instance count is how many machines you want to run on; for Basic, you can run your site on up to three machines. All three VMs actually share the same file structure, so you don't have to worry about the content being out of sync, because all of them talk to the same file system. Standard is our highest plan, and it basically gives you more features: you have autoscale, which Basic doesn't, and here you can scale all the way up to 10 instances. You scale based on CPU usage — you specify your trigger range, say 60 to 80%, and it spins up a new VM. You don't really have to do anything: it spins up a new VM based on the range you specified, nothing has to be changed in the VM, and the platform itself starts routing the traffic and balancing the load across all the VMs. It really helps, for example, when you're launching something — a new release or a big campaign — and you expect maybe 1,000 users but eventually get 5,000. With autoscale, everything is managed for you and you don't really have to worry. Now, every time you create a website, you get something called a Kudu website. A Kudu website is our developer portal for your website. The end user doesn't see it, but the developer can, because it's authenticated — you need your Azure account. You can check all your environment information and all your process information: what processes are running. Here you can see there are two PHP processes running.
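The CPU-range autoscale rule just described is simple enough to write down. The thresholds and the 10-instance cap mirror the talk (60-80% trigger range on Standard), but the function itself is a toy decision rule, not the actual Azure autoscale engine:

```python
# Toy version of the CPU-range autoscale rule described above.

def scale_decision(cpu_percent, instances, low=60, high=80, max_instances=10):
    """Return the new instance count given current CPU usage."""
    if cpu_percent > high and instances < max_instances:
        return instances + 1   # too hot: spin up a new VM
    if cpu_percent < low and instances > 1:
        return instances - 1   # idle: scale back down to save cost
    return instances           # within the target range: do nothing
```

The platform evaluates something like this periodically against averaged CPU metrics, so a brief spike doesn't immediately add a VM; the key property is that both scale-up and scale-down are automatic.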
You can also dig deeper and find out what modules, handles, and threads are there. If one of the processes hangs, you can go and kill it. When you kill a process, the platform spins up a new one, so you don't have to worry that killing the process will break everything — the platform is intelligent enough to notice there are fewer processes and more traffic, and it just spins up more processes to handle the traffic. Under Tools, we have diagnostic dumps, which give you all the diagnostic information you need, and memory dumps. Webhooks, again, are for deployments: you can see whether a deployment failed or passed, and it gives you alerts through SMS or email, with the entire stack trace of what happened and why it failed, and it rolls everything back. Site extensions are a way for you to extend this entire development experience — you can create your own site extension. I have phpMyAdmin, which I've already installed; this gives me access to my database from within the portal itself. Sometimes you see some crazy stuff happening and you really want to dig a little deeper into the database — you can use this to access your database and run your SQL queries right from here. I also have Site Replicator. Since I'm running my site in two regions, I uploaded my publish settings here, so it keeps the content in sync between the two sites, and you can set skip rules. For example, since both sites use different databases, I didn't want to copy settings.php. [Audience question about the database.] Yes, I'm using a MySQL service that we offer with our platform, ClearDB; I'm using one of the dedicated clusters, and they manage database replication and everything. You can build your own custom tools — it depends on how you're configuring your site. It's all real time.
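The content sync with skip rules can be sketched in a few lines. This is a toy model of the behavior being described, not the actual Site Replicator extension; the skip path mirrors the settings.php exclusion mentioned above:

```python
# Toy model of content sync with skip rules, like the Site Replicator
# behavior described above: copy changed files, but honor exclusions.

def replicate(source, target, skip=("sites/default/settings.php",)):
    """Copy changed entries from `source` into `target`, honoring skip rules."""
    for path, content in source.items():
        if path in skip:
            continue  # e.g. each region keeps its own database settings
        if target.get(path) != content:
            target[path] = content  # push only what actually changed
    return target
```

Because only changed paths are pushed and skipped paths are left alone, each region can share its files while keeping region-specific configuration local.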
It's event-driven: as soon as it notices there was a change to one database, the change is instantly replicated to the second. It takes a few seconds. That's about it. If you have any questions, you can meet me at the Microsoft booth and I'll be more than happy to answer them. Thank you. [Second speaker:] We're having a little screen resolution issue here. Okay, this is a little backwards in some ways, but my talk is more of an introduction to clouds — a high-level view of what they are and what's coming. Can you hear me now? Better? Okay, sorry about that. So this talk is more of an introduction: at a high level, what are clouds, and what are some of the differences between them? She just did an excellent job of going into the details and all the particulars of the Azure platform. I won't lie — that was actually the one I knew the least about. We work more with Rackspace and AWS; those are the ones we work with and do the most. This is going to be sort of high level. As I said, I like to give people a warning of what's coming so they know. So, a quick agenda: define the cloud, talk about what some of the options are out there, and then how do I decide what I need to do? So, clouds — what are they? People have lots of different expectations of a cloud, of what it's supposed to do: automatic scaling, never going down, high availability, disaster recovery, alerting, monitoring, backups, all those things. It's almost as if they expect it to know what you intended to do — to have a crystal ball. And as we often say, my crystal ball is broken; I can't read your mind. So you have to be able to tell the cloud what you want. Again, it's a hard, nebulous term with a lot of different definitions, so I always go to Wikipedia — it's a good standard baseline for what it is.
And my favorite part of the Wikipedia entry is that it's a marketing term, according to them. It's not a technology, it's not an application — it's a marketing term to define a space. When I first started doing PowerPoints, you always had a little cloud that everything went out to, and you labeled it "the internet" — it went out to the cloud. So people often just think, oh, it goes out to the cloud, into the ether, into cyberspace. And what does that really mean? I like to define the cloud as using someone else's computer. I'm putting all my stuff on someone else's computer, and then I have to understand the implications of that. People often talk about cyberspace, this ethereal thing — where is it? It lives in data centers like this. This is a Creative Commons photo off the web, so I don't know which data center it is, but it's a very common sight: long rows of servers, Tier 4 redundancy, all that sort of thing. This is where your data lives. When she was talking about multiple regions, multiple data centers — East Coast US, Ashburn, and then the West Coast one, I think in Fremont — you have different locations, but when you put your stuff on the cloud, in essence you're putting it on someone else's computer. So you need to think about the security implications and what all that means. There are a number of public clouds out there, all with variations. Amazon Web Services is the one I think of the most. It's usually the most popular; they sort of started it, and they've done a great job — the people at Amazon are very smart, don't get me wrong. They have a size and a scale, very similar to Microsoft and Azure, that is different from the others, and you have to understand the implications of that. You can absolutely set up a five-nines high-availability site in a cloud, but it does not come automatically. It takes a lot of work.
Netflix is one of the most famous cloud users I know of; they're on Amazon Web Services. Their monitoring and alerting operates at 200 milliseconds — they have watchers, and if something goes wrong, within 200 milliseconds they start adjusting or moving. But what they built on Amazon wouldn't work on Azure or on Rackspace's cloud. Each of these clouds has a lot of capabilities, but they're unique, they're different, and you can't just port between them. It's getting better, but it's not there yet. I know Google has a cloud; I don't know anyone who's used it, so I don't really know anything about it. The HPs, the IBMs, the Oracles — they're more private, enterprise-level clouds. With Amazon, Rackspace, Microsoft, and Google, you can just go sign up with your credit card; I don't know if you can do that with the HPs and the IBMs. When we've worked on those, it's been under contracts, in environments that were more private clouds that we were managing. But as these public clouds came up, you ended up with this new "as a service" thing, and it became the popular term. You had companies come up — Salesforce selling software as a service, and others — at different levels. You could do infrastructure as a service; you could do platform as a service, which is much more what the previous talk was about; you could do software as a service. Each of those has different implications for what's being provided. There are also things like storage — Amazon has S3; I don't know the name of the Microsoft storage offering, but I assume they have one — so you can pick and choose what you need and how you need to do it. At BlackMesh, because we're infrastructure, platform, and in some ways software as a service, I often say we're "service as a service": depends on what you need, we'll make it go — a one-stop shop.
That's my only real BlackMesh pitch here. As a result of all these services, people have a lot of capabilities and a lot of choices in how they want to interact and set their applications up. I'm going to take a quick step back into history here and talk about virtualization technologies. They've been around for a while. The biggest ones I know are VMware, which started about 15 years ago, around 2001; Xen — it's pronounced like it starts with a Z, but they spell it with an X for some reason; and then KVM, which got put into the kernel, I think in 2006 or 2007. And then OpenStack, which is the one we've been using a lot recently, is a cloud platform that was built by Rackspace and NASA, and now I believe HP is also one of the biggest contributors to it. These virtualization technologies have gotten better and let you be much more robust and controlled in what you're able to put on a machine and how you're able to break things down with a virtualized OS. And it isn't just one app per OS — you can put multiple apps on each of these VMs, each of these OSs — but it's abstracting the hardware layer for you. They've gotten much better at that, which has led to the proliferation of private clouds. This is where people can install a cloud on their own network, on their own infrastructure. You control the hypervisors, you control the network, you control the storage and the I/O. That may be a good thing for you, or a bad thing; it really depends on your needs and how you want to make things work. When you put things on public clouds, you have no insight into what's happening underneath. You don't know how many other sites are on the hypervisors. You don't know how much I/O is going back and forth, or how much network traffic.
Now, you'd like to think that everything has plenty of room and plenty of growth. That's not necessarily always the case, unfortunately. So again, that sort of always-available scalability is much more available in public clouds because they are just larger, but even they have capacities. They're not infinite. There is a capacity at some point in time. On a side note, one of the things that always amused me, particularly in shared hosting, is you'd see things like unlimited databases or unlimited bandwidth or unlimited file storage. Never trust anyone that's giving you unlimited of a finite thing. The unlimited databases had fine print: maximum of five tables. That's not unlimited. So recognize what those offers really are. The public cloud has some of those. Now, the flip side with private clouds: they tend to be smaller. They tend to have more finite resources. You're building things out, and you're able to shift your utilization to meet demand, but while a private cloud gives you a lot more control, it's not necessarily going to give you the size of an Amazon or a Microsoft. And then the next thing that has come up is hybrid clouds. Hybrid clouds are a combination of both. You may have specific data that you want to keep on your own site. You don't want it to leave, in case something happens, something is compromised, there's an issue either on your server or the hypervisor. So you keep that data there, but maybe your public-facing website is on the cloud, calling back in, while the credit card numbers stay on your own infrastructure, because you don't want those out on the cloud. Maybe you have signups and marketing things feeding your APIs as you're doing data collection, but you don't want that information out on the cloud, so you keep it in-house. So it becomes a hybrid cloud. You can do it a couple of different ways. You can also do it with multiple cloud providers. You do not have to use just one.
If you set up your private cloud or your infrastructure, you can connect to multiple providers. That's a good way to do redundancy and not have all your eggs in one basket. One of the things about Amazon: they're huge, and when they fail, they fail spectacularly. It's a three-day, five-day outage at times, as we've seen. One of them was literally a human error: they had an issue, a mistake made on one router, and that whole Northeast section went offline. The failure started replicating elsewhere, and once it started they couldn't stop it, because if they had stopped it, everything would have been corrupted and they wouldn't have had a recovery point, so they had to let it run. I've heard of similar things on Rackspace. Microsoft's probably never had one, but I'm not familiar with those examples. When one of those happens, there's no one to call, there's no one there, so you need to be able to plan and take care of what you have and what you're doing. So how do you start making some of those decisions? How much in-house knowledge do you have? Do I know what I'm doing? If you're setting up a simple website, a brochure site, you saw how simple the interfaces are. It's point and click; you can set up a very simple site. Same on Amazon and Rackspace. But you need to know more if you want to take it to that next level, because as I mentioned earlier, clouds don't have crystal balls. They don't know when you want to scale. And on a side note, I've also heard horror stories of automatic scaling kicking in during a DDoS attack or something else along those lines, and it cost someone about $15,000 to $20,000 on a monthly bill when they were usually at about 150 bucks. So you need to make sure you're monitoring, you're alerting, you're staying on top of things so you don't get hit by that sort of thing. But as I mentioned earlier, what are your security concerns?
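The runaway-bill story above is exactly what a simple spend alert guards against. As a minimal sketch of the idea, not any provider's actual billing API: the numbers, the `check_spend` helper, and the `alert` stub are all hypothetical, and in practice the month-to-date figure would come from your cloud provider's billing service.

```python
def check_spend(month_to_date: float, typical_monthly: float,
                ratio_alert: float = 3.0) -> bool:
    """Return True if month-to-date spend looks anomalous.

    Flags when spend has already passed `ratio_alert` times the
    typical full-month bill, e.g. a $150/month account that has
    somehow burned thousands of dollars mid-month.
    """
    return month_to_date > typical_monthly * ratio_alert

def alert(message: str) -> None:
    # Stand-in for a real email/SMS/pager integration.
    print(f"ALERT: {message}")

# Example: a normal month versus the surprise bill described above.
typical = 150.0
for spend in (120.0, 15000.0):
    if check_spend(spend, typical):
        alert(f"Spend ${spend:.2f} exceeds 3x typical ${typical:.2f}")
```

The point is not the arithmetic but the habit: an automated check that runs daily catches a scaling runaway in hours instead of at the end of the billing cycle.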
As I said when I defined the cloud, it is someone else's computer; it is someone else's issue, and they have to deal with it. That's the beauty of TCP and networking: point it at a name and off it goes. But as a result of that, you're on someone else's computer. What are your security concerns? Does it matter if your data gets out? Within the government, they have very strict regulatory compliance: HIPAA, FISMA, FedRAMP, and then in the e-commerce world, PCI. You need to ask about those sorts of things. Is your cloud certified for it? Has it had the security audits? How is it set up? How will it do updates? What happens if I need to make an update? How much control do I have over those sorts of things? These are the questions that you have to walk through and talk through. What are my performance needs? Again, you don't have as much control over performance. The biggest issue we've seen is often IO. It's not CPU, it's not memory, because you can scale those. It's IO. You just don't know what's happening underneath, so you have to watch for that so you don't have issues. You also have performance needs if you're doing disaster recovery across multiple regions. How is the replication set up? Microsoft has a service, Amazon has services, or you can do it as pure MySQL replication. Do I need high availability, redundancy, or disaster recovery? I treat high availability and redundancy as not the same thing. High availability is a result of having redundancy, but having redundancy does not give you high availability. Take the earlier example with multiple databases: I have one master database pushing to a slave database. That protects me if something goes wrong physically with the hardware on the master side, but it may or may not be automated to switch over to the slave database.
Sometimes I've seen people run their slave databases about 30 minutes behind, because if a corruption happens on the master, you can stop replication, flip over, and catch that corruption before it propagates, with the right alerting and monitoring in place. All very feasible and doable tasks, but you need to know that you need to do them. It's sort of the Rumsfeld quote: the known unknowns and the unknown unknowns. So you need to know what those are and how to work with them. And at the end of the day, what is my budget? One of the things everyone says about the cloud is that it's cheaper. It can be; it doesn't have to be. It can get very expensive very fast when you start putting up multiple sites in multiple regions, which is why the big companies tend to build private clouds themselves as opposed to using the public ones. It's all a matter of scale and what's important to you. So understand what your budget is, but also, and I hate this phrase, the total cost of ownership. Understand not only what's my budget for my IT, but what's my staffing, what's my quality of life? Am I going to be the one dealing with this at four in the morning? Have I set up redundancy? How have I done that? All of those things come into play. And then we get to software. How does my software fit? Is it a simple website? Is it single server, multi-server? Do I have APIs going to other things? Am I doing backups? Am I integrating with other places? All of those things matter. When Amazon first came out, and it's a little better now, and RightScale has improved things, I remember talking with one of my developer friends. He said, yeah, I just had to write two applications.
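The deliberately lagged slave described above can be configured directly in MySQL. A minimal sketch, assuming MySQL 5.6 or later, where delayed replication via `MASTER_DELAY` is available; the host name and user here are placeholders:

```sql
-- On the slave: stop replication before changing settings.
STOP SLAVE;

-- Lag intentionally 30 minutes (1800 seconds) behind the master,
-- leaving a window to halt replication before a corruption on the
-- master reaches the slave.
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',   -- placeholder host
    MASTER_USER = 'repl',                 -- placeholder user
    MASTER_DELAY = 1800;

START SLAVE;

-- Check how far behind the slave is and whether the delay is active.
SHOW SLAVE STATUS\G
```

On older MySQL versions the same effect was typically achieved with external tools rather than a server setting, which is part of the speaker's point: you need to know the technique exists before you can plan for it.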
I had to write my PHP Drupal application, and then I had to write an application to handle the cloud and make sure things were monitored and staying up, because on Amazon, and I'm picking on them a lot because they're the one we work with the most, things just disappear sometimes. Instances vanish; maybe the hypervisor went down and the instance just never came back. You can set it up to reboot automatically. Sometimes it works, sometimes it doesn't. If your database server crashes but your web server doesn't, you can get into some hung state. So being able to build that sort of fault tolerance into your application is very important. What happens when there's an error? What happens when one piece goes away? How do you handle that? How do you make that work? Those are the kinds of questions that you need to think about as you're going through this. The other big thing on software is that just because it worked in dev or on your local machine does not mean it's going to perform the same way on the cloud. You need to do testing, application and IO benchmarks, those sorts of things, to know what is going to work and what is not going to work for you. And then the last question, and I set this at 15 to 20 minutes, so I'm right on time, and then we're going to have questions, because this is meant to be more of an intro: what is next? We've been talking about an OS plus an app. What's next is apps by themselves. You can spin up containers, which is what OpenShift or Docker does: spin up a container of just what you need, just your MySQL application. Because when you're replicating a whole OS, you're replicating everything: everything in the kernel, everything for the file system, print drivers, all those sorts of things. MySQL is never going to need that, PHP is never going to need that, Apache is probably never going to need that. So being able to isolate and lock down, virtualization is getting better.
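The fault tolerance the speaker is describing, coping with a dependency that briefly disappears instead of hanging, often comes down to bounded retries with backoff. A minimal sketch in Python, not from the talk: the `flaky_query` function is a hypothetical stand-in for a database call that fails while an instance is rebooting.

```python
import time

def with_retries(operation, attempts=3, base_delay=0.1):
    """Run `operation`, retrying on failure with exponential backoff.

    Instead of hanging forever when a dependency (say, the database
    instance) disappears, fail fast after a bounded number of tries
    so the application can degrade gracefully.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # Out of retries: surface the error to the caller.
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, ...

# Hypothetical flaky dependency: fails twice, then recovers,
# mimicking an instance that briefly went away and came back.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("database instance unreachable")
    return "SELECT ok"

result = with_retries(flaky_query)
```

The same shape applies whether the wrapper lives in your PHP application or in the separate "cloud babysitter" application the developer in the anecdote had to write.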
I mean, it was a great leap when virtualization happened: all of a sudden, you could take one server with very low utilization, put 10 things on it, not have them interact, and get a lot more bang for your buck out of that server. That trend is continuing. You can get finer and finer-grained resources that you have control over, spin them up or down as you need, and get a lot more bang for your buck out of the hardware. I know with what we've been doing on the OpenShift side, it's just now coming out, so you either need to have someone who really knows it or wait a year or so for it to mature a little bit more. But that is what is coming next. So there are things like Docker and containers that will give you a lot more control, but they're going to come with their own sets of headaches: making sure they're configured, they're optimized, everyone knows how to work together, that sort of thing. So that is my high-level overview of clouds and what is next. And then hopefully you have some questions. Anybody? Going once, going twice, anybody? So everyone perfectly understands all of that, clouds, no questions. I did this talk because at camps where I've done similar sorts of things, people often come up and say, so what exactly is a cloud, and what do you do with it, and how do you do it? And I've had a couple of people ask here, though not nearly as many as at DrupalCamps, I will admit. So hopefully I was able to answer some of those questions about the different options that are out there and what some of their different strengths are. If no one has any questions here, I won't keep you. If you have any other questions, if you think of anything else, please stop by. I'll be at the Black Mesh booth all day tomorrow, and the rest of today as well. And thank you for coming. I appreciate it. Thank you. Here's your drink.