That is time, and here we go. Hello and welcome. My name is Shannon Kemp, and I'm the Chief Digital Manager of DataVersity. We'd like to thank you for joining this month's webinar, Managing the Transition to Hybrid Cloud. It's part of the monthly webinar series sponsored by IDERA. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A in the bottom right-hand corner of your screen, or if you'd like to tweet, we encourage you to share highlights or questions via Twitter using hashtag DataVersity. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of this session, and additional information requested throughout the webinar. Now let me introduce our speaker for today. Rob is currently the Director of the SQL Product Management Group at IDERA in Austin, Texas, driving the definition and production of IDERA's industry-leading SQL Server management, optimization, and DBA productivity tools. He has over 14 years of experience as a development director and system architect in the SQL Server development organization, and previously was CTO at Pervasive Software. And with that, I will give the floor to Rob to get today's webinar started. Hello and welcome. Thank you very much, and thank you for the introduction. My name is Rob Reinauer. I lead the SQL Product Management Team at IDERA. As mentioned, I was a development director and system architect for 14 years at Microsoft. I managed the SQL Server Storage Engine organization for about seven years and was involved in the initial designs of Azure DW and the deployment of our data warehousing infrastructure into the cloud.
So today, I'm going to briefly discuss several broad areas where things work, but in some ways are different, sometimes non-intuitive, as we migrate infrastructure to the cloud, and especially as we try to link on-prem environments to cloud environments: hybrid cloud. The reality is that there are a multitude of items which need attention when moving to any new deployment environment with different storage and network behavioral characteristics, especially when the environment has an IT execution paradigm which, I think we would all agree, is still very much under development. I'll touch on that in a little more detail; for now, let me just jump into the details here and make sure I know how to drive the slides. I would guess most everybody attending this presentation has knowledge and opinions on what private clouds and public clouds are, but I found it useful to quickly review these topics just to make sure that we're all on the same page, and the same for deployment patterns and cloud environments. Obviously, this is widely available knowledge, but it's useful to establish why I'm using certain terms. You may even use different terms, but at least we'll be on the same page. There are obviously advantages and risks to utilizing cloud infrastructures, and I'm going to briefly touch both sides of that coin, because it really is a two-sided coin. I'm also going to provide an overview of sorts of cloud network mechanics, but this is in no way intended to be a detailed tutorial on building multi-cloud networks; that is a multi-hour topic in and of itself. But the terms and concepts illustrated will make it a lot easier for any of you that are looking to design such an environment, and frankly, I think, to get an appreciation of the complexities that we're dealing with. And finally, I'll mention some of the tools IDERA has in market that are designed precisely to address the data and performance risks posed by cloud deployments.
I'm pretty sure I have more material than time will allow, but I'll try to adjust as we go along, and at the very least there will be additional content online. So let's start with public cloud versus private cloud, and I'm sure this is an area people are already familiar with. Actually, this used to be pretty straightforward. Private clouds were built in data center facilities, with maybe the infrastructure outsourced to a hoster, but the enterprise controlled all aspects of the deployment and management of these resources. Tools like vCenter and System Center were used to automate and simplify the mechanics, but at the end of the day, the IT personnel were managing and securing the infrastructure. Public cloud, on the other hand, was infrastructure managed by somebody else, for instance Amazon or Microsoft. Many aspects of the infrastructure could be customized, but at the hardware level, everything is virtualized and managed externally. VM provisioning, storage management, network provisioning: all of this is managed by the cloud vendors. Now, however, this has become a little blurry. Private clouds were fully managed by the owning IT groups, but that's really changing. When a company has a bunch of cloud VMs for a distributed application, the underlying hardware is cloud-based, if you see what I mean, but the organization is managing it through the company's IT tools. So I guess at some level you could view this as a type of hybrid environment, although frankly, from the development group's and IT's point of view, it is actually a private cloud. And going the other direction, Amazon has announced the availability of RDS for VMware, and there are some variations along those lines as well. So now you will have enterprise-owned hardware in the data center and IT managing database as a service.
So actually it really manages instances of the service, but it is still blurring the lines of separation. So this was a fairly easy and straightforward topic, and frankly, there is an arms race going on with respect to cloud management APIs that's leading to all sorts of interesting manifestations, but that's another presentation. Let's now talk about the pure cloud options, and again, I think most people are familiar with this, so let's go through it quickly. Cloud VMs, the example on the left, the first piece here, offer essentially 100% compatibility with private cloud deployments. Fundamentally, you can take anything that's running in your current IT environment and put it up into a cloud VM, and it is 99.9% likely, maybe even a few more nines, to run with full compatibility. As a result, you can have SQL Agent, third-party tools, any sort of utilities that you have deployed with the SQL Server instances. Certainly, of course, multi-database queries are available. We'll talk a little later about the nature of subnets and virtual networks, which you end up having to pay a bit more attention to when you're deploying into the cloud; let's just wait on that. Managed instances, on the other hand, are really a reaction to some of the limitations of cloud databases. Arguably, I should have talked about those first and worked my way backwards, because then you'd see the compatibility issues more clearly. Microsoft discusses it as its own tier, but when you get down to it, it really is very similar to RDS; RDS and managed instances are essentially the same kind of entity. It is taking an instance: you now can have SQL Agent, multi-database queries are available, and compatibility is getting very close to on-prem. So arguably, I should have started with this first, but cloud databases are truly database as a service, and at this point, Azure SQL is the only real example. There are several different forms of the databases, as singles or as members of an elastic pool.
But each database stands alone. Implementing this paradigm certainly provides some cost benefits, and management shifts from the company to the cloud provider, but it has resulted in some significant restrictions on SQL syntax. For an example, just to pick one: the common USE database_name statement, which is often less about being required for multi-database environments and more a way to get less verbose, fully qualified SQL statements without having to write out all the details. All that code just breaks in Azure SQL, and it's something we've certainly run into many times. Also, no SQL Agent, no SQL Profiler, and XEvent creation syntax changes, so all the XEvent monitoring that you now use instead of Profiler actually has to be modified. But it is certainly way cheaper than managed instances. So now, hybrid clouds. Besides the blending of the functional stacks I mentioned earlier, which actually is creating a hybrid cloud of sorts (I hadn't really thought of it that way, but it kind of is), hybrid clouds are really the natural evolution from on-prem deployments to the cloud. Unless a company literally shifts everything overnight, there will always be a transition period where some of the applications and services have been migrated to cloud implementations while some applications and services remain in the data center. Now, an exception to this would be people that just start from scratch without existing infrastructure, but generally, from a migration point of view, you will pass through hybrid cloud patterns. The evolution will be driven by IT initiatives: imagine cost efficiencies, server retirements, new growth. But chances are, all of you at some point soon will be dealing with environments sprawling across on-prem and one, two, or more cloud infrastructures. In many ways, from a purely functional point of view, given an effective network connectivity architecture, on-prem versus cloud should be relatively transparent.
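To give a feel for the kind of migration pre-check this implies, here is a minimal sketch, not an IDERA tool or Microsoft's compatibility checker, just a hypothetical helper, that scans a T-SQL script for a few constructs that commonly break when pointed at Azure SQL Database. The pattern list is illustrative, not exhaustive.

```python
import re

# A few T-SQL constructs that commonly break in Azure SQL Database
# (single-database scope: no USE switching, no cross-database queries,
# no SQL Agent). Illustrative only, not an exhaustive rule set.
UNSUPPORTED_PATTERNS = {
    "USE <database> statement": re.compile(
        r"^\s*USE\s+\[?\w+\]?", re.IGNORECASE | re.MULTILINE),
    "three-part cross-database name": re.compile(r"\b\w+\.\w+\.\w+\b"),
    "SQL Agent procedure (msdb)": re.compile(
        r"\bmsdb\.dbo\.sp_\w+", re.IGNORECASE),
}

def azure_sql_lint(tsql):
    """Return the names of the suspicious patterns found in a script."""
    return [name for name, pat in UNSUPPORTED_PATTERNS.items()
            if pat.search(tsql)]

script = "USE Sales;\nSELECT * FROM OtherDb.dbo.Orders;"
print(azure_sql_lint(script))
```

A real compatibility assessment would of course need a proper T-SQL parser; the point is simply that these breakages are mechanical enough to catch before deployment.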
But the phrase "an effective connectivity architecture" hides a lot of complexity, and we'll talk about that in a little bit of detail; this isn't a full hybrid cloud network connectivity presentation, but we'll touch on some of the issues. Also, while server components can be relatively easily moved to cloud deployments, achieving satisfactory and expected performance and behavior will also require some work, and I'll talk about that in quite a bit of detail, especially around the performance and behavioral characteristics. So my first version of this title was "the beauty of utilizing cloud infrastructure," but on proofreading it over the next couple of days, it started sounding a bit overly dramatic. However, I drive most of the SQL-related performance evaluation and development efforts at IDERA, and have done that in lots of other companies, and I just don't have the words to express how cloud infrastructure has completely changed the playing field for dynamic evaluation and tuning of computational tasks. In previous lives, I formed and managed the performance teams at Tandem, at Pervasive Software, and at Microsoft for SQL Server 2005 through 2012. And I tell you, back in the day, performance work often got done at night, because that's when you could get access to hardware. We spent many, many late nights alone in dark, partially lit labs drinking warm coffee, just trying to get access to larger hardware configurations. The ability to spin up hundreds of VMs with specific customized images in a few hours, automate the execution of distributed workloads across all of them, and shut them down at the end of the day: "game changer" doesn't even capture it. It enables things that just weren't even possible before. You just wouldn't be able to iterate at that velocity; it would take weeks instead of days.
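The automation piece of that workflow can be as simple as a fan-out loop. Here is a bare-bones sketch under the assumption that some run_workload function, stubbed out below, knows how to start a test on one VM, whether over SSH or a cloud vendor's run-command API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_workload(vm_name):
    # Stub: a real harness would SSH to the VM or call a cloud vendor's
    # run-command API here, and return the actual test outcome.
    return vm_name, "ok"

def fan_out(vm_names, max_parallel=50):
    """Start the workload on every VM concurrently and collect results."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return dict(pool.map(run_workload, vm_names))

# Kick off a distributed run across 200 (hypothetical) VM names.
results = fan_out([f"vm-{i:03d}" for i in range(200)])
print(len(results))  # 200
```

The names and the stub are assumptions for illustration; the structural point is that once provisioning is an API call, the whole fleet-wide test cycle fits in a script you can run from a plane seat.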
And, well, I know I keep going on about this, but the contrast in just a decade, from dark labs and bad coffee to kicking off tests and evaluating results through Wi-Fi while flying across the country, is an incredible leap, and certainly easier on family life. In the same way, business models which leverage short-term access to huge volumes of compute power, which at some level didn't really exist back in the days of dedicated data centers, represent a scenario that just wouldn't have been practical before, which is why you would not have seen that sort of adoption. For example, an Azure customer that I spoke to back in my Microsoft days would spin up 2,500 two-core instances for two days of calculations on commodity trading for the current week to make predictions for the next week. They never had a data center. They started off at 300 or so VMs, and as they developed the granularity of their data acquisition and obtained customers for different analytical views, they scaled up their cloud footprint, but it was still always very bursty: end-of-week computational flurries, computing the data from this week to make predictions for the next week. With a traditional data center model of on-prem servers, they would have never gotten that model off the ground. So aside from the cost advantage of certain business models, the agility and velocity advantages often accrue because of the parallelism possible when development staff can implement these infrastructure changes directly. I can spin up 500 VMs this week and investigate a product stability issue because I don't have to submit IT tickets for all the details of my implementation. However, that is a double-edged sword at some level.
And that is one of the issues I think will increasingly need to be managed: at some level, some of the advantages of utilizing cloud infrastructure are perhaps directly in competition with some of the management and secure computing properties that companies are typically going to desire. So let me go through several of these. We're not going to go into super detail, but it's worth it, because on the one hand, the cloud is just a game-changing piece of infrastructure to be able to leverage. However, migrating SQL Server, or really any complex piece of middleware, to the cloud introduces new complexities, different performance behaviors, and new data risks. While many aspects of the execution behavior appear to just work in the cloud, there are performance differences which are sometimes subtle and sometimes drastic. I'll have some examples and some data that go through that, probably in more detail than we'll be able to talk about. In addition, data security and the ability to assert regulatory compliance change drastically in cloud infrastructures. So while the reliability of any given aspect of cloud infrastructure is generally much better than a typical data center, the reliability of the services provided through the aggregation of those components is significantly more complex. Anytime there's a lot more complexity, and ironically, as we'll talk about, anytime all that complexity is easily modifiable and configurable, you've introduced a risk vector. Setting up, configuring, and managing cloud infrastructure is both incredibly complex and, ironically, incredibly accessible. It really is elegant, in a dangerous sort of way, that every aspect of cloud infrastructure can be configured. And there's an irony: obviously, it is exactly this accessibility which potentially introduces data risk.
In the data center, while there are certainly APIs exposed by individual components, there generally is not easily accessible access to every component. One group in IT owns configuring firewalls and network gateways. Another group may be in charge of storage management and allocation. Another group may own virtual machine provisioning and imaging. And within the corporate data center, protected by firewalls, IT staff follow defined, established procedures based on decades of experience, procedures established through trial-and-error learnings, often not perceived that way at the time, but often informed by mistakes. At the time, we didn't really think of ourselves as learning by trial and error, but in retrospect, we were. If you look back, there are many things we learned, and those are not the same procedures we would follow now. And at the end of the day, the IT industry has just not yet developed that level of established best practices and established procedures for cloud infrastructure. I know most IT departments certainly are not going to volunteer that, but it's just a fact: these procedures and best practices take years to hammer out and be effectively understood and adopted. In some ways, the migration of on-prem infrastructure to the cloud has been made so easy by the cloud providers that it provides the illusion that established procedures and responses will work just fine in the cloud. And when large blocks of infrastructure move to cloud deployment and everything appears to work, it actually provides a false sense of security, we've found. This is an area where I don't expect you to just take my word for it; there's lots of data, and we'll talk about it in more detail in just a bit. So I'm not going to spend too much time on this, but it's worth showing as an illustration, since I happen to have the diagram around.
There is a brutally complex elegance to designing and configuring cloud environments, in that, generally speaking, all of the configurable components of the execution environment are provided in a very consistent, related fashion. For instance, this is a diagram of the network configuration for a single VM with two IP addresses, one of them internet facing, the other private vNet facing. There's a significant level of complexity just in getting this configuration correct, not to mention if you have hundreds or thousands of these across separate virtual networks and subnets. And the irony here is that the network security groups are so flexible, and the fact that they can be attached at multiple levels in the network stack does lead to unexpected consequences. Among our groups, we've several times had an engineer open ports for network access on an individual VM and have those changes inadvertently propagate over all the VMs in the subnet. So there's a strict hierarchy, and it's not rocket science, but it takes meticulous attention to detail, and people aren't always, by instinct, meticulous about detail. So I'm going to dig quickly into network connectivity. There's much more complexity here than I think we have time for, or people are even interested in, but for those of you that are going to be pulling together the start of an environment, I think this probably gives you at least a start on how to go about thinking about it. Most cloud vendors provide the concept of a virtual network: for Azure, these are Azure Virtual Networks; for AWS, they are Virtual Private Clouds. They're effectively private networks with compatible IP addressing and connectivity to the IP addresses provided to the subnets. Virtual networks contain one or more subnets by definition, which typically are the containers for the actual allocated IP address ranges, and we'll talk about how those address ranges get allocated.
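To make that layering concrete, here is a toy model, my simplification rather than the actual Azure evaluation engine, of how inbound network security group rules combine when groups are attached at both the subnet and the NIC: the lowest-priority matching rule wins at each level, and traffic must be allowed at both levels to reach the VM.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int   # lower number is evaluated first
    port: int       # 0 means "any port"
    allow: bool

def evaluate(rules, port):
    """First matching rule in priority order decides; default is deny."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.port in (0, port):
            return rule.allow
    return False

def inbound_allowed(subnet_rules, nic_rules, port):
    # Inbound traffic is filtered by the subnet-level NSG first, then by
    # the NIC-level NSG; it must be permitted at BOTH levels to get through.
    return evaluate(subnet_rules, port) and evaluate(nic_rules, port)

subnet = [Rule(100, 1433, True), Rule(4096, 0, False)]  # subnet opens 1433 only
nic    = [Rule(100, 0, True)]                           # NIC allows everything
print(inbound_allowed(subnet, nic, 1433))  # True
print(inbound_allowed(subnet, nic, 22))    # False: subnet still blocks SSH
```

The dangerous case from the anecdote falls out directly: open a port in the subnet-level group and every VM whose NIC-level rules are permissive inherits the exposure at once.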
So subnets are the broader network container for virtual machine addresses, network interfaces, network security groups, and IP address objects. IP addresses are distinct objects with property values. At some level they are in a regular IT environment too, but I certainly had not thought of them quite in that way. But I guess I'm not really an IT person by definition, so maybe this is a bit more natural to everybody else. Virtual private gateways provide connectivity between virtual networks. Obviously, as you set up different environments and different departments, you will end up with separate virtual networks. At some level, the networks you have within the on-prem data center are virtual networks as well, and while we won't go into a whole lot of detail, they end up being addressed in the same way that you address virtual networks in the cloud: you have network gateways that provide connectivity between them. And this nomenclature of virtual networks is used outside of cloud infrastructure as well. So network gateways are the mechanisms used to construct virtual private network connectivity. Gateways between a single cloud infrastructure's virtual networks are pretty straightforward, and it's an intuitive sort of concept. Gateways between different cloud infrastructures, though, including on-prem resources, are a bit more involved, and we won't go into a lot of detail on that. It's a whole area of virtual private network gateways, and both Azure and AWS provide them and allow them to be configured in several different ways; I think we'll just mention it briefly. So virtual networks define a range of addresses, and subnets carve out pieces of that address range. This is an aspect where you are forced to understand a bit about how address ranges and subnet addresses work. These are defined by CIDR blocks: Classless Inter-Domain Routing.
VMs within different subnets but within the same virtual network by default all have routes to each other, but there's a traffic-filtering advantage to segregating things into separate subnets, not to mention how you've chosen to allocate addresses. I won't go into this in detail, but at the end of the day, if you are going to set up cloud infrastructure, you need to understand how CIDR blocks work. It is slightly unintuitive, I found, but actually ends up being easy once you figure it out. Fundamentally, you've got an address range followed by a prefix-size designator, defining how big the fixed part of the address is as opposed to the variable part. Really quickly: it's all out of 32 bits. So a CIDR block specifying a 24-bit prefix means that eight bits are left over for unique addresses, 256 total available addresses. The other example there is a /16: it indicates a prefix size of 16 bits, so 16 bits are left over out of the 32, and you end up with 64K-worth of addresses. Peering between networks is the nomenclature used by both AWS and Azure, and this effectively ends up being a virtual network gateway, but through a well-defined mechanism. I'll just briefly show it; this is an area we're not really going to go down, but it's relatively straightforward. In both AWS and Azure, peerings are implemented through the portal: you pick which virtual network you're in, find the peering network, and all the mechanics of that route end up getting built and provisioned for you behind the covers. And boom, here's the final result. We didn't go through a whole lot of detail here, but the final result is that you've now built peering relationships. So fairly straightforward. I think the complexity is more in thinking through the nature of the addressing architecture that you're going to define, how that's going to work, and how it's going to be allocated.
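The arithmetic is easy to sanity-check with Python's standard ipaddress module; here is a quick check using a hypothetical 10.1.0.0/16 virtual network:

```python
import ipaddress

# A /16 block: 16 prefix bits are fixed, leaving 16 host bits,
# so 2**16 = 65,536 addresses.
vnet = ipaddress.ip_network("10.1.0.0/16")
print(vnet.num_addresses)  # 65536

# Carving the vnet into /24 subnets yields 256 subnets of 256 addresses
# each (8 host bits per subnet).
subnets = list(vnet.subnets(new_prefix=24))
print(len(subnets))                          # 256
print(subnets[0], subnets[0].num_addresses)  # 10.1.0.0/24 256
```

One practical caveat: cloud vendors typically reserve a handful of addresses in each subnet for their own use, so the usable count per subnet is slightly lower than the raw arithmetic suggests.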
And then, on top of that, there's the nature of the network security groups and how those combine additively. At some level, I think there's a degree of complexity here that will typically defy any single individual's ability to understand the whole thing, and it is a classic place where automation tools really end up being the answer. Looking more broadly at hybrid cloud network connectivity configurations, both Azure and Amazon provide mechanisms for both internet and private, non-internet access. Microsoft provides this through ExpressRoute, over dedicated Microsoft network infrastructure; Amazon works with a set of partners. But at the end of the day, you can configure a VPN that connects your on-prem site to your cloud-deployed infrastructure, and connect over the internet or over private networks. As for the private network terminus, it turns out there's a limited number of locations around the country, so you typically have to arrange for private connectivity from your location to wherever the closest cloud vendor network terminus is. But probably most of you won't be involved with that; there are often dedicated data connectivity teams addressing it. There's a variation on this that will probably just become more and more common: as you are tying together multiple different physical locations or data centers, instead of doing point-to-point configurations, there's something very attractive about utilizing the cloud-deployed infrastructure as your routing hub. You end up with an even more complex hub-and-spoke architecture, but one based on the very same mechanisms. And here's a map; this actually is old, maybe nine months old, but you can see that if you're in Wyoming or North Dakota, you're going to have to run some fairly long private lines before you can connect to one of the cloud providers' termination locales.
Larger cities, of course, have multiple opportunities, and I assume coverage will just get denser as the cloud providers try to pull people onto their networks and make that part of their extended value proposition. So, circling back to the broader issue of utilizing cloud infrastructure: migration of data and compute infrastructure to the cloud can obviously magnify existing vulnerabilities. Within the protected environment of a data center, various departments stand up servers for a variety of reasons, which certainly enhances productivity and places a pretty minimal burden in terms of requiring precise security configurations. And even if there are certain procedures and compliance requirements, the downside from misconfigurations is largely contained within the corporate intranet. But it is exactly these practices which will cause problems once the infrastructure is migrated to cloud infrastructures, because these now-minor misconfiguration issues will likely lead to extremely bad outcomes. I'm making this point several times, but within a corporate data center protected by firewalls, IT staff follow defined procedures that were based on decades of experience, lots of trial and error, frankly scar tissue from mistakes, and this is an area where cloud-deployed infrastructures have just not developed that level of established best practices. It's still an evolving field. So Gartner put out an interesting paper a few months ago which puts the risk of cloud deployments into perspective. And it's not all negative; I think the challenge is how to adopt the positive. But generally speaking, their point is that cloud infrastructure is rock solid. That's not where the problems are. It's not that Amazon's or Microsoft's infrastructure has concerns or issues; it is much more about how customers utilize the cloud infrastructure.
Now, of course, Gartner is putting numbers on things as if they were well-defined percentages; these are just broad, broad estimates. But still, the estimate that 99% of security failures will be the customer's fault is an extreme estimation. Maybe it's 97%, maybe it's 99.9; still, that is a significant claim, and I'm going to show a little data on the nature of these data breaches that makes it seem not unreasonable. The estimation that companies who don't put in plans to explicitly control their public cloud usage are 90% likely over the next 10 years to have data breaches is an interesting statement. We have seen this ourselves: the efforts that I've been driving didn't have sensitive data, so it didn't matter much, but we have multiple times inadvertently misconfigured aspects of the network and found unknown people trying to access our SQL Server instances that were suddenly exposed out to the internet. Also, there's going to be an ongoing struggle for companies to fully understand and plan for cloud data risks: to not overestimate the risk and, as a result, underutilize the advantages of cloud, but also not to underestimate it, and to adequately and honestly look at the risk factors and design mitigation factors for each of the risks. So certainly it would be our argument, and it turns out this actually was the main point the Gartner paper was trying to make, that companies cannot let the ambiguity and dramatic headlines drive their cloud strategies. There are opportunities there that just need to be embraced to stay competitive, because if you don't find a way to adequately leverage and utilize cloud infrastructure, your competition certainly will, and they are going to run circles around you. So ignoring the risk isn't the right strategy. It's facing the risks, identifying them, understanding that there is no perfect security strategy. There will always be opportunity for mistakes, like with any security policy.
It's a multi-layered onion of a strategy, but you need to face it head on, and certainly look towards tools and automation to help mitigate that risk. It turns out the Gartner paper pointed to this very phenomenon, so I included it: you are now finding companies that avoid cloud deployments because of compliance risks, and it is a reactionary response to a lot of the bad press. Now, I think there's an interesting point to be made here. In IT, we used to not have firewalls or firewall devices, not even the concept of a firewall in data centers, and at some point people started realizing there were vulnerabilities, and we developed firewalls and the processes around them, and data centers became reasonably tight and reliable. There is absolutely a point of view that there's nothing that would keep IT from extending that umbrella out into cloud infrastructure and controlling it so tightly, with procedures and security standards in place, that the cloud doesn't have to introduce new risk. However, if we actually look at reality, corporate confidence in attesting to regulatory compliance is driving a lot of concern, because as it turns out, IT departments have not effectively extended these procedures out to cloud environments. Every week, I put together a technology newsletter for our product group and company executives. It's harder than you might think to find relevant articles with the right technical level for business executives, so I'm always looking for content. But for the last six months, every time I look for content, I end up with two, three, or four cloud infrastructure data breach articles, typically due to improperly configured cloud assets.
So for the IDERA newsletter, I had to make a decision to only use one or two of these data breach articles each week, or the newsletter would just end up being about data breaches. So I looked back at my notes from the last several months, which made them easy to find, and then I literally fell into the same trap: there are so many of these ill-configured database data breach examples that it was hard to narrow them down, so I sort of put everything that would fit on a page. But the common thread across all of these incidents is not that malevolent hackers are breaking into environments. The pattern I've seen over and over relates to not understanding what the security settings for a given cloud infrastructure resource really mean, and what the default settings are as well: when I set something up, where will I land unless I go in and fix it? So I think we have seen that, regardless of best intentions and concern, it is the case that these procedures have not been effectively extended out into cloud environments. A slightly humorous aside, and I guess it's not really right to joke about other people's data woes, but I tell you, I see this pattern over and over and over again: a large company with lots of data establishes and maintains strong data security protocols, policies, and practices. That company then hires a third-party data analytics firm to do something important with the data. The third-party analytics firm proceeds to leave the data lying around in random cloud storage with minimal to no security. The data gets discovered, everyone involved seems incredibly surprised, and a data breach is announced. I bet I see one of those a week; it literally is that common a pattern. Perhaps people will get ahead of it now, but it is surprising how common that very pattern is.
At IDERA, we have a series of tools that allow analysis of network environments, both on premises and in the cloud, to determine what the effective access rights are for a given SQL Server instance. SQL Secure monitors, reports, and provides alerts on SQL Server users, permissions, and effective permissions across thousands of SQL Server instances. It allows you to quickly identify vulnerabilities and unexpected permission inheritance patterns, and provides advice on where to close vulnerabilities and where to set stronger security policies. It also periodically captures security snapshots across all of the monitored instances, which allows you to be alerted when things have changed, and to look back over time and determine how things got into an incorrect state. And of course, there's a whole set of mechanisms for how these alerts can be shared, via email, SNMP, and such. For SQL Secure, and really all of the IDERA products I'll touch on, one of our early design points in building our cloud strategy was complete and total transparency. So it can be deployed in the cloud monitoring on-prem, deployed on-prem monitoring the cloud, deployed in the cloud monitoring the cloud, or, as people have been doing for years, deployed in on-premises data centers and monitoring within the data center. Now let's touch on regulatory compliance, because while there is risk arising from insecure configuration parameters, there is an aspect of compliance that goes to more fundamental risk to a company. We have a broad base of Fortune 100 and Fortune 500 companies as customers, and of course they're all evaluating their regulatory compliance strategies with respect to the cloud.
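The snapshot-and-diff idea described there can be illustrated with a minimal sketch. This is not SQL Secure's implementation; the field names, the list of "risky" permissions, and the data shape are all hypothetical, chosen just to show the pattern of flagging overly broad grants and detecting security drift between snapshots.

```python
# Illustrative sketch (not SQL Secure's implementation): scan a set of
# SQL Server instance descriptors for risky effective permissions, and
# diff two snapshots to detect security drift. The field names
# ("instance", "grants", "principal", "permission") are hypothetical.

RISKY = {"CONTROL SERVER", "ALTER ANY LOGIN", "UNSAFE ASSEMBLY"}

def find_vulnerabilities(snapshot):
    """Return (instance, principal, permission) triples that look risky:
    here, a risky server-level permission granted to 'public'."""
    hits = []
    for inst in snapshot:
        for grant in inst["grants"]:
            if grant["permission"] in RISKY and grant["principal"] == "public":
                hits.append((inst["instance"], grant["principal"], grant["permission"]))
    return hits

def diff_snapshots(old, new):
    """Grants present in `new` but not in `old` -- i.e. what changed."""
    as_set = lambda snap: {
        (i["instance"], g["principal"], g["permission"])
        for i in snap for g in i["grants"]
    }
    return as_set(new) - as_set(old)
```

The value of periodic snapshots is exactly this kind of diff: instead of only asking "is this instance secure right now," you can ask "what changed since last week, and who widened it."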
And as you can imagine, the high visibility of all these exposures has made people nervous. I already mentioned this, but it bears repeating: for cloud-deployed infrastructures, not to mention hybrid cloud and multi-cloud, the accepted, established practices and best practices are still evolving. Compared to decades of data center experience, these are things that are still in flux, so when a company is required to assert the compliance of a system, there is a lot of inherent risk involved. Certainly HIPAA, SOX, and GDPR have all been in the news and have legal ramifications for a company's certification of compliance, but GDPR is an even more visible example: since enforcement went into place, more than $60 million in fines have been levied against companies for compliance violations. Now, about $56 million of that were fines on Google, and in reality, aside from Google, the fines were generally in the $20,000 to $450,000 range. But fundamentally, businesses are not seeking out financial penalties and the negative PR from the trust damage arising from these sorts of charges. Compliance Manager is designed exactly for those circumstances. Built-in logic applies the appropriate SQL Server settings to meet most regulatory standards. Powerful reporting capabilities provide an easy audit experience and allow periodic verification of compliance state. At the end of the day, what could easily require weeks of regulatory analysis and custom development, we allow to be implemented in minutes, not including install time. So you are literally able to set, on a push-button basis, the individual requirements for regulatory compliance, such that you are capturing the correct set of data to produce a report for an auditor on very short notice with very high confidence. And again, same story as with SQL Secure: designed from the ground up to be deployed in the cloud or the data center, and to audit deployments in the cloud, the data center, or any combination you can think of.
Now, for this piece we're getting a little short on time, although I have to admit this is an area I often spend a lot of time on. Ignoring overt failures of some sort (node failures are obvious, as is resource hogging), by far the most common area of performance investigation and remediation is query performance. In cloud deployments, both storage and network behavior, and therefore the performance experienced by SQL Server, is actually quite certain to be different from that experienced in a data center deployment. So you can just assume that query behavior will almost certainly change when moving to the cloud. There are several underlying issues here; I'll touch on a few in detail and show some actual data. So here's an example. As I alluded to, all of the storage in Azure and AWS is networked, so the behavioral characteristics relative to SANs or direct-attached storage are radically different. There have certainly been several opinion pieces suggesting, and I think Microsoft would say, that the best way to get good, predictable performance out of Azure and AWS is to pay for the higher-end SKUs and higher-end provisioned IOPS SSD storage. And I certainly can't disagree with that, but as you may already be familiar, it gets extremely expensive very fast. Assuming limited financial resources, which is generally the case, every efficiency that allows the effective use of less expensive components frees budget for infrastructure to be used for additional functions. And that's what performance engineers do. Anybody can buy the most expensive systems and get decent performance; the harder path is to find the most effective, efficient, least expensive components and still get stellar, consistent performance.
So yes, most of the volumes in Azure and AWS will be networked. The exception is the local drive, typically an SSD, but it's temporary, so it can be used for tempdb or swap files; it will go away every time the system is reallocated. This networked behavior results in a range of behaviors different from on-prem SANs: generally higher latency, but generally far more deterministic than typical SANs. Overcommit, when present (and it's always present, because it's all virtualized), is typically very controlled and consistent, and I've actually found far fewer unexplained latency spikes in cloud storage than I have on-prem. Bandwidth and IOPS are constrained on a per-VM basis. This is a big deal, with all sorts of implications when you don't quite understand why things aren't going faster. Unlike SAN deployments with fixed-geometry volumes, growing a volume in Azure and AWS generally grows IOPS and bandwidth; certainly the claim is that you can get better throughput, both bandwidth and IOPS, by going to larger volumes. As it turns out, if you look at this data, that's somewhat true, but not particularly effective and certainly not cost effective. On the left, this is increasing a volume from 64 gigs up to two terabytes, and frankly it's fairly disappointing growth as you grow the volume. But old-fashioned striping? Huge win, almost linear. Now, the megabytes per second are an issue because it's all networked, but from an IOPS point of view, which is often what you're most worried about with SQL anyway, it's just really stellar growth through simple striping. So it is a different way to look at it, and you can see the numbers when you try it for yourself, but you will find that growing volumes is a fairly unsatisfying way to get throughput and IOPS. Paying for high-end SSDs helps to a degree, but frankly, just striping volumes gets great results.
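The economics of that point can be sketched back-of-the-envelope. The per-size IOPS figures and the per-VM cap below are illustrative placeholders, not actual Azure or AWS SKU numbers (check your provider's current published limits); the point is only the shape of the comparison: per-volume IOPS grows sublinearly with size, while a stripe set aggregates roughly linearly until the per-VM cap kicks in.

```python
# Back-of-the-envelope sketch of why striping beats growing a single
# volume. All numbers here are illustrative placeholders, NOT real
# cloud provider limits -- consult your provider's documentation.

ILLUSTRATIVE_IOPS = {64: 500, 128: 500, 256: 1100,
                     512: 2300, 1024: 5000, 2048: 7500}

PER_VM_CAP = 12800  # illustrative per-VM IOPS ceiling

def single_volume_iops(size_gib):
    """Provisioned IOPS for one volume of the given size (sublinear growth)."""
    return ILLUSTRATIVE_IOPS[size_gib]

def striped_iops(size_gib, n_volumes):
    """A stripe set of n volumes aggregates per-volume IOPS roughly
    linearly, until the per-VM cap is reached."""
    return min(n_volumes * ILLUSTRATIVE_IOPS[size_gib], PER_VM_CAP)
```

With these placeholder numbers, eight striped 256 GiB volumes (2 TiB total) deliver more IOPS than a single 2 TiB volume of the same total capacity, which is the "almost linear" win described above.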
So I'm definitely getting low on time; I think some of these slides will just be available for later reference. Fundamentally, monitoring thousands of instances, validating the availability of each instance as well as monitoring individual queries and understanding when queries are slowing over time, is a non-trivial task. The mechanisms we use are not necessarily unique in the industry; this is a well-understood, virtuous cycle of evaluation and feedback, but we have an implementation that has been in market for a decade, with three generations of scalability engine brought to bear over that time. Just real quickly, I'll touch on the fact that there is detailed drill-down and monitoring for each instance around resource usage. Alerts can be set on different parameters: excessive CPU, excessive latency, automated alerting for any of these erroneous or dangerous configurations. It's completely flexible and can be configured with self-tuning baselines. For any given problematic query, you can drill down into exactly what's going on: what it was waiting on, what the waits were, what the CPU and disk I/O were, and more importantly, how it evolved over the last several months across previous snapshots and whether something has changed, plus a visual query plan viewer to better understand the mechanics. We're going to go fast because I'm now behind. Same point here with Diagnostic Manager: we have people deploying it in clouds and on-prem. The monitors can be single cloud, multi-cloud, or data center; VPNs and common addressing are utilized. Instances can be moved back and forth with essentially no changes in configuration. For a given node, it may make sense to have it on-prem one week and then move it to the cloud; from a Diagnostic Manager point of view, that's essentially a transparent operation, given that the networking has been set up correctly.
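The self-tuning baseline idea mentioned there can be sketched minimally. This is not Diagnostic Manager's actual algorithm; it's an assumed, simplified version of the general technique: keep a rolling window of recent query durations and flag a new sample that falls outside the mean by more than k standard deviations.

```python
# Minimal sketch of a self-tuning baseline (an assumed simplification,
# not Diagnostic Manager's implementation): a rolling window of recent
# query durations, with alerts when a new sample breaches mean +/- k*sigma.
from collections import deque
from statistics import mean, pstdev

class Baseline:
    def __init__(self, window=100, k=3.0):
        self.samples = deque(maxlen=window)  # rolling history
        self.k = k                           # sensitivity (sigmas)

    def observe(self, duration_ms):
        """Record a sample; return True if it breaches the current baseline."""
        breach = False
        if len(self.samples) >= 10:  # require some history before alerting
            m, s = mean(self.samples), pstdev(self.samples)
            breach = abs(duration_ms - m) > self.k * max(s, 1e-9)
        self.samples.append(duration_ms)
        return breach
```

Because the window rolls forward, the baseline adapts as a query's normal behavior drifts, which is exactly why this beats a fixed static threshold when storage latencies differ between on-prem and cloud.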
I'll just mention in passing that we have another product, SQL Safe Backup, that does extremely fast backups with advanced compression. We have patents around some of our technology that allow SQL Server, after a crash, to be brought back to online status typically within a minute or so, long before the restore from backup has actually completed. That is certainly a differentiating feature. So we have a broad range of tools; here are links to them and a bit more description. I think it's worth pointing out that all of our products are available with free 14-day trials. These are not limited in any way: full, complete, total function. Live demos driven by IDERA engineers are available on request. The trials are fully functional, with no credit card or approvals required; we ask for an email, but otherwise there's no obligation. Our sales motion is 100% based on customers who have tried the product and found value. On the one hand, that's a little bit scary, but once you get it going, it has actually been super effective, and we're growing like crazy. And with that, I'm slightly past where I thought I would get to, and I had to move past some of the performance items more quickly than I would have liked, but I think we are open for questions. Rob, thank you so much for this great presentation. If you have questions for Rob, feel free to submit them in the Q&A section in the bottom right-hand corner of your screen. And to answer the most commonly asked question, just a reminder: I will send a follow-up email by end of Thursday with links to the slides and links to the recording. So, Rob, as people are typing in their questions here, what was the one thing that you really wanted to mention that you didn't have a chance to get to?
I think I did not spend as much time as I would have liked on the data risk that comes from performance variability on-prem versus in the cloud. I mentioned it maybe in passing with respect to the security and compliance aspects, but there is a seductive and deceptive aspect to moving SQL Server instances into the cloud and having everything just work, apparently perfectly. But as we discussed, and I went through it fairly quickly, having a fully networked storage model wildly changes the latencies of I/Os. There are times where you come out ahead, but generally you come out behind, and you have to be somewhat creative in, A, how you design the storage, but especially in how you use tools that automate this whole process. It would almost be impossible otherwise; it would be like starting from scratch on all of your application performance work and having to do everything over again, because it really is in that realm. So having tools and automation that will build automated baselines for you based on behavior, flag you when your queries have moved outside of those baselines, and identify problematic queries down to I/O response times inside the visual query plan viewer, these are areas that arguably cannot be addressed without tools. And whether it's our tools or someone else's, it really is almost a requirement once you get past a few instances out in cloud environments. I think you can make the same case for both compliance and security as well: it is frankly a corporate mandate, a requirement for the success and survival of a company as they move their infrastructure into the cloud, that they adopt these types of automation and tools to help them deal with that complexity. Or they will absolutely end up as one of those articles I'm deciding whether or not to include in my weekly newsletter. I love it. And how do you get the weekly newsletter? It just goes to our executives internally. Oh, I got you.
It's hard, but oh my God, it is hard. I've got to do one today; it's on Tuesdays, so after this I've got to go. It's hard to find good, pithy, insightful content, not just PR stuff. And it's hilarious how many end up being data breaches. We joke that you could literally do a data breach newsletter every week with the volume we see. Oh, I'm sure. I understand. Well, everyone's pretty quiet today, so not a lot of other questions. Rob, thank you so much for this great presentation. If you do have questions, feel free to submit them and I'll make sure to get them over to Rob. But thank you so much, and thanks to our attendees for being engaged. I hope everybody has a great day. Okay. Thanks a lot. Thank you. Bye.