Please let me know once the screen is up and the slides are visible, and I can start from there. Yes, we're now looking at my slide with the cool sports car. Okay, great. Thank you.

Hi everyone, on behalf of the MySQL as a Service team, I would like to welcome you today. Thank you for joining us. My name is Gagan, and I'm the development manager for the team responsible for various modules of the MySQL Database Service on Oracle Cloud Infrastructure. I'm looking forward to talking with you today about implementing a database service on OCI, using the MySQL Database Service as a reference example to get into the details of how these kinds of services can be hosted on OCI.

The safe harbor statement: I would like to remind you that this is for information purposes only, and the content may change at Oracle's sole discretion.

Now, over the past couple of decades there has been a lot of traction around cloud computing. A lot of people want to migrate their on-premises systems to the cloud for increased efficiency, lower operational cost, and relief from the maintenance around upgrades and patching, and many have made that move. But as simple as it may seem, migrating is not an easy initiative: you have to ensure that things are going to scale as more and more customers start using your service. Apart from that, there is latency on the network, and there are various other aspects to consider when hosting this kind of solution in the cloud. Nevertheless, with the power of the open source engineering team that MySQL has at Oracle, it has been a great success so far: we have been able to host the entire MySQL as a Service offering on OCI, which I'll be talking about in more detail in the coming slides.
The intention of this slide is not just to showcase a bunch of components that help in building an OCI service; it is a roadmap of the components I'll be touching on in this presentation. I'll go through each individual component listed here, get into some detail about each of them, and show how the entire MySQL as a Service offering is orchestrated on OCI, and how each of these components plays a key role in making the service safe, secure, resilient, and highly available for customers.

The first highlighted component is the user tenancy. In the simplest terms, this is what a user sees when they create an Oracle Cloud account. There are free trials available; I'd really encourage you to try out a free account and explore the various services provided by Oracle Cloud. The user tenancy is your home tenancy, where you choose a particular home region and go on from there.

This slide shows a console screenshot. It shows the various screens and resources available on the web console when you create an account: setting up a VM, setting up networking, and so on, through very intuitive, user-friendly wizards. The MySQL Database Service itself leverages most of these cloud-provided services — the VCN, the storage, and the compute — and if you plan to implement your own cloud service on OCI, these building blocks are available at your fingertips. This screenshot is again a sneak peek of MySQL as a Service: at the bottom left you see the wizard through which a user can create a DB system, either a standalone DB system or a highly available DB system with multiple nodes.
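To make the wizard concrete, here is a sketch of the kind of request the console might assemble when you create a DB system. This is illustrative only — the field names and values here are stand-ins for the sketch, not the exact MDS API contract:

```python
# Illustrative only: roughly the shape of a "create DB system" request as a
# console wizard might assemble it. Field names/values are assumptions for
# this sketch, not the authoritative MDS API.
create_request = {
    "displayName": "my-first-db",           # hypothetical name
    "shapeName": "VM.Standard.E2.1",        # compute shape for the node
    "dataStorageSizeInGBs": 50,             # block storage for user data
    "isHighlyAvailable": True,              # multi-node, spread across FDs
    "adminUsername": "admin",               # initial MySQL admin account
}

# A standalone system would simply flip the HA flag off.
standalone_request = dict(create_request, isHighlyAvailable=False)
```

Everything else — networking, placement, storage attachment — is filled in by the service behind the scenes, which is exactly what the control plane discussion below covers.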
Backups can be configured so that you schedule an automatic backup every day, or take a manual backup at will; both are supported. Channels enable you to migrate data and keep multiple DB systems interconnected. And for configurations, MySQL provides a vast set of options to tune for performance and other needs, which these configuration objects expose to the end user.

Moving to the next slide. We spoke about regions — the home region chosen when a user tenancy is created — but a user can also subscribe to various regions across the globe. A region is a localized geographical area, such as Ashburn, Sydney, Mumbai, or Tokyo, where the customer creates the tenancy. A region comprises availability domains: an availability domain (AD) can be seen as one or more data centers within a particular region. These ADs are further broken down into fault domains (FDs), which are groupings of hardware that provide a higher degree of protection against unexpected failures or hardware maintenance. MDS is built on this robust layout, spread across ADs and FDs to ensure availability for the end customer.

Moving to the next slide: the compartment. A compartment is a logical container in which you can encapsulate all the OCI resources you use — the storage, the VCN, and the compute can be grouped into a particular compartment. This additionally enables you to create user accounts and policies that control which resources can be accessed by which user group or individual. So all the physical resources are encapsulated in this logical compartment. Now, coming back to the main slide: the overall architecture of how MDS is orchestrated.
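The idea of spreading an HA system's nodes across fault domains can be sketched in a few lines. This is a toy placement function, not actual MDS code, and the node and fault-domain names are made up:

```python
# Minimal sketch (not actual MDS code): place the nodes of a highly
# available DB system across distinct fault domains, so that the failure
# of a single hardware group cannot take down more than one node.
from itertools import cycle

def place_nodes(node_ids, fault_domains):
    """Round-robin nodes over fault domains; returns {node_id: fault_domain}."""
    if not fault_domains:
        raise ValueError("need at least one fault domain")
    fd_cycle = cycle(fault_domains)
    return {node: next(fd_cycle) for node in node_ids}

placement = place_nodes(
    ["primary", "secondary-1", "secondary-2"],
    ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"],
)
# With three nodes and three fault domains, every node lands in its own
# domain, so any single-FD outage leaves two nodes healthy.
```

The same round-robin idea applies one level up, spreading replicas across availability domains for protection against a whole-data-center outage.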
The key component highlighted on this slide is the control plane. So far we have been discussing the user tenancy and a few aspects around it; now we move to the most critical part. All the OCI resources that comprise the MDS service itself — the components required to run the service, the administrative interface, the API service, and any task management to do with initializing storage and compute to build a DB system and make it available to the end customer — are handled by the control plane.

The control plane also takes care of storing the metadata. It holds the overall blueprint of the entire system: whatever DB systems, backups, or channels are created are recorded in a very safe, secure metadata store, accessible at any point in time. Across all customers, their entire information and blueprint is kept in this metadata store.

Another important task the control plane performs is health monitoring of user DB systems. Whenever a user creates a DB system, it runs all kinds of health checks: whether the application is working fine, whether the server is up and running, and how the various entities that support the servers and the underlying Oracle Cloud infrastructure are behaving — so the right corrective measures can be taken if something falls below the thresholds we have set for the service.

One of the most important design considerations in building the control plane is availability. We want to ensure that at no point is there a single point of failure. We provide a highly available MySQL configuration where the user can run more than one node, with primary and secondary nodes that switch over within sub-seconds, so the service stays highly available.
Scale is another key factor: as more and more people adopt this technology and we gain more customers, it must not happen that existing customers are impacted or their bandwidth is compromised. Scaling the service while maintaining the same latency and throughput is essential to the design of the control plane.

Resilience is another important aspect: whenever we upgrade or modify code in the server, the administrative components, or the control plane itself, we have to ensure there are no outages. It is seamless to the user — it happens in the background, the user does not have to worry, and the service stays up and running.

Security, I think, is the most critical aspect. Users are understandably concerned about where their data is stored and who can access, view, or read it. From the security perspective, utmost care is taken regarding where user data is stored and how many privileges are granted to the service. No user data can be read at any point in time, and in the control plane we grant the bare minimum privilege to any internal service or instance, so that it has just enough privilege to do what it is meant to do, but nothing beyond. For example, if an internal user only has to read particular metadata, we grant only the read permission. There are several permission levels — read, write, manage — but at every point we carefully examine the operation a module has to perform and give it the bare minimum. In that sense, a lot of care has been taken to keep this data secure and safe. Beyond that, we also take care of data jurisdiction: data does not leave a particular region.
Cross-region migration of data, or storing backups in another region, happens only with the user's permission and configuration. Data is not stored in an arbitrary region; it is placed according to the approvals and inputs provided by the user.

Monitoring is another key aspect. Apart from reacting when something goes wrong, we proactively keep probing the various aspects of the service, trying to find issues even before a server goes down. This enables the backend operators and SMEs to keep the service up and running before an issue occurs. For example, if there is an infrastructural burden or an outage in the underlying cloud infrastructure services that may impact MDS, we take the necessary measures — such as moving to a different AD or fault domain — to keep the service up and running for users.

The next critical piece of the overall architecture is the data plane. The data plane provisions the OCI components — the compute and the storage — where the user's data physically resides. It is completely separate from the control plane, and a central design consideration is that the instance principal available on a data plane node should have a minimal blast radius. For example, if one of the data plane nodes is compromised, we ensure it cannot impact any other data plane node, any other customer, or any other DB system, nor can it interfere with the functionality of the control plane. The entire data plane is architected so that the blast radius is restricted to the one node on which something may go wrong; it does not have the ability to bring down the overall service or affect any other customer in any manner. That is one of the main design considerations for the data plane.
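The least-privilege principle described above — grant exactly the level an operation needs, never more — can be sketched with an ordered set of permission verbs. The verbs here follow the OCI style (inspect, read, use, manage); the service names are made up for the sketch:

```python
# Minimal sketch (illustrative only): each internal principal is granted the
# smallest permission level that covers its task. Levels are ordered, and a
# grant at one level implies everything below it, nothing above it.
LEVELS = ["inspect", "read", "use", "manage"]  # low to high

def allows(granted, requested):
    """True if the granted level covers the requested operation."""
    return LEVELS.index(granted) >= LEVELS.index(requested)

# A hypothetical metadata-reader service gets 'read' only:
grant = "read"
assert allows(grant, "read")        # enough to read the metadata
assert not allows(grant, "manage")  # cannot modify or delete anything
```

The same idea limits blast radius on the data plane: a node's instance principal can act only on its own DB system, so even a compromised grant at this scope cannot reach other customers.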
The interaction between the control plane and the data plane is key to the overall service. Whenever a user wants to create a DB system, there is a bunch of interaction between the two: a lot of configuration-related data, and the creation credentials for that particular server, are shared between the control plane and the data plane. This channel is extremely secure: it is encrypted, and data flows only in one direction, with the control plane telling the data plane what resources it needs to allocate to bring up the DB system. The channel is secured individually for each DB system that is created.

We spoke a little about the metadata storage. The entire blueprint is stored in a persistent, transactional key-value store: all the configuration the user has set for a DB system, along with the networking details, the shape of the compute, and the storage size, is kept there securely and persistently. This store is regularly backed up and stays almost in sync with the live state of the overall infrastructure. It has very strict concurrency control, which means commits are serialized and snapshot isolation is provided and guaranteed.

Apart from this, there is a lot one gets out of the box by adopting OCI services: audits and events, API HTTP request authentication and authorization, security zones, tag validation, quotas and limits. These are some of the services available out of the box from OCI. MDS has adopted them, and anyone who plans to implement something like this on OCI can definitely leverage these services instead of building them from scratch.
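The two guarantees mentioned for the metadata store — serialized commits and snapshot reads — can be illustrated with a toy versioned key-value store. This is purely a teaching sketch, not the actual OCI store, and the keys and values are invented:

```python
# Toy sketch of a versioned key-value store: commits are serialized under a
# lock, and a reader sees a consistent snapshot as of the version it opened.
import threading

class KVStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._version = 0
        self._history = {}  # key -> list of (version, value), append-only

    def commit(self, updates):
        """Apply a batch of updates atomically; returns the new version."""
        with self._lock:  # serializes all commits
            self._version += 1
            for key, value in updates.items():
                self._history.setdefault(key, []).append((self._version, value))
            return self._version

    def snapshot_get(self, key, at_version):
        """Read key as of at_version: latest value committed at or before it."""
        value = None
        for ver, val in self._history.get(key, []):
            if ver <= at_version:
                value = val
        return value

store = KVStore()
v1 = store.commit({"dbsystem/shape": "VM.Standard.E2.1"})
v2 = store.commit({"dbsystem/shape": "VM.Standard.E2.4"})
# A reader holding snapshot v1 still sees the old shape after v2 commits:
# that is snapshot isolation in miniature.
```

The real store adds durability, replication, and backups on top, but the reader-never-sees-a-torn-update property is the same one guaranteed here.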
You have a head start and boilerplate from which to begin. Tenancies are another key aspect: the identity and access management policies leverage compartments and tenancies whenever you want to define which user or user group can or cannot access a particular resource. This slide is just an example of a principal being granted access for one user and not another, and what each can and cannot reach. These permissions are granted using policies — human-readable policies in which you define a set of rules about which user or user group can access what within the overall service.

Another important aspect of the API service: apart from the public-facing APIs that enable the user to create and monitor DB systems, we have a bunch of internal APIs that enable operators and other internal users to monitor the health of a particular DB system. There are various admin-related APIs for monitoring dashboards and CPU and disk usage — some publicly available, some available only to internal operators who keep the show running. The internal APIs are not exposed to customers directly; they exist primarily for monitoring and auditing how the system is behaving overall. A bunch of metrics and logs are periodically monitored to ensure the service is healthy and there are no issues at any point in time.

Another important module of the control plane is the control plane tasks themselves. Certain APIs are short-lived — for example, fetching information about the configuration of a DB system, which can be answered by querying the database directly.
However, there are also long-running processes. When you initiate a DB system, a bunch of operations have to be performed behind the scenes: allocating memory, storage, compute, and network, attaching them, and then handing the system over to the customer. These long-running tasks are performed by worker nodes in the control plane. This offloads the heavy lifting to a pool of workers that scales automatically based on the number of incoming requests: as the user base grows, the number of worker nodes supporting these long-running tasks scales automatically and seamlessly. This is a key component of the control plane, handling all the important behind-the-scenes operations.

Health checks — we have mentioned them many times in this presentation. The health checks are not confined to seeing whether the node is up and running. We also do service-level and database-level checks, using dummy queries to ensure that the server itself is up and responsive, not just that the process is alive. This kind of probing ensures the health check happens at the deepest level, so we can take the necessary action if something is not as expected.

On the monitoring side, there are several important user-facing parameters and metrics that we monitor regularly: user satisfaction in terms of how the system is behaving — is it responsive? What are the latencies? Are queries completing within the expected time? What kind of traffic is being pumped into a particular server? What are the error rates — are queries failing for any particular reason? And how is it scaling?
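The "deepest level" health check described above — running a dummy query through the server rather than only checking that the process exists — can be sketched as follows. The function and stub names are invented for the illustration:

```python
# Minimal sketch (illustrative): a deep health check distinguishes a server
# that is merely running from one that actually answers queries.
def deep_health_check(process_alive, run_query):
    """process_alive: () -> bool; run_query: (sql) -> result, may raise."""
    if not process_alive():
        return "down"
    try:
        # A trivial query proves the server is responsive, not just that
        # mysqld shows up in the process table.
        if run_query("SELECT 1") == 1:
            return "healthy"
    except Exception:
        pass
    return "unresponsive"

# A stub standing in for a real connection: the process is alive but every
# query hangs until it times out.
def hung_query(sql):
    raise TimeoutError("query timed out")

status = deep_health_check(lambda: True, hung_query)
# The process is up, yet the check correctly reports "unresponsive" —
# exactly the case a process-only liveness probe would miss.
```

The distinction matters operationally: an "unresponsive" result can trigger a failover to a secondary node even though the primary's process never died.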
When there are spikes of load, how does the system behave in terms of scaling? Are queries answered within the well-defined latency bounds, and how loaded are the CPU and the disk? Those things are monitored very closely to ensure the best customer experience for the database.

Beyond functionality itself, a lot of care has been taken to keep this service up and running and performing at its best. It is backed by the MySQL engineering team, and tweaks around I/O — things like FLUSH TABLES WITH READ LOCK and how to eliminate its use in the cloud, and how the doublewrite buffer can be optimized — are fine-tuned to the core by the engineering team to work in the best possible manner in the cloud.

Events and notifications are the mechanisms through which operators get notified, based on health check input, if something is not working. The various logs, metrics, and alarms are configured so that if something fails, we are immediately notified and can take the necessary action to bring the service back. Auditing of the logs happens frequently; we verify that the thresholds we have defined are met time and again.

Backup and restore is another key factor: users can define how frequently they want backups taken, and can also choose to migrate those backups to another region, entirely based on the data jurisdiction the user defines, so that even in a regional-level failure, the end customer can bring the system up again in another region and continue from there.

All in all, in short, this is how the entire MDS is implemented: safe, secure, available, well monitored and managed, and powered by the MySQL engineering team.
And that's it. That's all I had. Any queries, please let me know.