Great — it's a pleasure to be here to talk about this topic, so thanks everyone for joining this session from MySQL. As mentioned in the introduction, today's topic is how we deploy MySQL in an easy way. There are so many ways to do this deployment, and people have been doing it in different ways over time. Who am I? My name is Ivan Ma, and I'm from Hong Kong. I have been involved in this software deployment architecture over the last three or so years, and today is the MySQL deployment topic, so let's go ahead. The agenda is: first, what kinds of deployments we have been seeing — it can be master and slave, though today we use the terminology source and replicas. Then there is something that makes deployment easy, the InnoDB ReplicaSet, done with MySQL Shell, which we will explain later. There is also a very robust, out-of-the-box deployment using Group Replication, which we call MySQL InnoDB Cluster — a very hot topic these days. Then, how we bridge a production site and the DR site with InnoDB Cluster: for that we have the so-called ClusterSet, which is very helpful — with a few clicks we can build replication across to another cluster. People also tell us they like Kubernetes, so how does that work for MySQL deployment? Today I will also share the latest preview of the MySQL Operator for Kubernetes. And since people like to see things in action on the cloud, there is MySQL Database Service, as well as the really hot topic, machine learning for HeatWave, which you can find on Oracle Cloud Infrastructure. Okay, let's get into it. First of all, when we look at this diagram, it is quite typical: master and slave. People deploy the application and MySQL on one standalone server and have a standby server.
On another machine, or actually at another site. Basically, this is logical replication: one MySQL server sends data across the network to the target, which we call the replica — in the old terms, master and slave; today we use source and replica. It can be one-to-one, or it can be one-to-many. What we are talking about here is that the application connects through a certain middle tier to the MySQL server. It can be a VIP (virtual IP), or a third-party component such as ProxySQL, HAProxy, MHA, and so on. That is the basic way the application connects to the server and switches over to the other if one fails. In some incidents, after we fail over to the standby server, which becomes active, we afterwards want that server to replicate data back to the original server. We usually call this dual replication: each replication channel is unidirectional, but we can set up two of them, from A to B as well as from B to A. So when we bring up the server on machine B — the one that was a replica previously and has now become the primary and the source — the data can be replicated back to the original server, and when the time comes to switch back, we can move the primary role back to the original machine. That is why we often set up dual replication in both directions. Also, by default, replication is asynchronous. Looking at the right-hand side, the source is sending data to the replicas; what we show here is the semi-synchronous mode. With plain asynchronous replication, even if the other side has actually stopped, the source still continues to accept updates.
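The dual (A-to-B plus B-to-A) replication described above can be sketched with the modern replica syntax; the hostname, user, and password below are placeholders, not values from the talk:

```sql
-- On server B, point a replication channel back at server A
-- (and set up the symmetric channel on server A).
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'server-a.example.com',
  SOURCE_PORT = 3306,
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = '...',
  SOURCE_AUTO_POSITION = 1;   -- GTID auto-positioning

START REPLICA;

-- Repeating the same on server A, pointing at server B,
-- gives the "dual replication" pair described in the talk.
```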
But we can enhance this to the so-called semi-synchronous replication. At the bottom here we are showing the replica status. On the source, we insert a record every second; you can see the GTID advancing as the data is written through to the replica. Now, the command we try is to stop the replica's IO thread — the thread that fetches the data. Once we stop it, you see that on the source no more data can get in; it stays at three rows. On the other side, because we stopped receiving data from the source, both sides at this point have the same three rows of data. Now we start the IO thread again, and the inserts on the source continue. That is how this protects the data: whenever we insert data, we guarantee the data has gone to the other side. On the left-hand side, the wording is that by default replication is asynchronous: a transaction comes in, is written to the source's binary log, and the commit finishes. With semi-sync, there is an IO thread on the replica that pulls the data from the source into the relay log, and an SQL thread that applies that data from the relay log to the database, so the application on the replica can read the data. So what the demo on the right-hand side does is stop this pulling of data from the source, and then the source cannot commit — we guarantee that whatever we commit at that point has at least reached the relay log on the other side. There is no data loss.
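The demo described above corresponds roughly to the following statements (MySQL 8.0.26+ plugin and command names; a sketch, not the exact demo script):

```sql
-- Enable semi-sync on both sides (run the source lines on the source,
-- the replica lines on the replica).
INSTALL PLUGIN rpl_semi_sync_source  SONAME 'semisync_source.so';
SET GLOBAL rpl_semi_sync_source_enabled = ON;
INSTALL PLUGIN rpl_semi_sync_replica SONAME 'semisync_replica.so';
SET GLOBAL rpl_semi_sync_replica_enabled = ON;

-- On the replica: stop only the IO thread that pulls from the source.
STOP REPLICA IO_THREAD;
-- On the source: a new INSERT now blocks, because no replica
-- acknowledges receipt of the transaction into its relay log.

-- On the replica: start the IO thread again; the blocked commit completes.
START REPLICA IO_THREAD;

-- Inspect replication state on the replica.
SHOW REPLICA STATUS\G
```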
For the configuration and implementation, we have to install the semi-sync plugin on the master (the source) as well as on the slave (the replica), and enable it on both. We can also set a timeout: if the source tries to write and then hangs, because the IO thread is stopped or the network is disconnected, then after the timeout it falls back to asynchronous replication; as soon as the connection comes back and it ties to the IO thread again, it goes back into semi-sync. This guarantees the data is synchronously written on the other side with no data loss. So, are there any tools that make it easier to enable this kind of replication? Today we have MySQL Shell, and with InnoDB as the storage engine we can use MySQL InnoDB ReplicaSet. MySQL Shell is a tool — a client, a CLI — so using JavaScript we connect to the source (host one), create a ReplicaSet named myrs, and use it to add the second instance as well as the third instance. As you can see, creating this with the Shell is easy: three command lines bring these three instances together into a ReplicaSet, a one-source-to-two-replicas architecture, without typing CHANGE MASTER and all those statements. Of course, a note on timing: this is still asynchronous replication — the data is committed on the source and pulled afterwards — so there is a chance that data written on server A has not yet caught up on servers B and C. That is this tool. So what comes after this innovation is to
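The three Shell calls mentioned here look roughly like this in the JS mode of MySQL Shell; host names and ports are placeholders:

```js
// First connect to the intended source, e.g.:  mysqlsh root@host1:3306
// Then, in JS mode:
var rs = dba.createReplicaSet('myrs');   // the current instance becomes the source
rs.addInstance('root@host2:3306');       // second instance, replica of host1
rs.addInstance('root@host3:3306');       // third instance, replica of host1
rs.status();                             // inspect the asynchronous topology
```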
get into the InnoDB Cluster deployment. In this deployment you can see we have three MySQL servers (the yellow color here), and the application connects to something in the middle which we call MySQL Router. MySQL Router is a transparent tier: the application connects to it like a proxy server, and that proxy knows which MySQL server is active, so it connects transparently to the back-end servers, even with three nodes. Whatever brings a server down, the router switches over to the others — it is a failover, but transparent to the application; it doesn't need to know. That is one kind of deployment. People may also create another topology where the routers sit on the application servers; the application — for example Java with a JDBC URL — can use multi-host connections, so if router one fails it connects to router two. Routers one, two, and three are stateless: no matter which one you are connecting to, it is connected to the topology of the three-node InnoDB Cluster and knows whom to talk to. This transparency makes deployment very easy. So let's look at how we create this InnoDB Cluster, on the right-hand side. First, we can provision three nodes — the very basic case, from fresh, clean, newly created instances, which is the demo on the right, still blank. There is also the possibility that people have been running a single-node server and just want to add two nodes to bring the server into HA, which we call the three-node InnoDB Cluster; that can be done by backup and restore, or by MySQL dump and loading the data back. Or, on the other side, people have already been doing replication with master and slave, meaning they have two nodes already, running with data. Basically those two
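The multi-host JDBC URL mentioned here looks roughly like this; the router host names and credentials are placeholders, and 6446 is the router's usual classic-protocol read-write port:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class MultiHostDemo {
    public static void main(String[] args) throws Exception {
        // Connector/J tries the listed routers in order and fails over
        // to the next one if a router is unreachable.
        String url = "jdbc:mysql://router1:6446,router2:6446,router3:6446/mydb";
        try (Connection conn = DriverManager.getConnection(url, "app", "secret")) {
            System.out.println("Connected via " + conn.getMetaData().getURL());
        }
    }
}
```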
nodes, we assume, are exactly the same, and we provision another new node and bring the data over as a replica — so one source and two replicas, three nodes at that point. Then we can convert this into an InnoDB Cluster: create the administration users and configure my.cnf, the configuration file, properly. Of course, when we say this is a cluster, the data on all these nodes has to be the same. Same by what means? We do not compare the data row by row online; we compare the GTIDs — global transaction IDs — on server A, server B, and server C. If they are all the same, that means the data is the same, so we have to guarantee the GTIDs on all the servers match so that they can be bridged together. There is also an assumption — actually a requirement — that every table needs a primary key, to identify each specific row of the data, so whenever we update a specific row we know where it is and can perform quickly. The demo is right here on the right. First we create the three instances, on port numbers 3310, 3320, and 3330, each with its own data directory, because for the demo we put them all onto one single box. For this single-box setup, you will then see that we change the configuration, my.cnf, to append certain specific settings: the replication metadata repositories stored in tables, GTID mode must be on, and write-set extraction using XXHASH64. After all these basic configurations we can start them up. The startup can use systemd or whatever you feel is good for you; right now I am using the tar package, which has mysqld_safe, and the instances on ports 3310, 3320, and 3330 are running. These are brand-new servers, so the GTID sets show that
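The appended my.cnf settings described here would look roughly like the fragment below; the port, paths, and exact values are sandbox-style placeholders, and on newer 8.0 releases some of these options are defaults or deprecated:

```ini
[mysqld]
port        = 3310
datadir     = /data/3310            # each sandbox instance has its own datadir
server_id   = 3310                  # must be unique per instance
gtid_mode   = ON                    # GTIDs are needed to compare the servers
enforce_gtid_consistency = ON
transaction_write_set_extraction = XXHASH64   # required by Group Replication
master_info_repository    = TABLE   # replication metadata in tables
relay_log_info_repository = TABLE
```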
they are all empty, and they are all the same because the servers are simply brand new. So now I get into the connections with MySQL Shell — it has JavaScript, Python, and SQL modes — and connect to the servers. The next step is to create an administration user, the GR admin user. While configuring this user, the Shell also checks whether there is any missing configuration; if there is, MySQL Shell configures the server and we restart it. Once all the servers are created and checked properly, we log in through MySQL Shell and simply execute a SELECT statement to query the GTIDs. As mentioned earlier, these are brand-new servers with no transactions at all — even though we created the users, that is hidden rather than recorded as a user transaction — so the GTID sets are all empty on all the servers, meaning they are all the same; when the cluster is assembled, it compares them. Next we create the cluster with the createCluster command, where we can set certain parameters: consistency, failure-detection timeouts, and the cluster's local network address and allowed subnets. With all this together, we create the cluster. Right now the cluster on server one is a one-node cluster, and we add the second and third nodes. How do we do this? By recovery: it can be clone, as mentioned in the earlier session, or it can be incremental. As everything here is empty and the same, we just use incremental — that is the fastest — to bring the data in sync, since there are actually no data changes at all. We also use localhost and the port number as the local address for the cluster network. As soon as we do this, we have added another node — the cluster has two nodes — and we do it again for the third node, and we have created the InnoDB Cluster. Okay, let's move on: what if we
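Under the assumptions above (empty GTID sets, sandbox-style ports on one box, an admin user called gradmin), the Shell sequence is roughly:

```js
// Configure each instance and create the cluster admin user.
dba.configureInstance('root@localhost:3310', {clusterAdmin: 'gradmin'});
dba.configureInstance('root@localhost:3320', {clusterAdmin: 'gradmin'});
dba.configureInstance('root@localhost:3330', {clusterAdmin: 'gradmin'});

// Connect to the first instance, then:
var cluster = dba.createCluster('mycluster');

// Add the remaining nodes; incremental recovery is enough here
// because all the GTID sets are empty and therefore identical.
cluster.addInstance('gradmin@localhost:3320', {recoveryMethod: 'incremental'});
cluster.addInstance('gradmin@localhost:3330', {recoveryMethod: 'incremental'});
cluster.status();
```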
deploy this to the DR site? There are some alternatives. One is to create a cluster in each of two data centers; that means we have to send the data from one side to the other, and we bridge them through asynchronous replication. For this we can use the ClusterSet — a new feature from 8.0.27 (as of today it is actually 8.0.28). Basically, this replication connection has failover capability, no matter whether on the source side or the replica side. Normally it would be a point-to-point connection, and when one endpoint has an outage — say one of the three nodes on the source side fails — the channel fails over to another node, so another node becomes the source sending data across. The same applies on the replica side: one node in that data center is receiving the data, but if it fails, the channel also fails over to another node, which keeps receiving the data, so the data keeps flowing and is always available. It can be just two data centers, or three in the middle, or four — this replication can scale to multiple data centers. One important thing: there is maybe a chance that a data center does not have that much capacity, so we may host just a single-node InnoDB Cluster there as a standby, because we have sufficient capacity in data centers one and two and don't need more — that one is just there as something like a backup. That is what we call the ClusterSet, and it is quite easy. So now, in the demo on the right-hand side, we have the cluster on ports 3310, 3320, and 3330 as we created earlier, and now we try to extend it with three more instances, with the two clusters assuming they are
actually sitting on two VMs, or actually on two data centers. So looking at the demo: we create the three new nodes — four, five, and six — as brand new (or we could restore them from a backup), we start them, and the next step is to create the ClusterSet. We run the createClusterSet command on the existing cluster, and that works; then, curious about what happened, we query the ClusterSet status. The status shows there is one cluster in this ClusterSet, together with its primary instance. Next we create the second cluster as a replica: we call createReplicaCluster, and now the cluster mycluster2 is up and running as the replica. With this second cluster we just do the same as before — addInstance, addInstance — and putting it all together we have two clusters. With MySQL Shell the deployment is just easy, and the two data centers are now bridged together. We run the status command again and it shows there are actually two clusters running: you can see mycluster is the primary, mycluster2 is the replica, and they both have three nodes running. Next, somebody may ask: what about people using Kubernetes these days? For Kubernetes — I will go through this quickly — the three nodes can easily be created as a StatefulSet, with the routers alongside, and there is a custom resource definition that puts the InnoDB Cluster as a resource in Kubernetes. There is also a Service, which we can deploy with a name, say mycluster, which points to the router tier, which is a stateless
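The ClusterSet steps described here look roughly like this in the Shell's JS mode; host names, ports, and the admin user are placeholders:

```js
// On the existing (primary) cluster:
var cluster = dba.getCluster('mycluster');
var cs = cluster.createClusterSet('mycs');   // promote it to a ClusterSet
cs.status({extended: 1});                    // one cluster, with its primary

// Create the DR cluster from one of the new, empty instances:
var replica = cs.createReplicaCluster('gradmin@host2:3310', 'mycluster2');
replica.addInstance('gradmin@host2:3320');
replica.addInstance('gradmin@host2:3330');

cs.status({extended: 1});   // now shows the primary and the replica cluster
```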
ReplicaSet of router pods — so whichever one fails, the Service automatically points to the others, and the application transparently accesses the routers through this Service. Here is a more detailed diagram showing the MySQL Operator, which you can install to perform all these operations quite easily, including things like backups. What this means for us: this operator is now in beta, and you can find it at github.com/mysql/mysql-operator. You can easily deploy the CRDs (custom resource definitions) as well as the deploy-operator YAML, which you can find in the operator repository. There is also a Helm chart — if you look at the operator page you will see how to install it that other way, with the Helm chart. Here I will just show you deploying with kubectl. Basically I have my Kubernetes environment; from the web page you can copy the commands, paste them here, and run kubectl apply for that specific CRD as well as the other YAML that deploys the operator. When we deploy the operator, a namespace called mysql-operator is running, and we can then use it to deploy our custom resource, the InnoDBCluster. Once we have this, we have the namespace and the pod running — right now the screen shows the new namespace, the operator being deployed, and after some seconds we see it is running and ready. Let's quickly get to the next part: with the operator deployed, how do we deploy an InnoDB Cluster? First we need the username and password for the MySQL database, so we define a Secret, applying it with a name (mypwds in the demo) holding the root user and the password (Welcome1 in the demo). After that, we deploy the InnoDB Cluster by having the
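The install steps described here correspond roughly to the commands below; the raw-file paths follow the pattern used in the mysql-operator repository, so verify them against the current README before running:

```shell
# Apply the CRDs, then the operator itself.
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml

# The operator runs in its own namespace.
kubectl get pods -n mysql-operator

# Root credentials for the cluster we are about to create
# (secret name and password are the demo placeholders).
kubectl create secret generic mypwds \
  --from-literal=rootUser=root \
  --from-literal=rootPassword='Welcome1'
```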
YAML, where the kind is InnoDBCluster — the resource type that is new with the operator; when we deployed the operator we created the CRD with this new kind, InnoDBCluster. In the spec we reference the secret name for the username and password, and set three instances for the server nodes — we could spawn five, six, whatever — and also how many routers we need (also three here). We may change the time zone from the specific US time zone to something else, enable some other configuration options, and set how big a storage size we need, too. Once we apply all this, the provisioning takes time: after around two minutes it has one server running, then another a minute later, and after around seven minutes we get the MySQL cluster up and running in a very easy way. You can see the InnoDB Cluster is now up and running: mycluster shows three instances online, three routers, and an age of over five or six minutes — all provisioned in my specific environment. On top of that we can deploy applications to access it, and actually get to know how to access it, so let's make it easy and start with deploying phpMyAdmin, for example. Here you can see we have the cluster up and running, and we can also scale it: we obviously have three nodes, and we change the number of instances from three to five and apply; going from three instances to five, it is still in the process of provisioning the additional ones online. After a while we see mycluster getting more and more — one, two, three, then four and five — and the cluster servers are there. So next we provision phpMyAdmin, which can be running and connecting to
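A minimal manifest of the kind described, assuming the Secret is named mypwds; the field names follow recent operator versions, and the preview shown in the talk may differ slightly:

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds        # rootUser / rootPassword created earlier
  instances: 3              # MySQL server pods; could be 5, 6, ...
  router:
    instances: 3            # router pods
  tlsUseSelfSigned: true
  datadirVolumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 40Gi       # adjust to the storage size you need
```

Scaling the cluster as in the demo is then just editing `instances: 3` to `instances: 5` and re-applying the manifest.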
us. We provision phpMyAdmin, then we just wait for the load balancer's external IP — or we can use an ingress controller with ingress rules — to make sure the routing from the outside into Kubernetes is up and running. After that we can use phpMyAdmin and connect to the InnoDB Cluster. Okay, next let's move on. So far I have covered high availability where we do it on our own with InnoDB Cluster, or we use the Kubernetes MySQL Operator; what about the next thing — the cloud, MDS? Let's have a quick look at what is available on our Oracle Cloud Infrastructure. We can provision standalone servers with just a few clicks, and we can also choose the HA model, which is three nodes running for your applications, as well as the high-performance, very speedy option we call HeatWave — and today we have machine learning enabled for HeatWave as well. We have different shapes to match the requirements of the users, it is a managed service, and applications running on OCI can be created and easily used. As with any data center, even on the cloud we need fault tolerance, and what that means here is that for machines and data centers we have availability domains, and fault domains within them. When we select the HA model, the nodes are deployed across specific machines or specific data centers — when the region has more than one data center, they are deployed across data centers within the region — to make sure we get maximum reliability. So what about HeatWave? As a managed service, with MDS you just provision and use; we have the service automation and the interface for you to manage, and you can monitor the service as
we saw earlier from the previous speakers. The HeatWave service here can take your online database, just running on the MDS service and writing data into InnoDB, while HeatWave itself is a cluster of in-memory engines with the intelligence to dispatch the SQL statements: the data from InnoDB is loaded into this memory cluster, and queries are routed to the corresponding SQL engines. So it can serve OLAP applications alongside, all with a single interface. In the past, people were doing ETL — a lot of work across different databases, different skill sets, ETL effort — and all of that meant cost to you. Now, with this single SQL interface and MySQL as the front end to your application, it is simplified: no ETL anymore, the application uses the same skill set, and it performs much, much faster — it can be extremely fast, hundreds or even thousands of times faster than what you were running before with your applications. And today we have machine learning enabled. Think about HeatWave machine learning: in the past you might use a Python interpreter, running pandas, to extract the data and put it somewhere, then train on the data, find the predictions, and explain and infer the end results. It is just like when we have travel plans and want to find the shortest path to somewhere — how do we do this? We learn from experience, project that this may be the shortest path, then find it and explain which attributes in your model are good enough. Our MySQL HeatWave here also gives you the capability to run this through just the SQL interface. That is the first part of what we wanted to explain. I hope we are on time for the session; this gives you a summary for today, and let's see if
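The SQL-only machine-learning flow mentioned above uses the HeatWave ML routines in the sys schema; the table, column, and schema names below are hypothetical placeholders:

```sql
-- Train a classification model on a labeled table.
CALL sys.ML_TRAIN('demo.train_data', 'label_column',
                  JSON_OBJECT('task', 'classification'), @model);

-- Load the model, then score new rows into an output table.
CALL sys.ML_MODEL_LOAD(@model, NULL);
CALL sys.ML_PREDICT_TABLE('demo.test_data', @model, 'demo.predictions');

-- Explain which attributes drive the predictions.
CALL sys.ML_EXPLAIN_TABLE('demo.test_data', @model, 'demo.explanations');
```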
you have any questions. Today we covered easy deployment for MySQL: it can be source-and-replica sets, InnoDB Cluster, as well as running on Kubernetes. I also explained the others — MySQL Database Service, HeatWave and machine learning — and the Shell, which as we use it today can be a provisioning tool and can also be a migration tool. Okay.