Good afternoon, good evening, good morning, wherever you are. Welcome. I am Chris Short. I'm a Principal Technical Marketing Manager at Red Hat. I'm also a Cloud Native Ambassador. I'll be moderating today's webinar. A few housekeeping items before we get started. During the webinar, you're not able to talk as attendees. Sorry, but that's just how it works. There is a Q&A box, though, at the bottom of your screen. Click that Q&A box and feel free to drop in as many questions as you need to get what you want out of this webinar. Also, a friendly reminder: this is an official CNCF webinar and as such is subject to the CNCF Code of Conduct. Please do not put anything in the chat or in your questions that would be in violation of the Code of Conduct. Basically, be respectful of all of your fellow participants and presenters. I'd like to thank you for joining us today. Welcome to today's CNCF webinar, Distributed Transaction Processing Across Multiple Clouds with Kubernetes. I'd like to introduce Joe Leslie, Senior Product Manager at NuoDB, and Aaron Corelli, Principal Professional Services Solution Architect at NuoDB, who will be driving today's webinar. Please take it away. Excellent. Thank you, Chris. In today's presentation, Aaron is going to control the slides for us, so we have an easier transition through the middle as we tag team and both present. So thank you, Aaron. Just a few words about myself, and then Aaron will introduce himself as well. At NuoDB, I am one of our product managers. NuoDB as a database product, as you can imagine, is made up of many components. I look after several of them, particularly our cloud native strategies, as well as our product and solution deliverables and roadmap in that space.
As NuoDB is a distributed SQL database, I also manage, from a product standpoint, the control plane, as well as our NuoDB Insights visual monitoring tool, which you will get to see a little later during our demo. Aaron, why don't you say a few words about your role here at NuoDB? Sure. Thanks, Joe. So, nearly a year here now at NuoDB. Based over in the UK, I work in professional services, which means I get involved day to day with customers, helping them deploy NuoDB in the best way in their environment and working through solutions with them. And obviously, recently, we've been doing lots of cloud work with Kubernetes, and that's led to the case study we have to show you today. So, yeah, looking forward to it. Great. Thank you, Aaron. All right, let's go ahead and get started. Let's first review our agenda. So Aaron, if you could switch. Great. Excellent. Thank you. I thought it would be good if we started by setting a little context and defining terms like multi-cloud. So we'll do that today. Also, why customers are looking to multi-cloud as a means to deploy business-critical apps. And then, as Aaron mentioned, we're going to review a case study, a real case study of a real enterprise bank that is going to market with a multi-cloud infrastructure. Aaron's going to take us through a live demonstration; we're going to actually show a Kubernetes multi-cloud environment. Then we'll talk a little bit more about NuoDB and our participation in making all of this happen for this particular customer. And then we'll have some nice time for Q&A to make sure that everyone gets their questions answered. Okay. So with that, let's talk a little bit about what exactly multi-cloud is. We want to make sure, as we use terms here today in the presentation, that you know what we mean when we use them.
So first off, multi-cloud is not hybrid cloud. In our industry there's lots of jargon, and certainly in this new space of cloud computing we're not short on jargon. We love to insert words in front of things, possibly making them harder to understand, or to carve out some new area within a segment of the market. And that's kind of what's going on here with hybrid cloud and multi-cloud. Multi-cloud, as we're going to use it today, really refers to multiple public cloud clusters networked together to make up a distributed computing environment. Hybrid cloud refers to a mix of technologies: you typically have an on-prem or private cloud that is somehow linked up to some public cloud pieces, and together they make up the environment. Typically in a hybrid cloud, things are a little more rigid, and certain components run in one segment only. With multi-cloud, which as I mentioned we're defining here as two or more public clouds, the idea is to allow applications and their services to move more freely between these public cloud clusters. Really, the aim here is business continuance and the elimination of reliance on any single physical location or cloud vendor's product, be it Amazon, Azure, Google, or whichever one it might be. Now, as I mentioned, that's the definition we're using. You could go out onto the internet and Google multi-cloud and find different definitions. That's why I thought it important to state that in our presentation, when we use multi-cloud, we're referring to two or more public clouds. You will find some multi-cloud definitions that mix in that hybrid capability. So we are making a distinction here in today's presentation between these two terms, okay? Aaron, yeah, we're ready.
Yeah, so why would a company or an institution consider multi-cloud for their business-critical apps? I mean, cloud is still kind of new. It's something that many feel they might not have full control over, and there may be some risk associated with deploying into the cloud. For some of these reasons, there could be more research or more decisions to be made before a company would actually decide to move a business-critical app out to the cloud. So what we wanted to talk about is some of the reasons we're seeing companies look to the multi-cloud environment to deploy their apps. As mentioned a little earlier, business continuity and extended availability through multi-cloud is now achievable, and it can reduce vendor lock-in. If you're in a multi-public-cloud environment, let's say one made up of Amazon and Azure, you by nature will be more resilient to any failure event that occurs in either one of those cloud environments, because you're not losing both vendors' cloud infrastructure. You would only be losing one, and the other would be able to continue to run and support your application. For these reasons, companies are really seeking to increase their application service levels through the use of multi-cloud computing environments. And as mentioned here, that's driving the reality of where we're heading: true zero-downtime application deployments. Next slide, Aaron. Thank you. We thought a great way to present the topic would be to talk through a case study. This is a real bank that has implemented a banking application in a multi-cloud environment.
And when deploying a multi-cloud strategy, what we've come across is that in some industries, and specifically the one I'm referring to here, the banking industry, there are regulatory requirements that now mandate the use of heterogeneous or disparate public cloud platforms. So why is this? Well, major public cloud vendors regularly experience outages. That might sound like a surprise, but it is a reality. Now, some of these outages may be short, and the extent of them can vary, but the fact is they do experience regular outages. This is verifiable: you can go out to the internet and Google these terms, and you'll find there are even annual reports on cloud vendors' products covering the outages and interruptions they report each year. So why is this important? What happens during these outages is that organizations can lose availability or data, or possibly both. That's why we start to look towards greater protection by distributing our applications across different vendors' products. And with multi-cloud, there are some tech enablers that make this all possible, and we're going to talk about some of those today. One is Kubernetes orchestration, Kubernetes being the de facto standard for open source container orchestration and management, a tool that is maturing very, very quickly through open source contributions. Kubernetes has a lot to do with making multi-cloud a reality. Also, high-capacity networks. What we're seeing now is quite amazing: we can stand up a multi-cloud infrastructure across two different cloud products and, from cluster to cluster, see two to three milliseconds of delay. With that kind of capability, we can now start to stand up critical banking apps, or other critical apps, on an infrastructure like the one we're describing.
And then there are the lower costs associated with these environments: low-cost compute, and of course any kind of cloud investment means less investment in your own on-prem resources, be it CapEx or OpEx. And then storage. Whenever we're moving critical apps into the cloud, we're usually talking about persistent storage, and Kubernetes offers integration with lots of container-native storage options that are also helping make multi-cloud deployment environments a reality. All right. So the banking customer that we're working with is WeLab. WeLab is a challenger bank in Hong Kong, and they've embraced multi-cloud; they believe it is the right environment in which to deploy their application. We've included here several of the reasons they've landed on, some of which we've covered lightly already. In order to achieve the highest level of business continuity, they believe strongly in deploying across vendors' products, and we'll see that WeLab has chosen Amazon and Azure, but you can absolutely choose other public cloud environments as well. There are also the reduced capital and operational costs of deploying these environments. And also the ease of management: Kubernetes is now a more mature platform for deploying containerized apps, and there are lots of options available for managing these environments, creating a single pane of glass for managing, monitoring, and controlling the environment. As all these come together, multi-cloud becomes an environment in which, yes, a company or institution, even a bank, as in this example, can choose public cloud environments to deploy their applications. So when we look at WeLab's environment, we'll just cover a few of the things, some of which I've mentioned already.
They have gone with a multi-cloud that is comprised of two public clouds, Amazon and Azure. Of course, they're using Kubernetes to deploy their stateful applications with persistent storage. That also includes the NuoDB component; the NuoDB distributed database uses persistent storage, and we'll talk a little more about that as well. As for the technology to glue the two clusters together, we're using some VPN tunneling technology, with quality of service via the service provider Megaport. And in this particular deployment, we've chosen to use Rancher for Kubernetes management. In fact, that was WeLab's decision, to go with Rancher as their Kubernetes management interface and console. Then, of course, the application itself: it is a SQL banking application, a packaged application. And then the NuoDB distributed SQL database. So all these components together have allowed this environment to be delivered. With that, I'm going to turn it over to you, Aaron, as part of the team that actually implemented the system, to take us through some of the challenges faced, because I think for our listeners today this will be really important as they're considering deploying their own multi-cloud applications: what sort of challenges we faced and how we resolved them. Sure. Thanks, Joe. So Joe asked me to think about the challenges we faced, and you might recognize a theme amongst the few examples I've given here: a lot of them boil down to networking. I mean, if you take each individual cloud on its own, and you take an application which is container-native and Kubernetes-native and easily deployed, then of course each of those is going to work pretty well on its own.
When you start thinking about the issues of connecting these two clouds together, and it's not just the clouds we're talking about here. If it were just a cloud server, an EC2 instance for example, or within Azure a virtual machine, that would actually be pretty straightforward. But when you layer Kubernetes, pod-to-pod connectivity, and the container networking on top of each other, it becomes quite a difficult task to network everything together. So let's look at these challenges one by one. First, latency between clouds. Now, NuoDB is interested in latency; we want the best possible latency to get the best possible performance, and one of the metrics we have is that we would like latencies to be well under five milliseconds wherever possible. So let's look at how that was resolved. In the case of WeLab, we were lucky in that they are using Megaport, which offers a certain level of QoS for the VPN, the VPN being the connection between the two clouds. So we can be sure of a certain level of service there, rather than going over the public internet, which might be a lot more variable. Now, while we're on the topic of VPNs, it's worth pointing out that we're actually concerned here with two different layers of VPN, if you like. There's a layer of VPN which connects the two clouds together, which lots of different applications might run over. And then, separately, we also have a VPN which is deployed by NuoDB within the containers themselves. So if I refer to a VPN, I might be talking about one or the other; I'll try to distinguish between them. In this particular case, with VPN QoS, I'm talking about the external VPN, the connection between the clouds. The second point here: both data centers are located in Hong Kong. So while we might be using AWS and Azure, both data centers are actually in Hong Kong.
Hong Kong is obviously not an enormous place, and the distance between the data centers is not great, so the ping times, the latency, between those two data centers are actually pretty good. That's not to say you couldn't use this on a wider network, but you always need to be considering those latency times. And the last point here: keeping client connections local to one cluster. This will depend a lot on the topology and on the aim of using multiple clusters. In this particular use case, we have a DR-type scenario: the Azure cluster is a DR cluster, and AWS is our primary cluster for servicing client connections. That's not to say you couldn't use them active-active. But the good thing here is that we can keep client connections local to one cluster, and that saves having too many hops between those clusters, both for the client connections and for retrieving data from NuoDB itself. The second challenge I put down here is pod-to-pod connectivity between the clusters themselves. Again, this is related to networking. While we were looking at this, we considered a lot of different options; there are a lot of third-party products out there which could do this for you. Some examples: container networking (CNI) plugins, the Istio service mesh, Cilium, using host networking or host ports. There definitely are options out there, and in most cases getting to the point of having a networked multi-cluster is not straightforward, but definitely possible. The problem is that when you start trying to get existing applications working over those solutions, they might introduce problems which you then need to change your product to work with. So it really comes down to a question of how flexible you are to make things work on those solutions. If you're building a solution from scratch, it might not matter so much.
But certainly in our case, what we found is that a lot of these solutions are likely to be predetermined by the client you're working with. Perhaps the Kubernetes cluster already has a CNI installed, and that's the one the company has chosen to work with. So in order to get some flexibility, and to keep a lot of things in our control, we decided in this particular case to run over our own internal VPN. NuoDB as a product establishes its own VPN connections between all the peers in its database network. And like it says here, this is pretty simple and effective. It allows the solution to be flexible. It works. It's agnostic to the CNI in use, and it gives us a lot of free rein, effectively, to use our product how we designed it to be used. The third point here: connectivity from outside to inside the VPN. That's another challenge. Naturally, if NuoDB is working within its own VPN, that produces a barrier to client applications connecting into it. The obvious answer is to also put the client on the VPN, but that might introduce other problems along the way. Fortunately, NuoDB is quite flexible in that respect, and what we can do is allow direct connections, using that in combination with the Kubernetes services that are available. This will become a bit clearer soon when Joe talks about the NuoDB architecture, but effectively what we're doing is bypassing the admin plane of NuoDB and connecting directly to the transaction engines which handle the queries. And we can do that via Kubernetes cluster IP and headless services, so that we don't have to worry about VPN IP addresses not being addressable from outside the cluster. So that's a handy flexibility in NuoDB that allowed us to work around the problem. And the third one here, sorry, fourth one here: domain and database stability, and differences of performance between clouds. This is a pretty big one, actually.
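The direct-connection approach Aaron describes, exposing the transaction engines through Kubernetes-native service discovery rather than routing clients through the VPN, is typically done with a headless Service. Here's a minimal sketch; all names, labels, and the port are hypothetical placeholders, not taken from the WeLab deployment:

```yaml
# Illustrative only: a headless Service lets clients resolve TE pod IPs
# directly via cluster DNS, bypassing the NuoDB admin plane for SQL traffic.
apiVersion: v1
kind: Service
metadata:
  name: nuodb-te-direct          # hypothetical name
spec:
  clusterIP: None                # "headless": DNS returns the pod IPs themselves
  selector:
    app: nuodb-te                # hypothetical label on the TE pods
  ports:
    - name: sql
      port: 48006                # placeholder; use the port your TEs listen on
```

Because the Service is headless, a DNS lookup of `nuodb-te-direct.<namespace>.svc.cluster.local` returns the individual TE pod addresses, so a client can reach a TE without the NuoDB-internal VPN addresses needing to be routable from outside the cluster.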
When NuoDB is running on one cloud, or on bare metal, you get pretty consistent performance, and you expect it to be at a certain level. When you start mixing multiple clouds, what we found, especially with Kubernetes, is that the times for servers to be provisioned, pods to be replaced, and persistent volumes to be allocated can vary quite considerably. And what we found was that this was sometimes causing havoc with how NuoDB handled domains coming up and databases becoming available; the default timeouts we were used to just weren't enough. So in this particular case, the resolution was to look at how many different ways we can tune NuoDB. Fortunately, we expose a lot of different options as to what timeouts you can change, whether at the admin layer or for SMs or TEs, and we were able to change a number of different timeout settings and address them specifically to the latency of the network. We can look at the average and maximum latency of the network over a period of time and then apply those figures to the NuoDB settings to make sure they work as expected. So, a combination of different issues there, mostly networking related, with resolutions around making sure we got QoS on the VPN, taking some things into our own control by using our internal VPN, and the flexibility of NuoDB with direct connections and lots of different options to make this work well. So I'll move on to the next slide. This is an example of the actual client architecture at WeLab. Now, it's simplified somewhat; let me just get my pointer up. In essence, we've got a simple representation of the VPN connection here.
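Aaron's last point, sizing timeouts from the average and maximum latency observed over a period, can be sketched as a quick calculation. The samples below are hard-coded for illustration; in practice you would collect them with ping or a similar tool between the clusters, then feed the resulting figures into the NuoDB timeout settings:

```shell
# Compute average and maximum round-trip latency (ms) from a set of samples.
# The sample values are hypothetical stand-ins for real inter-cluster measurements.
samples="2.1 2.4 3.0 2.2 2.8"
avg=$(echo "$samples" | tr ' ' '\n' | awk '{ s += $1 } END { printf "%.1f", s / NR }')
max=$(echo "$samples" | tr ' ' '\n' | sort -n | tail -n 1)
echo "avg=${avg}ms max=${max}ms"   # for these samples: avg=2.5ms max=3.0ms
```

The maximum (and some headroom on top of it) is the figure you would size timeouts against, since a timeout tuned only to the average will still trip on the occasional slow round trip.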
Now, that represents both VPN connections, the site-to-site VPN between the clouds and also the NuoDB VPN, and you can see we've actually got VPN pods here running as well, which service the internal VPN connection for NuoDB. On the left-hand side, in orange, is the Amazon cloud; on the right-hand side, in blue, is the Azure cloud. Now, these are both Hong Kong based. You might notice that on the right-hand side, in Azure, we don't have availability zones, and that's because Azure doesn't offer availability zones in the Hong Kong region. On the left, you can see we've got availability zones. So the intent here really is to distribute the NuoDB architecture, the transaction engines (TEs), the storage managers (SMs), and the admin processes, over enough different resources that we are resilient to failure in a number of different ways. In this example, we could tolerate failures of multiple pods, multiple machines, multiple zones, and multiple clouds, and still have an operational database. One thing to point out: I mentioned before that this particular case is a DR scenario, so we can see at the top that applications will initially connect to Amazon, and, like I said before, connections from client applications within this cloud will all be directed to transaction engines within this cloud. They would never go across clouds. The connections that are made across clouds are all to do with NuoDB's internal synchronization of data, to ensure consistency of the data across both clouds. In a failure event, the customer would look to move connections from the apps over to the other cloud, and that could be an automated or manual process, of course. Another interesting thing to point out on this diagram is at the bottom here: we have a tie-breaker zone. Now, that could actually exist anywhere; the most logical place is potentially a third cloud.
For example, you could use Google Kubernetes Engine, or you could use Rancher deployed to Google. Now, the idea of the tie-breaker in this particular case is to avoid a network partition. If we only had two clouds, Amazon on the left and Azure on the right, we could end up in a scenario where the connection between the two is lost, and if all processes are equal on both sides, we end up with a network partition and the database shuts down. So what this tie-breaker zone is effectively doing is allowing a majority to be maintained should that connection be dropped, and this is what allows us to tolerate the loss of an entire cloud, or of the connection to that cloud. Okay, Joe, do you want to add anything to that before I move on to the demo? I think that was excellent, thank you, Aaron. Okay, so if we make a short prayer to the demo gods. Okay, I appreciate this might appear quite small on some people's screens. Let me just refresh down here; it seems to have lost itself. Okay, so what you see on screen is four windows. The top right, bottom right, and bottom left are all the Rancher management interface, and the top left up here is the NuoDB Insights tool, which Joe mentioned earlier that he manages. What I want to do first is just give you a quick overview of the Rancher management console. Some of you might be familiar, some of you might be brand new; it's a pretty nice way of looking at Kubernetes and makes it very easy to manage. First of all, up here we've got a dropdown. I've named these clusters straightforwardly, AWS and Azure; we have two separate clusters running here, and in each of AWS and Azure we have a default namespace. But the interesting thing to look at first is the nodes. The AWS nodes page is the one I'm on right now, and the Azure nodes page is down here. If I scroll down on Azure, I have a single control plane node and four worker nodes running.
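As an aside before the demo continues, the majority rule behind that tie-breaker zone can be sketched in a few lines. The admin-process counts here are illustrative, not WeLab's actual deployment figures:

```shell
# Does one side of a partition keep quorum? Majority means strictly more than
# half of the total admin processes remain mutually visible.
has_quorum() {
  visible=$1
  total=$2
  if [ $(( visible * 2 )) -gt "$total" ]; then
    echo yes
  else
    echo no
  fi
}

has_quorum 2 4   # two clouds, 2 admins each, link lost: neither side has majority
has_quorum 3 5   # add a 5th tie-breaker admin: the side still reaching it wins
```

With an even split across two clouds, a severed link leaves both sides at exactly half, so the database shuts down to avoid split-brain; the tie-breaker makes the total odd, so whichever side can still reach it holds a majority and stays up.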
And up here on AWS, I have a single control plane node and four worker nodes running. Next, I'll jump over to the default namespace in both of these and wait for them to load. Azure, at the bottom, has loaded. If we have a look at this, we can see this is the actual NuoDB deployment itself which is running. I mentioned before that we have admin processes, and we can see here we have one admin process running in the Azure cloud. We have our TE processes here, and we have one of those running in Azure, and we have an SM, also running on the Azure side. There are a few other pods here: the VPN service, which is of course the connection between the clouds for NuoDB itself, and a YCSB load generator, that's the Yahoo Cloud Serving Benchmark. We haven't got anything started at the moment, but we'll play with that shortly. And we have a few other DaemonSets: THP, as we require transparent huge pages to be disabled for NuoDB, and a job for load balancer policies. But we're most interested in the admin, the SM, and the TE. And the demo gods are not shining on this window at the top here. Let's try refreshing just to point these out. I'm in the UK and all these servers are running in Hong Kong, so sometimes these interfaces do get a bit slow. Yeah, and as you mentioned, Aaron, there's always live demo Murphy's Law that will take effect. But I think most of our listeners appreciate it whenever we try to show live demos; it's very helpful. I'll also mention, while Aaron's waiting for this one screen to repopulate: the demo slide that Aaron launched from has two links to recorded videos, knowing that sometimes live demos can be a challenge. Once you receive your link to today's presentation, you'll see that there are two videos available. Thanks, Joe. So this has now loaded. You can see the configuration is very similar.
The only difference being that here we've got two admin containers running, two SMs, and two TEs. Everything else is very similar, with an additional monitoring Insights pod running here, which drives the screen you see in the top left. So that's an overview of the domain. You can see here that these are two separate windows; they're two separate clusters running in two separate clouds. Everything is very distinct and very independent. The only clue we have that something is linked is this VPN service down here. So in this bottom-left window now, if I draw your attention down here, what I'm going to do is go into the default namespace, open a shell in the admin pod, and run a nuocmd command. What this does is show us what NuoDB's view of the world is, if you like. If I just expand that out, there will be a bit of shuffling of windows I have to do throughout this. We can see here that we've got three admins up at the top running, and those are the three that we had running across both of those clouds. You can see I've named them slightly differently: this top one here is named admin-dr, in Azure, and the bottom two are in AWS. And we can see in the database we have three SMs running and three TEs running. Again, one of those SMs is in DR Azure, and one of those TEs is in DR Azure, and the other two are in AWS. What this really shows you is that NuoDB is seeing this as one logical database and one logical NuoDB domain across multiple clouds. And those multiple clouds might not just be Azure and AWS; it could also be Google, it could also be on-prem. It could be anywhere, effectively, where you've got that connectivity available. Okay, so if I just make that one smaller, and this window at the top left here, let me make this one bigger for you. What this is showing is an overview page, a system overview.
We'll check back here now and again to see what the database is doing. You can see not much is happening right now. This shows memory usage per node, CPU utilization, and transactions per node. An interesting one here is the aggregate transaction rate, the total TPS that's running, and our client connections. We'll keep coming back here to check what's going on while we're doing the demo. The first thing I want to do is scale up a workload. We have YCSB on AWS in the top right; I'm just going to drop that down, and we have YCSB in the bottom right here, so I'll drop that down as well. First of all, I'm going to scale that YCSB workload up to two, and we'll just wait a few seconds for those pods to allocate; pretty quick. On the top left here, we've got the last five minutes showing, so we'll have to refresh this a few times and wait a few seconds before we start seeing things happen. Now, bear in mind what I did just here is start up YCSB only on the AWS side of things, not on the Azure side. Remember, earlier I mentioned keeping those connections local to one cloud. What we'll see up here when the load starts is that those connections are only being serviced by TEs that are in AWS. So here we go, we can see the connections jumping up there. The yellow and green are the two AWS nodes. We can see our transaction rates increasing. So again, on the two AWS nodes, we've got around 1,000 TPS per node for the moment, and we can see our aggregate rate is, as it should be, around 2,000. We'll just wait a little while for that to stabilize. While that's stabilizing, the first thing I'm going to do here as an example is delete a TE. This is a soft way of simulating a pod failure, if you like.
Let's say that one of our transaction engines dropped, whether the node it was on died, or the pod itself died, anything that causes that transaction engine to stop running. Now, the client in this case is correctly configured, so the client connections to the TE that died should be re-established, and when a connection gets re-established, it will connect to one of the other TEs which is still available; in this particular case, we've only got one other available TE. What we should see over here is the connections dropping off the TE that died and reconnecting to the other TE while the failed one gets rescheduled by Kubernetes. And then, shortly after that, we should see the load get redistributed back to the restarted TE. And all the time, the transactions should continue running, and we should have no loss of availability in the meantime. So it's stabilizing over here now on the top left. So I'm going to go ahead, go into our transaction engines, and delete one. Assume that deleted; let's just give it a refresh. It decided to go slow again. So it doesn't look like it deleted when I told it to, but hopefully after this refreshes it will work. This is what I was pointing out to Joe before we started; it seems to be responding slowly today. Yeah, that's okay. I'm sure it'll catch up. We may actually see that it's deleted sooner in the NuoDB Insights interface; as that interface updates every 30 seconds with data, if it has deleted in the back end, we may actually see it there first. Yeah, that's why I'm thinking it hasn't, because I haven't seen that drop off over here. Yeah, it does look fairly stable there. This becomes one of those things: if you click something five times, does it happen quicker? Yeah, it looks like the transaction rate held steady, indicating that the delete probably did not occur.
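For readers following along outside the Rancher UI, the two demo actions map onto ordinary kubectl operations. This is a sketch only: the context name, deployment name, and pod name are placeholders, and it requires a live cluster, so it is shown for orientation rather than execution:

```shell
# 1. Scale the YCSB load generator to two replicas (AWS-side cluster):
kubectl --context aws scale deployment/ycsb --replicas=2

# 2. Simulate a TE failure by deleting one transaction engine pod; the
#    client reconnects to the surviving TE while Kubernetes reschedules
#    the deleted one:
kubectl --context aws delete pod te-database-demo-0   # placeholder pod name

# 3. Watch the replacement pod come back up:
kubectl --context aws get pods --watch
```

The point of the exercise is that between steps 2 and 3 the aggregate TPS on the monitoring dashboard should dip and recover rather than drop to zero, since the surviving TE absorbs the redirected connections.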
Yeah, we can see here we've just got an error refreshing the interface: a network request failed in the Rancher interface. So this does happen. See, it's got such resilience it won't even allow you to delete a pod. Funny how that works. What we could actually do is try down here, see if this connection is working any better. Let's go to... I don't know, maybe it doesn't like me having multiple connections open to the same interface. Well, that's okay, Aaron. Why don't we go ahead and continue, because I think everyone gets the main idea: the ease of management through a containerized management console like Rancher to manage a multi-cloud environment, as well as how NuoDB and distributed SQL databases can service a banking application, or in this case a sample app, but the idea is the same with the client workload that we have implemented in the lab. Also, I think we want to make clear some of the strong points that Aaron was making around our choice of VPN tunneling. As you look to deploy your own multi-cloud environments, we're always going to suggest that you consider and look over the best generally available options at that time. While it is a maturing space, it is continuing to evolve and improve. In our case, as we mentioned, the VPN tunneling was a simple, reliable, effective, and very supportable way for us to deploy this multi-cloud. We looked at some other technologies, like Istio, as Aaron mentioned, and there's the Submariner project, which was not GA at the time we were looking to deploy for this customer. So again, just look at the different components and evaluate them during your planning and testing phases to determine the best and optimal tools for your own deployment.
So, Joe, what I thought I would do is just see if I could load up the interface, improvise here, close down my multiple windows, and see if I can get it up in the browser. We've got the initial connection okay. Sorry, Aaron, let's go ahead and continue. We'll spend the next five minutes or so; we just want to do a little bit of an overview of how NuoDB participated in the solution, and then we'll open for some Q&A. We have just got it back, so up to you; I'm wary of time. So, did you want to carry on? Yeah, why don't we go ahead and carry on, because as I mentioned, the video clips are going to demonstrate exactly what Aaron was going to show next. He was going to demonstrate the resiliency of the system that was built by inserting a failure event and deleting, in this case, a transaction engine. But you can delete a transaction engine, a storage manager, even an admin process within the control plane, and the NuoDB database system will continue to operate. As long as there are multiple processes of each process type, the NuoDB database system is going to continue to run and process SQL requests. So it's a very strong benefit of the solution. So, Aaron, why don't we jump back to the slide deck? Yeah, sure. If you could. And yeah, again, I think the next slide actually shows those video links that I was referring to, and our listeners can always go and step through those at their own pace. They all have text labels describing what's happening as you go through those videos. But on the next few slides, and we'll move through quickly, there are just a few things that we wanted to talk about. One is when, and when not, to deploy a multi-cloud environment. Really, the suggestion here is that a multi-cloud architecture, while certainly great for this particular bank and their critical app, may not be needed everywhere; maybe not all applications require this level of business continuity.
The bank is trying to achieve 24x7x365 availability, and they were looking to the absolute best technologies available to do that. So when not to consider it: well, if your app doesn't require that type of service level, then you could likely deploy in a single cluster, or maybe it doesn't need to be across heterogeneous clouds; you can go across two particular clouds, or the hybrid environment that we talked about earlier. So all things to consider. The next few slides, some of which I'll leave for your own review once you receive the deck for your keeping, review the NuoDB solution itself as a distributed SQL database. Aaron, if you could flip one more slide. There were a couple of areas I wanted to point out in the architecture. You'll notice in the diagram the transaction engines are highlighted in green, and the storage managers in this yellowish color. One of the takeaways and benefits of the product is how the database is architected such that the transaction and storage layers are independent of each other. This allows NuoDB to scale either component based on a particular workload or use case. To the left we see the traditional environment, where the query processing and storage management layers are tightly coupled together and stay together in a full database instance. NuoDB is providing flexibility here that allows it to service applications in a greater variety of ways. Also, if you're curious about the type of SQL, NuoDB supports ANSI-standard SQL. It also delivers ACID transactional properties. You can trust that when you run your SQL applications, you are getting a secure, consistent, atomic transaction that's being committed durably to disk. So if you're migrating from any of the popular legacy database systems, NuoDB delivers that same high level of transaction security.
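Because the transaction and storage layers scale independently, different workloads can choose different cluster shapes, as the next slide illustrates. A toy sketch of that idea, with entirely hypothetical profile names and replica counts (these are not product defaults), might look like:

```python
# Illustrative only: independent scaling of NuoDB's two layers means each
# workload profile can pick its own mix of transaction engines (TEs) and
# storage managers (SMs). Profiles and counts here are made-up examples.
PROFILES = {
    "logging": {"transaction_engines": 2, "storage_managers": 4},  # many storage copies
    "htap":    {"transaction_engines": 6, "storage_managers": 2},  # many TEs, fewer copies
}

def scale_plan(profile):
    """Summarize the TE/SM shape chosen for a workload profile."""
    shape = PROFILES[profile]
    return f"{shape['transaction_engines']} TEs / {shape['storage_managers']} SMs"
```

The point is simply that neither layer's count constrains the other, unlike the tightly coupled traditional instance on the left of the diagram.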
The next slide actually demonstrates some of the different environments that can be supported. You can have some applications that might require scale-out of the transaction layer. Aaron, if you could just flip the slide, that would be great. Thanks. Here we see the one at the top: maybe a logging application has many more storage copies, whereas an HTAP (hybrid transactional analytical processing) app might have lots of transaction engines servicing different types of applications but fewer copies of the database. The choice is completely yours. Next slide, Aaron. We've talked through a lot of the benefits of the active-active capability, the scale-out, the distributed architecture of NuoDB. I'll just make one point here about the dynamic caching capability. Each transaction engine keeps a dynamic cache. If memory is available to it, it will effectively use its in-memory database performance capabilities, but if it does need to fetch a piece of data it doesn't have, it will look to its nearest transaction engine neighbor and remain as efficient as it possibly can before ever going to disk for a data block, or in our case what we call a data atom. Next slide. Yeah, so let's go ahead and wrap up, and then we'll work to take some questions. So really, what have we talked about today? We have talked about some rapidly maturing capabilities that have effectively allowed multi-cloud infrastructure to become a serious choice, even for those deploying stateful, critical business apps to the cloud. We talked about Kubernetes and network resiliency; a lot of what's available today we could only have dreamed about even a few short years ago. Also, this idea of running critical apps and a single logical distributed SQL database across a Kubernetes-managed environment is demonstrating for us real new possibilities.
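The dynamic caching lookup order Joe describes (own cache first, then a peer TE's cache, and disk only as a last resort) can be sketched as a simple tiered fetch. This is a hedged, toy illustration of the concept; the dictionaries standing in for caches and the storage manager are hypothetical, not NuoDB internals.

```python
# Hedged sketch of tiered atom lookup: local cache -> peer TE cache -> disk (SM).
def fetch_atom(key, local_cache, peer_caches, disk):
    """Return (value, source) for a data atom, caching locally on a miss."""
    if key in local_cache:
        return local_cache[key], "local"
    for peer in peer_caches:               # ask neighboring TEs before disk
        if key in peer:
            local_cache[key] = peer[key]   # keep a local copy for next time
            return peer[key], "peer"
    local_cache[key] = disk[key]           # last resort: the storage layer
    return disk[key], "disk"

# Illustrative state for one TE in a two-TE database.
local_cache = {"atom-1": "x"}
peer_caches = [{"atom-2": "y"}]
disk = {"atom-1": "x", "atom-2": "y", "atom-3": "z"}
```

Note how a peer hit also warms the local cache, so repeated requests for the same atom stay in memory, which is the efficiency Joe is pointing at.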
Earlier we talked about the true zero-downtime capabilities, and these options really allow our customers today, and yourselves as you look to deploy multi-cloud, to consider the best ways to deploy these applications: when you want, where you want, and how you want in multi-cloud infrastructure. So with that, why don't we go ahead and open for questions. We've got a few minutes left, and Aaron and I would be glad to answer any questions that are out there. All right, so there are two questions open in the Q&A right now, if you want to just go over those and answer them live. Okay, great, thanks, I will. Yeah, so: does the NuoDB JDBC driver support TEs deployed in a multi-cloud model? Yes, it does, absolutely. So the application that's being deployed in the cloud can certainly be a JDBC app, and it would leverage the NuoDB JDBC driver for that purpose. And the second question is: how does NuoDB differ from SAP HANA as a distributed in-memory processing database? Okay, so SAP HANA is kind of well known; I'll go ahead and say it's more of an on-prem, private-cloud type implementation. So as far as how it differs, we aim to effectively deliver on many of those same distributed in-memory capabilities, but do that in the cloud. And I touched upon some of the in-memory processing: NuoDB, as I said, uses a kind of quickest-available algorithm to determine how to find its in-memory component. Each transaction engine could not possibly hold all of the data of a database, but when you look at the transaction engines together as a chorus, they can start to hold a much larger percentage of the database in memory. So they do work together in order to create that in-memory distributed database performance. Okay, and then we also have a third question: is this in the same competitor space as DataStax, a distributed Cassandra?
Yeah, exactly, you're answering your own question there: guess not, since it's SQL versus NoSQL. Yes, Duane, that is correct. NuoDB is a pure SQL database; it is not a NoSQL database. So we would not consider DataStax a competitive offering. Those who are looking to NuoDB are those who have made a large investment in their SQL applications. They're typically very critical business-type applications, and they don't want to throw away that investment. They want to maintain the investment both in the apps and in their human resources, and leverage those apps in the newer, modern computing architectures that multi-cloud and Kubernetes are bringing to this space. Okay, looks like we have another question coming in as well, from Sujit: how is concurrency handled when multiple TEs and SMs attempt to update the same underlying database records in NuoDB? So NuoDB also supports a distributed lock manager, just like all those other legacy, mature relational database products that you're used to. Each data atom in NuoDB has what we call a chairman; each effectively has an owner that acts as its distributed lock manager. So basically, two different writers would not be able to write to the same data at the same time. Whoever establishes the lock first becomes the owner, and others will have to either wait or can, at the application layer, choose not to wait. So really, it's handled in the same way as in other major relational database products. Okay, great questions, everyone. So, Joe, I spotted a question in the chat that hadn't made it into the Q&A as well, and that was raising the question: are there any security concerns with sending data across the clouds? I did answer that one; I typed it in earlier because it came in earlier, but feel free to add if you have something you'd like to add.
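The ownership rule Joe describes (whichever writer reaches the atom's owner first holds the lock; later writers must wait or choose not to) can be sketched with a toy lock object. This is a hedged illustration of the general distributed-lock-manager pattern, not NuoDB's actual chairman implementation.

```python
# Hedged sketch: first writer to reach an atom's owner gets the write lock.
class AtomOwner:
    """Toy stand-in for the owner ("chairman") of one data atom."""
    def __init__(self):
        self.holder = None

    def try_lock(self, writer):
        """Grant the lock to the first requester; refuse others until release."""
        if self.holder is None:
            self.holder = writer
            return True
        return self.holder == writer   # re-entrant for the current holder

    def release(self, writer):
        if self.holder == writer:
            self.holder = None
```

A refused writer then decides at the application layer whether to wait and retry or give up, which mirrors the wait-or-don't-wait choice Joe mentions.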
Yeah, I was just going to say that when we set up the NuoDB VPN, there are several different layers of encryption, as I mentioned. First, we've got NuoDB TLS encryption: all the connections between the NuoDB components and clients have TLS encryption. Second, the NuoDB VPN, which we deploy in order to operate across multiple clouds, can optionally also be TLS encrypted. And then thirdly, if you're operating over a site-to-site VPN rather than the public internet, for example between Azure and AWS, or if you've got a provider like Megaport, then there are options around encryption there as well. So there are potentially three layers of encryption you could use. I'm not saying you should or would use all of those; obviously they might add latency as well, but certainly encryption is available. Great, thank you, Aaron, I appreciate the further insights there. I think that wraps it up for us, Chris. So from our side, myself and Aaron, we were delighted to present to the group today. It's a fascinating topic, and we were glad to share our experiences as we have deployed multi-cloud for one of our banking applications. So whenever you're considering a SQL application in the cloud, we do hope you may call upon us, as this is an area where we have a lot of experience, and we look forward to perhaps crossing paths again. Awesome, thank you, Joe and Aaron, for the presentation today. The webinar recording and slides will be available online, hopefully later today, at cncf.io slash webinars, with an S. If you have any questions or concerns, head there, feel free to look for the video, and reach out if you have any other questions for the gentlemen on the call today. Without further ado, see you at the next CNCF webinar, and have a great day. Thanks for hosting, Chris. No problem, thank you all.