All right, we're rolling, ladies and gentlemen. Thank you for joining. This is another episode of the OpenShift Commons Briefings Operator Hours. Today we are fortunate enough to have with us Ewout Prangsma, who is the team lead and main architect for the Oasis managed service offering at ArangoDB. Ewout, how are you today? I'm very good. Thanks for having me. How are you today? Really good. Really good. I got some final ski runs in this past weekend up here in New England. The snowpack is pretty much gone. Everything's starting to shift over to spring. How is it over in the Netherlands? Yeah, well, we don't have that much snow in the Netherlands, to be honest. We definitely had some earlier this year, but spring is definitely coming. It's very nice weather outside right now, and we're trying to get outdoors more and having a lot of fun with it. I've been there a couple of times. Isn't that the tulip capital of the world? It definitely is. Yes. So, ArangoDB, it's a database, I can tell by the name. What are you going to be talking to us about today? Yeah, what we're going to talk about today is the challenges that we had while building our managed cloud offering. We are running ArangoDB Oasis, which is our managed service. So we're running ArangoDB databases for our customers. We're doing that in a multi-cloud fashion, so customers can choose different cloud providers. And there are lots of technical challenges behind the scenes, because not every cloud is the same. We're going to talk about what actually makes the difference between the different cloud vendors and how we overcome that. Sure. Sure. You want to pop up that first slide that we were talking about earlier? Yeah, absolutely. No problem. And thanks again for being part of our show. We host all of our software partners who we work with, who build an operator for OpenShift.
And ArangoDB is another database vendor that we're proud to see supporting our platform. I've got to ask you a question. Your company logo seems to be an avocado. And I know when we were talking previously that Oasis is a managed offering, as opposed to just the database, and it has its own mark. Is that an avocado growing a leafy tree out of it? What's in your mark there? Yeah, so there is a story behind it. You're right, the ArangoDB logo itself is an avocado. And for Oasis, we wanted to merge that with the cloud. And obviously, when you call your product an Oasis, you want your customers to have that oasis feeling. We want them to feel that we have their back, and they can just relax in a very nice place, actually work with the database, and leave the worrying to us. So what we did is we merged the typical cloud image with the pit of the avocado. The brown dots in the Oasis logo are actually the pit of the avocado, surrounded by the palm trees of the oasis. That's pretty cool. So the company is about seven years old, and I think you've been with it for about five of those years. What kind of changes have you seen over the five years that you've been there? Is it growing? How is the state of the union over at ArangoDB? Yeah, we are growing very much at the moment, and we're growing in lots of different areas. Over time, we have grown into a database that people really get to love and use for all kinds of different purposes, primarily documents and graphs, and nowadays also graph analysis. And we see that growth in our customers. We're getting many, many more customers nowadays than when I started. It was still a few back then, but we now have very large customers. And we can also see that the actual use cases of our customers are changing.
So it's not only small use cases here and there; we can see that very large customers are relying on us to store their biggest graphs, graphs that scale across many, many machines, to solve all kinds of problems in IoT or medicine or social networking or anything like that. And you folks are, I'm going to guess, a fairly distributed company. I know that you're in the Netherlands. Is that where the company started? No, the company actually started in Germany. It started out in Cologne, which is in the west part of Germany, actually not that far from where I live. But nowadays we are a really distributed company. We range all the way from Asia to Russia, many countries in the EU, the US. I don't think we have people in Australia at the moment, but we may get there as well. Yeah, like most companies in the last year, we have transitioned from a distributed, office-based company into a fully distributed company where everyone has their private office, essentially. Right. I mean, I work at Red Hat, and we're owned by IBM now. I think collectively there are probably 370,000 employees in the company, and everybody is doing this. Everybody is working from home. It's certainly been an interesting 12 months. And hopefully at some point here soon everyone will get vaccinated and we'll be able to get back, at least partially, to the old ways of at least going out to KubeCon, right? I mean, we were scheduled to go to KubeCon, where was it going to be, Amsterdam, right? Yeah. And that was really kind of a bummer, that back then, last March, everything got canceled, but oh well.
Yeah, I think we're all trying to cope with this, but for the database market, the last year has actually also seen a lot of uptick, because with the further digitalization of products, people need a way to store all their business data. And they're doing that in databases. So for us, there's also a positive side to the whole story. So, you know, I do partner marketing with vendors here at Red Hat that support OpenShift. And there have got to be probably 15 different database vendors that I work with from a marketing perspective on a regular basis. Why are there so many? Yeah, well, that's always a very good question. I think there are so many because the use cases vary and the technology varies over time. We have seen in the database world a natural progression from storing very fixed rows of data, then slowly migrating to more NoSQL, because people found that having a very fixed structure is not very pleasant. And more recently, there has been a large shift towards graph databases and the analysis of graph data, data stored as graphs. What ArangoDB specializes in, the special place that we have in there, is that we're not only doing documents and graphs, but we're doing that at very, very large scale. So you're not limited to single machines. If you have a data set that needs many machines, you can still do it, and you can still have very performant graph queries. So there are graph databases, SQL databases, NoSQL databases, NewSQL databases. How does a graph database compare with SQL, NoSQL, and so forth? Yeah, I think it's not so much how you store the data; it's more about what you can do with querying the data. There are more and more use cases out there where you are thinking of a graph in your storage space.
That can be as simple as the traditional social network example, where you have people, and people have connections between them, and you post blogs and there are comments on those. All of that can be thought of as a huge graph. For a social network that's pretty obvious, but think of the world of IoT, for example. There you also have lots of different interconnections. You're talking about objects like houses and sensors and actuators and all kinds of devices, and all of that can be thought of as a huge graph, because there are lots of connections between them. Where the graph database really shines is in asking questions about such a graph. Let me give you an example. Say I want to ask my social network: give me all the people that recently commented on a blog post that a friend of yours wrote. If you think of how you could write that with a traditional SQL-like joining approach, you would first of all have to think of all those lookups upfront, and it needs a lot of interaction with your database. On a technical level, that means a lot of programming, but it also costs a lot in performance. With a graph database and the graph query language behind it, you can ask that entire question in one go and let the database solve the problem for you. It will just go through all the data that it has and give you back the answer. And where ArangoDB is really special is that it can give you that answer very quickly, even if your data becomes very large. So we're streaming this live on Twitch, and it's also live on YouTube as well as Facebook. People watching this right now will be able to put questions into their chats, and they're going to pop up over here; our producer Chris is making sure that's happening.
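The friends'-blog-commenters question Ewout describes can be sketched in plain Python. This is just an illustration of the multi-hop traversal idea, not ArangoDB's AQL query language; the data set and function names are made up:

```python
# Toy social graph: a traversal from a person, to their friends,
# to the posts those friends wrote, to the people who commented.
friends = {               # person -> set of friends
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
}
wrote = {                 # person -> posts they authored
    "bob": ["post1"],
    "carol": ["post2"],
}
commented_on = {          # post -> people who commented on it
    "post1": {"dave", "erin"},
    "post2": {"frank"},
}

def commenters_on_friends_posts(person):
    """Three hops in one traversal: person -> friends -> posts -> commenters."""
    result = set()
    for friend in friends.get(person, set()):
        for post in wrote.get(friend, []):
            result |= commented_on.get(post, set())
    return result

print(sorted(commenters_on_friends_posts("alice")))
# -> ['dave', 'erin', 'frank']
```

In a relational schema this would be a chain of joins across three tables; a graph query language expresses the same thing as a single traversal, which is the point Ewout makes about asking the whole question in one go.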
Are you saying that a comment that someone made on YouTube, and I don't know what their backend system is, but you can then actually use their backend data store to run reports and graph that up? No, it's not so much that we are using their store or that we can combine that, but if you model your data as a graph and store it in a graph database, then you can reduce the number of queries you have to do and make them much more performant in the process. Okay. And then the types of workloads that customers choose your database for, how would you characterize those? Is it financial transactions on Wall Street? Is it Twitter tweets? How is it used? Yeah, that's the beauty of this. It's not limited to a single use case, and a lazy answer would be all of the above. But to give you a couple of examples, we span all the way from tracking aircraft parts to IoT applications to utilities to medicine. And just imagine what the COVID crisis and all the research around it is doing. You could even take all the research papers being written around COVID, put them into a huge graph, and that way advance the science behind it and come up with answers to this whole pandemic quicker. I was just making a note here. Actually, I still am. Maybe I should stop making notes and look at you. My mom always said to make sure you look at people when you're talking to them. So, you know, these challenging times, as people refer to them, what kind of impact has that had on your business? Yeah, I think it's twofold. One, we have learned to be 100% remote. We were already a distributed company with people in many countries, as we discussed. But I think, like any other business, we have learned that, yes, you cannot just take the plane and go to your customer. You have to talk to them over Zoom or whatever medium you use. And also in the way that we work internally with our engineering, with our sales, how do you coordinate that? I think that's change number one.
And the other change, of course, is our customers. Our customers are also going digital at a very high pace. It's going like crazy. And they are storing data. What we see more and more, and that's definitely a change of the last year, is that since customers also don't have easy access to their offices, they're moving more and more of their data into the cloud. And that is also where ArangoDB Oasis pops up, because we see more and more customers choosing our managed service, because they reason: I already cannot go to my servers in my office, so why not make the switch to the cloud, store everything in Oasis, and not manage it ourselves? You know, 18 months ago, people were saying, well, the cloud is still a lot of hype, and workloads are still going to continue running in data centers, on bare metal, on premise. So you're saying that because of the present situation, that's accelerating customers to get on the stick and improve accessibility to their data by moving everything into the cloud, or moving things into the cloud faster than their plans may have been 12 to 18 months ago. Absolutely. Yes. Yeah. Interesting. Okay. Well, good. So why don't I turn it over to you here, and let's learn about your title slide and see what you've got for us. Very good. Okay. So let's dive into this multi-cloud provider platform that Oasis is, and we're here on the invitation of the folks behind OpenShift, so obviously we're talking about Kubernetes and everything around it. And let me start by saying that not every Kubernetes is actually the same. People may think that way, and I would love it to be true, but we found in our endeavor to build this Oasis platform that it is not really true, and the devil is in the details.
What I propose we discuss today is some of those challenges, and I will give you some examples of where things actually differ between Kubernetes offerings. So the thing we're going to talk about is what kind of challenges you have when you're doing multi-provider support. We're also going to talk about Kubernetes as an abstraction layer, but not every abstraction layer is perfect. We're going to dive into the differences, and those differences are in lots of areas: the typical traditional ones like security, but also networking, and, obviously for a database, storage. There are lots of other areas we could dive into as well, but we will just scratch the surface today. So, a little bit of introduction about myself. We already discussed it: I'm team lead of the ArangoDB Oasis product. I always like to work on distributed systems and actually make things work for customers. That's my main motivation and my main driver, and that's what we're doing. And if you see a model train in the background, then you've spotted it right. That's my passion. Wait a minute. That's not a real picture? No, no, no, this is actually a 1:87 scale model train. Oh, interesting. I thought that was part of the countryside there. No, no, unfortunately the Netherlands is much flatter than this, so this is not a scene from home. I've been there a couple of times for DockerCon and KubeCon. It was amazing how flat it was, and how everyone used their bicycles like crazy. It was really cool, which you can do if it's flat, right? Yeah. Yeah. Okay. Obviously, there is also a much bigger team, so here are just some names and faces of my teammates. A little bit of an intro to ArangoDB itself. We already discussed that it's a graph database, but you can go much further than graphs. You can also store documents, even data as raw as key-values.
The key part here is scalable. You can have very large graphs, and we give you all kinds of tools to make it easy not only to scale that data, but to keep it performant. That's very important. And all of that data, whether it is your graphs or your documents, can be queried with a single query language. That's pretty important for our customers, because imagine that you had to learn three, four, or five different query languages, one for each of them; that would be really annoying. This way you can just learn one thing, know it very well, and apply it to everything. So let's move on to ArangoDB Oasis, our managed platform. What do we do? In the history of ArangoDB, we have spoken with lots of customers, and we get a lot of comments telling us: I really like your database. It's easy to use. It's great. But please run it for me. I don't like running this database. And that actually makes a lot of sense, because running a database, or any stateful workload, but definitely a database, is not an easy thing to do. You have to look at all the details and get all of the details right, otherwise you have problems. And that's exactly what we help our customers with. We run databases for them. A customer comes to us and says, I want to run an ArangoDB database in, for example, Google in London, or I want to run it in AWS in Ohio. And we make sure that that database is started there. But we also make sure that it keeps running there, we monitor it, and we make sure all the backups are in place and all that stuff. Can I ask you a question? Yeah, go ahead. You know, there's this concept that if you're using some Kubernetes platform like OpenShift or others, you can build it once and deploy anywhere. Yes.
But that's not quite the case with GKE, EKS, and the others, because they're all different enough that it's almost like having to support multiple different platforms for the same application. Yeah. Yes and no. You can go as far as to say, I can run it on all the different platforms, and as long as you don't care about optimal performance or optimal security, then you get pretty close to deploying and running it everywhere. If you do care about those things, and I think most people actually do, then there are all kinds of small variations. Let me dive into those, because we see Kubernetes as an abstraction layer, and I often compare Kubernetes to the promise of the Java virtual machine as we had it in the 90s. That promise was also: you can write once and then run it everywhere. But the reality, and I've written my fair share of Java in the past as well, was that you were always asking: okay, what platform am I running on, and how do I need to adjust for it? Small things, you don't have to rewrite everything, but you still have to deal with it. Kubernetes nowadays is the same story. We look at Kubernetes, in this case managed Kubernetes, and we look at three cloud providers that provide managed Kubernetes offerings. Obviously, we're also available on-prem: if you're running OpenShift, you can use the ArangoDB operator to run an ArangoDB database by yourself. But that's not Oasis. For Oasis, we're running on these managed Kubernetes offerings: EKS for Amazon, GKE for Google, and AKS for Microsoft Azure. And you could argue, hey, they're all Kubernetes; we have a series of containers that we need to run. What is the difference? Well, let me tell you the areas where the differences are. There are many. Let's talk about Kubernetes versions: they're slightly different. Let's talk about security and authentication.
If you want to authenticate yourself with a Kubernetes cluster running on Amazon, it's completely different, and you need different tooling than for authentication on GKE. Of course, you can use the same kubectl tooling, but underneath, you need to install additional tools for it. And there are lots of other areas. I would like to give you a couple of highlights of the differences on the different platforms. So let's talk Amazon. Amazon is actually a very stable platform. That's awesome. But there are many resources you need to set up if you're creating a Kubernetes cluster. It's not as easy as saying, I'm spinning it up, just create a cluster. No, you have to bring in your load balancers and your security groups and lots of stuff. That makes it challenging. It also makes it easier to control once you have done it, but it takes a huge step to get there. And what we find is that everything you do on Amazon works really well, but the error handling is a bit outdated, unfortunately. It's not really structured, and sometimes you have to resort to things like string parsing and so on. If we switch to GKE, the APIs of GKE are awesome. You can spin up a Kubernetes cluster just like that, and it's extremely easy to do. But the big problem that we have with GKE is their very aggressive update policy. They're forcing you to go to new Kubernetes versions at a relentless pace. And if you're on a managed platform, you have to follow. There is no choice. And if you're doing something stateful like we do, you have to be extra careful there. Can I ask you a question about that? Yeah, go ahead. No, I was just, having been here at Red Hat for, whatever, 21 years now, I was here when Red Hat Linux was a very popular distribution, version six, version seven, version eight, and our update cycle was really fast.
And when we started creating our enterprise product, Red Hat Enterprise Linux, well, the first version was actually called Advanced Server 2.1, but people probably don't remember that. We did that because our customers told us they wanted a stable platform for three to five years, so they didn't have to keep revving their apps all the time. Is that kind of where we are now? I mean, wouldn't that be a major problem for people trying to build apps for GKE? What is their release cycle on that platform? Yeah, it is really a problem at the moment, and it is in how they deal with it. The general release cycle for Kubernetes is roughly every three months; there is a new minor version which changes things. The set of versions you can choose from on GKE is fairly small, which means they have this window of versions that you can be in, but that window is rather narrow. So it also means that you're quickly out of that window. You really have to expect that on GKE you are going to a new version at least every two months, and that's rather a lot. Does that mean people have to do a rebuild of their apps every two months? No, it's usually not a rebuild, but when you are doing things at scale and trying to get the maximum out of your stateful workload, then you have to be careful, because new versions of Kubernetes bring new features, which is great, but also deprecation of features and more stringent security requirements, for example. And you have to be very careful that an upgrade to that new version is not going to break your system. So the majority of the updates that we do in terms of Kubernetes versions are more of a test-and-deploy kind of approach, but there are definitely also issues. Usually in every minor version there are two or three issues that really need changes in our code to work correctly under the new version.
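As a concrete illustration of why a forced version bump can bite, here is a minimal sketch of the kind of pre-upgrade check a platform team might run: flag manifests that use API versions removed in the target Kubernetes release. The removal table below lists a few well-known real removals but is deliberately abbreviated, and the function names are made up:

```python
# Abbreviated map of (apiVersion, kind) -> Kubernetes minor version in
# which that API was removed. Real clusters should consult the official
# deprecated-API migration guide for the full list.
REMOVED_IN = {
    ("extensions/v1beta1", "Deployment"): 16,   # removed in 1.16
    ("extensions/v1beta1", "Ingress"): 22,      # removed in 1.22
    ("batch/v1beta1", "CronJob"): 25,           # removed in 1.25
}

def breaks_on_upgrade(manifest, target_minor):
    """Return True if this manifest's API version no longer exists
    at Kubernetes 1.<target_minor>."""
    key = (manifest["apiVersion"], manifest["kind"])
    removed = REMOVED_IN.get(key)
    return removed is not None and target_minor >= removed

legacy_ingress = {"apiVersion": "extensions/v1beta1", "kind": "Ingress"}
print(breaks_on_upgrade(legacy_ingress, 21))  # -> False, still served
print(breaks_on_upgrade(legacy_ingress, 22))  # -> True, upgrade breaks it
```

This is the "test and deploy" work Ewout describes: when the managed platform forces a new minor version every couple of months, checks like this tell you which manifests need migrating before the window closes.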
So if I'm a developer building applications, and I don't really want to put all my eggs in one basket, because we don't want to get locked into one particular cloud vendor, presumably people building apps publish them on Amazon, GKE, Azure, and others. But if everyone's revving their base platform every several months, as you mentioned with GKE, and I don't know what the release cycle is for Amazon or Azure, how does an application developer deal with that? You'd think that instead of creating new features, they're just sitting there having to cater to the update paths of the three cloud providers. Yeah, I must add here that the majority of things that require changes have to do with stateful workloads, and the majority of applications don't have that much state in them. That's what you use a database for. So the majority of applications are not going to have that many problems when their underlying Kubernetes version is upgraded. It becomes a problem if you're doing more stateful things, or are more heavily integrated in the network, for example. If you're on the edge of the requirements of the network in terms of your security or your firewalling and things like that, then you have to be careful and go with the flow. Okay, let's switch over to Microsoft Azure. We haven't touched on that one. We can see that it is clearly the least mature of the three, but they are very responsive in their support. We have had great support from them. The biggest problem that we have with Microsoft Azure is really on the storage side. It has to do with the attachment of persistent volumes and resizing of persistent volume claims and so on. And also, there is weird behavior of the cluster autoscaler on Microsoft Azure. The cluster autoscaler makes sure that your cluster actually grows when the workloads on your cluster grow.
And for us, that is a very useful feature, because as soon as a customer says, I want to run this deployment there, we make sure that the nodes are there and that all the needed resources are there. But that's not something we do ourselves. We only want to declare that capacity, and the cluster autoscaler is going to make it happen. On Microsoft Azure, though, the cluster autoscaler is not that smart about dealing with zones. And that's a problem for us. We want to go across availability zones because of the high availability guarantees that we give. But what the autoscaler for Azure actually does is just say: I'm creating your next node in the next zone. So first you get a node in zone A, then in zone B, then in zone C. It doesn't take into account whether you have a certain affinity with a node. And if you have data that lives in zone two and suddenly the autoscaler creates something in zone three, that's not very helpful. So there are lots of issues there. I want to pick one here. We could go through the entire list, but let's talk about storage, because storage, of course, is a very big issue for a database vendor. So where are the differences? Let's start with performance. There is a big difference in how performant the different persistent volume offerings are. All of them have different configuration options: you can choose your SSDs and your IOPS settings and so on. But as soon as you go in that direction, it becomes provider specific. So getting back to your previous question of where the difference actually is, and can't I just run it anywhere: yeah, technically you can. But if you want a volume with certain characteristics, you have to specify it for that platform. That's not something Kubernetes is going to give you. And the performance characteristics also change over time.
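To make the "provider specific" point concrete, here is a sketch of what a comparable fast-SSD StorageClass ends up looking like on each provider, built as Python dicts for illustration. The provisioner names and parameter keys follow the common CSI drivers, but treat the exact values as illustrative rather than a complete configuration:

```python
# The StorageClass *kind* is standard Kubernetes, but the provisioner and
# its parameter schema differ per cloud, so a multi-provider platform has
# to template one manifest per provider.
def storage_class(name, provisioner, parameters):
    return {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": provisioner,
        "parameters": parameters,
        "allowVolumeExpansion": True,
    }

fast_ssd = {
    "aws": storage_class("fast", "ebs.csi.aws.com",
                         {"type": "gp3", "iops": "6000"}),
    "gcp": storage_class("fast", "pd.csi.storage.gke.io",
                         {"type": "pd-ssd"}),
    "azure": storage_class("fast", "disk.csi.azure.com",
                           {"skuName": "Premium_LRS"}),
}

for provider, sc in fast_ssd.items():
    print(provider, sc["provisioner"], sc["parameters"])
```

Note that even naming a performance tier is inconsistent: AWS takes explicit IOPS, GCP exposes disk types, and Azure exposes SKU names, which is exactly why "just run it anywhere" stops at the storage layer.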
The majority of providers have an interesting feature where they essentially give you a buffer of IOPS. Those are the I/O operations you can do per unit of time. And if you exceed that buffer, the majority of providers are a bit lenient and say, okay, that's fine, you can do it, but not for a long time. So you don't have a sustained performance characteristic there, and it changes over time, which makes it difficult to compare the performance of AWS against the performance of Google, for example. Let's talk about volume resizing. Volume resizing is something that is specified in Kubernetes. You can just update the size on your persistent volume claim, and according to the Kubernetes specification, it's going to resize. But it's not done in exactly the same way everywhere, and that makes it hard. We have had to build additional code specifically for Microsoft Azure because of the way that the attachment of volumes works. The other providers can change the volumes on the fly and don't have a problem with that, but there it doesn't work. This is one of those cases where the Kubernetes specification is not going to help you, because it doesn't say everything there is to say about the actual underlying behavior. And this is exactly the behavior that we saw in the 90s with the JVM, where the specification says it's the same, but the reality is slightly different. The big question, of course: we are already well into 2021, and do we see an improvement there? We have been working on the Oasis platform for two and a half years, and two and a half years is pretty much a lifetime in the Kubernetes world. So will it get any better? I'm afraid not. This is just a picture of the container runtimes that we see. Of course, not all of them are available on the managed platforms, but there is a huge number of options to choose from. And we see the whole Kubernetes space evolving at a very rapid pace.
There are lots of exciting projects popping up, but there are also deviations. Every deviation has its specific good and bad sides, but overall it probably means more work for a multi-provider platform. Hey Ewout, just go back to the last one for a second. Do you think the world needs so many container runtimes? That's always a very challenging question to answer. My personal answer would be no, but let's be honest: there are also a huge number of different cars in the world. Do we need so many different cars? People have preferences, and I think in this case that's also what's going on. There are preferences, and there are real benefits to all of them, so there is always a use case where one fits better than the other. Scott McCarty from our company once said, and I can quote him, that there are so many different types of vehicles because each one is purposed for a different thing. What did he say? OpenShift is a dump truck that can carry 28 yards of dirt, go 200 miles an hour, and handle really well. I thought that was kind of cool. Anyway, throwing that out for Scott. Oh, obviously there are many more areas that we could discuss. There are lots of other issues that we are dealing with, like logging, like networking, but I just didn't want to go into all the details right now. Sure. Of course, I have to invite everyone to take Oasis for a test drive. You can try out Oasis for free: go to arangodb.com, there is a big button there where you can try it out, and you can just have fun with it. Learn the ArangoDB product. You don't have to do anything yourselves. Just spin it up and have fun with it. And if you like it, enter your credit card and continue with it. Okay. Thanks. Do you have any questions in the chat? I don't see any questions in the chat. I had a couple of other questions here for you. Sure. Go ahead. I just wanted to make sure you could get your slide content in.
How do you see customers doing things differently now than four years ago, for example? I mean, I know you said that people are moving workloads into the cloud faster than ever, and that's in part been brought on by the pandemic and so forth. But what are they actually doing differently now than they were four years ago from a data management perspective? Yeah. Let's start with the way they deploy. That's a very prominent difference over the last couple of years. If you were looking at it four years ago, everyone was running their database by installing some Linux package on their servers, running everything in, I don't know, something like a systemd unit or scripts, and hooking up virtual machines. Nowadays, the vast majority of our customers are running their database in a Kubernetes environment. Whether they're doing that on Oasis, where we're running things in Kubernetes, or whether they're running it themselves on-prem using our Kubernetes operator, they are using Kubernetes. That is a huge shift. And it changes everything, from the way that we distribute our product to how we can support our customers, how you get logs, and so on. That has been a tremendous shift. Okay. And where do you see it 18 months from now? I mean, are we done? Is this it? Are we there? Has the eagle landed? Well, it depends on how you look at it. I think that in 18 months, the shift towards Kubernetes, I'm not saying it will be complete, because there will always be companies that don't jump on that bandwagon, but it will be pretty much all over the place. I think what is really changing nowadays is two aspects of the story. One is scale, and the other is what people are actually doing with it. Let me start with the scale aspect. Even one and a half years ago, you would see graphs that spanned some tens of thousands of nodes.
Now we're seeing graphs with millions, hundreds of millions, even billions of nodes coming. A graph that size absolutely no longer fits on a single machine. You need many machines, and we've easily crossed node counts we couldn't even imagine a couple of years ago. That trend is going to continue very fast in the future. The other thing is what they're actually doing with it, because you don't only want to store your graph data and query it; we're seeing a trend towards more analysis of the data. There's a lot that you can do there, and then you're touching things like machine learning and artificial intelligence, and doing that in combination with graph data is a new area that is popping up like crazy. I was going to say, everyone's talking about AI and machine learning. Yeah. For that workload, there must be a huge dependency on the data stores, right? Yes, that's true. And it's not only how you store it, but also how you model it. If you think of traditional machine learning as simple vectors of numbers that you do some magic with, how do you map your graph data set into something that can be understood by the machine learning algorithms we already know? That challenge alone is a very interesting one. Can't they just use Oracle for that? Yeah, you could, but why would you? It's pretty much the same question as standing in front of a very nice BMW and asking, couldn't you go by horse? Yeah, you could, but there are so many more options nowadays that I would prefer the BMW over the horse, to be honest. Okay. So from a marketing perspective, I'm not sure where your CMO lives, but there are probably some things your marketing team would like you to make sure you put out here, so it negates that phone call five minutes after we're done, where he or she says, geez, you were live on the internet. Why didn't you talk about X, Y, and Z?
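The graph-to-ML mapping question raised above can be made concrete with a toy sketch. This is purely illustrative Python, not ArangoDB code: it turns a small adjacency-list graph into fixed-length per-node feature vectors (degree and mean neighbour degree), the kind of numeric input a classic ML algorithm expects. Real pipelines would typically use learned graph embeddings instead; the feature choice here is invented for illustration.

```python
# Toy sketch: map graph-structured data to per-node feature vectors.
# Features (degree, mean neighbour degree) are hand-picked for
# illustration only; production systems would use graph embeddings.

graph = {                      # tiny undirected graph as adjacency lists
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}

def node_features(g):
    """Map each node to a fixed-length numeric vector."""
    degree = {n: len(nbrs) for n, nbrs in g.items()}
    feats = {}
    for n, nbrs in g.items():
        mean_nbr_deg = sum(degree[m] for m in nbrs) / len(nbrs)
        feats[n] = [degree[n], mean_nbr_deg]
    return feats

print(node_features(graph))
```

The resulting vectors can be fed straight into any tabular ML model, which is exactly the translation step the question is pointing at.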
What would those things be? Besides go here and try the database? Yeah, I think the message that we're trying to get across is: make sure that you can actually model your data, store it, scale it, and run your queries against it. Get the actual value out of your data. That's really key. And what is the best way to get that done? You can choose all kinds of databases to store your data. But in the end, modeling your data as close to your natural use case as you can, and thinking of it in that way, gives you the quickest and most prominent answer to your use case. Okay. And so where can people come find you folks? Are you putting on a user conference, or what's going on? The best way to get in touch with us is through our website and through our community Slack channel. We are very active there, and we have a large community of active ArangoDB users there as well. There are always people, both from inside and outside the company, happy to help you out there. And I think it's really key to experience the ArangoDB product for yourself. You can of course do that with Oasis. You can do it as simply as running a Docker container and playing with it. But make sure that you start learning. We have a lot of great examples, both on our website and on Oasis. For example, if you want to learn how you could do fraud detection with a graph database, we have a great example on Oasis of how you could do that. And we show you all the interesting queries that you can run against that data, so you can start playing with it. That's something we have seen in the past: if you want to try out a database, what can you do without data? Not that much. With the examples that we can now give you, you can get started right away and start playing with it. On the graph analytics side, we also have links for that on our website, and we have a lot of great notebooks that you can play with interactively.
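As a purely hypothetical stand-in for the kind of query the Oasis fraud-detection example walks through (the account names and the transaction graph below are invented, not taken from the conversation): given a graph of transfers, find every account reachable within a bounded number of hops from a suspicious account. In ArangoDB this would be a graph traversal query; plain Python breadth-first search illustrates the same idea.

```python
# Hypothetical fraud-detection sketch: accounts within `max_hops`
# transfers of a flagged account. The graph and names are invented.
from collections import deque

transactions = {                  # toy directed transaction graph
    "acct1": ["acct2"],
    "acct2": ["acct3", "acct4"],
    "acct3": [],
    "acct4": ["acct5"],
    "acct5": [],
}

def within_hops(g, start, max_hops):
    """Breadth-first search: accounts reachable in <= max_hops transfers."""
    seen = {start: 0}             # node -> hop distance from start
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue              # don't expand beyond the hop limit
        for nxt in g.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return {n for n, d in seen.items() if d > 0}

suspects = within_hops(transactions, "acct1", 2)
```

Bounding the traversal depth is the design choice that keeps queries like this tractable on the very large graphs discussed earlier.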
So if you're more on the AI and machine learning side of the story and want to start exploring how to model your graph data for machine learning, we have a lot of notebooks you can run interactively. They even integrate with Oasis if you want. Just play around with the question: how do I map my data into something that my machine learning can work with? And then, of course, visualize the output again. Cool. Are we going to see you folks in Las Vegas? I mean, not Las Vegas. Los Angeles, the next KubeCon that's coming up, is looking like it's hopefully going to be in person in the fall. Are you folks going to be there? If possible, we're definitely going to send people over there. And right now, the big question of course is, is it going to be possible? Nobody knows that right now. But we are trying to get back to, let's say, the normal routine of doing conferences and appearances all over the place. Because let's be honest, it's also just a lot of fun to interact with all the customers and all the prospects out there. Right. Okay. Well, this wraps up another edition of the OpenShift Commons Briefings Operator Hours. Today we had Ewout Prangsma, the main architect and team lead for the Oasis service from ArangoDB. Ewout, thank you so much for joining us here today. Yeah, thanks for having me. And good luck with the show. All right. Appreciate it. Okay.