Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager at DataVersity. We'd like to thank you for joining this DataVersity webinar, Improving Transactional Applications with Analytics, sponsored by MariaDB. Just a couple of points to get us started. Due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share highlights or questions via Twitter using hashtag DataVersity. And if you'd like to chat with us or with each other, we certainly encourage you to do so. Just click the chat icon in the bottom middle of your screen for that feature. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and additional information requested throughout the webinar. Now let me introduce to you our speaker for today, Shane Johnson. Shane is the Senior Director of Product Marketing at MariaDB. Prior to MariaDB, Shane led product and technical marketing at Couchbase. Before that, he performed technical roles in development, architecture, and evangelism at Red Hat, specializing in Java and distributed systems. And with that, let me turn it over to Shane to get the webinar started. Hello and welcome. Thank you, Shannon. Can everybody hear me and see the presentation? You sound good. Yeah, we can see it. Looks good. All right, perfect. We'll go ahead and get started here then. Thank you everyone for joining us today. We have a fair amount of material to cover. We'll talk a little bit about what it means to have a hybrid workload. We'll walk through the architecture of MariaDB Platform and highlight the importance of scalability, especially with respect to analytics.
We'll talk a little bit about how MariaDB customers are taking advantage of hybrid workloads today, what it means for hybrid cloud, and opportunities for database consolidation. And for all of those of you who are eager to get started, we make that really easy with Docker, and we'll talk about that. And if we have some time, we'll talk about the managed service we announced as well. However, I do want to try and leave at least 10 minutes for Q&A at the end. Hybrid workloads. You know, traditionally, we segregate our database workloads. On one side, we have transactional. The focus is typically current data. You'll see lots of range queries. And for all intents and purposes, the queries are known. They may be dynamic, but an application has been built, and that application knows what queries it's building and sending to the database. On the other side, we have analytical workloads. We're focused on historical data, a lot of aggregate queries, and quite frankly, unknown queries. Particularly in the context of analytics, we never really know how someone is going to analyze the data. They're exploring it. They're trying to discover something meaningful or something useful. It could be very iterative. So they are different workloads. They do have different aspects to them. If you have a small database, a little bit of data, a small number of users, you can do full analytics and full transactions and perform really well. The challenge is that as your database grows, you start to face some limitations. That performance range isn't getting any bigger. So either you're going to have some performance issues on more complex analytical queries, or those analytical queries are going to impact the performance of your ongoing transactions. So what we end up with is two different databases. We end up with a transactional database, which is really optimized for that transactional performance. And then you end up with a data warehouse, which is optimized for analytical performance.
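To make that distinction concrete, here is a rough sketch of what each kind of query might look like. The table and column names are illustrative assumptions, not examples from the webinar:

```sql
-- Transactional: a known query shape over current data, served by indexes
SELECT order_id, status, total
FROM orders
WHERE customer_id = 42
  AND created_at >= CURRENT_DATE - INTERVAL 7 DAY;

-- Analytical: an ad hoc aggregate over years of historical data,
-- touching only a few columns of a potentially very wide table
SELECT region,
       YEAR(created_at) AS yr,
       COUNT(*)   AS order_count,
       AVG(total) AS avg_total
FROM orders
GROUP BY region, yr
ORDER BY yr, region;
```

On a small database both shapes perform fine from the same instance; as the data grows, the second shape is what starts to strain a purely transactional store.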
And we've been that way for many, many years. And it's no surprise the database technology starts to diverge as well. Transactional databases tend to use row-based storage, make heavy use of indexes, and more often than not will be either clustered or replicated. On the other hand, analytical databases tend to use columnar storage, and for very good reason: if you have wide rows with potentially hundreds of columns, but you have aggregate queries that are only concerned with two or three of those columns, columnar storage has a big benefit. They're also less reliant on indexes, and sometimes don't use indexes at all. No surprise there. If you don't know how the database is going to be queried, relying on indexes isn't going to help you. So they have other mechanisms or ways to support low-latency analytics. And finally, they're often distributed, from a pure scalability point of view. If you're going to gather many years of data, hundreds of terabytes of data, you're going to need some scale-out capabilities. There is a problem with that, though. We've essentially built a brick wall between the application development side of the house and the BI reporting or data science side of the house. Two different databases, two different underlying database technologies, two different database vendors. Eventually, that transactional data will make its way into the data warehouse. Eventually, someone will analyze that data and extract some sort of meaningful or actionable insight. And eventually, that insight may make its way back into the transactional database to help improve that application. All of those eventuallys add up to time. There's not only the complexity and the time associated with these ETL processes, but the time of analyzing it and the time involved in getting it back into the transactional database. When we talk about data-driven applications and actionable insight, we need to close that loop.
It's no good to our customers if product recommendations are based off of analysis from a week ago or a month ago. It's much better if we can recommend a product based on all the data we have as of a few seconds ago. So we have to break down that wall. And the other problem is that even transactional applications make use of analytics, whether we realize it or not. Or maybe we would like them to use more analytics, but we can't. And so I have a handful of simple examples here. E-commerce. It's certainly a transactional application. Show me all the products in a specific category. Add a product to my shopping cart. Make a purchase. Update the inventory. These are all transactional. But in a very competitive environment, all of us, as consumers and customers, are expecting more intelligence, a personalized experience. How many of you have been looking at a website? There's a product you want. It's on sale. You're really thinking maybe you should buy it, but you hesitate. You sleep on it. Wake up the next day. Make a decision. Yes, I'm going to take advantage of this opportunity. I'm going to buy this product. But when you go back in, you find out it's sold out. Wouldn't it have been nice if the day before, when you went to that website, it was able to look and say: hey, based on the products that are in all of our active shopping carts, based on the purchase activity today, and based on our current inventory, which for some of these products is fairly low, we recommend you take advantage of this opportunity to buy this product, because we expect it to sell out within hours. That's actionable insight. That's something I can take advantage of to have a better experience. But it's also analytical. We can look at banking. Certainly all of us have logged into our bank before, and we can see pending transactions, our current balance, our available balance. We can transfer some money. All transactions.
But what if, instead of simply showing me what my available balance is, it could look at historical transactions, identify recurring deposits, recurring transactions, regular spending, and then also look at my current spending rates, how fast am I spending my money, and then maybe make a recommendation that says: Shane, you should transfer some money from savings within the next few days, because we expect your available balance to reach zero in less than a week. That's actionable insight. And that requires more analytics. And finally, PPC ads. You've all been searching on Google, and you've seen the ads that show up. Of course, by profession, that means that I'm often creating campaigns and creating ads. On the flip side, Google is showing those ads, gathering impressions and clicks, and ultimately charging me for those clicks, all transactional in nature. But you better believe I have a very deep need to go in and analyze the performance of those ads. If Google could tell me, based on changes in keyword costs, or in search behavior, or trends: we recommend that you lower your budget or pause some of these ads and reinvest the money in these other ads. That's actionable insight that I can take advantage of. I'm going to have a better experience. I'm going to be more productive and get more for my money, because they gave me those analytical capabilities, or at least are able to give me recommendations based on analytics underneath. So the problem: we have this brick wall, we have these two separate databases, two different users, two different applications. The solution is to bring analytics into that database that you use for your applications. And the importance of this slide is that when we talk about hybrid, I'm not suggesting that the data warehouse go away. That's never going to happen, nor should it. That data warehouse will still aggregate data from different applications, different business units, different teams, different sources.
It's a great repository for all of your company's data. But the same analytical capabilities that are present in that data warehouse, we need to put in our transactional database, if you will, because these new applications we're building need them. And so it's not an either-or situation, but an and situation. We're simply extending the operational database, if you will, so that it can handle transactions as well as analytics. So that was a brief overview of how we're getting to hybrid and why it's becoming more important. I'm going to spend the next few minutes walking through the architecture of MariaDB Platform and how it approaches this problem. If you're familiar with MariaDB, in the past we had two products, MariaDB TX and MariaDB AX. They're both built on MariaDB Server, that popular open source database that we all know and love, as well as MariaDB MaxScale, which is a database proxy that provides additional enterprise features. One leveraged InnoDB and MyRocks, which are transactional storage engines. The other leveraged ColumnStore, which, as the name implies, is columnar storage. And they were meant for two different scenarios. One was meant for the application team. One was meant for your BI and data science team. One was a database. One was a data warehouse. What we've done with MariaDB Platform is we've brought these together so that we can have one single database platform that does analytics and transactions together. And there are a couple of key points to highlight here. The first is the role of MariaDB MaxScale. The database proxy will inspect queries as they're coming in and make a determination as to whether it should route them to a database instance with transactional storage or a database instance with analytical storage. There are different ways of accomplishing this. One example might be the query syntax. If the query is an aggregate on this table, let's send that to a database instance with columnar storage, because that's an analytical query.
If it is an insert, an update, or a delete, regardless of the table, send that to a database instance with transactional storage, because that is obviously a transaction. The second component is the change data capture stream. As I mentioned, all those writes, the inserts, updates, and deletes that are going to transactional storage, will automatically be replicated to columnar storage. So that same data that's available in one place for transactions is immediately available in another place for analytics. And a few additional pieces when we talk about MariaDB Platform. Certainly that core of MariaDB Server and MariaDB MaxScale is the foundation, but there are a few other things included. There is native support for Kubernetes and Docker, through Helm charts, Docker images, and Docker Compose files. There are a number of adapters for pulling in external data. So you might pull data from Spark, where you're doing some machine learning and generating product recommendations, and pull those into MariaDB Server with ColumnStore. You might pull clickstream data from an Apache Kafka queue into ColumnStore as well. And there are APIs for C, Java, and Python if you want to write your own services and scripts for capturing external data. There are a number of clients: C, JDBC, ODBC, and those have been the big three for a long time. A few months ago we also announced support for Node.js as well. And then administration tools: SQL Diagnostic Manager and SQLyog for monitoring and management, plus MariaDB Backup as well as MariaDB flashback. A little note there. MariaDB Backup is used to restore data from a backup. MariaDB flashback allows you to actually roll back transactions to a specific point in time. So in many cases, instead of restoring a backup from an hour ago or a day ago and then replaying a number of transactions, it can be faster to start with what you have now and simply roll back a few transactions.
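As a rough illustration of the routing idea described above, a MaxScale configuration might be sketched like this. This is a hedged, hypothetical fragment: the server names, addresses, credentials, and the choice of router module are assumptions for illustration, not details given in the webinar, and the exact module and options depend on your MaxScale version:

```ini
# Hypothetical sketch: one MaxScale service fronting a transactional
# (row storage) server and an analytical (columnar storage) server.
[row-server]
type=server
address=192.168.1.10
port=3306

[columnstore-server]
type=server
address=192.168.1.20
port=3306

[hybrid-service]
type=service
# Router module name is an assumption; check your MaxScale version's
# documentation for the query-classifying router it provides.
router=smartrouter
targets=row-server,columnstore-server
master=row-server
user=maxscale_user
password=change_me

[hybrid-listener]
type=listener
service=hybrid-service
protocol=MariaDBClient
port=4006
```

The application then connects only to the listener port on the proxy; where each query actually executes is decided behind that single endpoint.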
Especially if you discovered a mistake that happened a few minutes ago. So that's a little bit around the architecture of MariaDB Platform and how it helps support hybrid workloads. I did want to talk a little bit about scalability. What we see from our customers is that they usually fall into three buckets. Some people are simply outgrowing their OLTP database. If you think about that earlier slide, when you have a small database with a small amount of data and a small number of users, you can get away with all the transactions and analytics that you want. But as you grow, you start to hit those boundaries. So one group is simply at that boundary. They can't move forward with limited or lightweight analytics. They need more. Another group of people simply needs to store more historical data in that database. More often than not, people will find that three to six months is kind of the upper bound when it comes to maintaining that performance. You might be able to store more data, but if you're doing anything that requires analytics, the performance is going to start going down. But today they want to store years and years worth of data, because you can extract a lot of actionable insight from that data. The third group is around self-service analytics, particularly with B2B or SaaS. SaaS customers are data-driven organizations too. The problem is they don't have access to their own data; it belongs to, or is managed by, the SaaS provider. So now there's a rising expectation that if I'm using a SaaS provider, I expect them to provide me with self-service analytics, because I need to analyze that data in all the same ways I would if I had it on-premises and was building applications myself. So they want self-service analytics, not, you know, canned reports, but the ability to do interactive, ad hoc analysis of the data. So there are a few options for hybrid workloads, and certainly when we look at Oracle, Microsoft, and IBM, they all have columnar storage as well.
You don't really see it in the MySQL or Postgres world as of yet. But if we start to look at some of the differences there: we all do row storage. Certainly Oracle, IBM, and MariaDB have built-in sharding, which will let us scale out our transactional workload. Microsoft, Oracle, IBM, and MariaDB all have columnar storage, but there are some differences in how we approach it. Microsoft, IBM, and MariaDB are disk-based, optimized to do columnar storage and analytics on disk, whereas Oracle will keep the row-based data on disk and is optimized to work with columnar data in memory. On the other hand, Microsoft and IBM don't scale out that columnar storage. You can get some degree of scalability with Oracle RAC, but there are some practical considerations and limitations, particularly with respect to cost, when you're doing that. But the message here around MariaDB is that what we're building is something that will let you scale transactions or analytics as much or as little as you need, in both directions. So typically what happens is you might have a database with row and columnar storage. You can have a primary, you can have a secondary, and then you can have two different connections. One goes to the database with row storage, the other goes to the database with columnar storage, and then it's kind of up to your application to understand which database to send a query to, whether it's transactional or analytical. Same concept with MariaDB Server on a small scale. We have one instance with row storage, we have one instance with columnar storage. The difference is that MariaDB MaxScale component sitting in the middle. Your application now only has one connection. All it needs to know about is the IP address of that database proxy.
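At the table level, that pairing can be sketched as the same table defined twice, once per instance, with the storage engine chosen in the `ENGINE=` clause. The schema here is an illustrative assumption; note that MariaDB ColumnStore tables don't use indexes, so the analytical copy is defined without a primary key:

```sql
-- On the transactional instance: row storage, indexed for point lookups
CREATE TABLE trades (
  trade_id  BIGINT PRIMARY KEY,
  symbol    VARCHAR(10),
  price     DECIMAL(12,4),
  traded_at DATETIME
) ENGINE=InnoDB;

-- On the analytical instance: the same columns in columnar storage,
-- kept in sync by the change data capture stream
CREATE TABLE trades (
  trade_id  BIGINT,
  symbol    VARCHAR(10),
  price     DECIMAL(12,4),
  traded_at DATETIME
) ENGINE=ColumnStore;
```

The application never addresses either instance directly; it only sees the proxy's single endpoint.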
The database proxy can then take advantage of various features and rules and capabilities to route those queries to the right database, so that your transactional database queries go to the left side in your diagram, and any analytical queries go to the right side in your diagram. If you find over time that you need to scale out your transactional workload, you can. Spider is a storage engine that provides built-in, transparent sharding, so you can start scaling out across multiple database servers if you need to. On the other hand, if the transactional workload is fine, but over time you're storing more historical data, your analytics are getting more complex, and you need to scale out the analytics side, you can do that with ColumnStore as well. It is built on distributed storage and MPP, so the assumption there is that you might start out with one node, but certainly you can add as many as you need depending on your requirements. And if you get to that point, you could very well scale out the transactional side and the analytical side if necessary. And the great thing, again, is that with MariaDB MaxScale sitting in the upper middle there, everything underneath is abstracted away from the application. It doesn't have to know that you're scaling transactions or analytics, or that the servers are different. All of that is hidden. So that's a little bit about the architecture of MariaDB Platform. Let's talk about how people are using it right now. We have one customer who's in the market research space. They are capturing pricing data every day for hundreds of thousands of products in stores. I think they pull down a million pricing updates per day from the web, and then they're also scanning flyers and other print materials as well. They do that so that their retail customers can come in and do research. They can say: what is the current price of this product? Or maybe internally, from a transactional standpoint, they update the price or update the
description. But these things are all transactional in nature. By the same token, the customers also wanted to be able to say: how has the price changed over the past year, the past two years? And how they analyzed this data was growing; it was getting more ad hoc, it was getting more iterative. And so they had a little bit of a challenge there, because they couldn't store more than a few months of data without query performance suffering. They were starting to see analytical queries take minutes to complete, and that just wasn't acceptable to their customers. Not to mention the complexity of the backend system generating data marts every night, depending on the region and how much historical data they were going to make available. So customers were having experience issues, and they were certainly having complexity issues in the backend. By moving to MariaDB Platform, they were able to handle all those transactions in transactional storage and replicate all that data to columnar storage. And then when a customer came in and said, what's the current price, it hit the transactional storage on the left. When they came in and said, what are the price changes over the past year, it hit the analytical storage on the right, because the left side is really optimized for current data and the right side is optimized for historical data. Those analytical queries went from minutes to seconds, and everyone was much happier. And they were able to extend their capacity to store many more years of historical data. The second one is a SaaS provider, which is one of those examples I mentioned a little bit ago, and this is for IP telephony: phone calls and text messages. On one hand, it's transactional. Call detail records, or CDRs: they're capturing these every time their customers use their platform to make a phone call or to send a text message. This data is used to generate bills and to, you know, find out usage. On the other hand, those customers, being data-driven organizations, wanted to be able to analyze that
data to monitor usage, identify peak periods, and, rather than just seeing their bill, better understand what their bill is going to look like or estimate what the costs are going to be. As well as a lot of self-service analytics: how many calls per hour are happening right now? How many texts per minute are happening right now? Similar situation. That core business is certainly transactional; it's the ability to provide a service for phone calls and text messages. But it now has secondary requirements around analytics. So on the left side, they store those CDRs, you know, as people are making phone calls and sending text messages and other communications. At the same time, that same data is placed into the analytical storage on the right. And so now, when customers want to do all that analytics, it hits the analytical storage. So in one sense it's the same data, it's this call detail record, but one side is optimized for what's happening right now, and the other side is optimized for longer-term analytics. And finally, financial trading. This is a particularly good one. Certainly transactional: you're capturing trades every time someone makes a bid or a quote or a purchase. But all those people that are making the trades have secondary requirements to do analytics. They want to actually begin analyzing those trades, particularly over time. By the same token, they also have to make this data available to regulators, so they need to store seven years of trades. And so, much like the SaaS provider, they're doing transactional work, which is trading, but they're also providing those traders with self-service analytics to actually analyze that transactional data. Where I think it starts to get a little bit interesting is hybrid cloud, and we're starting to see this happen as well. MariaDB Platform does separate and isolate different workloads. Some database instances are running transactions, some database instances are running analytics. You don't have, as some people refer to it, the noisy neighbor issue. Transactions won't
impact analytical performance, and analytics won't impact transaction performance. If that's the case, that must mean you can run different workloads on different infrastructure. You could place different workloads closer to different users, and you could scale different workloads on different hardware. For example, you might have transactional instances with really fast SSDs but probably smaller, fairly modest CPUs; those workloads typically aren't CPU-bound, and if anything they're I/O-bound. On the other hand, your database instances running analytics may be running on servers with really big CPUs and maybe really big but very economical spinning disks, because they're a little bit different in that regard. So you can start to optimize the hardware for the workload. If we take that to its conclusion, what we've seen in the beginning is customers running on-premises for transactions. I think trading is a particularly good example: if you're in that space, you probably want to be in New York, for example, close to the exchanges where you're processing those transactions. But if you're also providing your customers with the ability to analyze trade data, you can move that analytics to the cloud. So it's kind of two things. One, it's an easy way to get started with the cloud, because you don't have to migrate; you can simply extend into the cloud. And two, you could move your data closer to different users and use different hardware. So all these things really start to come together when you pair hybrid workloads with hybrid cloud. Inversely, we might also begin to see a little bit of: let's put transactions in the cloud, closer to users. E-commerce might be a good example of this. We're doing business all over the world, so let's put our transaction processing in the cloud, but then maybe let's aggregate all of that data into analytical database instances on-premises, where we can start to do more internal analytics or analysis of what's going on. So both directions, but both enabled because
of this workload separation and the change data capture and smart query routing that let us take advantage of hybrid cloud. Another one that's really helpful is database consolidation. Certainly we can begin to take more analytics and pull them into our operational database, so you have some degree of consolidation just at that high level of transactions and analytics being paired together. You can take it a little bit farther. For example, say we're building microservices, and I'll use a retail e-commerce application here again. We might have one service for purchases. Transactions are critical, durability is critical, so you might want to go with a relational database. We might have a service to manage our shopping carts. Scalability is paramount: we might have thousands of purchases a day, but depending on the popularity and the traffic of your e-commerce website, you might have hundreds of thousands if not millions of shopping carts. So you might look at a NoSQL database to give you that scalability. And then finally, I added a clickstream service. This might be used for things like product recommendations, and it's going to make sense to use a columnar database for that type of data. So more often than not, when I talk to folks, we tend to joke a little bit, because we might ask what databases they're using, and their answer is often: we're using all of them. There's a great deal of database sprawl, in some sense for good reason. Especially in this microservices world, we want to have databases that are optimized for each service. But with MariaDB Platform and this notion of pluggable storage, we can do away with that. We might have a database running with MyRocks. It is a write-optimized storage engine, so for something like purchases, where the most important thing is being able to insert it and write it to disk as fast as possible to make that customer happy, MyRocks is ideal. With something like shopping carts, we might use Spider. As I mentioned before, that is a sharded storage engine, so that'll
allow you to add a whole bunch of database instances underneath it and scale out as much as necessary. The more shopping carts you have, the more of those shards you provide. And then that clickstream service is still a MariaDB Server, but now we'll use ColumnStore to store that data, because it's going to be columnar. It's also going to be scalable, because it's distributed, and it'll let me do analytics on it. So now you have these three different workloads, three different kinds of data, but one single database platform to manage them. Now, in this example I depicted three separate database instances, which you could do. You could also have two database instances, because the storage engines aren't applied to the database instance; they're applied to the table. So I might have one instance of MariaDB Server that has a purchases table backed by MyRocks and a shopping carts table backed by Spider, and then those different shards might be using InnoDB for a nice mix of read and write performance. And then I might have a separate analytical instance that's using ColumnStore. So there's a great deal of flexibility here, but taking advantage of those different storage engines will let you consolidate what might typically require a lot of different databases. A little note on getting started with Docker, especially for those of you that may already be using it or at least are familiar with it. MariaDB Platform has a handful of pieces. As we've seen in some of these slides, there's a database proxy, there are instances with transactional storage, instances with columnar storage, and there's the change data capture process. The fastest way to get started is with this container. We created an all-in-one, out-of-the-box, platform-in-a-box container. It's one single image, one single container. When it starts up, you'll get a fully configured MariaDB Platform deployment, similar to what you might see in a staging or small production environment. This is really intended for getting started. It's a quick start, you know,
as it says here, for development. You can launch it with a simple Docker command. You can run it on your laptop. A warning for everyone: it is not intended to run in production this way. We do have separate containers for running in production, but this one is really just a getting-started container that simplifies things and makes it easy and fast. Once it's started, within that running container you'll have three instances of MariaDB Server for transactions; those are using the InnoDB storage engine. You'll also have a couple of instances of MariaDB with columnar storage for analytics. At the top, that instance of MaxScale is going to handle the query routing, and then at the bottom there's a second instance of MariaDB MaxScale, as well as an adapter that sets up change data capture, so that everything that you write to the left side in orange is automatically replicated to the right side in purple. So there are a handful of moving parts here, but it's all going to be pre-wired and pre-configured, and you can get it running in minutes. We do have a little bit more time here, so I will touch on the MariaDB Platform managed service, and then I'm happy to jump into Q&A and answer as many questions as I can. The way we're approaching the MariaDB Platform managed service: on one side, when you have things like Amazon RDS or Aurora, you have provisioning, installation, and configuration. Typically, you know, I say cookie-cutter, but it's really lowest common denominator: everybody gets the same database topology and configuration, more or less. You also have limited features, so everything that's available in MariaDB Platform isn't actually available in some of the cloud databases. Little to no database support: if you find a bug or a security issue, it's not Amazon who's going to be able to fix that for you. It's eventually going to come from MariaDB, and you have to wait until MariaDB does that. You know, delayed upgrades. And also, you have to incur downtime, whether it's due to an upgrade or failover
or scaling up, any of those things. MariaDB Platform managed service is a little bit different. Provisioning: there's still flexibility for the customer, so you can choose what cloud you want to use, whether that's Amazon, Microsoft Azure, or Google. And then, using our enterprise architecture team, we will develop a reference architecture, we will let you know what hardware needs to be provisioned, and from that point we can take care of the remainder of that lifecycle: installation, configuration, recovery if it's needed, ongoing continuous optimization, regular maintenance, and of course that full technical and consultative support. So this is really, in a sense, a white-glove service. If you want to take advantage of cloud, whether it's purely public or hybrid, as some of our customers are doing now, you can offload management of that database to MariaDB, and our team of enterprise architects, remote DBAs, and cloud architects will make sure that you get the best possible experience with MariaDB and the best value out of it, and are basically able to utilize all the features that I've discussed in this webinar, as well as a lot of others that I haven't been able to get to. So if you look at those differences: certainly you have a preview of hybrid cloud, you have limited, you know, hybrid workloads available in something like Aurora, but really where you get stuck on your own is when you need that proactive care, if you want to run on the latest version, not something that's six months old, if you need technical support or even consultative support or security fixes in a timely fashion, as well as some functional differences. So there's still a little bit of a split there between RDS and Aurora, but the database firewall, dynamic data masking, change data capture, the Kafka connector, all those things are available in MariaDB Platform and otherwise unavailable in cloud databases that are built solely on MariaDB Server. So I know that was a lot of information as we walked through
these topics of hybrid workloads, MariaDB Platform, hybrid cloud, and the different types of use cases, but I hope it was all helpful and that everyone found something informative to take away. With that said, I think now is probably a good time to pause and take a look at any questions we might have. Hey Shane, thanks for that great presentation. We do have a couple of questions coming in, and if you have questions for Shane, just submit them in the bottom right-hand corner of your screen. And just a reminder, to answer the most commonly asked question: I will be sending a follow-up email by end of day Thursday for this presentation, with links to the slides and links to the recording. So, Shane, diving right in here: are there any use cases of MariaDB in an academic environment that you can point me to? I would like to know more about the ease of learning and use by students. Most academic programs usually point students toward MySQL or MongoDB. If you offer academic support, then I'd be interested to see how we can integrate it. That's a great question; we might have to follow up offline to provide you more details. But yes, MariaDB is widely used in academia. As some people may realize and some may not, it has become the default database in almost every Linux distribution, having replaced MySQL. So yes, there are places using it in academia, and it is really easy to get started with, but as far as the type of academic support you're looking for, I might have to dig a little deeper to figure out the details. But yes, very widely used in academia. I love it, and I'll get you that question and the questioner's details, Shane, so we can follow up. Next question: where would you start when determining what to pull from a data warehouse to create a columnar database for analytics?
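As a concrete, hypothetical illustration of what the question is pointing at, here is what a columnar table pulled out of a warehouse could look like in MariaDB. The table and column names are invented for this sketch; `ENGINE=ColumnStore` is the clause that selects MariaDB's columnar storage engine.

```sql
-- Columnar table for a subset of warehouse data (names are illustrative).
CREATE TABLE orders_history (
  order_date DATE,
  region     VARCHAR(32),
  amount     DECIMAL(12,2)
) ENGINE=ColumnStore;

-- An ad hoc aggregate over many rows: no indexes are defined, because
-- the columnar engine scans only the columns the query references.
SELECT region, SUM(amount) AS revenue
FROM orders_history
WHERE order_date >= '2019-01-01'
GROUP BY region
ORDER BY revenue DESC;
```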
Well, I think the best way to address that is that MariaDB Platform can be deployed in different ways. The focus of this presentation is certainly hybrid workloads, but you can deploy MariaDB Platform purely for transactions, just deploying the row storage, or you can deploy it purely for analytics and only have columnar storage. So, to relate to the question in the context of what I would use MariaDB Platform for as opposed to a data warehouse: when used purely for analytics, it is extremely well suited to interactive, ad hoc queries. I wouldn't recommend it as the type of data warehouse that facilitates scheduled reports or handles extremely complex schemas. But if you have a table that maybe has hundreds of columns and hundreds of billions of rows, MariaDB Platform with its columnar storage will let you analyze those hundred billion rows in seconds, on demand, without indexes, iteratively. That's really where it shines. So I would keep your data warehouse if you're using it as something akin to a data lake, or simply as a means to have a larger repository of diverse data, but use MariaDB Platform if there's a particular subset of it, especially a very large subset, that you just want to explore and analyze in your own way without having to wait. I'll give everyone a quick moment here. It looks like that was the last question for right now, but Shane, you mentioned there was something else you wanted to review if we had time? Oh, that was the managed service. I wasn't sure if we would have enough time to touch on it, but I think we talked about it a little bit, and certainly if anyone has any questions, whether related to MariaDB Platform or the managed service, or MariaDB in general outside the topic of hybrid workloads, I'm happy to answer those too. I love it, thank you. Everyone's quiet today; cold, with the storm coming through. They're taking it all in; it's got to sink in. Yeah, indeed. I love it. Well, Shane, this has been
fantastic, as always. I really love this presentation and really appreciate MariaDB sponsoring again. Just to remind everybody, I will send a follow-up email to all registrants by end of day Thursday with links to the slides and links to the recording. If you think of any additional questions, feel free to send them to me and I will make sure to get them to Shane. Shane, thank you so much, and have a great day, everyone.