I think if we don't get more... oh, I'm on. OK, I was about to scream a lot more loudly. I could go out there and maybe pull some more people in; I'll tell them we're giving away free coffee or something. OK, so I guess we're starting officially now. Thank you, everyone, for coming. It's a late session in an already packed conference, so I'm glad you stayed late with us to talk about the cloud-enabled DBA of the 21st century. I think that title is a slight exaggeration of what you're going to be when you use Trove, but I hope to convince you that it's not too much of an exaggeration.

I forgot to introduce myself, and I apologize to anyone here who speaks Japanese: my name is Alex Tomiku, and that greeting is about the limit of my Japanese knowledge. Sorry, you're Korean? OK. Well, we're in Japan, so I hope someone understood what I said; if not, just ignore it, because all I said was that my name is Alex Tomiku, I work with Tesora, the number one Trove company, and I apologize that I can't deliver the presentation in Japanese.

Here's an overview of what we're going to talk about today: a short summary of how we got to where we are in the database world, starting with the good old days; a little bit about DBAs, the people actually maintaining these systems, and some of the challenges they face; and how Trove and Database as a Service will hopefully meet those challenges.

To get to know each other a little better, since we're not a huge group, let's make this a bit more intimate. Who here is maintaining databases in the cloud? Who would consider themselves a DBA, or has that role? No one? OK. Is anyone administering databases in the cloud already? Anyone? OK. We're also giving away OpenStack Trove books. One of the authors is here, and he's a very smart guy. If you're interested in the book, we're going to pass around this hat; put a business card in it and we'll do a raffle to give the books away to whoever wants them.

OK, so let's talk a little about the good old days, a time when things were a lot simpler. You had a relational database, maybe from one vendor, and you used that system for everything. A relatively modest organization might have three database servers running on some arcane Unix system like AIX: a dev system, maybe a QA system, and a production system. And you could get away with telling your users there would be downtime, and you would survive the conversation. It was a much simpler, much easier time to be a DBA, I would argue.

But as we've seen, it was an untenable situation. The DBA had to say no to a lot of things. No, we can't ship that software today, because we have to plan downtime for the schema change. No, we can't scale anywhere near 50,000 transactions per second on that machine. No, we can't run that query in less than an hour. "No" became the answer DBAs had to give to a lot of questions, and the result was what I jokingly refer to as the NoSQL revolution.
I think the term NoSQL, as it became known later, really tells you everything you need to know about what this movement meant: it was a rejection of the difficulty of using relational databases. It was largely a result of web 2.0 and social media, which generated torrents of data that had to be stored, and the relational databases of the time simply couldn't handle it. The one-size-fits-all approach was no longer practical; the pressure on flexibility and scalability proved to be too much.

On the scalability side, we got systems that came out of industrial research, like Bigtable from Google and Dynamo from Amazon. I know we're not supposed to speak the word Amazon at this conference, but those systems did some very revolutionary things for their time. There's a great example in the original Dynamo paper: if a user's shopping cart has some slight inconsistency at one point in the purchase cycle, it might not be a big deal, provided that inconsistency is eventually resolved and handled by the application. That is far preferable to simply not being able to handle the load. The trade-off upset a lot of people, but it eventually came to be accepted, and now it's just one of the variety of systems we have at our disposal. And because these systems were published as research papers, they ended up producing open source implementations like HBase and Cassandra, which we now know and love.

Flexibility was the other big problem. Anyone who has had to manage a large fleet of, say, MySQL databases, even as recently as five or six years ago, knows that DDL management is a real pain. If you had 350 servers replicated in five different ways, with very large tables, handling the simple addition of a column was far from trivial. With a system like CouchDB or MongoDB, if you want to add a field, you just add it, and that's it; there's no negotiation with the operators about when you're going to have downtime. That was very compelling to people building applications who didn't want to be subject to those constraints. Roughly speaking, the contrast is the one sketched below.
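To make that concrete, here is a minimal, hypothetical sketch of the same "add a field" change in both worlds; the host, table, and column names are invented purely for illustration, so treat this as an outline of the idea rather than a recipe.

    # On a large, replicated MySQL table, this DDL may lock or rebuild the table,
    # so it has to be scheduled and coordinated with the operators:
    mysql -h db01 -u admin -p -e "ALTER TABLE orders ADD COLUMN coupon_code VARCHAR(32);"

    # In a document store like MongoDB there is no schema change at all;
    # new documents simply start carrying the extra field:
    mongo --eval 'db.orders.insert({order_id: 12345, coupon_code: "SPRING15"})'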
And while all of this was happening in the NoSQL world, there was of course OpenStack, and cloud computing and virtualization as a whole. We're at an OpenStack conference, so I'm not going to try to sell you on the virtues of cloud computing; I think we all appreciate them. I grabbed this diagram because I like the way it shows that OpenStack introduces a big, chunky layer between your application and the hardware it runs on that wasn't there before. That gives us a lot of great benefits, but it also complicates things a lot, which we'll come back to later in the presentation.

OK, so this presentation was not supposed to be just about technology but about the DBAs, the people maintaining, using, and administering the databases, so let's actually talk a little bit about them. I've laid out some general areas of responsibility. These cover operations, risk management, data security and availability, and development guidance: if you maintain a relationship with the development teams, you might have to walk through the schema changes they suggest, lecture them on the fact that the design doesn't meet normal form, and so on. Then there are communications, and ad hoc reporting. Of course, no one wants to admit that this always happens, which is why it's crossed off on the slide; DBAs always get sucked into doing ad hoc reporting, we just don't want to discuss it.

Who does a DBA interact with in the organization? As the responsibilities suggest: sysadmins, operations people, developers, security teams, and perhaps the data-centric analysts in the organization. And as for technical expertise, a DBA ideally understands pretty much everything from the data access layer down to the hardware itself, and everything in between.

OK, so on to 21st century problems. What are some of the challenges that DBAs face today? A lot of these post-date the beginning of the 21st century, but "21st century problems" just sounds better than "2015 problems". I settled on six, and we'll go through each of them: the big Vs of big data, the demands on agility, complex environments, standardization and tooling, security, and, finally, dealing with upper management.

First, a brief digression that I think is worth remembering: a very interesting phenomenon observed during the first industrial revolution, known as Jevons paradox. Jevons, that was his last name, noted that increases in the efficiency of coal use actually led to increases in the total use of coal. That seems paradoxical until you consider that when something becomes more efficient, it becomes cheaper, and we find more creative ways of using it. I think we can all agree that this is exactly what has happened with storage capacity and computing power: as they have become more efficient, we have hardly suffered any lack of interesting ideas for how to make use of that more inexpensive power.

So, unfortunately, since I'm doing a Trove talk, I really didn't want to use the words "big data", but I have to. When you talk about the volumes of data being generated in the systems we have to maintain today, ultimately it comes back to this nice way of formulating the relationship between data and wisdom. As a DBA, your responsibility is to safeguard the base of the pyramid, the data, which provides the information, which provides the knowledge, which gives the wisdom to make good decisions. And this generally follows the hierarchy of the organization: the DBA is responsible for the data, you provide information to the business teams, they turn it into knowledge, and the CEO, hopefully a wise person — not a wise guy, that's a different thing — makes good decisions with it.
Unfortunately, and now I'm going to reveal a little of my bias against some of the tendencies of big data, I think the flip side of that pyramid is its inverse: noise leading to errors, which lead to mistakes, which lead to general chaos. But that's a little digression.

OK, so now something quite abstract that I hope you'll find interesting. When people talk about the four Vs of big data, they mean volume, which is fairly obvious, the sheer amount of data being generated; velocity, meaning it's being generated more rapidly, more of it more quickly; variety, meaning there are many different types of data coming in; and veracity, meaning is this data actually correct, is it useful or just noise? (I always manage to forget one of them, so bear with me.) But I think an underappreciated V in this discussion is value: how important, how useful is this information?

So I drew this graph, and I hope you'll find it interesting. Consider the volume of data that is generated, say, over the course of a day, and that could theoretically be stored if you wanted to store it, and compare that with the actual value or importance of the data. That sounds very abstract, so let's think of some examples. Take a row in your customer table. If you're a medium to large enterprise, that table might have as few as several thousand rows, or maybe on the order of 100,000. If you have people managing relationships with those customers and just one of those rows vanished from your database, they are going to notice, and that's a bad thing. So the systems you use for this kind of data are your very robust systems, like Oracle, which give you strong consistency guarantees, durability, and all of that.

As you move further out on the long tail, you get to something like a log entry for someone who visited your website on some day, with all the data that goes along with it: the user agent, the geography, and so on. And we can take it to the point of ridiculousness: perhaps there's some Internet of Things startup somewhere in Silicon Valley, some bright-eyed guys with clever sensors that capture, for every object you touch, the data of all the individual electrons and how much discharge the contact generates, and then they collect all of that and infer something interesting about, say, your stress level at work. I think we can agree that's a pretty absurd example, but in theory that data could be stored if we wanted to, and the individual importance of each piece of data drops off exponentially. And it gets more complicated, because it's not just that we generate a lot of this data; data can also be summarized, it can evolve, it can be derived from other pieces of data.
So maybe that log entry wasn't very valuable on its own — the pointer doesn't work, so I can't actually point at it — and maybe it just goes through your stream processing system. Maybe it never actually sits in a database on disk at all, but it flows through the stream processing system and ends up in a summary, and that summary might live in a MySQL database used by your analytics teams, or by an application that provides a dashboard, something of that sort. I think this illustrates the difference between the typical big data systems managed by Sahara and the data further to the left of this curve, which is the sweet spot for Trove and the more relational kinds of databases.

OK, I promise I have only one more abstract slide, and then we'll get to more practical things. In the end, I think the challenge for a DBA is really an optimization problem. Your goal, in my view, is to minimize the economic cost and maximize the value, however difficult that value may be to define, subject to a set of constraints: the throughput the system must sustain, the latency you need to provide, the retention time, your tolerance for data loss, and complexity.

To put that a little less abstractly: your users will have a tendency to say yes to everything. To take an absurd example, if you ask your users, "Do you need the response time on this service to be less than a millisecond?", they'll say, "Well, yes, of course." Users tend to like things that are fast. And then you go away and come back with, "OK, I think I can build this on an FPGA, a field-programmable gate array; I'll rewrite Redis in hardware; it'll take me six months and maybe you'll have a prototype." And your user says, "Oh, actually, I don't need that, because I don't want to pay for it." That's one absurd extreme. On the other extreme, as a DBA you will always want to over-provision everything, because it reduces your risk and makes your life easier, and if you're not the one paying for it, it doesn't really matter to you, right? So a balance between those two extremes always has to be found.

Some other examples of this trade-off game: if you double the retention time of the data in, say, your analytics systems, and that doubles the cost, can you justify it? Is it twice as valuable for your users to have access to twice as much data? In some cases, data loss is acceptable. It sounds heretical for me to say that, but maybe it genuinely doesn't matter: to stick with the analytics example, maybe it doesn't really matter if that system crashes, your users will tolerate it and give you the time to rebuild it, and in exchange you gain some efficiency. And the final consideration is complexity. All else being equal, you might be able to do something better with something like Redis, but then you don't want the complexity of maintaining yet another database system.
You might say, "OK, we'll just go with the MySQL system, because we don't want to deal with the complexity." And that complexity brings us to the problem of heterogeneous environments. The whole NoSQL revolution really was the end of the one-size-fits-all era, which means we now have a whole bunch of different systems for a whole bunch of different purposes, and it's obviously difficult to develop expertise in seven different systems, architected very differently, for seven different purposes.

Another problem that has to be dealt with is homegrown tooling. Your first day as a DBA, you come in, your boss takes you out for beers, everything is great. You get back to the office, you read the employee manual, and everything's fine. Then on day two you check out the DB scripts repository and you find this. And you smash the keyboard and wonder what you just got yourself into. I hope people understand this slide — yes, it looks like a typo, someone just mashed the keys.

OK, so: increased demands on agility. A wise person once told me that agile seems to have devolved, in the current environment, into simply "more stuff, more quickly, ideally on demand." As many interesting ideas as the agile development philosophy had, I think that's really what it means to a lot of people now. One of the resulting problems was described quite nicely in the Bitnami keynote on Tuesday: if you don't meet the demands of your users, they will find ways to solve the problem on their own. And as a result you may get a proliferation of quasi-production systems. I say quasi-production because any system that people expect to be up, and have become used to, is effectively a production system: if it goes away and they're upset, it's production. And just like this sheep, who is really happy to be away from the flock and thinks it's all very cool to be out on the road alone, the first time disaster strikes one of those quasi-production systems that a rogue development team stood up without proper backups, without really knowing what they were doing, you, the wise DBA, will have to bring that sheep back into the fold. This is something you will eventually have to deal with, so the more you meet this demand for agility, the easier your life is going to be.

Another problem a lot of DBAs face, and this is really a communications problem rather than a technical one, is what I think of as the curse of the security team. The security team has this problem: if you do your job too well, you appear not to be busy, and at some point someone will notice how much you're getting paid and think, "This guy doesn't seem too busy." But the reason things are running well is precisely that you're doing your job well. On the flip side, if you don't do your job, you're stressed out fixing corrupted data and all sorts of other problems, and no one's happy with you either. So it's a no-win situation, but fundamentally it's a communications issue, not a technical one: you have to educate the people you work with on the value of what you're doing. I see some people laughing, so I'm not sure if others have observed this, but I think it's a difficult problem to solve.
It's quite generic, yes, I would say that too; maybe it's not just the security team. OK, and if all of that wasn't bad enough, the security and regulatory environment has become challenging, to say the least. That's in large part because it's simply easier to be an international firm these days: even a startup with as few as 40 people can have three or four offices on three or four continents, subject to three or four different regulatory environments.

A great example of this is the EU–US Safe Harbor framework. This was legislation put in place so that companies could ensure they were compliant with both regulatory regimes. As recently as this month, it has been declared invalid, for a number of reasons that I'm not qualified to talk about because I'm not a lawyer, but I think there are some interesting and disturbing precedents that people need to take into account when they do their risk assessments. A great example was a case last year — there's an article linked in the presentation — where Microsoft was compelled to release data stored on a server physically located in Dublin, by virtue of the fact that Microsoft is an American firm. If you are a company responsible for cat pictures on the internet, maybe this doesn't concern you very much, but for certain sensitive sectors it might be a non-trivial thing to consider. And because legal frameworks move slowly, they lag behind the technical reality. Another interesting side effect is that there are now companies trying to capitalize on this. I'm not advocating for this particular company, but I find it interesting that there is a company called Safe Swiss Cloud that is trying to capitalize on the uncertainty people feel about what will happen to their data. They're saying: we're a Swiss cloud provider, we will make sure your data is safe, in some respect. Whether you trust that or not is another issue, but the fact that they offer it as part of their value proposition is, I think, interesting.

OK, and finally, security itself. I think this isn't discussed nearly enough in the cloud computing world, and it really should be: data breaches are a very, very serious problem. Since 2009, there's a publicly available data set — there's a link in the presentation, which I'll make available later — recording 177 major data breaches that we know about because they were reported in a newspaper article or something similar. I created a pivot table from it, which is disturbingly fascinating, to see just how much data has been leaked. In my little summary, I found that almost two trillion records have been lost in the last six years as a result of these breaches, and 64 of these breaches are what I would consider serious or potentially catastrophic, because they leaked credit cards, bank records, or health records. I think many organizations would have trouble recovering from a breach of that scale. And as bad as that is, keep in mind this is just what we know about, because it's in the public record.
There are probably many, many more where a ransom was paid and no one ever heard anything about it. OK, so this all seems kind of overwhelming. How is Trove going to help with all of it? Because in the end, we all want to be Data Man, we want to be superheroes, we want to deal with all of these problems. The first thing to understand is that Trove, Database as a Service, is a full database lifecycle management system. It's much more than just using Nova to create an instance and then installing MySQL.

So, among the things it helps with: administrative burden. You want to provision a database, you just do a trove create, you choose which datastore you want, and now you have a database. Security: trove user-grant-access. You can create a user with one command and grant access with another. Backup and restore: similarly, a trove backup-create for the instance you want to back up. Replication configuration: this is where things usually get far less simple, when you think about the ways different databases handle replication; with Trove, you simply create the new instance and specify which instance you want to be the master for this replica. A short question? Oh, yes, there are many other datastores; I'll go through some of them. And another example, cluster provisioning: if you want to provision a Mongo cluster — there is MongoDB support — it's a single command.

OK, so, the challenge of managing heterogeneous environments. Now, a pop quiz; I'm sorry, I have to stress you a little. The syntax to grant access to a user in MySQL: does anyone know what it is? Yes, it's the GRANT statement, the actual grant operation. Good, so we know MySQL. What about Postgres? It's similar, right, because GRANT is a standard SQL command for granting access. Fine. Aha, but what about Mongo? I'll admit I have no idea; I should know, but I'd have to look it up. Oracle? DB2? It's OK, we're not grading the test. I hope I've made the point: none of this is rocket science, and you can easily look it all up in the documentation, but with Trove you just do a user-grant-access with the instance, the user, and the database, and it's as simple as that, for every datastore. Put side by side, those operations look something like the following sketch.
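To make this a bit more concrete, here is a rough sketch of those operations using the trove command-line client. The instance names, flavor IDs, versions, and database names are invented for illustration, and the exact arguments and flags vary between Trove releases, so treat this as an outline rather than the definitive syntax.

    # Provision a MySQL instance (flavor ID and volume size are illustrative):
    trove create prod-db 7 --size 10 --datastore mysql --datastore_version 5.6

    # Create a database and a user, then grant access -- the same commands
    # whether the datastore speaks MySQL's GRANT, Postgres's GRANT, or Mongo's
    # grantRolesToUser() underneath:
    trove database-create prod-db orders
    trove user-create prod-db appuser 's3cretpassword'
    trove user-grant-access prod-db appuser orders

    # Take a backup of the instance:
    trove backup-create prod-db nightly-backup

    # Create a replica of an existing instance:
    trove create prod-db-replica 7 --size 10 --replica_of prod-db

    # Provision a MongoDB cluster in one command:
    trove cluster-create mongo-cluster mongodb 2.6 \
        --instance flavor_id=7,volume=2 \
        --instance flavor_id=7,volume=2 \
        --instance flavor_id=7,volume=2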
OK, so the agile DBA — maybe I should have used that as the title of the presentation, because agile is just generically a good thing. Take configuration profiles. This is a very powerful feature of Trove, and it relates to the earlier discussion about how you can optimize your systems. For example, you could create a configuration profile that you call "unsafe-high-performance", and in it you specify, say, fsync off. For those who aren't deep in the technical details, fsync off means that writes to disk would not be preserved if the system crashes. That can give you great performance benefits, but it's unsafe; and yet in some cases it may be a much more efficient way of running a system. Then, to apply it to, say, the datamarts your analytics users are using, you just attach the configuration to the instance, and everything under the covers is handled for you.

Another interesting example is creating data sets for development purposes. Security is a serious concern, and even insiders can be potential points of vulnerability, so it makes a lot of sense to create data sets for your developers that strip out sensitive data. This is fairly straightforward. You create a backup from your production system — oh, OK, yes, we'll do the draw, but you seem interested, so maybe we should just give a book to you — then you take that backup and create another instance from it that you call, say, dev-cleanse. You go in and run an update on the customer table, setting the sensitive attributes to random values, and then you make a backup of that instance available to your developers. Now, even if one of them decides to try to sell it to some guys on the dark web, it won't contain anything of any use to them. Both workflows are sketched below.
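As a rough illustration, and with the caveat that the parameter names, whether a given setting is allowed for a given datastore, and the exact CLI syntax all depend on the datastore and the Trove release, the two workflows might look something like this; every name here is hypothetical.

    # An "unsafe but fast" configuration group for throwaway analytics datamarts
    # (whether fsync is an allowed parameter depends on the datastore's validation rules):
    trove configuration-create unsafe-high-performance '{"fsync": "off"}' \
        --datastore postgresql
    trove configuration-attach datamart01 unsafe-high-performance

    # A cleansed data set for developers, built from a production backup:
    trove backup-create prod-db prod-snapshot
    trove create dev-cleanse 7 --size 10 --backup prod-snapshot
    # ...then connect to dev-cleanse and scrub the sensitive columns, for example:
    #   UPDATE customer SET email = CONCAT(id, '@example.com'), card_number = NULL;
    trove backup-create dev-cleanse dev-safe-snapshot   # developers restore from this one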
OK, so, maintenance burden. This is one area where DBAs who have been in a particular organization for a long time might be reluctant to bring in something like Trove, because they may think, "I'm the only one who knows how to manage these 772 scripts, and if we move all of this into Trove, it might not be so good for me." But I hope I've made the case, based on all the challenges and all the things we have to handle as DBAs, that this actually benefits you rather than hindering you: you can reduce complexity and increase productivity by eliminating a lot of tooling that is very difficult to maintain.

And on the security note — and this is really an argument for OpenStack in general, not just Trove — you might not be allowed, for regulatory reasons, or even be aware, that when you go with a big public cloud provider you are effectively outsourcing your security to that public cloud. Bruce Schneier has written some interesting things about this; he calls it feudal security. By giving all of your data to a public cloud, you effectively allow them to own that data, and in return they provide you security. Again, for the provider of cat pictures, maybe that's a perfectly reasonable trade-off. As an example, this presentation is on Google, and I'll make it available there, because I'm not really worried about Google having access to it. But for sensitive sectors this may not be such a trivial decision. OpenStack and Trove give you the freedom to choose public, hybrid, or private.

OK, so now something potentially controversial, still on the subject of security, since we've seen how important it is. Who would agree or disagree with this statement: can you actually defend your servers, or is it a futile exercise? I'll try to address that with the next slide. When the Heartbleed bug was revealed last year, I hope it dispelled once and for all the notion that you can fully secure your servers: however hard you try, there will always be zero-day vulnerabilities, there will be problems, and it's really just a matter of time before something happens. So rather than just waiting around, one interesting example of something you could do with Trove tomorrow, if you were to install it, is to create a honeypot in four simple lines. For those not familiar with the idea, a honeypot is effectively a trap: you deliberately create a system that you want attackers to try to connect to, because an attempt reveals something about where they obtained the information needed to reach your database. With Trove, you create a tempting-looking instance on some really small instance type, you create a database on it, and you create a user — we'll call it canary, after the canaries used in coal mines because they were far more sensitive to the fumes: when the canary dropped, you knew it was time to get out of the mine. You grant that user access on the instance, and then you pray that no one ever actually touches it. And then — we don't have this yet in Mitaka, but in an interesting way, with two more commands — to see whether anyone actually fell into your honeypot, you enable the general query log, save that log from time to time, look at what happened, and hope the file is empty. I would argue that knowing the file is not empty, and that you really have a problem, is far better than not knowing at all and finding out from the morning paper that you've leaked a lot of your data. The four lines are sketched below.
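Here is a minimal sketch of that honeypot idea. The instance name, flavor, and password are deliberately invented, the exact trove arguments differ between releases, and the last step, inspecting the general query log, is the part described above as not yet available, so it appears only as a comment.

    # A deliberately tempting, deliberately tiny instance that nothing legitimate uses:
    trove create customer-billing-master 1 --size 1 --datastore mysql
    trove database-create customer-billing-master billing
    trove user-create customer-billing-master canary 'canary-password'
    trove user-grant-access customer-billing-master canary billing
    # Later: enable the general query log (for example via a configuration group),
    # save it periodically, and check it. Any entry at all means someone found
    # credentials or an address they should never have had.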
To summarize, then: what is the path forward for DBAs in this cloud-enabled world? I hope it's obvious that Trove doesn't understand your data, your applications, or your users; that is still your job as a DBA. But it allows you to move up the application value stack. For example, if a DBA used to be exclusively responsible for ensuring that data was available on the systems, then with something like the honeypot example you can become more valuable to the security team: "Hey, I have an interesting way of determining whether we've been compromised." And remember Jevons paradox, because it applies to the efficiency of services as well: the easier things become, the more creative the ways we find to use them. At the other extreme is specialization. Database performance tuning for complex environments was already a very difficult problem, and understanding all the implications of how a database uses the hardware, the CPUs, the storage, was difficult before the cloud. Keep in mind that all the database architectures we support in Trove today were developed pre-cloud — MongoDB, for the most part, predates the cloud — so adding in virtual CPUs, virtualized storage, and virtualized networking is going to make performance tuning and optimization even more challenging.

So I think there will be no shortage of demand for people who understand how all of this works. I also want to acknowledge that Jordan Ohringer of Zeflin was originally slated to help with this presentation but wasn't able to be here because of a scheduling conflict. And thank you for your attention. Are there any questions? I have a microphone. Anyone?

One question from the audience: for databases that are running on bare metal today, to what extent is Trove ready? An OpenStack cloud routes everything through many layers, which introduces latency, and architects put a great deal of effort into performance optimization.

I'll give the classic computer science answer: it depends. Trove is being run in production in a number of places, but one thing that's important to remember, and maybe I didn't touch on it enough, is that Trove is managing the system; once the database is provisioned, it is effectively a Nova instance. If that Nova instance doesn't perform, Trove can't do anything about that. So it's really a question of how well the underlying KVM, or whatever virtualization layer you use, is doing its job, and of the underlying hardware; you could say it's a general Nova performance question. And yes, of course, that's a real concern where data is involved, which is why I think this only gets more complicated with the cloud. A lot of people are focused on making things easier, which is one of the big selling points of the cloud, but a very big risk factor is performance, and I can't answer that in a simple way because it depends on so many different factors.

Another question: does Trove do anything to reduce the virtualization overhead? We have some research, some work being done on that, but nothing official yet. From the work I've done, KVM, for example, performs quite well compared to bare metal, though obviously not the same as containers. This is the kind of thing where your mileage may vary; every environment is different, and it's a matter of collecting data and experimenting. It's a valid concern, but unfortunately I don't have any raw data to present at the moment; it depends on the particular installation of the stack.

Any other questions? Anything else? OK, it's six o'clock and I think we all want to go and attend some of the interesting parties. So, who wants a Trove book? We have, I think, five of them. Anyone? Yeah, here we go. No problem. Everyone gets a book. Sir, would you like a book? An interactive presentation. Thank you. Thank you for coming.