Hello everybody. Yes, this is the last official talk, but I'm not sure what Srikanth has in store for you guys. Yeah, more is always good. So I'm just going to start with a quick recap of how I got here and why I'm talking to you guys. Srikanth and I spoke about two or three weeks back about Rootconf, and he was explaining to me all the cool stuff the sessions had in store, the DevOps audience and the developer audience, and the more he explained, the more it resonated with what we are dealing with day to day, on the enterprise side and the SMB side. So I requested him to give me the audience to talk about some of the challenges we are seeing in the field around scaling databases. Srikanth was okay with letting me talk, but he asked me for one thing: the proposal I had sent had this title, and he said it was not catchy and too geeky. So he said, can you come up with something cooler? I did not, which is why the old title is still in the official conference program, but "Databases, the Unsung Heroes" is what I have come up with. This is the best I could do to generally talk about databases, the scalability challenges there, and what we are trying to do about them.

So let's set the stage. I'm going to start by just introducing you guys to the problem. Over the last two days I've heard a lot of discussion around scalability, which is great; it just proves the point that there is a need, there is a problem out there about scaling, and there are multiple solutions. I'm going to propose one alternate architecture. I'll take you through a quick data center evolution as I see it, where databases stand, and what the limitations in current architectures are. Then we'll go a little bit into the details of the specific problems as such, and try to relate why a DevOps guy would care about this problem, and why even a developer would care.
So we'll try to relate these couple of things. The solution that I have is presented in a generic form; there are multiple solutions coming out, some from open source communities, some more commercial. So I'm going to present a quick landscape so that you guys are aware of them; it's possible that some of them are already being used by you. Then we will go a little deeper into this architecture, what I call a database access layer: what it has to offer, what use cases it brings out, and how it helps you. That is where the majority of the technical discussion will lie. And I'd like to brainstorm with you if any of those use cases resonate with what you're doing in your infrastructures; after all, you guys are the gatekeepers of everything in the data center. And then we will do the Q&A.

So let's start with a quick personal experience. Back in around January or February of this year, I got this email from a very famous airline in India saying, oh, there is a big sale out there for tickets, go buy them, for one rupee or something, that's what they claimed. So I got very excited. I said, okay, I need to fly to Delhi anyway, so let me just go to their site and book my tickets; let's see what I get. And this is what I got. The email must have literally just hit my inbox, I clicked right away, and the site was down. What's the point? I mean, you're doing a lot of advertising, a lot of marketing around all these things you're trying to sell, but your application, your website, is not ready to handle that kind of load. So my reaction to this whole thing was: why can't we test our application, our site, for scale? Well, it's easier said than done. There are lots of parameters in play. As we all understand, an application is not just one piece in itself; everybody is deploying multi-tiered applications. You have web servers to deal with.
You have some solution to scale your web servers. Then you front those web servers with some sort of load balancer. Looks good. You add some sort of caching; yeah, your web servers are looking good now. What about your app servers? App servers have to be written in a way that they can scale, so we have some interesting solutions there; you have solutions like Hadoop and whatnot coming in to break your data sets and your computation down into smaller pieces that can be executed. But at the end, when your app tries to do some data access, it still goes to the database. That's where most of your business logic and data is living. And I'm here to focus on just that one aspect: the databases. Things go wrong, when your application is scaling, at the database tier itself, and that's what I primarily want to deal with.

So let's start with a quick recap of what I think has been happening in the data center. We started off with IT craftsmanship, where we built the data center to meet one-off departmental needs. Then came efficiency: you provision your data center in a more efficient way, your virtual machines come up so that your physical resources don't go to waste. But then the IT revolution said, no, we need to deliver this as a service. This has to be a self-service model, where as a DevOps guy I will give you the tools, I'll give you the technology, and you go serve your own business. And the third phase we are seeing now is: yes, there is a lot going on, yes, we have all the cool tools to automate our DevOps, but how do we extract business value from it? The CIOs of the company are saying, fine, you're using these clustered databases, these clustered compute nodes; show me something that I can take and improve my business with. Do I want to invest more money in app A or app B? Because ultimately, everything should be serving the business.
So that's the trend we've been seeing: it started, like I said, with efficiency and is now moving towards data. But I personally feel that databases are still stuck in the IT craftsmanship era; they have not come out of it yet. If you look at the biggest database vendors in the world, MySQL, SQL Server, Oracle, they're doing a lot of innovation in the databases themselves to make them faster, more efficient, and whatnot. But they're not paying as much attention to the trends that are coming up: the self-service models, the self-healing models, the models where the data itself talks back to the user, be it a developer, a DevOps guy, or the CIO of the company. That is where I think the gap is between the database technology and the rest of what we are seeing in the industry.

So let's get a little bit geeky and see what the challenges in the database tier are today. On the right, I drew a quick diagram of how web server to database connectivity exists today. You see a bunch of web servers on top with a mesh of connections to the database layer, and they are all one-to-one connections. When you're writing an app, you're creating a connection string that says, I want to go to database A or database B. That's the kind of programming we are doing. And as you can see, this kind of architecture is hard to scale up. As your databases start filling up, you will have to add more and more nodes, and as you add more nodes, how do you tell your application that a new node has come up in your infrastructure? It's a problem where every time you scale up, you have to make modifications to your application stack for the new capacity to be useful. NoSQL is trying to address that problem by saying, okay, we are a transparently scaled-out system; you don't care how many nodes you have in the back, your data is going to get scaled efficiently. Which is right; that's one way of solving the problem.
But what you're essentially saying to the existing app developers is that if you want scaling, you have to rewrite your applications for NoSQL, which is not always the right approach. Sometimes the applications are not suitable for a NoSQL-like data store, and sometimes it's just not practical. So there are all sorts of challenges there. What I am trying to propose is an alternative to moving to NoSQL to solve your scalability challenges; we can do it in a different way.

Another of the challenges is that it's hard to maintain availability. A database server goes down; what do you do now? You saw that initial error message, "error connecting to the database". I'm not sure, but I assume a lot of people have seen this error, because a physical server goes down and you can't do anything about it. You have to go in as a developer, debug what the server is doing, and bring it back up. And in the meantime, for whatever applications have been using that database, you have to go change their connection strings to make them point to another database node. So that challenge exists. And it's hard to diagnose, too; there is no visibility into the database layer as far as I see it. In fact, yesterday it was very interesting, because one of the Microsoft guys was showing a tool where they were trying to pinpoint a problem in the infrastructure down to the level of the SQL query. They were saying, okay, this is your multi-stack environment; hey, look at this query, which is causing your databases to slow down because its response time is very high.
That's exactly the kind of point I'm trying to make: it is very difficult to diagnose what's going on in your database tier, where the problem lies, whether it's in the app, because the app is writing crappy queries, or in the network, because the latency is very high, or in the database, because maybe the tables are not indexed properly and your joins are very expensive, or your disks are slow. So visibility into the problem is also a big challenge in today's database tier.

So anyway, this problem exists, and a lot of people are trying to solve it in a lot of different ways. Some interesting projects I would just like to share with you, starting with a research paper, not a white paper actually, by Google, called Spanner. As you all know, Google wrote the Bigtable paper, and a lot of NoSQL solutions are based on it. Recently they released another paper, called Spanner, where they are developing a geographically distributed database. Their use case is the Google ads application, where all these databases are replicated geographically across the world, you can write applications to query these databases, and these are relational databases. In the paper they actually point out: hey, we introduced you guys to the Bigtable world; it's great for some applications, but we feel that's not enough, that's not the right thing for certain applications, especially ads. So we are going to propose the relational model back again, but we're going to improve it for scale, and we're going to improve it for availability, because we can't afford for a certain data center to go down because of, you know, hurricanes in Houston, and then it affects all my Google AdWords. So it's completely distributed, and they do replication in the back. And the second requirement they had was transactions.
They wanted ACID properties back. They didn't think that moving the requirement of consistency into the application is the right thing, so they are doing something very creative. Very recently, maybe a couple of months back, Facebook announced another project called WebScaleSQL, which is actually a branch of the community edition of MySQL where they are solving some of the scale problems. If you look at Facebook, Twitter, LinkedIn, and actually Google also, these are the four main contributors to this project; they have large deployments of MySQL clusters and they are suffering from the same problem I'm trying to share with you, how to solve scalability. So they are making improvements in the database tier itself to address it. MySQL has had certain solutions out for a while. One of them, MySQL Proxy, is the closest to what I will be talking about: it's basically a proxy server which lives between your web servers and application servers and your databases, and then it load balances and controls all the traffic. That's the architecture I'm calling the data access layer. So there are solutions out there in the open source world, and Postgres also has a tool called Pgpool, which does something very similar. In the commercial world, like I said, the database vendors are coming out with some solutions. Oracle has Oracle RAC, which tries to address some of these things. There are companies like SkySQL who are branching off MySQL and trying to solve this problem in a commercial way. There are dedicated data access layers like my company, ScaleArc, and other companies like it, trying to solve the same problem in a similar way. And there are the ADC vendors, people who don't understand SQL as well but are in the space of optimization and content delivery.
So they are trying to help with the problem partially, by TCP load balancing, by caching of certain kinds, and whatnot. But I don't think they go deep enough to really address the problem.

So let's look at what this data access layer can do for you. If you remember the earlier slide with the big mesh of connections, this is the proposed architecture: all your web servers and application servers now connect directly to a single entity, what we're calling a data access layer, and the data access layer then multiplexes these connections to the database farm that you have. So think of what that means for the application. You can have n number of app servers, and first of all you only have to configure them with a single endpoint: think of it as a single IP address and a port which says, go here for all your database queries. You don't have to know which one is a read and which one is a write. You don't have to worry about what to do when a server goes down and how to recover. It's a single endpoint: always go there, send all your queries there. And as this data access layer receives connections from all these web servers and app servers, it can efficiently multiplex them. It can send reads to certain read servers and writes to certain write servers, and it can measure the replication lag between all of them. We'll talk about those use cases in a minute, but it's now a place for you to solve these scalability issues. You can transparently add a new node into your architecture and start sending traffic to it; the application servers are agnostic to it, they don't know and don't care if a node comes or goes. One more use case that this layer opens up is real-time visibility into the traffic.
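As a rough illustration of that multiplexing idea, here is a minimal Python sketch, purely illustrative and not any vendor's implementation: any number of app-side callers share a fixed number of backend "connections", where `backend_factory` is a hypothetical stand-in for whatever driver actually opens a database connection.

```python
import queue

class BackendPool:
    """Connection multiplexing sketch: n app-side callers share
    m backend database connections."""

    def __init__(self, backend_factory, max_backend_conns):
        # Open only m backend connections -- what the database can sustain.
        self._idle = queue.Queue()
        for _ in range(max_backend_conns):
            self._idle.put(backend_factory())

    def execute(self, sql):
        conn = self._idle.get()        # blocks if all m backends are busy
        try:
            return conn(sql)           # hypothetical: backend is a callable
        finally:
            self._idle.put(conn)       # hand the backend to the next caller


# Usage: hundreds of app connections can funnel into 10 backend ones.
pool = BackendPool(lambda: (lambda sql: f"rows for: {sql}"), max_backend_conns=10)
print(pool.execute("SELECT 1"))        # rows for: SELECT 1
```

The blocking `queue.Queue` is what caps concurrent backend work at m, which is exactly why the apps stop seeing "too many connections" errors.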
Because this layer is accepting all the traffic, all the SQL queries from the application servers, it can do aggregation on your queries and derive business intelligence from them. This goes back to the diagram I was showing you: the reason databases were stuck in the first era and not serving the business is that they were not able to assess how an application is using its queries. But using this data access layer, you can now get some very useful statistics. Think of an example: you have an application which is sending a lot of selects, a lot of joins. The basic things you can now know are: how many queries per second am I running? How many queries are going to table A versus table B? What is the average response time of my queries? You make a change to your app, you add a new join, and then you start seeing it in this access layer; the intelligence that comes out of it is, oh, now you have a new query running and its latency is very high. A DevOps guy can then show it to the developer and say, did you just make a modification to the app? Because the data now shows that your join is taking a lot more time. So you're directly correlating whatever application changes you make to the database tier.

Let's change this view a little bit. What if you have multiple applications going through this data access layer to your databases, and you want to know which of my apps is being used more? Where should I invest more money, app A or app B? Because you can now analyze all this data, you can make very intelligent decisions based on how your apps are behaving and how they are using data. So that's another use case this offers. Instant scalability and higher availability we discussed. The transparent cache is something I want to talk about a little bit; one more thing this data access layer does is caching for you.
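Before getting to caching: the per-table statistics described above could be collected with something like this rough Python sketch. The regex-based table extraction is a deliberate simplification; a real proxy would parse the SQL properly.

```python
import re
from collections import defaultdict

class QueryStats:
    """Sketch of the visibility use case: aggregate query counts and
    response times per table as traffic flows through the access layer."""

    def __init__(self):
        self.count = defaultdict(int)
        self.total_ms = defaultdict(float)

    def record(self, sql, latency_ms):
        # Naive table extraction; illustration only.
        m = re.search(r"\b(?:from|into|update)\s+(\w+)", sql, re.IGNORECASE)
        table = m.group(1).lower() if m else "<unknown>"
        self.count[table] += 1
        self.total_ms[table] += latency_ms

    def avg_ms(self, table):
        return self.total_ms[table] / self.count[table]


stats = QueryStats()
stats.record("SELECT * FROM orders WHERE id = 7", 12.0)
stats.record("SELECT o.total FROM orders o JOIN items i ON i.oid = o.id", 48.0)
# A DevOps guy can now see that the newly added join
# raised the average latency on the orders table.
```

That last `record` call is the "you added a join and latency jumped" conversation from the talk, reduced to two numbers.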
We've heard of NoSQL as a means to serve data from memory: it's a key-value lookup, that's the reason it is faster, you don't have to go to the disk all the time. You can apply a similar principle over here. So how it works is, you get a read query, a select. Let's say your application is such that it sends a lot of these select queries. A typical example is an e-commerce website: when you go to the site, it loads the basic front page, which displays a lot of the items you're trying to sell. Now this is a static query, because the data is not changing often unless you go and change your inventory in the back. Yet every select query you make, every time somebody logs in, is going to the database. You can choose to cache this in this layer, where the query comes to the data access layer and is answered right there. We'll talk more about that. So it eases the pressure on the database tier, takes that load onto itself, and lets the database do the more complex operations like transactions, which is what databases are good at.

Security is another interesting area that gets addressed by this layer. How many of you have heard of SQL injection attacks? We're all aware of them. This is a very nice place to stop SQL injection attacks, because if this layer is seeing all the traffic all the time, it can intelligently find out when a new kind of query comes through. It can compare it with the traffic it has traditionally seen and raise an alarm: I'm seeing this brand new query that I've never seen before; have you made an application change, or is this something unusual? Then the administrator can go in and say, okay, these are the common SQL injection attacks I'm seeing, or the layer can learn by itself and protect automatically: if I'm seeing some sort of script embedded inside a query, I'm going to block that.
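One plausible way to implement that "never seen this query before" check, a minimal sketch and not any product's actual mechanism, is to fingerprint queries by stripping out their literals, so only the query's shape is compared:

```python
import re

def fingerprint(sql):
    """Reduce a query to its shape: literals become '?', so
    '... WHERE id = 7' and '... WHERE id = 8' look identical."""
    s = sql.lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals
    return re.sub(r"\s+", " ", s).strip()

class NoveltyDetector:
    """Learn the fingerprints of normal traffic; anything with a new
    shape gets flagged for review (or blocked outright)."""

    def __init__(self):
        self.known = set()

    def learn(self, sql):
        self.known.add(fingerprint(sql))

    def is_novel(self, sql):
        return fingerprint(sql) not in self.known


det = NoveltyDetector()
det.learn("SELECT name FROM users WHERE id = 1")
# Same shape, different literal: normal traffic, not flagged.
assert not det.is_novel("SELECT name FROM users WHERE id = 42")
# A classic injection changes the query's shape entirely, so it is flagged.
assert det.is_novel("SELECT name FROM users WHERE id = 1 OR 1=1")
```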
I'm just not going to let it go to the database, because that's where they're trying to steal the data from. So it can do very interesting things with security as well.

I'd like to pause here for a minute and see if all this is making sense. Do you guys have any basic questions before I go a little bit deeper into some of the use cases I was talking about? All clear? Crystal?

Will this need any kind of change as far as connection pooling is concerned, from the application side? Let me come to connection pooling in a second, because that's one of the use cases I'll be talking about, but it will enable connection pooling even if your application is not doing it. Even if you have a non-connection-pooled application, you can now do connection pooling using this architecture. And if you do have connection pooling in your application, it will work hand in hand with it; it does not affect that, it actually makes it better. All right, there's a question there in the corner.

You talked about security, right? So it's possible to stop the XSS attack, but what about CSRF requests? In case of what? CSRF, cross-site request forgery, things like that. Right, so there are multiple ways to solve it. Ultimately this is a network proxy, so it is reading all the data coming in with a request. You can, as a system admin, go in there and train the system with rules to say, these are the kinds of attacks I want to block; script blocking, the kind you're talking about, could be another one. So it's basically a learning engine: the more rules you create on it, given your experience in the past, the more it will do what you tell it to do. It is looking at the traffic packet by packet, and it can stop that. But that means the disaster has to have happened already, right? An attacker can get through, and only after they get through can we make a set of rules. Yeah, good point.
So what's the point of blocking something when it has already happened? That's more of a business question. There are things it can learn over time; it's not that every attack is brand new. Typically some sort of vulnerability gets introduced and then it gets propagated. What is more important is that you don't suffer from something which is already known; that would be the bigger mistake. So, valid point, thank you.

My question is regarding the cache use case example you shared. Why would I want my request to actually go through my application server and reach the database abstraction layer? Shouldn't I cache even before my application server? Do you have any other use case which is more useful? Let me see if I understand the question; I'll come to caching in just a second. You're saying, why cache at the database, when I can cache the page even before the request comes to the application layer? Absolutely. So this is not replacing the caching that happens at the HTTP layer or the web layer. It is about whatever you could not cache there: if you're still sending select queries to the database, you can do another level of caching here. Because if everything that is read-only could be cached at the web tier, then databases would only ever be used for transactions, and that's not the case. I'll show you cases where we've gone into environments where 90% of the traffic was reads.

The other question is regarding transactions: does the layer also handle transactions which are distributed across different shards, or however you split? I'll come to that in just a second. Okay, let me just walk you through this. It's a very simple architecture: a bunch of app servers on the top, and at the bottom, three databases.
The one on the left is the primary server; this is the write server. The other two are the read servers; we're calling them secondary servers. Given what we discussed so far, this is what the new architecture looks like.

So let's talk about connection multiplexing. It's a very simple use case. We already discussed that all the web servers and app servers are opening hundreds of connections to this layer. But those hundreds of connections don't need to be opened one-to-one to the databases, because, as some of you might know, databases suffer from a limitation on how many connections they can open at a time. That's one of the very big limitations of databases, and if you're out of connections, you can't serve the web server. So here, given that you can have n connections coming in from the apps, you open only m connections to the databases, where m is what you know they can sustain. That helps in reducing the number of connection errors the apps can see.

Caching is what we talked about. Let's say a web server sends a query; it goes to the secondary, the results are sent back, and in the process the data access layer saves a copy of the response. A similar query comes in from another server, and it's served directly from the cache. This is an in-memory cache, which is why the lookups are extremely fast, and you can control when you want to invalidate the cache. Some database administrators might say, I want to invalidate based on time, because I know when my tables are going to get updated. Others might say this layer has to be more intelligent: whenever a write or an update comes to this particular table, invalidate my cache. That's a different sort of complexity it adds.
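Both invalidation styles just mentioned, time-based and write-triggered, can be combined in one rough sketch. This is illustrative only; a real cache also has to deal with memory limits, eviction, and concurrency.

```python
import time

class QueryCache:
    """Sketch of the transparent result cache, supporting both a TTL
    and explicit invalidation when a write touches a table."""

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._store = {}          # sql -> (result, expires_at, table)

    def get(self, sql, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(sql)
        if entry and entry[1] > now:
            return entry[0]       # cache hit: never touches the database
        return None

    def put(self, sql, result, table, now=None):
        now = time.time() if now is None else now
        self._store[sql] = (result, now + self.ttl, table)

    def on_write(self, table):
        """Call when an INSERT/UPDATE/DELETE touches `table`."""
        self._store = {q: e for q, e in self._store.items() if e[2] != table}


cache = QueryCache(ttl_seconds=300.0)
cache.put("SELECT * FROM products", ["shoes", "hats"], table="products", now=0.0)
assert cache.get("SELECT * FROM products", now=10.0) == ["shoes", "hats"]
assert cache.get("SELECT * FROM products", now=301.0) is None   # TTL expired
```

The `now` parameter exists only so the sketch is testable without waiting; the TTL case is the "five minutes of stale inventory is fine" policy, and `on_write` is the stricter auto-invalidation policy.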
But very simply, it can do an in-memory cache for your reads; I'll share some links where you can look up the performance, and it achieves very, very high scale for reads, if you have an application with a lot of reads. Was there a question there?

Given a SQL query, let's say there are five tables involved in the query, and one or more of those tables gets written to in a concurrent write; what happens in that case? Right. Like I said, I don't want to go into a lot of detail on the solution, but a couple of options: if you can predictively know that your application is not going to do writes on this particular table, you can put a TTL on this cache and say, just expire after five minutes, and it's okay if for five minutes you serve me stale data. Think of the example of an inventory: fine, it gets refreshed after five minutes. But if you want more consistency, then you would write rules saying: read all the inserts and updates, and if they touch any of the tables you have cached, invalidate those entries. If you do such a thing, then of course you're adding extra complexity, and the optimization's performance benefit is reduced.

So does it still make sense to use your data abstraction layer in such a scenario? Absolutely. We've seen deployments where this kind of requirement has come in, where exact consistency was needed, and we configured the system with auto-invalidation: a write goes into the database and the cache immediately gets invalidated through APIs, and it has worked fine. There's another technique that we incorporate: we give small APIs to your application, if you're okay with changing your application a little bit, wherever in your application you have a write call.
Just before that write, you call the invalidation API, which hits our layer first and invalidates the cache, and then the write executes. So there are many ways to actually solve that.

Okay, the next interesting use case I want to talk about is surge protection. This is the case where one of the servers goes down, and it happens to be your write server. It could be down just for maintenance, or it could be a real event. But during this time, what happens to the queries that come in? In this layer you can have a surge queue, where you hold all the writes till your write server comes back, of course for a limited time. And when your write server comes back, the queue gets forwarded. So the clients will get delayed responses, but they will not get errors. This is very, very useful, and we've seen multiple use cases of this in the field, where somebody is trying to do patches or upgrades on their write servers and they want zero-downtime maintenance; they just don't want their applications to see errors. You can definitely make use of this feature.

Read/write split is a very simple example of how load balancing can happen. You have reads and writes; you have one write server and multiple read servers. Why would you want to tax your write server with read queries? You might just want to spread them across the read servers, and the layer can do it intelligently by monitoring the load on these servers: if you have one server lying vacant, send it more load; if you have a server which is busy, back off and don't send a lot to it.

Replication-aware load balancing is another very interesting use case. You all understand that if you have a cluster of databases, they are replicating in the back end, right? Somebody was talking about journals before, in a cluster file system context; databases do something very similar.
You write to the write server, and it ships its logs across to the read replicas; the read replicas apply those logs, and that's how the servers stay in sync. But there is always a delay, and especially if you're writing a lot to one of the servers, there is going to be a delay on the secondary. Now, if you get into a situation where one of your replica servers is lagging behind, and the reason it's lagging behind is that it is processing a lot of queries, do you think it's wise to send more traffic to that server and make it lag further behind? Or would it be better to throttle back and say, okay, I'll give you time and resources to catch up while I serve this traffic to the rest of the databases? That's what I mean by replication-aware load balancing: you are aware of what's happening in your databases, how replication is working on different nodes, and you optimize your traffic accordingly.

Query routing is another one. You can write rules to say, send certain queries only to certain kinds of servers. Maybe one of your servers is more powerful and does more reporting; you write rules on the data access layer saying, send these kinds of queries to that server. That enables the query routing feature.

This is the last one: sharding. How many of you have heard of sharding in databases? Wow, a lot of you. Sharding is a very good technique to distribute the load off a database. You have a table with a million rows; you split them up, say 500,000 on one node and 500,000 on the other, so that as queries come in you can serve them from two different data stores, most likely in parallel. But when you do sharding, how do you tell your application about it? Your application still thinks the data lives in one contiguous data store.
So what you would have to do is make your application shard-aware. You have to write intelligence into your app to say: if you're querying indexes from this range, go to database A; if you're querying indexes from that range, go to database B. But why touch your application when you can easily do it in the data access layer just by creating rules? This layer is anyway reading your queries and their indexes, and it can transparently send them to the appropriate servers.

So, in a nutshell, and I'm going to skip ahead in the interest of time: there are many use cases affecting the performance and scalability of databases that can be solved by this architecture, and we don't have to rewrite our apps for NoSQL. We can keep our current architectures, our current database infrastructures, use the relational model, and still achieve scalability. I'm just going to skip this; these were certain sample use cases that ScaleArc has solved. Dell.com deployed us during the Black Friday onslaught; they had crazy deals on their site all around the world, and you can imagine millions of customers logging in and getting database connection errors. So they deployed the ScaleArc solution, and it helped them sustain the peak. Activision is the studio behind Call of Duty. When Call of Duty 3 went live, they did a sample test by letting just a certain percentage of the signed-up users go and try it, and they saw that their infrastructure could not sustain even that small percentage. Imagine what would have happened when they opened it up to the entire world. So they could also use our solution to solve some of these things, and another company did that as well. So that's about it.
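The range rules described above for sharding could look something like this minimal sketch; the split point and shard names are made up for illustration:

```python
import bisect

class RangeShardRouter:
    """Sketch of transparent range sharding: rules map key ranges to
    database nodes so the application never learns about the split."""

    def __init__(self, split_points, shards):
        # split_points are sorted boundaries; keys below the first
        # boundary go to the first shard, and so on. There is always
        # one more shard than there are boundaries.
        assert len(shards) == len(split_points) + 1
        self.split_points = split_points
        self.shards = shards

    def route(self, key):
        return self.shards[bisect.bisect_right(self.split_points, key)]


# A million-row table split in half, as in the example above.
router = RangeShardRouter([500_000], ["db_a", "db_b"])
assert router.route(42) == "db_a"
assert router.route(999_999) == "db_b"
```

In the proxy, `route` would be called with the key extracted from each query, and the query forwarded to the returned node; the app keeps talking to the single endpoint throughout.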
If you want to get in touch with me and learn more about what we are doing, or about this space in general, please do; my coordinates are up here. I would also love to connect with you, learn from all the people who have shared very interesting things about DevOps, and exchange ideas on what we do in our own IT to keep up. As you can imagine, the footprint, the matrix of things we deal with, is very, very large: all the different applications that users run, different kinds of database deployments, cloud deployments, physical infrastructure. So I'd love to bounce ideas around. That's it. Thank you, everybody. I can take some questions.

Yes? For your data access layer, what kind of database technologies are best suited?

I didn't catch the last part.

The database technologies. You showed that initial comparison slide: there's Oracle, there's MySQL. What do you support, and what is best suited, do you think?

Oh, okay. That's a hard one to answer because there are multiple attributes in play. Oracle is definitely the fattest, the biggest, the most expensive database. MySQL is an open source database that you can deploy for free. SQL Server comes in as another one. What is important here is that this layer has the capability to work with multiple databases at a time. In our experience, what we support right now is MySQL, SQL Server, and Oracle, three different kinds of databases. And given that some of these databases are closed, Oracle is a completely closed database, you don't have any access to how they do things or what is on the wire, we had to do a lot of reverse engineering to understand what they are doing.
That's where the intelligence of this layer comes in, because to do what we are doing you have to read and understand a lot of SQL that they don't expose. MySQL, on the other hand, is the friendliest because you have the entire source code and you can understand how they parse queries and what kinds of queries there are. That's where the complexity lies.

How big is the cache that you maintain?

How big is the cache that we maintain? I don't want to make this discussion about ScaleArc; I want to keep it generic. This could be an implementation where you have memcached doing the caching for you, and how far memcached can scale, we all understand, right? It's not something that limits the solution. The solution only says: I have a query, and as long as you can tell me its result can be taken from the cache, I can keep putting results in the cache, depending on how big my RAM is, and scale as much as I want. We've seen 64 GB and 128 GB caches being used, serving millions of queries.

Thank you. Maybe I missed it, but what do I have to do on my code side to make a connection to this?

Very good point; I actually missed that part. On the code side you don't have to make any changes. That's the beauty of it. The one change you do make is to your connection string.

Okay, so it will work with JDBC and all that?

Right, it will work with your existing JDBC, .NET, and ODBC drivers. You're just saying that your connection string now points to a different endpoint.

Just a follow-up question: do you have something like the MySQL client, so that I can run a few queries from a shell script without writing any custom code?

You can use your existing tools to work with this directly. Even with your MySQL client, you ultimately give a host name and a port.
Instead of that, you give the host name of the database access layer, and then you can see the query going through this machine. Our product has a web portal where you can actually see your queries in flight: what query you fired, how long it took, which server it went to, and all the details.

All right, cool. Thanks.

Yes? Three questions. First, is your layer a single point of failure? Second, is it based on popular proxies like HAProxy or Varnish? And third, how is the pricing?

Okay, I'm not going to talk about pricing because this is not about our solution; this is a generic architecture. And yes, this approach has the limitation of being a single point of failure, which you would have to address by, for example, deploying an active pair or some sort of high availability in this layer itself. What was the third question?

Is it built on proxies like HAProxy or Varnish?

No, it is not based on them, but we all understand that this has to be a very fast network proxy that also deals with a lot of application traffic. So it is an event-based, high-speed proxy, and we all understand proxy architectures.

And is it deployed on normal Linux machines, or as an appliance?

ScaleArc's software ships as a software appliance. There are RPMs available; you can just download them onto any CentOS-based system.

All right, one last question; I think we are reaching the top of the hour.

How do you horizontally scale the database access layer itself? Would you make multiple of them and put another access layer on top?

That's a very good question. Somewhere I heard the quote that you can solve any computer science problem by adding another layer, and if you think about it, it's true. I've seen hundreds of examples of that.
So yes, you can put another load balancer of some sort in front of this layer to solve its scalability, and we have real-world deployments of that. Actually, Activision, which I was showing earlier, was deployed in Amazon and used ELB in front of us to distribute load across a bunch of ScaleArc instances. So yes, you can do that, absolutely.

Great. Thank you, everybody. That's wonderful. I'll just take one more second: can we have a round of applause for all the organizers? This was a fabulous event. Thank you.