to listen to Erika today. She's a staff technologist at EFF and she works a lot on Certbot at the moment. She's helping everyone on the internet to be secure and safe. So she's going to talk to us about keeping secrets on the remote machine. So that's very cool, right? She didn't expect this many people, so maybe to encourage her a little bit, please give her a warm round of applause. Go for it. Thank you. Okay, hello. Okay, welcome. Okay, so in this talk, as opposed to the various other things that keeping secrets on remote machines might mean, this talk is going to be about the challenges and different approaches that you can take to building cloud applications where even the server can't see the data that the user has on it. So as the popular laptop sticker reminds us, there is no cloud, it's only someone else's computers. All of the myriad benefits that you get from a modern cloud application (that is, processing power, having multi-user applications when things are collaborative, backing up your data) all come at the cost of giving up your data to someone else. It means that someone else can access it. If you're an organization, and the people that you might be competing with, or the people that you might be in opposition to in a legal context, have access to your data, then it's not great for you, and that holds on a personal level as well. But wouldn't it be great if we could make cloud applications that allowed you to get all of the benefits that you get from the modern cloud, but without giving up this data? Is this actually something that's possible? That's what we're going to be talking about today. So to talk about this, I'm going to use the sample problem of search. So this is an email application here, and you might have not-so-great inbox management strategies. So you want to be able to search through the data that you have stored somewhere else. So in this example here, I'm searching through my email to find the EFFector listserv. 
And you want certain properties here. You want to be able to, let's say, drop your phone in the ocean and be able to recover your data and get all of those cloud properties, but still without having the cloud service that holds onto your data be able to read it themselves. So this is what a model of the situation would look like. You have on the left side your laptop, which holds your own personal data that no one else can access. And then on a server on the right, you've got various documents that they hold onto. And you might want to do a query on the data, such as which documents contain a keyword, such as bridge. And it will return to you the IDs of the documents that contain that keyword. So a basic way of doing search is to create an index. So as you create a document, as you edit a document, you put it on the server, and it will figure out what words are in each document and create a mapping from each word to the documents containing that word. So then in constant time, when you're doing a query, you can just look down the list and say, ah, bridge is in documents one, four, and two. So the server will know to return documents one, four, and two to you. Okay, let's do some threat modeling here. Where might the attackers be? You can have a local attacker on your machine, you can have a network attacker, or you can have an attacker on the server. The local attacker is mostly going to be out of scope here. That's a very hard problem, obviously, but it's not one that we're going to talk about. A network attacker you are mostly taking care of by using HTTPS, which you should definitely also do. But what happens when the attacker is inside the server itself, the server that is doing this computation? What can we do? Okay, so what properties might we want? 
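As a rough sketch, the inverted index just described might look like this in Python (the document contents and helper names here are my own, just for illustration):

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

# Hypothetical documents, numbered like the talk's example.
docs = {
    1: "the bridge over the river",
    2: "meet me at the bridge",
    3: "no relevant words here",
    4: "london bridge is falling down",
}
index = build_index(docs)
print(sorted(index["bridge"]))  # constant-time lookup: [1, 2, 4]
```

The server keeps this index up to date as documents are created and edited, so each query is a single lookup rather than a scan over every document.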
We want to provide the useful functionality, but even while we're doing this, the malicious server should not be able to just read the data that you have stored; they shouldn't just be able to open it up and look inside. They should also be able to prove to the user that they can't read the data that they have. So not just not be able to do it, but also convince the user, using math, that they can't do it. And also there shouldn't be any side channels left open, because if you encrypt it, but you can still see where it's saved on the server, or the metadata or whatever, then you lose that protection as well. Let's talk about not being able to read the stored data and how we might achieve that very simply. So this is what SpiderOak does. It's kind of like Dropbox, but encrypted. The way it works is you have documents on a server, and where previously you might have wanted to do this search operation server side, in SpiderOak you can't do that anymore, because when the documents are encrypted on the server, you lose the ability to have the index on the server. So the index needs to be local, and all the computation needs to be done locally. You can keep the key on your laptop and be able to provide a secure storage space, but you lose that functionality. Okay, so we know for sure that we're able to achieve the property of can't-just-read-the-stored-data, and that's pretty simple if you don't want to do any computation on top of it. As for proving to the user that they can't read the data, this is a bit of an orthogonal problem over here, but it is something that is largely possible using measurement, which means making sure that the correct code is running, which pretty much amounts to taking a hash over the actual code itself. And attestation, which is proving that that hash comes from some sort of chain of trust, so that you can believe that it's running. 
These are separate problems over here, but just keep in mind that you would also want those on top of any other properties. Okay, and what about side channels? This is something that you would keep in mind for a specific protocol, because everything is going to have a different side channel depending on how it's implemented, but if you do leak some of the data in order to allow the computation, you want to leak it in a way that means an attacker can't just get the rest back from somewhere else. Okay, well, this sounds pretty impossible. How are we supposed to be able to do computation if we have no data to do the computation on top of? But as it turns out, it's not impossible. It's just really, really hard. So a major direction that people have been working on to try to implement this in practice is using fancy types of cryptography. So this means something like this. For example, in the encrypted search problem space, there's different trade-offs that you might make. Over time, the general research trend is to increase the functionality for each of the different approaches that you might take. But for each category of approach, generally, you can make it more efficient the more you leak. If you just have a plain search index that's not encrypted at all, obviously, that's very efficient, but you're leaking all that information right there. Whereas at the complete other end, using fully homomorphic encryption, you're not leaking anything, but it's extraordinarily inefficient. So we're going to dive into some of these and look at what different approaches might look like. So let's talk about tokenization. The leakage level for tokenization is pretty much that you're trying to stop accidents. This is, maybe you have some contractors who get to see the data, and they don't necessarily have the ability to interact with the machine more than, like, with a mouse or something like that. 
So you're not super worried, but you don't want someone accidentally glancing over to be able to see the contents of the data. And tokenization can definitely help with that. So what does that look like? What are we tokenizing? Tokenization means that if you have the index on the server, and the server can't see the contents of the documents, what you can do is you can just take a hash, pretty much, a deterministic hash of each of the different search terms, and then when you want to go do a search, you just take the hash of the search term and look it up right in the table, and you can find it that way. But obviously, someone just looking at the table doesn't know what the words are that are in each of the documents. Okay, well, how well is this going to work out for us? So let's say that our index looks something like this, and we have some unknown search term, a5876 is the top one, and that's in 220 documents, and the next most common search term is in the next number of documents. We don't see what the words are here, but we could still probably figure something out about this, using the fact that it's a natural language document. So, okay, these are the most commonly used words in this index. If this is an index, what might they be? Anyone want to shout anything out? Common words that might be at the top of an index. Yeah, I'm hearing the, yeah, great. A, I heard, yeah, exactly. So you can start reconstructing it using the frequency of the natural language that you know, so you automatically start revealing that. And so you might also say, well, okay, we have an index and we know that this term is contained in these 220 documents, but what if we just have like some jumbled up words in a document, and we don't know what order they're in, we just know what document they're in. You can still get a lot of information from that. 
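A minimal sketch of that tokenized index, assuming a keyed deterministic hash like HMAC-SHA256 (the key, the truncation length, and the index contents are my own choices, not from any particular product):

```python
import hashlib
import hmac

KEY = b"client-side secret"  # hypothetical key; it never leaves the client

def tokenize(word):
    """Deterministic keyed hash: the same word always maps to the same token."""
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).hexdigest()[:10]

# The client replaces plaintext keywords with tokens before uploading the index.
plain_index = {"bridge": [1, 4, 2], "river": [1, 3]}
server_index = {tokenize(w): ids for w, ids in plain_index.items()}

# To search, the client tokenizes the query and the server does a plain lookup.
print(server_index[tokenize("bridge")])  # -> [1, 4, 2]
```

Lookups stay constant time, and someone glancing at `server_index` sees only opaque tokens. But, as the frequency discussion above shows, the token counts themselves still leak a lot.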
For example, here are the alphabetized words from a sentence. If we just know what words are in this sentence here, we can probably figure out that it's actually meant to be this popular sentence from an English novel. So, okay, that leaks a lot of information. That's pretty unfortunate, but it is very efficient. Everything is constant time. What can we maybe do going one step down? Searchable encryption. Here, the leakage level is that, whereas before, if you even got a passive snapshot of the index, you would be able to reconstruct all of that information, at the searchable encryption level, you want a passive snapshot to not give you that information. You would have to have information revealed over time, and you could have a little bit of leakage with each search query that you do, or possibly with an add or update operation, but just a passive snapshot should be safe. Okay, so what are we allowed to leak? One of the things is the keyword access pattern. This means that if you search for the same keyword multiple times, that's something that maybe we're allowed to know to some extent. So it doesn't mean you know what the actual keyword is, but you just see the consistency, you see the pattern. So you might know that the keyword truth, which is represented perhaps by some index, one, you see the pattern of that happening over time. And then you also have the document access pattern, which is almost the reverse of the index: seeing whether the same document appears in multiple keyword searches. This is another area that a searchable encryption scheme might leak to an attacker. Oh no, okay. So can we actually achieve this? This is a question that's actually surprisingly well explored. There's been a lot of research into this particular topic, with people working under the assumption that okay, maybe it's actually kind of fine if we leak those two things. It's sometimes referred to as either the honest-but-curious model, or, less euphemistically, the NSA model. 
So if the NSA comes and says, hello, we would like to take all of your data, but that's all they can do, they can only ask for a snapshot of the data, you want to be able to go to them and say, well, actually, we can't give you anything. So that's the model we're working under here. So how might we achieve this one? Let's talk about a simple searchable encryption scheme that achieves some of these properties. So let's say you have your index locally. Now the first thing that you're going to do is you're going to split it up. So instead of just being a mapping of each word to the documents it's in, you create pairs of word and document. And then you hash each of those individually. So now you just see an unassociated series of hashes that you could mix up. It really doesn't matter at this point what you do with those. And they don't reveal any information because it's a one-way function. So then you could send all of those over to the server. And then when you want to do a query you can reconstruct those, but you need to do it for every single document. So if you want to know which documents contain the word dog, for example, you would create the pair of dog and every possible document ID, hash all of those, and send all of those over to the server. The server can check the two lists that they have and see what matches there are. And if there's a match, they return to you the hashes that match and you know which documents contained that keyword. Okay, so that's one way of doing it, but it's not a super ideal way of constructing a searchable encryption scheme. It doesn't leak any information during search, but you might also want to not leak any information when you add documents, when you remove documents, when you update documents. There are various different stages where leakage might happen, and this one is not ideal for those. But on top of all of those you might want to also allow low communication overhead, less computation, document sharing, Boolean queries. 
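Here's a toy version of that pair-hashing scheme, in the spirit of what was just described (plain SHA-256 stands in for whatever one-way function a real scheme would use, and the index contents are made up):

```python
import hashlib

def pair_hash(word, doc_id):
    """One-way hash over a (word, document ID) pair."""
    return hashlib.sha256(f"{word}|{doc_id}".encode()).hexdigest()

# Client: turn the index into an unassociated set of (word, doc) pair hashes.
index = {"dog": [1, 3], "bridge": [2]}
server_set = {pair_hash(w, d) for w, ids in index.items() for d in ids}

# Query for "dog": hash it against every possible document ID, send those
# over, and let the server report which ones match its stored set.
all_doc_ids = [1, 2, 3]
query = {pair_hash("dog", d): d for d in all_doc_ids}
matches = sorted(query[h] for h in server_set & query.keys())
print(matches)  # -> [1, 3]
```

Note how the query cost scales with the total number of document IDs, which is exactly the "you need to do it for every single document" overhead mentioned above.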
That was just a single-keyword one, but we're not gonna go into how to do each and every one of these, because as it turns out, with this leakage level that we were trying to achieve of making a snapshot useless, you can get as fancy as you want inside of that, but it turns out that the things that we were allowing to leak are really bad. That document access pattern that I mentioned before, and also document access volume, which isn't necessarily which documents are in a query but just how many total are given in response to a single query. With just those two things, just those two patterns, a paper from about last winter, six months ago-ish, was able to completely reduce it to something as weak as just a tokenized database. So it's almost like saying that we thought that allowing this pattern of leakage wouldn't be so bad, but actually it's really atrocious. Oh, and they do it in polynomial time, which in crypto speak means fast, or reasonable. But it gets even worse than that, because if you get away from the theoretical model of what you're working with, it's pretty much the case that snapshots in general don't exist outside of theory, because you have to think about where you're actually keeping this information, which is often in a database. And these databases aren't built from scratch with the goal of keeping to this model of information. They have other information as well. For example, there's often transaction logs. Sometimes they literally just cache the queries, so you have this information over time available to you even if you think that you only have a snapshot of the machine at a particular time. Okay, so this leakage level of making snapshots useless, as it turns out, isn't really a thing that exists in reality. So okay, if that doesn't exist in reality, what can we do instead? The next level here is what we call oblivious RAM. Okay, so what will we leak at this stage? Here, our goal is to have no leakage over time. 
This means you can do multiple queries and you still won't leak more information with each query. Okay, so in an oblivious RAM setup, what you might do is that the server has a bunch of blocks, and the client will say, I want a particular block, but won't request that specific block precisely. The client instead will request a series of blocks, and therefore the server doesn't actually know which particular block the client wants. So if the client actually wants block two, they will ask for zero, two, 59 and 86, and the server will happily respond with all of those blocks. And this is what we call an oblivious RAM. The name is a little confusing because it originally came from an implementation that was meant for actual RAM, but oblivious RAM, or ORAM, in general refers to the practice of hiding access patterns, whether in actual memory or in any sort of general block store. Okay, so you wanted to return more than one document, but it is crucially important how you actually go about implementing one of these. Do not try to just naively build your own ORAM; if you do it in any sort of naive way, it's probably not gonna work. To get a bit of a sense of what implementations actually look like, this one over here was in a paper entitled Path ORAM: An Extremely Simple Oblivious RAM Protocol, which really tells you something about the other ORAM protocols. The basic idea of this is that on the server you have your blocks in a tree, which isn't actually implemented as a tree, but the client keeps a mapping from each block to where it might be in the tree, and then all the blocks along a path of the tree are returned. This means that you need to do some computation client side and also have roughly log n storage for a cache, but it is possible to implement these things, it's just hard. Anyway, okay, so even within ORAM there's different ways that you can construct a search index over it. 
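Just to make the block-two example concrete, here is a toy illustration of the request-padding idea. This is emphatically not a secure ORAM (per the warning above: a real construction like Path ORAM also reshuffles and re-encrypts blocks between accesses so that repeated queries stay hidden); it only shows what the server's view of a single padded request looks like:

```python
import secrets

def oblivious_fetch(server_blocks, wanted, extra=3):
    """Request decoy blocks alongside the one we want, so the server
    can't tell which of the requested blocks was the real target."""
    ids = {wanted}
    while len(ids) < extra + 1:
        ids.add(secrets.randbelow(len(server_blocks)))
    request = sorted(ids)          # e.g. [0, 2, 59, 86] when we want block 2
    returned = {i: server_blocks[i] for i in request}  # the server's view
    return returned[wanted]

blocks = [f"block-{i}" for i in range(100)]
print(oblivious_fetch(blocks, wanted=2))  # -> block-2
```

The client pays for this hiding in bandwidth: every logical read costs several physical block transfers, which is where ORAM's overhead comes from.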
What you could do at the kind of most basic level here is you can just put it all directly onto the server, so you can have searchable encryption on top of ORAM entirely. So if you want to search for a keyword, then locally you have pretty much the entire index and you just ask for the documents, and that takes a lot of overhead. Otherwise you might put the index into an ORAM on the server, but have it separate from your document store, and then you just do multiple rounds of communication, and since the index itself is inside of an ORAM, the server doesn't get much information. A slightly more efficient way is to have them in two separate ORAMs, which is on the right side over there. And then you could just put your index into an ORAM and not have an oblivious block store, or you can just have everything in the clear. So these are kind of decreasing levels of security but increasing levels of efficiency. But if you analyze it from this perspective, what you can get from this is that at each leakage class, you could call it, you're still going to be leaking some information. So at the very left, leakage class zero, zero leakage, is the SpiderOak model: you do all of your computation locally. But if you reveal even a little bit of information, you're still getting some volume of communication that gives information to an attacker who's watching it go by. And that document volume metric that I mentioned before, if you can estimate that from the communication volume, you've introduced a form of leakage. So even ORAM still has some sorts of leakage. Oh, and also in leakage classes one and two, it ends up being more processing and communication overhead than just re-downloading the entire database every time. So you get a performance hit without, in practice, getting that leakage protection that you wanted. Okay, ORAM, not perfect either. What about fully homomorphic encryption? Everyone always mentions that. 
Why don't we just do that? So there was a paper from April, the headline quote of which I really like, which tells us a bit about the current state of the implementation of fully homomorphic encryption, which is that the implementation they found in this paper was two to the 80 times more efficient than existing candidates. And this is just an important milestone, it's not all the way there. We're getting two to the 80 times faster and we're still not there yet. It's not ready. Well, okay. So it turns out that the crypto is not necessarily going to be our best bet. What else can we do? We can turn to hardware. So what is hardware going to look like? So the fancy thing that is all the excitement and all the rage these days is called Intel SGX, Software Guard Extensions. And here's what the model for that looks like. So you have your different layers of your machine. At the very bottom you have your hardware layer. On top of that you have a hypervisor, operating system, application, all the regular rings of trust. But what SGX does differently is that inside of your application, in your application space, you have a protected enclave which is tied directly to the hardware, almost like a reverse sandbox. So you can have an attacker in the application, you can have an attacker get out of the application layer into the operating system, you can even have a malicious hypervisor, and the enclave will still be protected because of the hardware connection. The way that this is implemented is that there is something that they call an enclave page cache, and that's access-protected by the hardware, and the different enclave pages are mapped to different areas of memory, and they are encrypted when they're written out to regular memory. And when it's written out to the rest of memory, no one else can read it and access it. There's a lot more details there. If you look into it, it's not exactly the most free ecosystem for production applications. 
Last I checked, Intel still controls what you're able to even put inside of these if you want the proper hardware protections and signature. So there are other problems with it as well. But just from a security perspective, let's talk about some of the properties that this has. One thing that's nice about it is that it does address the measurement and attestation things that I mentioned before, so you can provably say what the application is doing, or rather, more specifically, that the application that you think is running is the one that's computing on the data. So you know it's not just, like, printing everything to the screen when you want to be protecting the data. Well, okay, this sounds like it could be pretty cool. We could just stick everything inside an enclave and then we don't need to worry about the rest of the machine. Does it work? Can we just do that? Well, there are some things that we can nicely call challenges to working with it. The first challenge is really more of an implementation one than a security one, which is space versus efficiency. So in SGX, like I mentioned, you've got the encrypted page cache and everything is written in and written out, but you don't have a lot of space, you don't have a lot of memory, and if you're encrypting it every time it goes in and out, that takes a bit of a performance overhead hit. Not too terrible, but you do need to be conscientious of what you're putting into the enclave. You can't just stick your entire application in there, or your entire operating system, and expect that it's going to work. It has constraints that you need to think about when you're doing it. But that's not so much of a problem. Let's talk about Iago attacks. These Iago attacks are named in reference to a Shakespeare play that I have never read, but I'm told it has something to do with lying. 
So in an Iago attack, the enclave, which is an application, might call down to an operating system, a function call such as, for example, asking for some entropy for a random number, and if the operating system is malicious, it might lie, it might return a number that is not random at all. So that's an Iago attack. Well, what can we do about that? Luckily there's a system called Haven that pretty much puts an entire operating system inside of the enclave, and it uses what it calls a shield module to shield the application from a malicious operating system by making sure that the code inside there is trusted. So there's a layer, a translation layer, that goes on, and again you have this performance hit, but it is possible to not have to rely on operating system functions, which is, like I mentioned before, a pretty tricky thing to actually implement, but it's been done. And a generic application that conforms to a particular API is able to be put in there at this point, which is pretty cool. It means you don't have to build a special-purpose program just to be able to put the most important information in an enclave, which is what you would otherwise have to do. And there's actually a bit of a trend of just sticking things inside SGX and figuring out how to do that. So if you're looking for a particular thing that you want to get going with and stick inside there, people have put Docker containers into SGX. They have put Hadoop in there. There's even a startup now that's putting key management into SGX, which may be a bit of a marketing thing, but you can put whatever you want in there these days, which is great. So can you just do that, though? Well, let's get into a bit more complex attacks. Access pattern attacks are possibly one of the major drawbacks of current implementations of SGX, and as it turns out, the implementers of SGX think that access pattern attacks are outside of their threat model, but as we'll see, it's a side channel that is seriously important. 
So what this means is that when you need to write things in and out of memory, you can end up seeing which pages of memory are accessed more or less, and figure out information about the program that's running because of that. So for example, this is, I love this image. Just using access patterns, you can figure out some outlines of what the original image was without actually seeing the content of the data. The way that this one works is that you have heavy computation being done over the image, and there's some performance optimizations where, for example, when a particular pixel has a lot of zeros, instead of actually doing the computation, you can maybe shortcut a bit, but this leads to different numbers of page faults because you're doing more or less computation, and when you look at this over time, you can extract things like this. But it's not just for pictures. A popular implementation of AES is known to do a similar thing. It uses a lookup table, and then based on what indices you're looking at, which is based on your key, you can use access patterns to recover the secret key. So if you're going to be using AES inside of something like SGX, you would need to make sure that all of your computations don't have these bugs, but it's a pretty insidious form of bug. So it's kind of hard to do that as well. But what can we do about that? Well, there's a couple options here. One of them is something called GhostRider, which pretty much sticks the ORAM that I mentioned before into the secure hardware. So you get both the access pattern protection of ORAM and the hardware protection, and this is pretty cool. It used programming language theory to figure out what pieces need to be in the ORAM, versus in the enclave at all and encrypted, versus what pieces can be outside, and it does this analysis automatically, because otherwise you have giant performance hits. And another system that's been proposed is called Sanctum, which isn't actually SGX. 
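To make the lookup-table point concrete, here's a toy (non-AES) cipher step that leaks its key purely through which table index it touches. The 16-entry table and nibble-sized values are made up for illustration; the `trace` list stands in for what a page-fault or cache observer would see:

```python
# A hypothetical 16-entry substitution table (not the real AES S-box).
SBOX = [(7 * i + 3) % 16 for i in range(16)]

def toy_round(secret_key, plaintext, trace):
    """One table lookup; `trace` records which index was accessed,
    which is exactly the access pattern a side-channel observer sees."""
    idx = secret_key ^ plaintext
    trace.append(idx)
    return SBOX[idx]

trace = []
toy_round(secret_key=0b1010, plaintext=0b0011, trace=trace)

# The attacker knows the plaintext, so observing the accessed index
# reveals the key directly:
recovered = trace[0] ^ 0b0011
print(bin(recovered))  # -> 0b1010
```

The table's contents never leak, only which slot was read, and that's already enough. This is why constant-time, access-pattern-free implementations matter inside an enclave.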
So the problem with SGX is that the cache is shared across different applications. There's no specific cache for a particular application, meaning that if you're an application on the same system as an SGX enclave, by using your own access to the cache, you can figure out what's being evicted when, and by knowing that you'll get information about what the SGX application is doing. So what Sanctum does is it sticks different enclaves into different caches, and then you don't have to worry about that at all, which is nice, but it doesn't look at this point like that's something that's going to be implemented in official SGX, but if someone wants to go manufacture a whole bunch of chips, it's a cool layout that certainly has promise. And a fun one that's specifically for Intel chips is called T-SGX, which hides the information from the OS itself. So you can have a cache attack by another application, but if you're the OS, you have a bit more power, in that you can try to cause page faults yourself and you can look at the page faults that are happening. What T-SGX does is it uses another set of extensions, called transactional synchronization extensions, to hide this information from the OS itself. It pretty much throws all sorts of exceptions into a single page, and then the OS can't tell what's going on with the different ones. It bypasses the OS's own way of handling it to hide that, while still being in the regular system. Okay, so there's different attacks that can go on, but there's also proposals there to mitigate the different attacks, which is pretty promising so far. And another thing is that this problem comes from enclaves being small. So you might need to have multiple enclaves talking to each other, and their communication patterns can also reveal information. So you need to be careful about that as well. 
There was a cool system called Opaque that was using MapReduce clusters, looking at the communications between those and seeing what data that leaks, and it found a way to hide that as well, given the particular size constraints. Okay, so we have all of these problems that might come up, but it seems like there's still things that could potentially be done to fix them. What I haven't said is that there's been any theoretical proof that it is literally impossible to ever protect anything using hardware, unlike where we are with cryptography. This is still a very active area. Most of these papers that I've mentioned have come out in the past one or two years. So there's still a lot of potential there, but of course there's a lot of potential to break things. We're very much still in that cat-and-mouse game of attacker and defender at the hardware stage, but it's still pretty promising. And then, just as a reminder, security is imperfect. You can't just slap one of these onto your system and then everything is going to be great. Obviously local attackers, if they are on your machine, they can see the data that's on your side, and get your secret key as well. And then authentication, classic. If they can just pretend to be you, then you're out of luck. Phishing will get you every time. And then hardware attackers. If someone has physical access to the hardware and they've got an electron microscope, you're not in a great place. Now, it's not quite as bad as that. Like, the people who design these things do take this into account, and there are protections that are put into place, which pretty much look like hiding things: oh, you need to encrypt this here, and this is obfuscated by doing it there. But all the information is literally there. It needs to be, to be able to do the computation. 
You could make it so that reading the information makes it so that you can never use the machine again, but if you're willing to do that, the information is still literally there, so depending on your threat model, obviously it's never going to be perfect. The other thing is that we don't know that there's not a hardware backdoor. The specs for these things aren't open so that people can analyze them; they can get a hold of them and try to reverse engineer them, but we can't say for sure yet, we don't know for an absolute fact, that there's no hardware backdoor where it's just shipping off everything to the NSA. So we don't know that. It'd be nice to know that, but it's probably not going to happen. Okay, conclusions. What do we have? So pretty much, from the crypto side of things, if you're going to allow any sort of leakage at all, it usually is just going to end up reducing to stopping accidents, which is honestly in and of itself a very helpful thing, because then you can track the accesses. It stops things like LOVEINT, where people are looking at their partners' data. Things where at least you can tell afterwards that they've done something wrong are great to have in place, but if you're looking to completely protect the information, you're not actually, in effect, over time going to get much better than tokenization. No-leakage crypto is still impractical, but maybe at the next one of these in four years I'll have something different to say about that. And hardware has potential. It's not there yet. It's exciting. We should go find out a whole lot more about it, keep attacking it, but it's not completely discounted and out of the way yet. And as a reminder, if you like this talk, the Electronic Frontier Foundation is a non-profit, and I'm sure that many of you are members, but if you're not, come talk to me and we'll figure something out. And thank you very much for listening. Thank you. I think you did an amazing job, and we still have some time for some Q&A. Great. 
There are two mics, so if you have a question for Erika, please step to the mic and ask your question there for the sake of the recording. So if there are any questions, I'd be happy to moderate them for you. Thanks. Hello. Hello. Hi. A little bit closer please. Okay. I unfortunately missed the first 10 minutes of your talk, but my question is: if I would like to prove what I'm running on a server to a third party, and maybe there's not a whole solution for this yet, what parts are there, what parts are not there yet, and when will they be there? If that is the main thing that you want to be doing, right now, today, you are in luck. They've got that there. So the way that SGX works is that if you are running any code at all, it needs to be measured, which means you take a hash of the code, and that hash needs to come with a signature chain. There are other modules that come built in with the system, and one of those has a connection over to Intel, and Intel has a PKI that they use to sign that chain of trust down to your machine. Other Intel machines also have that same root of trust, so there's cross-signing that goes on in communication between the different enclaves that talk to each other, and there are specific enclaves for talking from a server to your local machine to send that proof along to you. So just using SGX at all gives the proof that you're running a particular piece of code. To show what the code is doing, you would probably want to make the code open source, publish it, and show that that code leads to that measurement, but once you have that, the rest of the steps are all in place for doing that proof automatically. Thank you. Yes, sure, go ahead. I'm sorry. A little bit closer to the mic, probably. Thanks. My question is: what are your insights with respect to the use of erasure coding with multiple cloud storage providers to mitigate? Sorry, I'm having trouble hearing you.
Can you maybe get closer to the mic? Yes. I was wondering what your insights are with respect to the use of erasure coding and multiple cloud storage providers to mitigate the problems with leakage. Sorry, what type of encoding? Erasure coding. A type of crypto RAID, basically. You use different cloud storage providers to store different encrypted parts of your data in a RAID-like manner. Okay, so if I'm hearing you correctly, you break up the data and use different passwords to store different parts of it in different places? Basically. Okay, that's a slightly separate problem. If you ever want to do any computation on top of the data, you still have to be able to do that, so you would need this in conjunction with one of the methods that I mentioned. You don't think it mitigates the leakage problem, because you divide the request across different parties? Oh, you're talking about having actual different parties do the computation? Oh yeah, if you allow mutually distrusting third parties, then you can do a whole class of different things there. Just breaking the data up and having different encryptions on each part is maybe not even the easiest approach at that point. If you allow mutually distrusting third parties, you can have them do different sorts of computation, which gets slightly better results. But I don't know as much about that, because I think it's not the most practical for large-scale paradigm shifts. Then you have to solve political problems, which are harder than math problems. That's roughly my insight there. Any more questions? Then I have a last question for you, if you don't mind. Will you be staying longer at the camp? Yeah, at least for today. If people have other questions, where can they find you? I'll be around here for about the next hour, and after that I'm over at the Explody tent. Okay, thank you so much. I think you did an amazing job. Thank you so much.
Thank you. Give her a round of applause.
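[Editor's note: the "break up the data across providers" idea from the Q&A can be sketched as follows. This is not erasure coding proper (a real code like Reed-Solomon tolerates missing shares); it is the simpler XOR secret-splitting variant, shown because it makes the privacy property concrete: each provider's share alone is uniformly random, and all shares are needed to reconstruct. All names are invented for the example, and, as the answer notes, this says nothing about computing on the data.]

```python
import secrets

def split(data: bytes, n: int) -> list:
    """Split data into n shares, one per storage provider.

    The first n-1 shares are pure randomness; the last is data XOR
    all of them. Any n-1 shares reveal nothing about the plaintext.
    """
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list) -> bytes:
    """Recover the data by XOR-ing every share back together."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

The trade-off matches the answer given in the Q&A: availability gets worse (lose one provider and the data is gone, unlike RAID-style erasure codes), and no provider can compute on its share, so this only solves storage, not search.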