Good evening. Welcome to DevCon. My name is Alon Mirenek and I work at Synopsys, where I manage R&D for the Seeker agents. But this is not what I want to discuss today. In fact, today I want to discuss my career before joining Synopsys, before working on security. For the first 18 or so years of my career, I did not work on security. I worked on what we call infrastructure. I built libraries for the quote-unquote application engineers to use. I handled databases, I handled storage. None of these things are security, of course, but security was always in the back of my mind. We always had to think about it. And the big problem was that we, or at least I, was ill-equipped to do so. So today, I want to take you on a journey through that part of my career. I want to share what I did not know about security back then, what I wish I had known about security back then, and try to understand why I didn't know it. Why was it so out there for me?

So I think that, like a lot of problems, like a lot of issues, this all starts with perception. Software engineers, at the risk of a gross generalization, often tend to have extreme opinions, at least when it comes to technology. And when we talk about security, this is no different. Non-security professionals usually hold one of two extreme perceptions of security, of themselves as developers and how they think about security. One extreme, of course, is the developer who knows everything. He or she is super smart, and no one can teach them anything, because they're so smart. And since they're so smart and already know everything, they of course think they know everything about security too, and have nothing left to learn. Which is unfortunate, because chances are they do not know everything about security. This is, of course, bad. The other extreme is the developer who is sure he or she knows nothing. Because security is too complicated. It's mythical. It's magical. You need to be a super hacker to understand security. And since this developer is sure that security is so complicated, and so mythical, and so out there, he or she is sure they do not know anything about security, and therefore they cannot know anything about security, because they have blocked themselves off from the opportunity to learn. Now, you may think this is better than the first example, but it is also a huge problem.

In today's world, we can ill afford developers who do not understand, or refuse to understand, security. In today's world, if we deploy to production twice a day, we are too slow, right? And when we move so fast, and when developers are becoming more and more empowered to own the entire cycle from design to implementation to deployment, what we call, and I hate this term, the DevOps evolution, we cannot have developers who say, I don't understand or I can't understand security. We need to strike a balance between these two extremes. We need to get to the point where developers understand security, at least at a basic level, feel empowered to make smart decisions about security, to own it. But on the other hand, to also know what they don't know, to be able to say, okay, wait a minute, I've gotten to the point where this requires expertise I do not have, or I have a concern I'm not sure about. Let's stop, raise a flag, I need to consult an expert. But we can't do that for every single thing. We do have to make some basic decisions and weigh some basic considerations ourselves. How do we do this? How do we start?
Well, first of all, let's start at the very beginning and examine our perception of reality. Let's do a reality check. A lot of us, a lot of engineers, are essentially, in our heart of hearts, builders. We look at the world, we see how imperfect it is, and we ask ourselves, how can I make it better? How can I build something that will solve a problem, that will make someone's life a little bit easier, a little bit faster, give them a tiny bit better experience? For a lot of us, definitely for me, this is what we do; it's ingrained in how we think. The problem, and psychologists call this the false consensus bias, operative word being false, is that when a notion is so ingrained in your sense of self and the way you perceive the world, you start thinking that everyone perceives the world like this. So we build systems and we ask, how will a reasonable person use it? How will a reasonable person benefit from our system? And we assume everyone thinks like that. This is false. There are bad people out there. There are people who have no intent to use our system reasonably. We call these people attackers, or hackers, if you must. These people will ask, how can an unreasonable person abuse the system? How can I use the system for my own gain, without any interest in its original intent, in what it's supposed to do? How can I exploit it? And we, as developers, need to design systems with this use case in mind. Not in the sense of making it exploitable; you definitely don't want a big red button in your application that says steal all my data. But we have to accept that there are people who would want to do this. And part of our job is not only to build a system that caters to the legitimate use cases, but one that defends against the illegitimate use cases.

How do we do this? Security, of course, is a vast world of knowledge. But there are a few really simple principles that we as developers should keep in mind. First and foremost, the cardinal rule: we do not trust user input. Anytime our system interacts with the outside world, any input it gets, be it an end user typing in some data to populate a form, a microservice we're interacting with, a third-party API, whatever, we do not trust outside input. Wherever we have a vector to introduce outside input into our system, we also have a vector of attack. Some evil person, some hacker, can take this vector and try to input something malicious, something that would not be used by the system the way it was designed to and would cause damage. These are all the famous injection attacks, SQL injection, cross-site scripting, which you may have heard of. So generally speaking, and this is, of course, a world of knowledge and I'm not going to go over 10 million different attacks on 15 million different ecosystems and discuss how we handle each and every use case, but as a general concept: first of all, we do not trust user input. If there's any place we can avoid input, we should. If we can't, and unfortunately most of us do not design systems that run on an air-gapped computer on a submarine guarded by an armed guard, most of us do need to interact with the outside world. Think about where and how your input is used. Think about the context. Is it written to a file? Is it saved to a database or used to query a database? Is it used to query LDAP? Am I parsing it? Am I writing it back out somewhere? Get to know your system, get to know the data flows, track them.
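To make the injection point a little more concrete, here is a minimal sketch in Java, my own illustration rather than anything from the talk or from a specific product, contrasting a query built by concatenating user input with a parameterized query that treats the input strictly as data:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Vulnerable: the user-supplied value becomes part of the SQL text itself,
    // so an input like  x' OR '1'='1  changes the meaning of the query.
    public static ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        String sql = "SELECT id, email FROM users WHERE username = '" + username + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safer: the value is bound as a parameter, so the database driver
    // always treats it as data, never as part of the statement.
    public static ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT id, email FROM users WHERE username = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```

The same idea carries over to the other contexts mentioned above: use the ecosystem's parameterized or escaping mechanism for whatever sink the data ends up in, rather than assembling strings by hand.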
And once you've done this, and this may sound like a cop-out, but it really isn't, read up a bit. Chances are, and I'm willing to take this bet with anyone in the room, your system is not that special. Chances are whatever you're doing with the data has been thought of before. A lot of systems talk to databases. A lot of systems save files. A lot of systems parse XML. Chances are your system isn't that unique. And if it is not that unique, chances are there is literature on common security vulnerabilities for that area and on how to avoid them. And when I say literature on how to avoid them, most often that means libraries. Best practices for how I can use my components, or what library I can use, to defend against these common vulnerabilities. These are often called sanitizers or validators. So get to know your ecosystem, get to know the security tools in it, and use them.

So now that we've thought about malicious data, let's take the next step and talk about malicious volume. As engineers, we have become really, really good at thinking about scale, thinking about how my system will cope with an increasing volume of input, of data, of usage, which is great. The problem is we often think about this only in positive terms. We say, yay, great, my system is a great success, there are more people than I expected using it, I need to scale up. That's great. The flip side of this is the malicious use case. Quite often, one of the easiest ways to take a system down or to harm it is simply to overload it, be it with a huge volume of requests or with requests of a huge size. These are commonly known as denial-of-service or DoS attacks, where the idea is that I don't try to exploit some use case nobody thought about, like SQL injection. I just hammer the system with too much data, too many requests, take up all the resources, and that way the system can't serve the legitimate users who actually want to use it. The good news is that scale is something we usually know how to handle. This is where rate limits and size limits and throttling come into play. And again, the same principles as the previous slide apply. Get to know your system. Think about the inputs, think about what a legitimate input is, and then cap anything else.

True story, I saw this a couple of months ago at a customer site. They had a login screen with a username and password. This gets serialized to JSON and sent to the server, which parses it, decides whether the user is authorized or not, and returns a session cookie. Sounds great. Now, if a username is eight characters and a password is, let's say, 16 characters because I'm security conscious, together that's 24, plus some overhead for JSON, some buffer, let's say 40 characters. Is there any legitimate use case for sending a megabyte of input to this endpoint? Of course not. But if it's not capped, and in this case it wasn't at this customer, you can definitely do this. The backend will attempt to parse a megabyte of JSON, which takes quite a bit of memory and CPU for something designed to handle a 40-character payload. And if you send enough of these, you just bring the system to a screeching halt. The right design would be to simply check the length and deny those requests without even parsing them. This is just one example, but get to know your systems, get to know what legitimate input looks like, maybe allow some buffer for extreme use cases, and cap or deny anything else. It's not the case that we can't know how our systems are used and that everything has to be accepted as legitimate.
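To illustrate that login example, here is a minimal sketch of the length check using the Jakarta Servlet API; the class name, endpoint, and exact byte limit are my own assumptions for illustration, not the customer's actual code:

```java
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

public class LoginServlet extends HttpServlet {

    // A generous upper bound for a username + password JSON payload.
    // The exact number is an assumption; the point is that it is small.
    private static final long MAX_LOGIN_PAYLOAD_BYTES = 1024;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Reject oversized requests before spending any memory or CPU on parsing.
        // (Chunked requests report -1 here, so a full implementation would also
        // cap how many bytes it is willing to read from the body stream.)
        if (req.getContentLengthLong() > MAX_LOGIN_PAYLOAD_BYTES) {
            resp.sendError(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE);
            return;
        }
        // ... read the (small) body, parse the JSON, authenticate, return a session cookie ...
    }
}
```

The design choice is the one from the story: the cheap check runs first, so an oversized request is rejected for the cost of a comparison rather than the cost of parsing a megabyte of JSON.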
You need to know your systems. The third and last point I want to touch on is that even if you really know your code very well, your code is probably just the tip of the iceberg. And I probably don't need to say this to this audience here at DevCon: we all use third parties and open source. We all use a lot of it. If you look at a typical application, chances are there's a huge pile of frameworks and open source libraries that you're using to do all the boring stuff like logging and database handling and web scaffolding, et cetera, and your logic is a really small part of this huge application. For an attacker, that distinction is inconsequential. All these third parties and open source components and frameworks are there, they are running, they can be attacked. So you cannot say, I've audited my code, I know it perfectly, there aren't any problems here. You have to keep track of everything you use. Have a manifest of all the open source components, the frameworks, the libraries; keep track of them, keep track of their disclosed vulnerabilities, and have a plan in place to upgrade, whether periodically or whenever something is disclosed, whatever works for you, but have a plan and be able to execute it without it becoming a crisis.

Now, I'm getting near the end of my time, so let's wrap it up. Having said all of this, the one thing I really want you to take away from this entire talk is that although we often sensationalize and mystify and glorify security in mainstream media, even in technological media, we need to make security boring. Boring is good. We need to make this a non-issue. The good news is developers are really good at taking complicated and amazing and exciting things and making them boring. The way we do this is with tooling, especially for things we are not experts on. For instance, I'm not an expert at writing bug-free code, so I have tools to help me do this, both static analysis tools and tests that run automatically. We need to think about security in the same way. Let's take this so-called mystified and magical realm and make it boring. Let's have tools in place, and I'm not advocating for anything specific, do your homework and find the tools that work for you. Have them in your CI. Have them as part of your process and make security boring again. Make it a boring part of your software development life cycle so it's not sensational. Because if you fail to do this, I guarantee it will be sensational, in the sense that your company will sooner or later be featured on the front page of the New York Times because of some crazy breach. With that, I'm really out of time. If there are any questions, I'll happily take them after this recording is over. Until then, I just want to say thank you for listening. Thank you for your time. Have a great conference.