Hello, and welcome to Security Patterns for Microservice Architectures. My name is Matt Raible, and I grew up in the backwoods of Montana: no electricity, no running water for 16 years. We had an outhouse, we hauled water from the stream, and we walked a mile and a half to the bus stop each way, but in the winter we got to ski, and in reality it was pretty fun. I'm a web developer and Java Champion, and I work at Okta as a developer advocate. I live in Denver, Colorado with my lovely wife Trish and my two awesome kids, Abby and Jack. I also have a middle child. His name is Hefe, short for Hefeweizen. He's German: he's got a Porsche 911 engine in him and a 915 transmission. I bought him off eBay in 2004, and it took 12 years to make him look like this. I have an expensive obsession with Volkswagens, as you can see; I also have a Syncro Westy there on the left, named Stout. If you have a similar problem, I'd love to compare stories, either in the comments or hit me up on Twitter with a direct message.

So today I'll be talking about microservices and security. If you've attended a lot of Java conferences, you'd think that everyone uses microservices. It's a trendy topic, and developers everywhere are interested in learning more about it, for good reason too: microservice architectures are a technique for delivering code faster. Chris Richardson, a friend and expert on microservices, suggests a helpful guideline in a recent blog post: if you're developing a large, complex application and you need to deliver it rapidly and reliably, then a microservice architecture is often a good choice. Chris also runs microservices.io, which lists numerous microservices patterns, and I noticed that "Access token" is the only item under Security. And I was like, whoa, that's a little limited, I think. The information on security patterns for microservices should be much broader. So, Matt to the rescue.
Today I'll describe 11 patterns to secure your microservices. Not everyone needs all of these patterns, just like in Chris's listing, but I hope you find some of them useful. I did write a blog post about this, so I'll give you a reference to that at the end if you want to actually read instead of watch, and I'll also list it in the comments or the description below. To give you a brief overview really quick, we're going to talk about: being secure by design, scanning your dependencies, using HTTPS everywhere, using access and identity tokens, encrypting and protecting secrets, verifying security in your delivery pipeline, slowing down attackers, using Docker's rootless mode, using time-based security, scanning Docker and Kubernetes configurations, and knowing your cloud and cluster security.

All right, so number one: be secure by design. Secure code is the best code. Secure by design means you actually bake security into your architecture and your software design from the very beginning. And if you accept user input, make sure you sanitize that data and remove malicious characters. Well, I asked a friend of mine, Rob Winch, the Spring Security lead and widely considered a security expert, about user input and sanitizing data. What he said was that it makes sense to design your code to be secure; however, removing malicious characters is tricky at best, and a lot of it is really about the context your characters are going into. JavaScript has cross-site scripting; SQL has SQL injection. But if you're worried about JavaScript and you remove apostrophes from someone's name, then obviously people with an apostrophe in their last name are going to be upset. So it's very important to know the context where these characters are being used. What he recommends is that the characters are properly encoded for that context, rather than trying to limit the characters or strip them out.
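To make Rob's advice concrete, here's a minimal, hand-rolled sketch of encoding for an HTML context. In practice you'd use something like the OWASP Java Encoder; the class and method names here are my own invention:

```java
// Sketch of context-aware output encoding: instead of stripping characters
// like apostrophes on input, encode them for the output context (here, HTML).
public class HtmlEncoder {
    public static String encodeForHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<': sb.append("&lt;"); break;
                case '>': sb.append("&gt;"); break;
                case '&': sb.append("&amp;"); break;
                case '"': sb.append("&quot;"); break;
                case '\'': sb.append("&#39;"); break; // O'Brien survives, safely encoded
                default: sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

The point is that the name "O'Brien" stays intact for SQL or JSON contexts, and only gets encoded at the moment it's rendered into HTML.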
As engineers, we're taught early about the importance of creating well-designed software and architectures. You study it, you read about it, and you take pride in it; when you've developed a system that works well and has pluggability and extensibility, you're really proud of it. So design is a natural part of building software, and well-known security threats should drive design decisions in security architecture. Reusable techniques and patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, and accountability, even while the system is under attack. You might be asking, what about OWASP? If you haven't heard of OWASP, it's the Open Web Application Security Project, a nonprofit foundation that works to improve the security of the web. It's one of the most widely used resources for developers and technologists securing the web, and it provides and encourages tools, meetups, education, and training. It's a great resource, but there's a great quote from Johnny Xmas on an InfoQ podcast episode called "Johnny Xmas on Web Security and the Anatomy of a Hack." It's great, and I really like what he says. He basically says that the OWASP Top 10 hasn't really changed all that much in the last 10 years, and despite being the number one approach to educating defensive engineers on how to protect their apps, SQL injection is still the number one problem. So even though we tell everyone, all the time, to watch out for SQL injection, they still make the same mistakes. This is why security precautions need to be baked into your architecture and your design; it shouldn't be an afterthought. So I like the example from the Secure by Design book, written by Dan Bergh Johnsson, Daniel Deogun, and Daniel Sawano. I apologize to any of those guys if I said their names wrong.
They show how you might develop a basic User object that has a username and displays it on a web page, for instance. If you accept anything for the value of the username, you're open to cross-site scripting: if someone puts a script tag in there with some malicious JavaScript, then obviously displaying that on a web page without escaping the HTML is a bad thing. You can fix this with input validation, like the following: in the constructor, you first verify the username isn't null, and then you have a validateForXSS method that basically strips out any malicious characters, or maybe fails if there are any in there. However, this code is still problematic. It requires developers to be thinking about security vulnerabilities; developers have to be security experts and know to call the validateForXSS method; and it assumes that any person writing the code can think of every potential weakness that might occur now or in the future. A better way is to design a Username class, sometimes called a domain primitive, that encapsulates all of the security concerns. It has a minimum length and a maximum length for validation, plus a regex of valid characters, and in the constructor it makes sure all those things are true before it sets the value. Then you can use that in your User class just as a regular constructor parameter. That's obviously much easier for developers to use, and your design makes it easier to write secure code. Writing and shipping secure code is going to become more and more important, especially as we put more software into robots and embedded devices, because sometimes those don't have an internet connection and can't update themselves. That's why I think it's very important to design code to be secure by default.

Number two: scan your dependencies.
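Here's a rough sketch of what that Username domain primitive might look like. The specific lengths and regex below are my own illustrative choices, not the book's exact code:

```java
import java.util.regex.Pattern;

// A "domain primitive": the Username type itself enforces the invariants,
// so any code holding a Username knows the value has already been validated.
public final class Username {
    private static final int MIN_LENGTH = 4;
    private static final int MAX_LENGTH = 40;
    private static final Pattern VALID = Pattern.compile("[A-Za-z0-9_.@-]+");

    private final String value;

    public Username(String value) {
        if (value == null) throw new IllegalArgumentException("username must not be null");
        if (value.length() < MIN_LENGTH || value.length() > MAX_LENGTH)
            throw new IllegalArgumentException("username length must be 4-40 characters");
        if (!VALID.matcher(value).matches())
            throw new IllegalArgumentException("username contains invalid characters");
        this.value = value;
    }

    public String value() { return value; }
}

// The User class can now accept a Username instead of a raw String.
class User {
    private final Username username;
    User(Username username) { this.username = username; }
    Username username() { return username; }
}
```

With this design, `new User(new Username("<script>..."))` can't even be constructed, so the security check can't be forgotten.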
Third-party libraries typically make up maybe 80% of an application's code. Many of the libraries we use to develop software have dependencies on other libraries, and those might have dependencies on still other libraries. Those are called transitive dependencies, and they can lead to a large chain of dependencies, some of which might have security vulnerabilities. You can run a scanning program on your source code repository to detect dependencies that are out of date or have known vulnerabilities, and you can use automated software to create pull requests against your repository so you can fix those pretty easily. Make sure you scan for vulnerabilities not only on your master branch, but also in your deployment pipeline as you're going to production. Another thing I like to do is run scans on released versions of code: a lot of teams tag releases, so run it on those tags too, because something could have become vulnerable while you're in production. And for any new code contributions, have some sort of automated system that verifies pull requests introduce no vulnerable dependencies. Rob Winch recommends watching "The (Application) Patching Manifesto," a talk given by Jeremy Long at a conference, I think LocoMocoSec. In that video he cites a Snyk survey: only 25% of projects report security issues, many only mention fixes in release notes, and only about 10% actually report vulnerabilities as CVEs. So watch that video (there's a link at the bottom), and it really explains why you should always use tools to update your dependencies, because doing it manually just doesn't happen that much, and people aren't reporting those vulnerabilities a whole lot.
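Real scanners like OWASP Dependency-Check or Snyk resolve your full transitive dependency graph and match it against vulnerability databases like the NVD. The core idea can be shown with a toy sketch; the "database" and coordinates below are entirely made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy illustration of dependency scanning: compare declared dependency
// versions against a known-vulnerable list. Real tools resolve the whole
// transitive graph and query curated vulnerability databases.
public class ToyDependencyScanner {
    // Hypothetical vulnerable versions, keyed by "group:artifact".
    static final Map<String, String> KNOWN_VULNERABLE = Map.of(
        "com.example:json-parser", "1.2.3",
        "com.example:http-utils", "0.9.0"
    );

    public static List<String> scan(Map<String, String> declared) {
        List<String> findings = new ArrayList<>();
        declared.forEach((coord, version) -> {
            if (version.equals(KNOWN_VULNERABLE.get(coord))) {
                findings.add(coord + ":" + version + " has a known vulnerability");
            }
        });
        return findings;
    }
}
```

Scanning `Map.of("com.example:json-parser", "1.2.3")` would report one finding, while a patched version reports none; the real tools automate exactly this comparison across thousands of CVEs.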
If you're a GitHub user, you can use Dependabot to automate all of this via pull requests, and GitHub also has a feature called security alerts that you can turn on for your projects to see when there are vulnerabilities. Here's what it looks like when someone reports a vulnerability and Dependabot creates a pull request: you can see this update includes security fixes, it's got the vulnerabilities listed, the release notes, and the commits, and then of course you should have CI running on it, reporting whether all checks pass. Dependabot checks for updates, pulls down your dependency files, and looks for outdated or insecure requirements. It opens pull requests just like this one, and then you can close it and say no, I'll do the upgrade myself, or merge it in if it looks good. You can also use more full-featured solutions like Snyk or JFrog Xray. I've heard Snyk pronounced "sneak" and "snake," but I think it's "sneak," so I'm going to stick with that.

Number three: use HTTPS everywhere, even for static sites. If you have an HTTP connection, switch it to HTTPS. Make sure all aspects of your workflow, from build files (Maven, Gradle, and Node repositories) to the XSDs in your XML files, refer to HTTPS URIs. The official name for the protocol underneath HTTPS is TLS. You might have heard of SSL in the past, but TLS, Transport Layer Security, is the better name to use nowadays, and it's designed to ensure privacy, confidentiality, and integrity between computer applications. "How HTTPS Works" is a lovely site with a comic if you want to learn more about how it works. It's great: it has this guy who asks why we need HTTPS, an elephant, and a cat that explains privacy, integrity, and identification, and then you've got the certificate cat, the computer, and the browser bird.
They go on to show how a message is not encrypted when using HTTP, and how a crab can be a man in the middle, or a crab in the middle, and potentially change the message and make it evil. But HTTPS is easy. Troy Hunt has a really nice site called httpsiseasy.com with four short videos, five minutes each, that show dead simple how to set up HTTPS on your site. I did mention that HTTPS should be used for static sites too, and people often laughed at that, and they continue to laugh at it. I love this tweet where someone basically says to Troy Hunt: sure, HTTPS is easy, but we don't need it. And Troy says, that's perfect, I've been looking for a volunteer. Then he publishes a blog post and a video where he goes through and hacks that static site. And surprisingly, the real problem is where you're browsing from. He talks about how you might be in a hotel room trying to access an HTTP site, and the hotel will actually inject code into the page; it might be JavaScript for an ad or something similar. If you're on an airport network, same thing: it might inject content into the page. If you're on your home ISP, similar thing. So in the static-site case it's there to protect you more than the website. Try to use HTTPS on static sites to protect your users. To use HTTPS, or TLS, you'll need a certificate. It's a driver's-license sort of thing that serves two functions: it grants permission to use encrypted communication via PKI, public key infrastructure, and it authenticates the identity of the certificate holder. You can get free certificates from Let's Encrypt, and you can use an API to automate renewing them. And there's a great quote, and really a background on Let's Encrypt, from Sergio De Simone in a recent InfoQ article. He says Let's Encrypt launched on April 12, 2016, so not that long ago, less than four years.
It somehow transformed the internet by making a costly and lengthy process, getting an X.509 certificate to serve HTTPS, into a straightforward, free, and widely available service. Recently, the organization announced it had issued one billion certificates since its founding, and it's estimated that Let's Encrypt has doubled the internet's percentage of secure websites. So yes, Let's Encrypt, you rock. Let's Encrypt recommends Certbot to obtain and renew your certificates. It's a free, open source software tool maintained by the Electronic Frontier Foundation. You choose your web server and your system, and it provides the instructions. Here are the instructions for Ubuntu with Nginx: you SSH into the server, you add the Certbot PPA to your list of repositories, and you run the command to install Certbot on your machine. Then you choose how you'd like to run Certbot: either get and install certificates, or just get a certificate. On my website, raibledesigns.com, I actually use a cron job, so it renews every 90 days and I don't need to run Certbot again. You can test the automatic renewal and all that, and then confirm it worked. And if you want to check that you have a top-of-the-line installation, you can use SSL Labs for that. Now, you might ask: why do we need HTTPS inside our network? That is an excellent question. It's good to protect the data you transmit inside your network, because there may be threats inside your network. What I've heard is that a lot of hackers have access to gigabytes of leaked credentials. So if they're going to try to hack your network, typically they first just have to guess an email, and since most companies use first-name-dot-last-name or first-initial-plus-last-name, it's pretty easy to guess someone's username. Then they can basically grab all this information from the dark web and attack your network.
And phishing, obviously, is very effective. So effective that a lot of companies won't pay a security firm to audit and test for phishing, because they know it's going to work. In both cases, phishing or guessing someone's credentials, the attacker could gain access to your network, and then, holy cow, they're inside. That's why you want to protect your data inside your network. Since we're talking about REST APIs and microservices, you might wonder a little about GraphQL. GraphQL is a very popular alternative to REST APIs where the client says what it needs and the server gives back only that. The good news is GraphQL runs over HTTP, so there's nothing really new you need to do there. The biggest thing is to keep your GraphQL server up to date, because GraphQL relies on POST requests for everything, which means your server is responsible for sanitizing and validating that input. So always make sure your server software and your client GraphQL software are up to date. And here's an example: Apollo is a platform for building a data graph, and it has client implementations for React and Angular. If you'd like to connect to a GraphQL server with OAuth 2.0 and React, you just need to pass an Authorization header. Right here you can see we're calling operation.setContext and setting that Authorization header with a Bearer token. If you were to make a REST call in a React app, it would look very similar: you would still set an Authorization header and go from there. There's also RSocket, in case you aren't familiar with it. It's a next-generation, reactive, layer 5 application communication protocol designed for today's modern cloud native microservice applications. So what does that all mean?
It means it has reactive semantics built in, so it has the ability to communicate backpressure back to its clients and provide more reliable communications. It has implementations for Java, JavaScript, Go, .NET, C++, and Kotlin, and it enables interaction models like request-response (a stream of one), request-stream (a stream of many), fire-and-forget (no response), and channel (bi-directional streams). Netifi is one of the main contributors to RSocket, as well as Facebook, I believe; Netifi is basically a cloud native application platform built on RSocket. And Spring Security 5.3 has support for RSocket. So as far as I know, you can use Netifi or you can use Spring Security; I'm sure there are other solutions out there for securing RSocket, but I'm not aware of them. If you want to learn more about RSocket, I recommend "Getting Started With RSocket," a blog post series on the spring.io blog. It's a great way to see how to get up and running; it takes you about 15 minutes, and you can see it was just written in March.

Number four: use access and identity tokens. OAuth 2.0 has provided delegated authorization since 2012, and OpenID Connect (OIDC) added federated identity on top of OAuth 2.0 in 2014. Together they offer a standard specification that you can write code against with confidence that it will work across multiple identity providers, or IdPs. The spec allows you to look up the identity of a user by sending an access token to a userinfo endpoint, and you can look up the URI for this endpoint using OIDC discovery (step number one in this slide), which provides a standard way to get the user's identity. So it's a really nice, simple way. Now, simple, let me clarify that: once you know about OAuth and how it all works, then it seems simple. But without it, you're just going to write your own authentication. So try not to do that.
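On the JVM, calling a userinfo endpoint with an access token is just an HTTP GET with an Authorization header, very much like the React example above. Here's a sketch using the JDK's built-in HttpClient types; the endpoint URL and token are placeholders, and the request is only constructed, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class UserInfoRequest {
    // Builds a GET request to an OIDC userinfo endpoint. In a real app you'd
    // discover this URI from /.well-known/openid-configuration and send the
    // request with HttpClient.send(); here we just construct it.
    public static HttpRequest build(String userInfoUri, String accessToken) {
        return HttpRequest.newBuilder()
                .uri(URI.create(userInfoUri))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
    }
}
```

The response (when sent with a valid token) is a JSON document of claims about the user, per the OIDC spec.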
If you're communicating between microservices, you can use OAuth 2.0's client credentials grant to implement secure server-to-server communication. In this diagram, the API client on the left is one server and the API service is another; they both talk to the same authorization server, and they have secure communications between them. And if you're using OAuth 2.0 to secure your microservices, you're using an authorization server. The typical setup is a many-to-one relationship, where you have many microservices talking to a single authorization server. The pros: services can use access tokens from that server to talk to each other; there's a single place to look for all your scopes, claims, and other definitions; it's easier to manage; and it's faster because it's less chatty. The cons: it opens you up to rogue services causing problems (if someone adds a new service that uses that same authorization server, yikes); if one service's token is compromised, all services are at risk; and service boundaries are vague, because everything talks to the same single identity engine. The alternative, a more secure way, is one authorization server per microservice, and if services need to talk to each other, you have to make those authorization servers trust each other, so there's definitely more work there. The pros: clearly defined security boundaries. The cons: it's slower because it's chattier over the network; there are many authorization servers and many scopes, so you have to look at each one to see how it's configured; and it can be hard to document and understand. What I recommend is to use the many-to-one relationship until you have a plan, and then document how to use one-to-one if you decide to go that way. Next: consider PASETO tokens over JWTs, or "JOTs." That's the short name for JWT, "JOT"; it's actually in the spec, J-O-T.
JSON Web Tokens have become very popular in the past several years, but they've also come under fire, partly because a lot of developers try to use JWTs as server-side session tokens. I like to think of PASETO tokens as "JSON Web Tokens: The Good Parts." You might recognize this doctored image of JavaScript: The Good Parts. There's a lot in JavaScript, and then there's the good parts; same thing with JOTs: there's a lot in JOTs, and then there's the good parts, which is PASETO. One of the main selling points of JWTs is their cryptographic signatures: because JOTs are cryptographically signed, a receiving party can validate that the JWT is trusted. But you know what else does that? Web frameworks and their session tokens. They've been signing session cookies and sending them across the wire for 20 years. So why reinvent the wheel? With signed session cookies you get the exact same benefit as JWTs and their signatures, and you don't have to add processing on your server or client. My colleague Randall Degges wrote a great post on this, "Why JWTs Suck as Session Tokens," and he's even produced a t-shirt. If you want a t-shirt that says "I want you to stop using JWTs," you can go to his Twitter; it's a pinned tweet. PASETO stands for platform-agnostic security tokens. It's everything you love about JOSE, the JavaScript Object Signing and Encryption specs, which include JWT, JWE for encryption, and JWS for signing, without the many design deficits that plague the JOSE standards. Long story short, though, using PASETO tokens isn't as easy as it sounds. If you're using an identity provider, it's probably still using JWTs. If you want to write your own security, which I don't recommend, you can use PASETO tokens, and hopefully someday a lot of the identity providers will add support for them. Public PASETO tokens are not encrypted; they are digitally signed.
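To illustrate the signed-but-not-encrypted idea, here's a bare-bones sketch of signing and verifying a token payload with HMAC-SHA256 using only the JDK. This is not PASETO's (or any framework's) actual token format, just the underlying concept: anyone can read the payload, but tampering breaks the signature.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

public class SignedToken {
    // Produces "payload.signature". The payload is readable by anyone;
    // only someone holding the key can produce a valid signature.
    public static String sign(String payload, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return payload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Verifies the signature; returns false if the payload was modified.
    public static boolean verify(String token, byte[] key) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        String expected = sign(token.substring(0, dot), key);
        return MessageDigest.isEqual(               // constant-time comparison
                expected.getBytes(StandardCharsets.UTF_8),
                token.getBytes(StandardCharsets.UTF_8));
    }
}
```

This is exactly what frameworks have been doing with session cookies for years; JWT, JWS, and PASETO add structured headers, standard claims, and (in PASETO's case) far fewer algorithm foot-guns on top of it.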
Local PASETO tokens, on the other hand, are encrypted. With a public PASETO token, if an attacker gets hold of it, they'll be able to see all the data it contains, but they won't be able to modify it without you knowing, thanks to the digital signatures PASETOs use.

Number five: encrypt and protect your secrets. When you develop microservices that talk to authorization servers and other services, those microservices likely have secrets they use for communication: an API key, maybe a client secret, maybe credentials for basic authentication. The number one rule for secrets is don't check them into source control. Even if you develop code in a private repository, it's a bad habit to get into. Don't do it; don't start; it's only going to cause trouble. You can actually search GitHub for "remove client secret" or "remove password," and there are thousands of commit messages whose history shows what the password was. The first step to being more secure is to store your secrets in environment variables. That's a little better than source control, but it's only the beginning. You should do your best to encrypt your secrets, and maybe use something like HashiCorp Vault, which has Spring support, or Azure Key Vault, to store your secrets and retrieve them in a secure manner. My coworker Randall is also a big fan of Amazon's Key Management Service, also called KMS. The way it works is you generate a master key using KMS. Then each time you want to encrypt data, you ask AWS to create a new data key for you; a data key is a unique encryption key that AWS generates for each piece of data you need to encrypt. You then encrypt your data using that data key, and you merge the encrypted data key with the encrypted data to create an encrypted message. So the encrypted message is your final output.
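That envelope-encryption flow can be sketched with the JDK's crypto APIs alone, no AWS SDK. In this toy version a locally generated AES key stands in for the KMS master key, and the names and two-part message format are invented for illustration; real systems would use an authenticated mode like AES-GCM with random IVs rather than the JDK's default cipher settings:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

public class EnvelopeCrypto {
    // Encrypt: generate a fresh data key, encrypt the data with it, then
    // "wrap" the data key with the master key. Ship both parts together.
    public static byte[][] encrypt(SecretKey masterKey, byte[] plaintext) {
        try {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(128);
            SecretKey dataKey = gen.generateKey();

            Cipher dataCipher = Cipher.getInstance("AES");
            dataCipher.init(Cipher.ENCRYPT_MODE, dataKey);
            byte[] encryptedData = dataCipher.doFinal(plaintext);

            Cipher keyCipher = Cipher.getInstance("AES");
            keyCipher.init(Cipher.ENCRYPT_MODE, masterKey);
            byte[] encryptedDataKey = keyCipher.doFinal(dataKey.getEncoded());

            // { wrapped data key, encrypted data } = the "encrypted message"
            return new byte[][] { encryptedDataKey, encryptedData };
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Decrypt: unwrap the data key with the master key, then decrypt the data.
    public static byte[] decrypt(SecretKey masterKey, byte[][] message) {
        try {
            Cipher keyCipher = Cipher.getInstance("AES");
            keyCipher.init(Cipher.DECRYPT_MODE, masterKey);
            SecretKey dataKey = new SecretKeySpec(keyCipher.doFinal(message[0]), "AES");

            Cipher dataCipher = Cipher.getInstance("AES");
            dataCipher.init(Cipher.DECRYPT_MODE, dataKey);
            return dataCipher.doFinal(message[1]);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The key design point: the master key never touches your data directly, and each piece of data gets its own key, so compromising one data key exposes only one secret.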
And that's what you can store in a file or in your database, and it's very secure. You could probably even store it in an environment variable if you wanted.

Number six: verify security with delivery pipelines. Dependency and container scanning should be part of your source control management system. I used to think that scanning dependencies, pom.xml files, package.json files, was probably good enough. But no, you need to scan your container configurations as well: using infrastructure as code basically means you have YAML in your source code repository, so scan it too. You should also perform tests when executing your CI and CD pipelines. You might have heard the term DevSecOps. It used to be DevOps, and that's all we called it; we were happy with that. DevSecOps is the term many now recommend instead, to emphasize the need to build security into DevOps initiatives. I just wish it rolled off the tongue a little easier. DevSecOps. Oh, well. Injecting security into delivery pipelines involves security unit tests (writing unit tests for your security code), static analysis (analyzing your code as part of your process to see if there are any bugs; there are many tools for that, for many different languages), and dynamic analysis. These are abbreviated SAST, static application security testing, and DAST, dynamic application security testing. Unlike SAST, DAST examines your application from the outside, in its running state, and tries to penetrate it or do malicious things to it, much like an attacker would. To learn more about a continuous-hacking approach to making sure your delivery pipelines and CI work well, check out the article from Zach Arnold and Austin Adams, "How Continuous Hacking of Docker Containers and Pipeline-Driven Security Keeps Ygrene Safe." I think that's it: Ygrene Energy Fund.
It's a financial corporation that provides property-assessed clean energy financing. Great article by them; you can see there's a bit.ly link at the bottom so you can go and read the whole thing. They basically recommend: creating a whitelist of your base Docker images; pulling only cryptographically signed base images; cryptographically signing the metadata of a published image so you can check it later; using only Linux distros that verify the integrity of packages; using the package manager's security features; allowing only HTTPS for third-party dependencies; and not building images with a sensitive host path as a volume mount. But what about the code? Zach and Austin use automation to analyze that too. They run static code analysis for known vulnerabilities, and they run automated dependency checkers for the latest versions. One of the things they said they really like to do is spin up the service and run automated penetration-testing bots against the running containers. One tool they recommend is Zed Attack Proxy (ZAP) from OWASP; the URL for that is zaproxy.org. I've used it a couple of times. It allows you to give it a whitelist of URLs, and it will crawl those and look for vulnerabilities. You can also record a session: you turn it on, set your browser to point at Zed Attack Proxy on a different port, and it records your browser session, then plays it back and tries to do all kinds of malicious things. So I've used it, it works great, highly recommended.

Number seven: slow down attackers. If someone tries to attack your API with hundreds of gigs of usernames and passwords, it could take a while for them to authenticate successfully. If you slow things down, so instead of making 10 or 100 attempts a second they can only make one attempt a second, or one attempt every 10 seconds, it's just not worth their time. They're likely to go away, and you're more secure because of that.
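A simple way to implement that throttling is a token bucket per client. Here's a minimal, in-memory sketch; the class and method names are my own, and in production you'd typically use your API gateway's rate limiting or a library like Bucket4j:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Token bucket: each client gets `capacity` tokens that refill at
// `refillPerSecond`. A login attempt consumes a token; when the bucket
// is empty, the attempt is rejected, which caps brute-force speed.
public class RateLimiter {
    private final double capacity;
    private final double refillPerSecond;
    // Per client: [0] = tokens remaining, [1] = last refill timestamp (ms).
    private final Map<String, double[]> buckets = new ConcurrentHashMap<>();

    public RateLimiter(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    // `nowMillis` is passed in (instead of calling System.currentTimeMillis())
    // to keep the sketch deterministic and testable.
    public synchronized boolean tryAcquire(String clientId, long nowMillis) {
        double[] b = buckets.computeIfAbsent(clientId,
                k -> new double[] { capacity, nowMillis });
        double elapsedSeconds = (nowMillis - b[1]) / 1000.0;
        b[0] = Math.min(capacity, b[0] + elapsedSeconds * refillPerSecond);
        b[1] = nowMillis;
        if (b[0] >= 1.0) {
            b[0] -= 1.0;
            return true;
        }
        return false;
    }
}
```

With a capacity of 2 and a refill of one token per second, an attacker keyed by IP or username gets two quick tries and then at most one attempt per second, which turns a credential-stuffing run from minutes into years.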
You can implement rate limiting in your code, and a lot of times you can do it on your microservice architecture's API gateway. I'm sure there are other options, but these will likely be the most straightforward to implement. Most SaaS companies have rate limiting built in to prevent customer abuse. We had to add rate limits not only for our API but for email as well, and those protect our customers against denial-of-service attacks; they also protect us from people trying to hack in.

Number eight: use Docker rootless mode. The developers of Docker designed this feature to reduce the security footprint of the Docker daemon and expose Docker capabilities on systems where users cannot gain root privileges. If you're running Docker daemons in production, this is definitely something you should look into. However, what I've seen is that most people with microservices are using something like Kubernetes to run their Docker containers. If you're doing that, you'll need to configure runAsUser in your pod security policy to make sure containers aren't running as root.

Number nine: use time-based security. The idea behind time-based security is that, essentially, your system is never fully secure; someone's going to break in. Preventing intruders is only one part of securing a system; detection and reaction are essential too. You can use multi-factor authentication to slow down intruders, but also to help detect when someone with elevated permissions authenticates into a critical server, because that shouldn't happen a whole lot; you don't really need to log into your domain controllers and change things that often. So if you have something like a domain controller, you should send an alert every time someone with elevated privileges, like an administrator, logs in, and notify the rest of the team. That's just one example of trying to detect anomalies and react to them quickly.
There's a great book called Time Based Security by Winn Schwartau that many security experts recommend reading to learn more about this whole concept. It's available for free on his website. I do have a funny side note from my buddy Randall again, his thoughts on MFA. He wrote an article on our developer blog called "Multi-Factor Authentication Sucks," and it's funny because he gives two perspectives. One is the security administrator's perspective, which is: I love MFA so much, it makes my job so much easier. But the developer's, or user's, perspective is: really, I have to go find my phone again just to authenticate, walking across the house, getting my phone, and entering the code? What he recommends is basically adaptive multi-factor authentication. If your identity provider has that, try to use it. It will usually mean you don't get prompted as much, because it trusts you based on where you are, what country you're in, and things of that sort.

Number 10: scan your Docker and Kubernetes configurations. Your infrastructure is code, so make sure you scan it too. Docker containers are very popular in microservices architectures, and our friends at Snyk have published a "10 Docker Image Security Best Practices" guide. It's a great guide; you can print it out, put it on your wall, and now you know. Prefer minimal base images: don't just whitelist images, look for the slim base ones, not the ones built up with lots of extras. Use the least-privileged user. Sign and verify your images to prevent man-in-the-middle attacks. Find, fix, and monitor for open source vulnerabilities (of course they do that, so they're going to recommend it). Don't leak sensitive information into Docker images. Use fixed tags for immutability. Use COPY instead of ADD: ADD can fetch URLs, and if you're not using HTTPS, that's going to be an issue. Use labels for metadata.
One of the things you might have heard about is a security.txt policy that you can reference in a label, so people know who to contact if they find a security vulnerability. Use multi-stage builds for small, secure images. And use a linter, such as hadolint, which shows a warning for any errors it finds in your Dockerfiles. You should also scan your Kubernetes configuration, but there's much more to that, so I'll cover Kubernetes security in the next section. There's also a great blog post from the WhiteSource folks on the top five Docker vulnerabilities you should know. I didn't list them here because some are more to do with PHP and certain libraries, and I figured you could go look it up. Number 11: know your cloud and cluster security. If you're managing your own cloud, or even a cluster in a cloud, you're probably aware of the four C's of cloud native security from kubernetes.io: code, container, cluster, and cloud. Each of the four C's depends on the security of the layers it sits inside. So it's nearly impossible to safeguard against poor security standards in the cloud, container, and cluster layers if you're only addressing security at the code level. However, when you deal with those areas appropriately, adding security to your code augments an already strong base. The Kubernetes blog has a detailed post from Andrew Martin titled 11 Ways (Not) to Get Hacked, where he offers tips to harden your clusters and increase their resilience in case a hacker does come along. The post is from 2018, and not a whole lot has changed. Statically analyzing your YAML: still a thing. Rotate your encryption keys. Obviously, use TLS everywhere; great idea. Use a third-party auth provider; also a great idea. Run a service mesh. Now, I do think there's been a fair amount of hype around service meshes since 2018, and they haven't made a huge difference.
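Several of those cluster-hardening tips show up directly in your Kubernetes manifests, which is exactly what static YAML analyzers check for. Here's a hedged sketch (the pod and image names are placeholders) of a pod `securityContext` that runs as a non-root user with a read-only root filesystem and no privilege escalation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-service              # placeholder name
spec:
  containers:
    - name: app
      image: example/hello-service:1.0.0   # fixed tag, not :latest
      securityContext:
        runAsNonRoot: true               # refuse to start if the image runs as root
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                  # drop all Linux capabilities
```

Tools that statically analyze your YAML will flag pods that are missing settings like these, so it's worth making them part of your standard deployment templates.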
A service mesh provides critical capabilities, including service discovery, load balancing, encryption, observability, authentication, authorization, and circuit breakers. So you don't have to actually implement those within your code, which is nice; you can just do it with sidecars. There's a diagram from Red Hat that shows what a service mesh is and how sidecars can handle a lot of that infrastructure-related work. So running a service mesh, for instance Istio, might allow you to offload security to a shared, battle-tested set of libraries. Still, I don't think it has really simplified the deployment of the next generation of network security, like that blog post said it could. So yes, look into running a service mesh; I just don't know that it'll solve all your problems. I hope you've learned something from these security patterns for microservice architectures, and that it's made you a more security-conscious developer. It's interesting to me, though, that of this list, the first five are the only ones that pertain to developers on a day-to-day basis; six through ten apply more to DevOps or DevSecOps folks. And since most of these apply to your whole architecture and your whole product, it's very important, if you're doing microservices right, that your developers and your DevSecOps folks are on the same team. That's how microservices should be done: a team owns a product all the way from concept to production, and manages it in production. Once you can do that, then, per Conway's law, your organization can be made up of independent teams that communicate through well-defined interfaces, and you can scale to the moon. So, action items: design with security in mind. Scan your code. Always use TLS. Use OIDC and OAuth 2.0, because friends don't let friends write authentication. Plan for attacks and study time-based security; look into it and see how it can help you prepare for when the attacks do happen.
If you want to learn more about API security, a bunch of developers at Okta, including myself, wrote a book on it, and you can read it online. I believe you can also go to Amazon and purchase it, but it's available online, so just read it there. It's about 100 to 150 pages, and it's got a whole lot of information about securing your APIs. We write a lot of blog posts on developer.okta.com/blog, and my whole team is at @oktadev; we're the developer relations team, and we even have a YouTube channel, so if you look for OktaDev on YouTube, you'll find videos like this one. Like I said, I wrote a blog post that covers everything in this presentation; you can read it on the Okta developer blog. There's a URL at the bottom there. Thank you for coming. You can find me at raibledesigns.com; that's my personal blog, and it's got some tech stuff on there, but mostly Volkswagens. You can find me on Twitter at @mraible; my direct messages are wide open, so hit me up any time. I'll upload this presentation to SpeakerDeck, and you can find a lot of the code and examples I write these days on the OktaDev GitHub. May the auth be with you.