My team at T-Mobile is called Digital Technology Development. We build, run, and operate the infrastructure and applications that our customers use. Before I talk a little bit about the company, a quick show of hands: how many people here are actually developers? Okay, a lot of people. We work on web applications. So, do you think state is evil — that session state is evil? I'm just curious: how many of you think session state is evil? Do you think we can do without it? Okay. So yeah, it's a necessary evil, right? What I'm hoping to achieve with this talk is to show that maybe session state is not so evil. There are real use cases, especially in the applications we work on: typical commerce-style websites where you log in, a session is established, and there are things you do as part of that session. For example, I might add some products to my shopping cart; that cart data might live in session state, and when I leave, the data is gone. For us, some of the use cases are things like upgrading my phone as a customer, or buying new accessories — experiences that take the user through finding something they want to buy and adding it to the cart. So we deal with a lot of state data. Even though we're moving to a modern application architecture where web experiences are powered by APIs, and we all want to build those APIs in as stateless a manner as possible, session management is still relevant.
And so hopefully, in this talk, you'll see how you can architect your modern applications so that there can be a stateful component in the architecture. Okay, I have to talk a little bit about the company I work for — I promise, it's just two slides. We like to call ourselves the Un-carrier. We've been changing the industry since 2013 with innovative products and services, including changing the way consumers and businesses buy wireless products and services. We're a publicly traded company with two flagship brands, T-Mobile and MetroPCS, and we're based out of Bellevue, Washington, where it's pretty much sunny all the time. I think we all know that, right? No, I'm just kidding. But it's not bad — summers are awesome, and Seattle is really just like any other place in terms of weather. Okay, I get super excited about this slide, and I'll tell you why. We have a genius marketing team. What they do is listen to customers' problems, understand the pain points, and come up with really innovative programs that we call Un-carrier Moves, and we've been doing this since 2013. Our primary job on the technology side is to make those Un-carrier Moves come to life using innovative products and technology. There are multiple ways customers connect with us. The first two are what we call assisted channels: customers can go to a store for things like buying a product or accessory, or they can call our awesome customer care team and get answers to questions about their account, their bill, or any new products they want to buy. They can also use the website, if you prefer a self-service experience where you don't really need somebody helping you out.
You can use the website, as well as the mobile app that you can install on the device of your choice. We're also pretty active on social media. Some of you might already follow John Legere, our CEO — he's pretty active, he tweets a lot, and you can ask him questions. So we stay really connected with our customers through various channels, and we've won numerous awards for customer satisfaction. Okay, moving on. Real quick: even though we live in a cloud-first, mobile-first world, web applications are still relevant, meaning enterprises have not all jumped onto the mobile bandwagon. There are still a lot of legacy applications that are web-based, and you want to transform them. The second thing I want to mention is that sometimes it just makes no sense to invest in rewriting those web applications as mobile applications, so you still have them. And with web applications, as we said earlier, you have this problem of state. The web is essentially stateless; we all know that. Typically, the way it works is that when you first log into a website, a session is established on the server, and the server sends a session ID down to the client in a cookie. On subsequent requests for various pages, the browser passes that session ID back to the server, and the server is able to bind the request to that user's session. That's typically how it works. This is all fine for traditional web applications, where the number of instances serving your website is pretty static and the servers don't really change a lot. Where it's not okay is in a cloud-native architecture, where your instances can go up and down — during a busy time, you might scale out so you can meet demand.
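The classic cookie-based flow described above can be sketched in a few lines. This is a minimal illustration with hypothetical names — the in-memory map stands in for the app server's session storage, which is exactly the thing that breaks once you scale out to many instances:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of traditional cookie-based session binding (names are illustrative).
public class CookieSessionSketch {
    // In a traditional single-server setup this map lives in the app server's
    // own memory -- which is why sticky sessions become necessary at scale.
    static final Map<String, Map<String, Object>> SESSIONS = new ConcurrentHashMap<>();

    // On login: create the session and return the ID the server would send
    // back in a Set-Cookie header (e.g. JSESSIONID=<id>).
    static String establishSession(String user) {
        String id = UUID.randomUUID().toString();
        Map<String, Object> state = new ConcurrentHashMap<>();
        state.put("user", user);
        SESSIONS.put(id, state);
        return id;
    }

    // On subsequent requests: the browser echoes the cookie back and the
    // server binds the request to the stored state (null if no such session).
    static Map<String, Object> bind(String sessionId) {
        return SESSIONS.get(sessionId);
    }

    public static void main(String[] args) {
        String id = establishSession("alice");     // server sends Set-Cookie with this ID
        bind(id).put("cart", "phone-upgrade");     // a later request adds to the cart
        System.out.println(bind(id).get("user"));  // alice
        System.out.println(bind(id).get("cart"));  // phone-upgrade
    }
}
```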
So the problem then is: how do you make sure the session state data is consistent across all your instances? In the traditional world, the way we achieved this in a production architecture was with load balancer sticky sessions: if I hit one server, subsequent requests go back to that same server. That was okay back then, but in the cloud-native world, a request can go to any instance — especially if your web experience is running in containers, where you could have multiple instances. So you can't afford to rely on sticky sessions, or on session data being persisted in the one instance you always hit. The second point is that it also doesn't conform to the twelve-factor app methodology — specifically factor six, which states that your twelve-factor processes are stateless and share nothing; if they need to share data, they have to persist it in an external backing store, typically a database. So this slide shows some potential solutions. For one of the applications we worked on, we chose Redis as that backing store. Let me walk you through the architecture from the top. At the top layer, users hit the load balancer, and from the load balancer the request comes to a gateway layer. The gateway routes the request: if it's going to your web experience, it routes to what we call micro apps, which you can deploy in containers if you choose to. Also, if you're building your web experience with Angular, you can put your Angular assets in an S3 bucket, assuming you're using AWS, and have CloudFront actually serve those static assets.
So in that case, if a request is going to a UI asset, the gateway simply routes it to your CloudFront URL. And if the call is going to what we call the BFF — no, it's not Best Friends Forever, although I like to think it really is best friends forever with the UI — that's the back-end for front-end. You have a set of microservices underneath that provide various capabilities, but at the same time you want something to orchestrate them, because any typical user transaction involves orchestrating multiple services. The other scenario is that you may need to format the data in a specific way for your experience. In those cases, you can use a back-end-for-front-end model: it's a REST API, and your request goes through the gateway and hits the BFF layer. And — this is the other key point — that layer is also stateful now: it uses Redis to get to the session state for that particular user session. The other thing I should point out is that we moved the security and authentication pieces down into the gateway layer. When your request comes to the gateway and it's accessing an authenticated resource, the gateway is going to force you to go through the login, and once you log in, your session is established. Then, if an API call flows to your back-end-for-front-end layer, it binds to that same session, so in that layer you get access to all the state information that's already part of the session.
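The reason the external store makes the BFF "stateful but scalable" can be sketched very simply. In this illustration (all names hypothetical), a shared map stands in for Redis, and two app instances both resolve the same session from it — so either instance can serve any request, with no sticky sessions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: an external session store lets any instance serve any request.
public class SharedStoreSketch {
    // One shared, external store (this is the role Redis plays in the real
    // architecture; a plain map stands in for it here).
    static final Map<String, String> externalStore = new ConcurrentHashMap<>();

    // Each app instance reads session state from the shared store rather
    // than from its own memory.
    static class AppInstance {
        final String name;
        AppInstance(String name) { this.name = name; }
        String handle(String sessionId) {
            String user = externalStore.get(sessionId + ":user");
            return name + " served " + user;
        }
    }

    public static void main(String[] args) {
        externalStore.put("sess-42:user", "alice");  // login established somewhere
        AppInstance a = new AppInstance("instance-a");
        AppInstance b = new AppInstance("instance-b");
        System.out.println(a.handle("sess-42"));     // instance-a served alice
        System.out.println(b.handle("sess-42"));     // instance-b served alice
    }
}
```

The design point is simply that both instances reach the identical state, which is what makes scale-out and scale-in safe.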
A couple of other things I should point out. Obviously, in a big enterprise there are legacy components that you can't just swap out one day — it takes time to transform the entire architecture — so we still need to call legacy components. So we do things like this: as soon as the login process is over, we asynchronously make some calls to pre-warm the cache with additional information that users might need. Typically, in our use cases, you log in and then go view your usage information, so with this architecture we can pre-warm the session with data that's needed for that particular session. Then down at the bottom, you can see the microservices layer. So it's really the gateway, then the back-end for front-end for that experience, and down below, the back-end for front-end discovers all the services it needs to orchestrate using service discovery. Moving on. As I mentioned, the way we solved the session problem was, first, to externalize the session data, and we used Redis for that. The other thing I want to mention is a project called Spring Session. How many of you have heard of it, or are using it today? Just curious. And what do you do today to externalize session state — are you using a load balancer with sticky sessions? Okay, that's what we've seen from most folks: traditionally you use a load balancer with sticky sessions, and all the requests go to the same instance.
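The pre-warming idea above — kick off background calls right after login so the session is already populated by the time the user navigates — can be sketched like this. It's a minimal illustration with hypothetical names; the map stands in for the Redis-backed session, and `fetchUsageFromLegacy` stands in for a slow legacy backend call:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of pre-warming a session asynchronously right after login.
public class PreWarmSketch {
    // Stand-in for the externalized (Redis-backed) session store.
    static final Map<String, String> sessionStore = new ConcurrentHashMap<>();

    // Stand-in for a slow call to a legacy component.
    static String fetchUsageFromLegacy(String user) {
        return "usage-data-for-" + user;
    }

    // After login returns, fire the fetch asynchronously so the login
    // response itself is never blocked on the legacy call.
    static CompletableFuture<Void> preWarm(String sessionId, String user) {
        return CompletableFuture.runAsync(() ->
            sessionStore.put(sessionId + ":usage", fetchUsageFromLegacy(user)));
    }

    public static void main(String[] args) {
        // join() here only so the demo is deterministic; in production this
        // would be fire-and-forget while the user keeps browsing.
        preWarm("sess-1", "alice").join();
        System.out.println(sessionStore.get("sess-1:usage"));  // usage-data-for-alice
    }
}
```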
Anyway, Spring does a great job of abstracting things and keeping them simple for developers to implement, and Spring Session is one of those projects. It allows us to externalize the session data into some kind of backing data store without having to worry about where that data is going — it's decoupled, in the sense that I can swap the store out: today I could go with Redis, and tomorrow I could go with another solution like Mongo, GemFire, or Hazelcast. With Spring Session, by default you get implementations for Redis, JDBC, Mongo, GemFire, and Hazelcast. JDBC is for when you want to use a relational database to persist your session information. And if none of these out-of-the-box solutions works for you — if for whatever reason you need to integrate with some other backing store — you can roll your own implementation; the architecture is extensible in a way that makes it really easy to build something custom. It also supports clustered sessions without being tied to an application-container-specific solution. In the Java world, with the Servlet API, as we talked about earlier, your session is established and the session ID is sent via a cookie — and the problem with that is you have to stick to load balancer sticky sessions. Spring Session gives you clustered sessions without using sticky sessions and without being tied to a container-specific solution such as Tomcat. The other thing to call out is that session IDs can also be exchanged via HTTP headers. You saw in the architecture slide earlier that we have API calls going to the back-end-for-front-end layer.
So now, when the UI calls the BFF layer, it can pass the session ID via a header. And the last thing here is WebSocket support: if you're using WebSockets for real-time communication from server to client, Spring Session works really nicely with them. So, who's using Redis today? Anyone? Mostly for caching, I'm guessing. Okay, a quick overview of Redis. As we may already know, Redis is an in-memory key-value database. Because it's in memory, it gives us fast access to the data, which is really nice. I don't know how many people know this, but Redis can also persist data to disk, so it's almost like a database. It does the persistence asynchronously, so there's always a chance of some data being lost in failure and disaster recovery scenarios. It's written in C, and it's recommended to deploy Redis on Linux. From a developer standpoint, the reality is that we build applications on different platforms — some people build in Java, some in .NET — and there's a variety of client implementations available for various programming languages, so the point is that it's really easy to integrate Redis into your application. And the last bit: there are two ways you can deploy Redis. One is master-slave mode, where you have a single master taking all the requests that query for data, plus one or more slaves, so that if the master goes down, one of the slaves can be promoted to master, from an HA standpoint. The other is cluster mode, which is primarily used to scale: you shard the data across multiple master nodes to achieve scalability and high availability.
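Putting the Spring Session pieces above together, here is a minimal configuration sketch. It assumes a Spring Boot app with the `spring-session-data-redis` and Lettuce dependencies on the classpath; the class name, timeout value, and localhost Redis are illustrative, not our production setup:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;
import org.springframework.session.web.http.HeaderHttpSessionIdResolver;
import org.springframework.session.web.http.HttpSessionIdResolver;

@Configuration
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 1800)  // 30-minute session timeout
public class SessionConfig {

    // Connection to the external Redis store (defaults to localhost:6379 here).
    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        return new LettuceConnectionFactory();
    }

    // Exchange the session ID via the X-Auth-Token HTTP header instead of a
    // cookie -- the header-based flow used for UI-to-BFF API calls.
    @Bean
    public HttpSessionIdResolver httpSessionIdResolver() {
        return HeaderHttpSessionIdResolver.xAuthToken();
    }
}
```

Swapping the backing store (JDBC, Mongo, GemFire, Hazelcast) is then a matter of changing the dependency and the enabling annotation, which is the decoupling point made above.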
So, the charts here — I'm not sure if they're really clear from where you sit. The key thing to call out is that this is from one of the articles published by Redis Labs; they did a survey. The chart on my right shows the use cases for Redis, and the specific one to call out is the second from the other end: session management. You can see from the chart that people really are using it for session management. The chart on my left shows the industries that are using Redis, and there's quite a variety of them. So again, the key point is that it's a proven solution supporting a variety of use cases, and a lot of people are using it. Okay, next I want to talk about some deployment options. There are obviously more, but I want to focus on three. First, you can deploy Redis on your own: use the open source version, spin up the infrastructure that's required, install Redis, and then run, operate, and manage it yourself. The second option is Amazon's ElastiCache service, which lets you run Redis or Memcached, depending on your choice. It's a fully managed service, so you're no longer responsible for installing, configuring, or even supporting it, and you get the management and monitoring aspects too. I'm not going to go into too much detail — there are various options for deployment — but one thing I want to call out is that I did some cost analysis, and the cost of a cache node in AWS is about a 35% increase compared to an EC2 instance.
In other words, compare what you'd pay to spin up an EC2 instance and install Redis on your own versus the cost of a cache node: it's about a 35% increase. Depending on your cost constraints, that may be a blocker for you. But again, that's the cost you're paying to not have to worry about the management and monitoring aspects. Azure also has a fully managed service that lets you spin up Redis clusters, with basically three tiers. One is Basic, which is a single node: you don't get any HA or SLAs around it, so if that node goes down, your application is down — typically not recommended for production-like situations. There are also the Standard and Premium tiers, which give you HA and SLAs around failing over to a secondary when the master is down. And then you have Redis Labs. I don't have much experience with it, but essentially Redis Labs can spin up your Redis cluster in the cloud provider of your choice. One key thing to keep in mind: they created Redis, so they probably know better than any one of us the best way to provision, tune, and manage it. That may be an option worth considering. We chose to run our own: in some use cases we use our own Redis cluster, and then there are certain other teams that use a managed service, because the maturity level of those teams isn't yet where it needs to be to manage the infrastructure and monitor all aspects of it.
So again, depending on where you are with maturity, you might choose to run your own versus using a fully managed service. Okay, next up: how do you monitor Redis? Let me quickly go through it. What's shown here are the basic system metrics you should be monitoring. One of the key ones is memory usage — it depends on how many keys you have in the database, and you also need to factor in the memory your OS needs. You should also monitor disk usage, which is good to have if you're using persistence. You can use this table as a reference: there are guidelines on what each metric means and when you should alert on it, things of that nature. The next table shows some availability-related metrics. The one to specifically call out is connected clients: if you want to see active sessions, connected clients is something you can look at to see how many users are logged into your application. Another one to call out is rdb_changes_since_last_save, which is the number of changes pending since the server last persisted. If that number stays high, it's a good indication that something is failing in terms of persisting to storage. I'm going to quickly skip through the rest; you can look at it for reference. Let's quickly talk about some of the monitoring tools. You have redis-cli, which has two relevant commands: INFO and MONITOR. With INFO you get general information about the server and client connections, and it also gives you memory usage information.
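The metrics called out above (connected clients, pending RDB changes) come back from Redis's INFO command as plain `key:value` lines grouped under `#`-prefixed section headers. A small parsing sketch, with a sample payload inlined so it's self-contained (the payload values are illustrative, not from a real server):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: turning Redis INFO output into a metric map for monitoring.
public class InfoParseSketch {
    static Map<String, String> parseInfo(String payload) {
        Map<String, String> metrics = new HashMap<>();
        for (String line : payload.split("\r?\n")) {
            // Skip blank lines and section headers like "# Clients".
            if (line.isEmpty() || line.startsWith("#")) continue;
            int i = line.indexOf(':');
            if (i > 0) metrics.put(line.substring(0, i), line.substring(i + 1));
        }
        return metrics;
    }

    public static void main(String[] args) {
        String sample = "# Clients\nconnected_clients:42\n"
                      + "# Persistence\nrdb_changes_since_last_save:7\n";
        Map<String, String> m = parseInfo(sample);
        System.out.println(m.get("connected_clients"));            // 42
        System.out.println(m.get("rdb_changes_since_last_save"));  // 7
    }
}
```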
You can also view things related to persistence, as well as CPU usage statistics. One thing to call out is that the INFO command doesn't have much impact on overall performance. MONITOR, on the other hand, does impact performance; typically you'd use it to troubleshoot an issue, so just be aware that there's an overhead when you run it. The other tool to show here is redis-stat, which is a Ruby application: it has a web-based dashboard, and you can also view performance information in the terminal in a vmstat-like format. Again, I'm running out of time, so one thing to call out: we use Telegraf to collect metrics, InfluxDB as the time series database to store them, and Grafana to visualize them. That's what we use in our world, but there are different options depending on what tools and technologies you're using for your monitoring stack. Let me quickly jump to this slide: why use Redis as a session store? Number one, you want to externalize the data. Hopefully you've seen why: we moved all the authentication and authorization aspects into the gateway, and you have multiple components in your application stack that depend on that session state, so it's good from that perspective — and also in terms of being able to share the session across multiple instances. A couple of other things to note: you can expire or kill sessions — if you see a rogue user session, you can easily go into Redis and expire that key — and you can identify the active users in the system. It works very well in a clustered environment. And the last two things: it scales well and it performs really well.
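The expire-or-kill and active-users points above map to very simple operations on the store. A sketch with hypothetical names — a map stands in for Redis, where the same operations would be a `DEL`/`EXPIRE` on the session key and a count of session keys:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the session-store operations called out above.
public class SessionOpsSketch {
    static final Map<String, String> sessions = new ConcurrentHashMap<>();

    // In Redis this would be DEL (or EXPIRE with a short TTL) on the key.
    static void killSession(String id) { sessions.remove(id); }

    // In Redis this would be a count of the session keys.
    static int activeSessions() { return sessions.size(); }

    public static void main(String[] args) {
        sessions.put("sess-1", "alice");
        sessions.put("sess-2", "bob");
        killSession("sess-2");                   // expire the rogue session
        System.out.println(activeSessions());    // 1
    }
}
```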
Again, with the clustering option you can partition your database across multiple nodes and achieve maximum scale, and because it's in memory, you get a lot of performance gains. Just some real-world use cases: we have an application used in MetroPCS, in what we call the retail channel. It's a web-based application used by stores, dealers, and all the call centers. Some of the patterns we used: we built it for maximum reuse, and it leverages the Netflix OSS stack, as you've seen earlier in the slides. We also used the same strategy — externalizing the session state data — for an AM-based application. I think I'm about out of time. If there are any questions, my contact information is at the very end. It's a lot to cover in one 30-minute talk, so just reach out to me if you have any questions. I also had a demo, but unfortunately, as you can see, I couldn't even get through all the slides. I'll point you to the link to the GitHub repo that has the sample application: think of the architecture diagram I showed — there's a sample application that models that entire architecture, and you can look at it as a reference. Thanks. Thank you.