I don't care about Kubernetes. Okay, that's a lie. I am a nerd, so I care personally about Kubernetes. But let me explain what I mean when I say I don't care about Kubernetes. About two years ago, my son was turning 13, his birthday is on New Year's Eve, and my wife and I decided we wanted to give him his first real rock concert experience. He was this huge Twenty One Pilots fan. Do we have any Twenty One Pilots fans in the... yeah, yeah, you can applaud that. You know one of their most popular songs; he would wander around my house singing, "My name's Blurryface, and I care what you think," right? To which his loving siblings would look at him and say, "Well, we think you should stop singing." There's a lot of love in my house. So we took him to the Sasquatch Festival. I don't know how many people here have been to the Gorge. I took this picture at the concert that I took him to. It's an amazing venue. It is so much fun, beautiful, and you camp kind of back behind where these folks are. So I drove out four hours from here with my son, and we spent some time in the car. This is kind of what it takes to get a 13-year-old to talk to you as a parent. And then we spent two days at the Sasquatch Festival. He got to see Twenty One Pilots, but he also got to open his eyes to a whole bunch of other experiences. And it's something that he and I will never forget, right? So when Ticketmaster talks about how we power unforgettable moments of joy, this is what we do. We're in the business of giving people amazing lifetime experiences that they'll never forget. So when I say I don't care about Kubernetes, I mean I don't care about Kubernetes, the technology. I care about how Kubernetes will help me do this for more people, right? And Live Nation, our parent company, does something like this every 20 minutes on average. It's pretty cool.
It's really cool to work for a company whose job is to make people happy, just to make people have fun. It's not always easy. For some of these really huge, really high-demand events, we might have tens of thousands of tickets and hundreds of thousands of people who want those tickets. So when the tickets go on sale, we see something that looks like this in our network and in our infrastructure. It's effectively a Black Friday and Cyber Monday combined into one event. When Taylor Swift or Garth Brooks or somebody like that goes on sale, everybody wants those tickets. And it's not like a typical retail event: every single seat we sell is unique, so we can only sell it once. It's a unique problem. Ticketing is a fun and tough challenge. And it means that no commercial DDoS service on the planet knows what to do with us, because this is what our normal traffic looks like. So that leads to some interesting scale. We're not a Google or an Amazon for sure, but we have 27 ticketing systems. We have about 1,400 people across our tech and product organization. And we're here to talk about Kubernetes, right? So to date, we have about 1,000 nodes running on Kubernetes and 16,000 pods across our clusters, and we've been growing like mad. This is spread across AWS and our on-prem, so we're hybrid with Kubernetes. We've almost doubled this year. Next year is going to be even bigger, I'm sure. So I'll introduce myself. My name is Tim Nichols. I run the Hybrid Cloud, Kubernetes and Developer Platform organization at Ticketmaster. I'm going to tell you basically our story, how we got here. It's really hard to tell this story without talking about the DevOps transition that led to it, because Kubernetes is ultimately the culmination of that story. When I started at Ticketmaster about six years ago, we were dev and we were ops, right? I think most companies were at that same stage.
We had developers who would want a resource, a server, let's say, and they'd open a ticket. They didn't talk to ops, they opened a ticket, right? And then ops would build the server to the spec that they got and hand it back, and dev would say, "That's not what I wanted." So they'd open another ticket, and they'd go back and forth like this. Sometimes it would take a week for a developer to get resources. Do any of you remember what that was like? Still have that experience today sometimes, probably? Yeah. So we dug in really hard. We knew we had to fix this, because as a big leader in the industry, we were going to get eaten alive if we didn't change our pace. So we dug into DevOps. We started a developer cross-training program. We started teaching developers how to do the operational work, because they hadn't seen any of that before. We started giving them access to their servers, which they had never had before. This really accelerated things for us, but we still had another problem: tech debt. We're a 40-year-old company, right? And we needed to create something of a carbon filter. We needed to figure out a way for developers to up-level their products. So we brought in some containers and we brought in some AWS. We put together a team that we called the Cloud Enablement Team, and they put together a toolkit called the Cloud Enablement Toolkit, and then they went on a road show around all of our development centers in Virginia, Quebec, Scottsdale, Seattle, and LA. They'd spend a week with those development teams, teaching them how to containerize their apps and how to build in AWS. And we created all of this freedom for developers, because we gave them all of the primitives. We gave them all the operational access, we taught them how to build everything, and we gave them toolkits to do it. But across an organization of 1,000 developers, that turns into something else. It starts to look a little bit more like managed chaos. Managed, right?
And at this point we knew we had to make some changes, right? As with any agile, scrum, DevOps transition, it's all about the learning: fail, learn, iterate; fail, learn, iterate. So what did we learn? I heard one of the other speakers earlier talk about guardrails. This was something we felt pretty strongly about right from the beginning of our DevOps journey: guardrails, not gates. But what we learned was that when we lifted the gates, the guardrails were pretty hard to build. It wasn't so simple to just say, "Well, let's build a tool and it's going to define what everybody wants to do." These are nuanced things, and we didn't invest enough in them early on. That was one of our learnings: we needed to invest harder in making sure those tools were in place. We also learned that autonomy tends to be the enemy of alignment, especially at scale. Yes, we gave teams a lot of opportunity to go and be free to build their own tools. But what they would do is build their infrastructure in their own ways, and then six or nine months later they'd try to work with another team and find out they'd done it completely differently. That meant we needed to step back and dig into how we were going to build alignment tools for these teams that needed the freedom to evolve. Ultimately, we wanted to give them the freedom to innovate, but not the freedom to reinvent. Think about what the mission of a given team is. Ultimately what you want them to do is stay in their lane. Give them a mission, give them clear responsibilities, and give them lots of freedom to operate in solving that mission. But you don't necessarily want them to be distracted by something else, right? And we got this feedback from our development teams.
They'd get to where they'd spin their wheels on how to stand up an auto scaling group or load balancers in AWS, or whatever it was, because it really wasn't their core skill set. So we learned that we needed to give them the opportunity to innovate, but not to reinvent. And some of that comes out of building good ops tools. Back to this guardrail space: we wanted to focus on having the ops teams build their own automation tools. And when your ops teams build those automation tools, developers actually don't need as much access as they used to, either. At Ticketmaster we went down the path of using CoreOS in AWS and making immutable OSes. So there's actually no SSH access in our AWS space, because we've built these tools out to limit it. Even for our administrators, it's a break-glass model to get into the base OS. So how does this tie back to Kubernetes? Well, Kubernetes is basically all of those things, right? First off, we knew early on we needed container orchestration, but we took a stepping-stone approach. We gave people the tools to run containers inside a VM early on, because we didn't want to overcomplicate the process, but we had an absolute evolution plan to get to orchestration, and we knew we needed it. Kubernetes was there as a competitor with Swarm and various other tools, and we wanted to give it some time to see what was going to grow. This abstraction of primitives is all about staying in your lane. Teams want the ability to ask for a resource and infrastructure, but they don't want to have to be specific about the way they get it. They just want a resource, and that's really what Kubernetes is all about: defining those abstractions. A developer can define an application in a container and ask for some resource to run it on, and it doesn't have to be super prescriptive. And then, back to building the ops tools, it also allows you to offload the infrastructure operations, right?
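That "ask for a resource without being prescriptive" abstraction looks roughly like this in a Kubernetes manifest. This is a minimal, generic sketch, not one of Ticketmaster's actual services; every name and number here is illustrative:

```yaml
# Illustrative only: a developer declares what the app needs,
# not which machine it runs on. The scheduler finds the resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                 # "run three of these somewhere"
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0   # hypothetical image
          resources:
            requests:          # "I need some resource to run on"
              cpu: 500m
              memory: 256Mi
            limits:            # guardrails, not gates
              cpu: "1"
              memory: 512Mi
```

Nothing in the spec names a host, a VM, or an availability zone; that's the separation between what the team runs and the infrastructure underneath it.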
So in our DevOps journey we had gotten to the place where all of the infrastructure operations, especially in AWS, were back on the developer teams, and they were getting totally consumed by that operations task, even when it didn't have much to do with running the application. Kubernetes does a really good job of making that separation: it brings the infrastructure back to a core administration team, and the development teams get to focus on what they're actually running and stay in their lane. So at that point we knew it was Kubernetes for us. We made a strategic partnership with CoreOS, because two years ago Tectonic was this product that brought enterprise features to Kubernetes that you just didn't see out in the open source, right? The big one for us was RBAC. We knew we needed to run Kubernetes at scale, and we needed to be able to isolate teams from each other. We couldn't do that without RBAC, and Tectonic had it built in. So that was fantastic. We started talking to these guys, we sort of fell in love with them, and we've had a fantastic partnership with them over the last couple of years. And then we hired, and recruited internally, a core team that has really done some amazing work. This isn't everybody; it's everybody I could fit, and sort of the core of who's been making major contributions back into the community. As you can see, we did a lot of work on the ALB ingress controller. We did quite a bit of work on Helm charts, the Prometheus Operator, ExternalDNS; you can read the list. Ultimately, we hired a team, we got out of their way, and they've really done some amazing work to make this feasible inside of Ticketmaster. So, back to why I don't care about Kubernetes. Ultimately what I care about is the Ticketmaster mission, right? And how Kubernetes is contributing to that. So what are we doing? I wanted to make this sort of consumable by you guys.
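On that RBAC point: the team isolation I'm describing is built from namespaced roles and bindings. Here's a generic sketch of the shape of it, not our actual policy, and the team and group names are made up:

```yaml
# Illustrative only: scope a hypothetical "team-a" to its own namespace,
# so teams can share a cluster without stepping on each other.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: team-a-developer
rules:
  - apiGroups: ["", "apps"]    # core API group plus apps (Deployments)
    resources: ["pods", "services", "configmaps", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: team-a-developers
subjects:
  - kind: Group
    name: team-a               # group name from the identity provider; hypothetical
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, team-a's developers can do whatever they need inside their own namespace and nothing outside it; cluster-wide access stays with the core administration team.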
We're running Kubernetes in a lot of different places, but here's one you'll recognize. How many people have bought a ticket from Ticketmaster in the last year? Yeah, a lot of hands raised, right? So you've probably seen this page. This is how you select your seat and your price level, where you want to be, all of that stuff. This of course isn't run by a single service; it's run by many services in the background. But ultimately, every time you buy a ticket, you're using Kubernetes to do it. Even in these high-volume sales modes, like a Taylor or a Garth, you're consuming Kubernetes to buy those tickets. And it's similar with how you get in the door. Ticketmaster has two primary functions: one is to sell you the ticket, and one is to get you in the door, right? And a significant portion of that getting-in-the-door logic also runs on Kubernetes today. So that's pretty cool. In 2019, we're all about replatforming. Kubernetes is going to be a major portion of where we're going, so we're driving much of what we do onto Kubernetes platforms and onto new cloud platforms. There's a lot of work ahead of us, and I see the adoption growing exponentially here. So that's really it. If you have a great interest in figuring out how to get your own son or daughter to their first concert, or your spouse to the Paul Simon concert, or this new Queen tour that's going around (I'm totally excited about that for next year), and you're interested in figuring out ways to make that happen faster and better with Kubernetes, come and talk to me or the team. We're absolutely hiring and we need help. So thanks a lot. Thank you very much.