Aloha! Oh, we got responses. Excellent, not everyone knows to do that. We'll see if we can hook in here. Is it good? Anyway, we'll start. Hi, I'm Francis Origo. I run the network team at Ticketmaster. We run a hybrid cloud at Ticketmaster, and it seemed like a good story to tell: our journey to a hybrid cloud, the thinking that went into the decisions to go hybrid, and some of the lessons we learned along the way from a network perspective. But first, why do I have a guitar? I'd like to begin with an observation I've had about networking and music. I've been a network engineer all my career, and one of the curious things I've noticed is how many network engineers are also musicians. You're probably thinking it too: oh, yeah. But you never thought, oh yeah, there's a correlation. It came up often enough at conferences and vendor meetings: oh, you're also a musician? How curious. So of course your mind spins on this, and this is one of the theories it spit out for me. One of the first things we learn in music is timing. You have your whole notes, half notes, quarter notes, eighth notes. The whole note takes the whole measure: 1, 2, 3, 4. Split that in half and you get the half note: 1, 2, 3, 4. And so on and so forth; quarter note: 1, 2, 3, 4. Timing constantly cuts in half or doubles like this. Now, there's a concept we use in networking all the time that has the same structure. Anyone? No? Subnet masking, right? If you split your /24, you get two /25s. Take a /25, split it, and you get two /26s.
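That halving structure is easy to play with in code. A minimal sketch using Python's standard `ipaddress` module, which splits a network exactly the way a whole note splits into halves and quarters:

```python
import ipaddress

# A /24 is the "whole note": each extra prefix bit halves the
# address space, like halving a note's duration.
net = ipaddress.ip_network("192.168.1.0/24")

halves = list(net.subnets(prefixlen_diff=1))    # two /25s
quarters = list(net.subnets(prefixlen_diff=2))  # four /26s

print([str(s) for s in halves])
# ['192.168.1.0/25', '192.168.1.128/25']
print([str(s) for s in quarters])
# ['192.168.1.0/26', '192.168.1.64/26', '192.168.1.128/26', '192.168.1.192/26']
```

Each split doubles the number of subnets and halves the host count, which is the same doubling/halving pattern as the note values.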
The reason network engineers are frequently also good musicians is because we've been subnetting all our lives. So what does this look like to you? This could be four /27s or two /26s. Or it could sound like this. Can we hear it? It sounds almost like Tracy Chapman's Fast Car. You got a fast cloud. I don't want data centers anyway. Maybe we can make a deal. Maybe together we can get away. No large up-front CapEx spending; refreshing gear is always a pain. Somebody's got to take care of it; I'd rather it be you, if it's all the same. I wish we were just coding, CI/CD, deploying so fast I felt like I was drunk. Dependencies surround us, but we wrap them all in Docker containers. I had a feeling that I belong. I'm going to just focus on the business rules, the business rules, business rules. And that was pretty much the theme song at Ticketmaster when we started our journey to the hybrid cloud. It started with a question: can't we all just go to the public cloud? Focus more on business rules, focus more on speed of delivery, open the floodgates to the public cloud. And there are a lot of reasons why Ticketmaster fits the public cloud, especially with our traffic profile. If you're not familiar with Ticketmaster (excuse me, do we have any water? Thank you), we sell tickets, frequently for concerts by highly popular artists such as Beyoncé. What happens is she'll open a concert at 10 a.m. on a Friday, and at 10 a.m. on a Friday her fans attack us. And it's not just her fans: it's bots from resellers trying to scoop up tickets so they can resell them on their markets. They all attack us at the same time, in a five-to-30-minute window, because that's when those seats are available; after that, they're gone. So internally, to us, it looks like a DDoS. Now that, yeah.
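That spike shape, a huge multiplier over a short window, is worth a toy calculation. A back-of-the-envelope sketch (all the numbers here are hypothetical, not Ticketmaster's) of what a large on-sale multiplier does to instance counts, which is exactly the kind of short-lived capacity that is cheap to rent and expensive to own:

```python
import math

def surge_instances(baseline_rps: float, surge_multiplier: float,
                    rps_per_instance: float, headroom: float = 1.3) -> int:
    """Instances needed to absorb a traffic surge, with safety headroom."""
    peak_rps = baseline_rps * surge_multiplier
    return math.ceil(peak_rps * headroom / rps_per_instance)

# Hypothetical numbers: 1,000 rps baseline, 200 rps per instance.
steady = surge_instances(1000, 1, 200)    # everyday footprint
onsale = surge_instances(1000, 50, 200)   # a 50x on-sale spike
print(steady, onsale)  # 7 325
```

Owning 325 instances year-round to cover a 30-minute window is the economics that pushes bursty workloads toward on-demand capacity.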
Yeah, it's 20x to 50x, depending on the artist. But that's also a model that fits well in the public cloud, because of its on-demand nature: you can spin up resources on demand, and after the concerts we give them back. One of the other things that made the public cloud a fit for Ticketmaster is that it gave us a forcing function to take older cloud VM technology and build a new stack: containerize to increase development speed, orchestrate with Kubernetes, simplify our deployments, and get all the scalability and availability. And in reality, we did that. We created a public cloud platform that gives our developers the chance to increase productivity. One of the things that came out of it was Ticketmaster Verified Fan (sorry, the singing took it out of me), which puts more tickets into the hands of fans, as opposed to those bots and scalpers that were attacking us, that we were talking about earlier. But not all services, we discovered, fit well into the public cloud. There were many, but the first category is services and clients that need better-than-internet levels of service. What does that mean? We all know in this crowd that the internet is best effort. No matter how redundant you make your providers, even regionally redundant, you can't control the hops between you and your client, and you can't control the client's provider itself. We all know this instinctively: when you go to the airport and you're getting to TSA, do you rely on the app to download your boarding pass from the internet? No, you download it into your wallet, because you don't want the stress of it not being available at the moment you need it. And that's true for these public cloud services we've been creating. That's not acceptable for a lot of the services we have at Ticketmaster. What's an example of that?
For example, when tens of thousands of fans are waiting at the gates outside a football stadium to get checked in, the scanners that validate those tickets need better-than-internet levels of service. There are two ways to solve that: either you keep providing a more reliable network to those services, or you re-architect the client. That's what we did with TM Presence, the new way to check in, where we replaced paper tickets with digital passes. By doing that, we also gave ourselves the ability to control the client: it can retry connectivity, and it has offline ability, so internet levels of service aren't so bad. The other thing we discovered that doesn't fit in the public cloud is high-cost workloads: dense, always-on workloads like search, Kafka, and data mining. This is the stuff that, when we went to the public cloud, cost several times more than what we were spending on prem. It just wasn't a good fit. So that led us to the decision to go hybrid. It's kind of the best of both worlds: it gives you that dynamic elasticity, we still get to maintain those higher-SLA services that need better-than-internet service levels, and it gives our developers a choice of platform, develop in the public cloud or on prem. We created an on-prem Kubernetes cluster so we can share workloads between the public and the private infrastructure. It seems like the best of both worlds. However, it comes with consequences. One of the consequences we noted, and again, this is more focused on the network infrastructure, is that security becomes at least twice as complex. Think about it: you do security in containers, in the VM world, in traditional firewalls, but now they all have to sync up. You have to have a unified policy over all of these platforms. Automation is key, especially in the traditional networking.
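The "one policy, many enforcement points" idea can be sketched in a few lines: define each rule once as intent, then render it into each platform's own format. This is a hedged illustration, not Ticketmaster's actual tooling; the rule fields and output formats are hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One intent-level security rule, defined once for every platform."""
    src: str
    dst: str
    port: int

def to_firewall_acl(r: Rule) -> str:
    # Hypothetical traditional-firewall ACL syntax
    return f"permit tcp {r.src} {r.dst} eq {r.port}"

def to_network_policy(r: Rule) -> dict:
    # Simplified, NetworkPolicy-like structure for the container platform
    return {"from": r.src, "to": r.dst, "port": r.port, "action": "allow"}

# One source of truth, rendered to both enforcement points.
policy = [Rule("app-tier", "db-tier", 5432)]
acl_lines = [to_firewall_acl(r) for r in policy]
k8s_rules = [to_network_policy(r) for r in policy]
print(acl_lines[0])  # permit tcp app-tier db-tier eq 5432
```

The design point is that the intent lives in one place; drift between the firewall config and the container policy becomes a rendering bug rather than a human inconsistency.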
Automating to create an SDN-like experience over traditional firewalls was key in our case to provide consistency across all the platforms. A good security policy still lets your developers deploy fast while putting guardrails around security and compliance. One of the other challenges we backed into with hybrid is governance. It's the same as with security: now you have double the complexity in figuring out not just how to monitor your assets, but how to keep track of them, and of roles and responsibilities. Whose container is that? What service does it belong to? Who's responsible for it? And then there's the financial part: who's going to pay for it, and what department is in charge? As you look at spending and who's spending, it's important to keep track of those assets and who's responsible for them. Performance measurement and SLA monitoring become a big component of that as well. The other challenge for us is that when you go hybrid, the connectivity between your public cloud and your private cloud, whether by Direct Connect or VPN, is still WAN connectivity. There are latencies involved. And with a hybrid cloud, it's easy for developers to think of it as one entity: I'm just deploying. The problem is, if you're not WAN-conscious, it will introduce latencies in your application that you weren't aware of. So if you develop a service in the public cloud that depends on a service back in the on-prem cloud, you have to take those latencies into account and design your services to be less dependent on that slower WAN connection. And so in summary (where's my timing?), we've gone to the hybrid cloud. It's a very flexible solution that gives us the best of both worlds, but it also comes with complexity that you have to take into account. Thank you. Francis, that was amazing.
Last keynote of the last day; I was actually surprised to see as many things happen. I was hoping for less. No, that was really good. So thank you for doing this. Thank you for the awesome song. Yeah, I think it will go viral on YouTube. Yeah, just stop it there; the rest of the presentation you can do. OK, all right. All right, thank you. All right, please.
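A footnote on the WAN-consciousness point from the talk: it is easy to quantify why chatty cross-cloud calls hurt. N sequential calls over the Direct Connect or VPN link pay the round trip N times, while one batched call pays it once. A toy model, where the 60 ms figure is an assumed WAN round trip, not a measured one:

```python
def total_latency_ms(n_items: int, rtt_ms: float, batched: bool) -> float:
    """Round trips dominate WAN cost: chatty = one RTT per item,
    batched = a single RTT for the whole set."""
    round_trips = 1 if batched else n_items
    return round_trips * rtt_ms

# 100 lookups against an on-prem service, from a public cloud service.
chatty = total_latency_ms(100, 60, batched=False)   # one call per item
batched = total_latency_ms(100, 60, batched=True)   # one bulk call
print(chatty, batched)  # 6000.0 60.0
```

The two-orders-of-magnitude gap is why services that span the hybrid boundary get designed around bulk calls, caching, or replication rather than per-item round trips.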