Let's get started. We have Eli here, and he'll be talking about going from shore to ship. A really exciting topic: it's not about connected cars, it's about connected ships. Take it away.

Thank you. Okay, it's the switch that says "on." Can you guys hear me now? There we go. Thank you for joining. This has been a really interesting journey for the last year of my life. I'm Eli Senevoi, from Ernst & Young. You probably don't associate Ernst & Young with this type of technology, or with technology at all, but it's something we're working on, and it's something I'm personally responsible for building. I lead our microservices practice, where we help companies wherever they are: some are obviously ahead of the curve in looking at their monoliths and breaking them apart into microservices, and some are laggards. We help across that whole line by bringing in our architects and developers to speed up the process, coaching your developers, your architects, and your IT organization on what it takes to run microservices in production, and staying on for advice as you take it forward.

I'm actually presenting here on behalf of Royal Caribbean. They wanted to be here to co-present with us, but unfortunately we cannot control the weather. Hurricane Irma was last weekend, and they've been stuck in DR activities. There is a silver lining, though, because it makes for a really great use case: what microservices in production have meant for our ability to do disaster recovery. I can walk you through a real, live example of how that worked out for us.

So let me get started. Today, when you go on a Royal Caribbean ship and sail anywhere as a guest, you get those two pieces of paper on the screen there.
It tells you everything going on on the ship that day: what activities are happening and where, what restaurants are open, what menus they're serving, what shore excursions are available, which ports of call you're visiting, and what's on shore once you get off the ship. These two pieces of paper get printed every day and placed in every guest's room, and today it costs about a million dollars per ship, per year, to print them. You can imagine that's not the best way to do things, especially in the digital age. There's no good app to go to for the same information, and I don't know about you, but I'm not carrying a piece of paper around with me on a cruise to figure out where to go. I do have my phone on me, though. (That flicker is probably my HDMI dongle acting up.)

At the end of the day, the vision is to get this information onto a phone and create a truly great mobile app experience for a cruise ship guest. When you try an initiative like that, you really want to start from scratch, but for big enterprises, a lot of the data is locked down by big, mean mainframes, AS/400s, and all kinds of legacy systems that are really hard to get data out of. I'm going to talk you through some of the specific complications we had to deal with. At the end of the day, we thought this was a really great example of truly distributed systems, because we had to put an application out there that worked on ships and on shore. Across a fleet of 40 ships, we had to have the same mobile application running that runs in production on shore today, over satellite connectivity, in a way where you're not looking at your mobile app saying, "this sucks, this won't load, I don't have a connection."
That was really the challenge we were looking to solve.

I've spoken a little about myself; these are the other people who were supposed to be here with me. Laura Mao leads the software engineering team at Royal Caribbean, and Roberto Alamon is one of the microservices developers we've been working with. We started as a team of about six developers on shore and about four offshore, and since that beginning the project has grown from about 20 people in the architecture phase to 200-plus across different aspects of the app. It's growing very rapidly: beyond mobile, it's getting into ML and cognitive evaluation of the data you can collect from a cruise ship, as well as into web and other areas. It's become very encompassing, and the early successes obviously pave the road for investing more and more, so we can accomplish a lot of great things.

Classically, as a consultant, I had to get this deck approved by our audit department so that I don't say anything I can't say, so I have to put up the consulting framework (complication, action, all resulting from some sort of situation); otherwise they wouldn't let me show it to you. This is what we're going to talk about today, but I don't want to spend too much time on it, so let's jump straight in. The goal for the business was to release a state-of-the-art mobile app.
This app was supposed to work ship and shore, the same type of experience no matter where you go. When you're opening a mobile app anywhere else, you can lean on enablers like PubNub, Elastic Beanstalk, and other PaaS solutions that let you do a lot of these things really easily. But to operate the same thing over a hybrid architecture, across 40 ships and on shore, without creating different code bases, and to run it in a really simple way, we thought microservices, Docker, and DC/OS were our answer. I'm going to talk you through how we got there, why we chose those things, and what the drivers were.

The situation we were in is that a lot of really complex solutions had evolved on top of an AS/400. A cruise line is really dependent on its reservation system; it's essentially the ERP for that type of operation. Any reservation that comes in, any accounting, any B2B with travel agents or other partners that direct people to the cruise line: they all integrate into this one system. The system is very old. I'm sure you have cases like that if you're in a company that's been around for 10 or 20 years: that one system that holds all the secrets, the one you have grandiose plans of one day getting rid of and replatforming to something that gives you more flexibility. But for now, we basically have to come up with ways to get data out of it, and for us, that's the reservation system.
It's AS/400-based, it has some proprietary version of DB2 behind it, and it's got an RPG service layer. RPG is a little before even my time; I'd never heard of it until these guys told me about it. On top of that there's a WebSphere layer, and another service layer, and another service layer. It's a mess, right? So how do we start untangling some of this?

When you talk to some of the folks out here, the Kafka folks and the DataStax folks, they'll essentially tell you that once you ingest all this information, you can start recreating a lot of the systems and the data you have in a cached layer above all the complexity, and begin using it as you please from there on. That's something we were trying to do.

Beyond unlocking that, there wasn't a real source of truth for content management and authoring. In our situation we chose Adobe Experience Manager: essentially a tool for people to take master data and then, for the mobile app, decorate it with guest-facing content.
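A minimal sketch of that decoration step, with invented field names rather than the actual AEM data model: the view the mobile app renders is just authored, guest-facing copy overlaid on raw master data.

```python
def decorate(master, authored):
    """Overlay authored, guest-facing copy on raw master data.
    Authored fields win; anything not rewritten falls through as-is."""
    return {**master, **authored}

# Hypothetical records, not the real schema; tonnage figure is illustrative.
master = {"ship": "Allure of the Seas", "gross_tonnage": 225000}
authored = {"headline": "Welcome to Allure of the Seas!",
            "blurb": "This ship is beautiful, and the largest in the world."}

card = decorate(master, authored)
print(card["headline"])       # authored copy shown to the guest
print(card["gross_tonnage"])  # master data still available to the app
```

The point of the pattern is that product managers and cruise directors edit only the authored layer; the master data stays untouched underneath.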
You don't want to show a ship's gross tonnage of so many tons and all the other master data detail; you want it told in a more illustrative way: "Welcome to Allure of the Seas. This ship is beautiful and the largest in the world," and so on. You need a place to write all that copy and get it approved, so getting one place to do that, targeted at our product managers and our cruise directors, was part of this as well.

In terms of scalability and modernization, there was no existing service layer that could take on the traffic a mobile device could potentially generate. Today, when you're on a cruise ship and you want to go on a shore excursion, or buy a wine package or an internet package, you have to go to the guest services desk, stand in line, move up a little, move up a little more, and then talk to a person who will try to accommodate you. You sign up on a sheet, first come, first served; there's capacity, there's inventory for these things. From a millennial perspective, I'm not going to do that. It's a bit of a ridiculous expectation: if I can't do it from my phone, it's probably not going to happen. Then there's all the potential revenue lost from people who would happily book this stuff on their phones, versus people who get frustrated waiting in line, don't know there's a line to go to, or just don't know about a specific event, feature, or activity offered on the ship. That was the thinking behind it.

Then, from the perspective of what we wanted to build: today, if they want to make any change, it's a monolith. Everybody has to put their kid gloves on and talk to the database team and ask, "okay, we want to use this table."
"We have to change this one aspect," and it's whoa, whoa, whoa. We're changing one value, a simple update to a row in a database, but the kid gloves have to be on and everybody's prepared. The analogy I use is that we have these go/no-go sessions where you make a decision; it's like electing a pope, so I call them the white smoke sessions. At the end of the session some white smoke gets released and we finally decide: okay, we've updated a row in a database. Good grief, could that have been a whole lot easier if something allowed us to update it without cascading issues across everything else?

The microservices aspect helps us with that, because we can actually divide things up. If I forget to talk about this, remind me, but I can give you a situation from this hurricane that proves out the value of microservices. We had unexpected logic happen in our app, because all of a sudden a voyage had to be extended by several days, and we had to go to a port that wasn't even in our database, because we were trying to keep people away from a hurricane. We still had to reflect accurate information in the mobile app, and we did it really easily: I could literally go screen by screen in the app, field by field, because of my ability to control each service in a fine-grained way, and change it. That is really, really powerful. I don't know if you've ever had, as a developer or a product owner, someone on the business side say "hey, we have to change this" without understanding the cascading effects of what that change would mean in a monolith, and you had to say, "well, it's not that easy."
"It requires x, y, and z." This gives you the ability to build a product where you control, going forward, how fine-grained your ability to change these things will be. It's really great from that perspective.

Then we wanted to make it resilient and scalable. Today, a ship is maybe 99% connected; there's a satellite connection, but, for example, we cannot deploy a cluster that has nodes on both the ship and the shore. You're going to lose that connectivity, and it will just throw too many errors, so that's not possible today, because of occasionally extended periods of disconnectivity. It's not as bad as people may think: the satellite is actually pretty strong, about a 300-megabit connection. A good allocation of that goes toward guests' ability to use the web, but there's a QoS layer allocated for ship applications, and quite a bit of it. Connectivity is usually good, but there are periods of blackout. Sometimes you're in China, sometimes you're behind a mountain, sometimes you're in the Bermuda Triangle. You don't know, so we do have to plan for that scenario.

Finally, I'm not sure if you've ever gone through the process of educating an organization that's been running in a traditional IT mode about how to do DevOps, what DevOps is, and what it means. Honestly, I've done this several times, and usually they think you just set up Jenkins and it all starts going. Just to paint a scenario: if I want to deploy a microservice today (and this is an explanation I have to make all the time, because people are used to DevOps just deploying code), deploying a microservice includes, in our situation, the Lagom code built on Java; then our Kafka connectors; then the Kafka topics that have to be created, either manually, by script, or by
turning on configuration. Then we have database schemas, database tables, and keyspaces that have to be created for each service, because we want to deploy our services in an immutable way: no shared-database pattern, a clean copy every time. That matters for situations like one we actually had today, where something funky happened to one of our services and the database table it relies on got corrupted in some way. I can simply spin up another instance of it, redirect my API management to that new instance, take the old one down, and let the DBA team loose on it to figure out what happened. That's something we need to make sure we can do.

From a DevOps perspective, though, the culture has really been the toughest thing, because to make a tiny little change across the whole stack, I have an infrastructure guy, who is separate from the cloud team guy, who is separate from the DBA, who is separate from the middleware guys who actually manage DC/OS and Kafka. Then we have a development team, and then I have the business side, all in order to make one tiny little change.
That's eight different teams. I don't know if you've ever been on conference calls with over 25 people trying to reach consensus on something, but it's probably one of the worst things in the world. That's the day-in, day-out of the situation we came into.

So we thought we needed to create a different vision, to let loose on it a bit. Our vision was to create one digital hub, one footprint or reference architecture that Royal Caribbean's future architecture would run on. There are obviously still struggles with this, but the idea is that any new application or service would be deployed along with these microservice principles: it would be deployed on DC/OS, and it would use NoSQL and Cassandra in most cases (depending on the workload, we would sometimes use a relational database as well). In most situations it would adhere to the footprint I'm about to share with you in a few more slides. And everything that is, to this date, traditional integration-type services (enterprise service buses, the older XML/SOAP-based integration technology, and so on) we're going to move away from, leaving it for the things that are simple, don't change too often, and don't need to be a microservice.

I don't have a slide for this here, but there's a slide I share with my clients that's essentially a matrix asking: are you a microservice? It splits things across the speed of change from a business perspective and the complexity of management. If it's complex to manage but it doesn't change too often, you don't need to make it a microservice, because it doesn't change too often; it's already set up, and
It's already set up and You're probably gonna just trade off for more complexity to manage that one thing that doesn't really need that level of complexity But if it's something that changes every few days a lot of requirements. You really need that fine-grained ability And it has some level of complexity to it then You know by all means you need the tools that microservices gives you in order to manage that But you also need to set up the kind of the Enablers around it to be able to manage that complexity as well. So things like you know circuit breakers and service catalogs and service discovery, you know ability to do green blue deployments and Canary deployments All those types of things are enablers to essentially manage microservices at scale that You need to have if you're gonna be building them, right? so that was kind of the goal there and then The next part of it was essentially Getting everybody on the same page right from an organization that was really living in the SOA world around ESBs and things like that just getting everybody on the page that okay Our ESB is no longer the main place where you're gonna use in ESB in places where we don't want to be in the business of building Integrations we don't want to build new connectors. We want to use easy simple connectors for things Because there is no competitive advantage in building it. 
There's no reason to; it's already available in our ESB, and it's simple. That's when we can use traditional integration technology. But for everything else, where it gives us a competitive advantage to build something new, something that brings logic no one can compete with us on, something that essentially makes us better than our competitors, it makes sense for us to own that code, to customize it and maintain it from here forward.

Obviously it wasn't a smooth road, so the next slide is complications. We tried to answer three questions: how do we deliver, what do we have today, and what else do we need? I actually came into the whole microservices, cloud-native world from doing a lot of IT strategy work. I used to do a lot of application maturity and rationalization assessments, which are essentially a way to take your CMDB, if you have one (if not, it's a nice Excel spreadsheet exercise), take your application inventory, and gather survey information on both the business quality and the technical quality of your applications, to help you determine one of four things for each application: is it redundant (not in the good way; duplicative, meaning you need to retire it), or do you need to re-platform it, enhance it, or
completely replace it? One of those four you can figure out through that exercise. So we did a similar assessment here: we need these things to be able to run microservices, this is what we have today, these are the gaps, and this is the new technology we need to bring in.

We also wanted to make sure we had a standard ship/shore footprint. This is one of the key reasons we chose Mesosphere; in general, we were choosing between Pivotal, OpenShift, and DC/OS for orchestrating containers and scaling. One of the key things in that assessment was that we cannot run a cloud on our ships. We're constrained on space, and we don't have the skill set: our IT folks on the ships are basically at sys admin level and below, committing to six-month contracts to work IT on a ship. We're not sourcing the next brightest engineers from Google and Apple and all the places where that talent goes. So we had to come up with something that was easy to work with, but also allowed us to deploy the same code, ship or shore, no matter what. We needed the same experience from the infrastructure layer up, for everybody, and DC/OS, which we ended up with, gave us that (the other solutions did as well): the same interface, one CLI or one UI to go into, one way to configure your services and deploy them, whether you're on ship or on shore. We don't care what it's running on; we can SSH into either one, type in an IP, and see the same thing whether we're on ship or on shore. That was really important.

The final thing is keeping ship and shore in sync, which is probably the toughest mission. We still have a few patterns
we're perfecting, but Kafka and solutions like it were really important for us on that front. What we were really trying to achieve, first of all, was having the same data ship and shore. But we had the additional complexity that some systems exist only on the shore and some systems exist only on the ship, and data falls into different realms: it goes to the ship and never makes it off; it goes to the ship, gets manipulated on the ship, comes off, and then we have to analyze it; it originates on the ship and comes off; or it originates on the ship and never gets off. For each of those scenarios, while that data is still live and not retired, we had to think it through. If we're using it in the mobile app, it still needs to be on the shore, and we have to give it the same time-to-live it would have had on the ship originally, and vice versa with the shore data. And how do we arrange all that so it's not a mess to manage? If we're doing it in a microservices way, we can't really share it; we have to create immutable versions of it that are specific to each piece of functionality. We had to do some really selective domain modeling on how to make this really cohesive and really decoupled, in a way that wouldn't be an absolute nightmare to manage.

So this is the kind of assessment we looked at. From an EY perspective, this is our map of all the different areas in middleware, or integration.
And I can understand how not everybody thinks of this field in terms of "middleware"; I actually think it's a pretty bad term myself. But in this specific case they were called the middleware group, they owned it, and we used terms that resonated with those folks. All the green stuff is the traditional integration tooling you would typically have, very ESB-centered, and the red stuff is a lot of the things you would likely have when running microservices. You can obviously see there's a lot more red than green, because it requires a whole lot more components: the SMACK stack by itself is five things, and there's obviously a whole lot more running in there. Then there's governance and a lot of little aspects that often get missed. So we tried to come up with the solutions we needed to bring in to cover as many of these areas as we could, in the least amount of time, and also with the least amount of money. We probably don't want one solution per box; we really tried to optimize as much as we could.

The other aspect is explaining to people: why microservices? It's quite an education for a lot of folks who don't understand the term, especially someone on the business side, and most of the time those are the people who are going to fund your project, so you have to explain what they're going to get out of it. This was my attempt, which was around explaining the aspects of agility, scalability, resiliency, and so on. Going one by one: fine-grained functionality and quicker release times. I'm going to use the example I had with the hurricane.
It's probably a good way to apply it. We had a situation where, because of Hurricane Irma, there were two voyages: one on September 3rd and one on September 10th. The September 3rd voyage got extended from seven days, the typical duration, to ten days, and the September 10th voyage became four days, beginning on September 13th, which is when the September 3rd voyage now ended. So one got extended by three days and one got shortened by three days. That is not typical behavior we expect from our voyages, and the way we had built some of this had to be flexible enough that, even where we hadn't built logic to change these aspects, we weren't showing guests wrong information in the mobile app. We were still honoring our product tenets, one of which is the accuracy of the information in the app.

The fine-grained aspect is really nice, because in that situation specifically we have a voyage picker screen in the app. By the way, if you want to look at it live, the app running on DC/OS is in the iOS App Store and the Google Play Store under Royal Caribbean International. Go on it, leave a review; if it sucks, it sucks; if it's great, it's great. The service layer underneath it is what's built on DC/OS, using some of the principles here, along with the team we did this with. I happen to think it looks pretty slick and it's working really great. So, in terms of the quick release times and so on:
I was able to go individually to the service behind the main page, our voyage picker, which lets you select which voyage you're on, and individually manipulate just that page without affecting anything else, with complete confidence that my app would stay up, that no other page would be impacted, and that the app wouldn't go down because of it. Specifically, if I stay true to the API contract between the mobile app and the microservice, everything will be just fine. That confidence is really great, because even though we have these white smoke sessions with the go/no-go, I'm sitting there thinking, "okay, just tell me when." I can deploy this really easily. Even though I have to answer questions like "what's the impact if we make this change?", I can basically always answer: it's really isolated; the only thing impacted is this label, this piece of text, and it gets manipulated by this piece of JSON in my API. It doesn't impact anything else, because it's running in a completely separate container, in a completely different service, so there's no downstream impact. They don't share a database; they don't share anything; they're completely immutable. I could launch three more instances if we want to take that one down. You want to dissect it and look at it under a magnifying glass? You can. I just need to route the traffic over, and we can even do it in production if you want to, because the app is only going against this one service, and it's isolated.
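The confidence described here rests on the API contract between the mobile app and the microservice: as long as the response shape holds, the service can change freely underneath. A toy contract check, with an invented voyage-picker response shape (not the real API), might look like:

```python
# Required fields and their types, per the (hypothetical) mobile contract.
REQUIRED_FIELDS = {"voyage_id": str, "label": str, "start_date": str}

def satisfies_contract(response):
    """Check that a voyage-picker response still honors the agreed contract:
    every required field is present with the expected type. Extra fields are
    allowed, so the service can evolve without breaking the mobile app."""
    return all(
        name in response and isinstance(response[name], typ)
        for name, typ in REQUIRED_FIELDS.items()
    )

old = {"voyage_id": "V123", "label": "7 Night Caribbean", "start_date": "2017-09-03"}
# A label changed and a field added, as in the hurricane scenario:
new = dict(old, label="10 Night Caribbean", hurricane_notice=True)
print(satisfies_contract(old), satisfies_contract(new))  # True True
```

A check like this, run in CI against both the old and new responses, is what lets a team say "only this label changes; nothing downstream is impacted" with confidence.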
In terms of downtime, what's really great is the aspect I just talked about: there's really zero downtime with the rolling deployments we do. We typically do things more prescriptively than just using the restart or deploy option within DC/OS. If you've played around with the zero-downtime deployment options that come with Marathon-LB: my preferred way of doing it is what I'll call the Vamp way, or the canary-release way. It's essentially spinning up an instance on its own that's nice and healthy (it doesn't even have to be in the same pod; it can have a different name; it has its own IP or its own service discovery DNS, whatever it is), rerouting my actual live traffic to it, and then taking the old instance down. It's a rolling deployment, but a little more prescriptive, as opposed to letting Mesos handle it. I could just do it all in DC/OS: based on my upgrade strategy, I can set the minimum number of live and non-live instances, and so on. But specifically at Royal, the prescriptive way has been more effective, because I can show people that the traffic is being routed here and there's nothing to worry about on the other side. When you do some of the more massive deployments, like some of the folks I've spoken with here from GE and Verizon, whose clusters are just huge and who are running way more workloads than we are, the automation probably makes more sense. But for a lot of these things, it's about the comfort level, too, and building that comfort level with these technologies for folks who are very new to them. It's important to do it in the right way.

I'm going to blaze through this slide, because I'm talking a little more than I usually would; I think it's just nervousness. We essentially ended up with two footprints.
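The canary flow described above (stand the new instance up beside the old, move live traffic over, then retire the drained instance) reduces to shifting traffic weight between instances. Instance names and weights here are invented for illustration:

```python
class CanaryRouter:
    """Toy weighted router: traffic is split across instances by weight,
    and a canary rollout is just moving weight from old to new."""
    def __init__(self, weights):
        self.weights = dict(weights)

    def shift(self, old, new, amount):
        """Move up to `amount` traffic weight from `old` to `new`."""
        moved = min(amount, self.weights.get(old, 0))
        self.weights[old] = self.weights.get(old, 0) - moved
        self.weights[new] = self.weights.get(new, 0) + moved
        if self.weights[old] == 0:
            del self.weights[old]  # old instance drained: safe to take down

router = CanaryRouter({"svc-v1": 100})
router.shift("svc-v1", "svc-v2", 10)   # 10% canary on the new instance
router.shift("svc-v1", "svc-v2", 90)   # full cutover; v1 is drained
print(router.weights)  # {'svc-v2': 100}
```

The appeal of doing it this explicitly, rather than through an automated upgrade strategy, is exactly what the talk notes: at each step you can point at the weights and show people where the traffic is going.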
This is just a simplification, but to solve the first portion, the same footprint ship and shore, we essentially run the same exact thing ship and shore. It's really as simple as that. Underneath, it's vSphere on the ships and EC2 instances on AWS on shore; from there it's DC/OS and up. We're on Cassandra, we're on Kafka, and we do our API management through Apigee. That's essentially our structure. The nice thing is that it's exactly the same ship and shore: same folder structure, same service names. I can take my Jenkins, and Jenkins actually does the deployment to my DC/OS clusters. Our Jenkins lives in a completely separate DC/OS environment, our dev environment, and it can deploy to any environment I want, because it's centrally connected to our secret store and has everything it needs to know, along with all the permissions it needs, to put our code (and all the other stuff that goes along with it) onto other clusters, which is really convenient.

Then we have the Confluent aspect in between: we use Confluent Kafka to power things up, and I think it's been really great working with them on that solution. One of the really great pieces that comes with the enterprise Confluent version is the Replicator. The traditional MirrorMaker that comes with the Apache project doesn't do enough to really give you the capabilities this does. If you've played around with Kafka Connect, Replicator lets you create a really simple and easy connector that makes sure you're replicating topic to topic across two Kafka clusters. Usually your source and sink are of some sort where one of them is Kafka and the other is something else, like a DB or a
In this situation, it's Kafka to Kafka, and it's really great. I could even have a case where I'm essentially duplicating a topic by using Replicator to write back into my own cluster as well, to create two copies. So it's really great to have that flexibility for replication.

All right, so overall, from our stack perspective, this is what we started out with. Out of all the things on here that we use, it's been really great writing things in Lagom, especially if you have developers that have never written microservices, who are career Java Spring developers, or who are just really new to microservices; our guys were Java devs for a long time. We chose Lagom because it's a very opinionated framework for building microservices. If you're concerned that your developers have a bit of a cowboy mentality and can go build things a hundred different ways (and I love developers; that's one of the best and worst things about them), I think Lagom is something that helps you harness that. From an architect's perspective, as someone overseeing this, it gives me a lot of peace of mind that they're building things in the right way.

We run things on RHEL with the Docker runtime, which we may or may not switch to the Universal Container Runtime sometime soon. In terms of service discovery, we do a lot of in-cluster service discovery, specifically for actual addressing between services, using the built-in capabilities of Marathon and Mesos-DNS, et cetera. But we actually found that using Consul for cross-cluster service discovery is really useful. Consul gives us an opportunity to do service discovery across all the ships in one single pane of glass. I can install a Consul agent on every ship, and they all talk to each other using a gossip protocol that allows me to view every service that I have running across a fleet of 40 ships.
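To make the cross-cluster discovery idea concrete, here is a hedged sketch, assuming each ship is modeled as its own Consul datacenter (the service name, node names, and sample payload are hypothetical). Consul's `/v1/health/service/<name>` endpoint accepts a `?dc=` parameter, so a single dashboard can query every ship; the helper below reduces one response to a per-node health summary:

```python
# Hypothetical sketch of summarizing a Consul health-endpoint response.
# A real query would be roughly:
#   GET http://consul.example.com:8500/v1/health/service/daily-planner?dc=ship-allure

def summarize_health(health_entries):
    """Reduce a Consul /v1/health/service response to {node_name: all_checks_passing}."""
    summary = {}
    for entry in health_entries:
        checks = entry.get("Checks", [])
        summary[entry["Node"]["Node"]] = all(c["Status"] == "passing" for c in checks)
    return summary

# Sample shape of such a response (hypothetical node and service names):
sample_response = [
    {
        "Node": {"Node": "ship-allure-node-1"},
        "Service": {"Service": "daily-planner", "Port": 9000},
        "Checks": [{"Status": "passing"}, {"Status": "passing"}],
    },
    {
        "Node": {"Node": "ship-allure-node-2"},
        "Service": {"Service": "daily-planner", "Port": 9000},
        "Checks": [{"Status": "critical"}],
    },
]

print(summarize_health(sample_response))
```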
So it's a really great way for me, from a monitoring perspective, to go to one single pane of glass and see every service in every cluster that's running and how healthy it is, and there are even opportunities to send metadata and things like that into Consul to display along with it. So that's really been great.

In terms of dynamic property management, this is an interesting area. We tried to do a lot of the Netflix OSS things around hot-reload configuration and the like, so that we could essentially change configuration without having to restart the service, and there are a lot of really great solutions out there. I can't really get into the specifics, but we do our config management in Consul as well, because it provides a key-value store. Each one of our microservices is connected to Consul and essentially watches it for changes relating to itself. If we want to, for example, turn on a more verbose logging level, say going from info to debug, we can do it in Consul without restarting the service, which is really nice, and it gives us other capabilities if we want to change anything else within it. It really gives you a lot of flexibility and really changes the game in terms of zero downtime: I don't even have to think about a rolling deployment; I can literally go change a cell in the KV store and the service will pick it up.

In terms of circuit breakers, we used the built-in circuit breakers in Lagom. We're working on a way to essentially instrument those using Turbine and Hystrix, but we haven't gotten there yet. And I'm getting booted here, but do I have time for questions? Okay, sounds good.
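As a hedged sketch of that KV-store hot-reload pattern (the key name and URL below are hypothetical): Consul's KV API returns values base64-encoded, and its blocking queries (the `?index=` parameter) let a service long-poll for changes instead of restarting:

```python
import base64

# Hypothetical sketch: decoding a log-level value from Consul's KV store.
# Consul returns KV values base64-encoded in the "Value" field.

def decode_kv_value(kv_entry):
    """Decode the base64-encoded value from a Consul KV API entry."""
    return base64.b64decode(kv_entry["Value"]).decode("utf-8")

# Sample shape of a GET /v1/kv/daily-planner/log-level response (key name
# is a placeholder):
sample_kv = [{
    "Key": "daily-planner/log-level",
    "Value": base64.b64encode(b"DEBUG").decode("ascii"),
    "ModifyIndex": 42,
}]

new_level = decode_kv_value(sample_kv[0])
print(new_level)  # the service would now reconfigure its logger to this level

# A real watch loop would long-poll with a blocking query, roughly:
#   GET http://localhost:8500/v1/kv/daily-planner/log-level?index=42&wait=30s
# re-issuing the request with the returned X-Consul-Index each time.
```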
I'll just leave you guys here with this slide so you can see a picture of the mobile app, if you haven't downloaded it, and let's open it up for questions.

[Audience question about the connection between ship and shore.]

So honestly, the way that we have our network running, the ship network essentially extends into our data center, and for us it's just another IP. As for the way we've been dealing with it: we have done some troubleshooting in terms of latency and things like that, just to eliminate things along the way. There's probably an inordinate number of load balancers and proxies along the way that you kind of have to deal with, so one of the biggest pain points has been opening the right firewall ports to essentially get this thing flowing. But once we did, and once we had templatized it a little bit, we were able to take it from ship to ship. One of the things we're looking into, but struggling with a little bit, is configuring the right polling interval for some of the Kafka replication that we're doing, but we're getting better and better at it. I think we started with every 24 hours for some of our things, and we've knocked it down; most of our systems are real time, and our more fragile ones are down to two-hour updates, which we're really happy about. That's for data from our AS/400. But everything else that's purely running on here, like our weather service, our ship statistics, things like that, is real time across ship and shore.

[Audience question.]

Oh yeah, I personally was not doing that, but yes, we've had so many headaches.

[Audience question about data lifetimes.]

Yeah, so you probably want to segment it a little bit. There's data that is only relevant to a voyage, and then there's data that's a little longer lived. There are even things that are only specific to a single day of a voyage.
So today we show things like the current weather, sunrise and sunset, and when the gangway and the ports open and close; that's probably for that specific day. Then we have things like the activities for the whole voyage, when they happen, what products are there, et cetera, and that gets set up by our content team for each voyage. That's going to get more complicated as we start doing it on 40 ships; right now we're only on one, so that process itself will get very complicated. Then we have very long-lived data: our list of ships, our list of ports, the itineraries that each ship takes. For example, the Allure alternates between the Eastern Caribbean and the Western Caribbean every week; it always leaves on a Sunday and always comes back on a Sunday. Some of that stuff doesn't change very often, unless there's a hurricane.

[Audience question.]

Yeah. Cool. Anyone else? All right, thank you everybody.
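The data-lifetime segmentation described in that answer (day-scoped, voyage-scoped, and long-lived reference data) could be sketched as follows; the scope names and TTL values are hypothetical illustrations, not the actual Royal Caribbean data model:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of segmenting content by lifetime: day-scoped data
# (weather, sunrise/sunset, gangway hours), voyage-scoped data (activities,
# products), and long-lived reference data (ships, ports, itineraries).
SCOPE_TTL = {
    "day": timedelta(days=1),
    "voyage": timedelta(days=7),  # e.g. a weekly Caribbean itinerary
    "reference": None,            # long-lived; refreshed rather than expired
}

def expires_at(scope, created):
    """Return when data of this scope expires, or None if it is long-lived."""
    ttl = SCOPE_TTL[scope]
    return None if ttl is None else created + ttl

created = datetime(2017, 9, 17, 6, 0)
print(expires_at("day", created))
print(expires_at("reference", created))
```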