Hello, welcome to The Cloud Multiplier. We're coming to you live with today's episode, episode three. Today we have the pleasure... I should say, I'm Gurney Buchanan, your co-host, joined by my other co-host Joydeep, and we're honored to have two guests from the Submariner project today, Nir and Stephen. You already know myself and Joydeep. Stephen, Nir, do you want to introduce yourselves? I'll go top to bottom: Stephen.

Right, thanks for having us. I'm Stephen Kitt, one of the developers on the Submariner project.

Awesome.

Hi, thanks for having us. I'm Nir, and I manage the Submariner team over at Red Hat.

Awesome, thanks for joining us. So we'll get into all things multi-cluster networking here in a bit. The first thing, which I admitted while we were waiting before the call started: I did not take a networking class in college, so I'll be a very good user here. I want somewhere to make things simple, because I don't want to have to learn all of this fun networking; I'd rather it be figured out for me. So y'all are the experts; I think we'll have some fun there. But first, as always, we have top-of-mind topics. I have stolen that name; I think I came up with a new name in the last stream and then forgot it, so we're back to this one. I'll start off, because I actually gave Joydeep some prep time this time on what he wanted to bring up. My past couple of weeks have been dominated by an incredibly fun time understanding how every CI system holds an unbelievable amount of power, because you have to put credentials into it. If anyone else saw it: Travis CI has continued to leak sensitive secrets that you have in jobs, and we used to be a Travis CI shop. We still have some stuff that we ship in z-streams for the Red Hat Advanced Cluster Management project that's on Travis. So every time I see a news article in my feed that says Travis has been leaking your keys once again... it's always a fun leak.
So I came back from vacation to that notification. Joydeep, I'll message you after, so you can rotate all of your keys as well. I don't think any of yours were actually leaked via Travis, but as always, I'm reminded of the need for ever-increasing security, and of the fact that I should probably learn more about HashiCorp Vault. That and Sealed Secrets, I think, are two of the more interesting projects. The one I've hacked at lately is Vault; seeing as we operate a bunch of Kubernetes clusters, it seems to be a pretty cool piece of tech. We can talk about that some more later, Joydeep; we might actually have someone on to talk about that at some point. But Joydeep, you had a book that you wanted to talk about.

Yeah, yeah, and let me tee off by going after what you were telling us, Gurney. We have more and more customers talking about HashiCorp Vault, and indeed in today's scenario, with all the security concerns, the last thing you want is your secrets falling into the wrong hands. So that is a very hot topic, and I guess we can have folks on to talk about that. I think in the last one, Gus kind of referred to it when we were talking about policies; he touched upon that. But on the book, yes.

I think the big one is being able to rotate your secrets, because you should just assume that they're going to get leaked eventually. So if there's one button that just rotates them, and then everyone gets the new ones, that's a really big headache saver.

As a layman, Gurney: encryption is all about changing as fast as you can. It's mathematically possible to hack it, but you change it before they can figure out what it is. That's the game, right?

Exactly. The book... The Book of Why, I think it's called The Book of Why. It's written for lay people, by a UCLA professor (Judea Pearl) who won the Turing Award, the Nobel Prize for computer scientists, in 2011. It can be read by anybody.
It's a fantastic book that's all about causation: how you build a causal model. It is fantastic. It talks about how philosophers have treated this subject, how statisticians have treated it, how computer scientists and machine learning folks have treated it. And this is coming from somebody who is at the top of the field, right? So he's explaining it to you, and you just enjoy it. There are things you can understand, and there are things I cannot understand without reading them ten times. But the other interesting thing is that it at least makes me think: hey, all of this data we have, and the way we are trying to extract meaning out of the data... is there a different, more robust way, perhaps, which we could use? It's that kind of stuff that gives you enough thought in your brain, and it's interesting. So it's slightly more than "it tells you how to ask why" by asking why five times.

Yeah, yeah. There are three levels; his goal is to build a machine that will pass the Turing test, and how do you do that, right? The first level is prediction: can we predict? The next level is intervention: what would happen if I did this instead of that? The third level is counterfactual: what would have happened if I hadn't done it? That information is not there in the data. So how can you still, without that data, have a model and infer these things? I would recommend anybody to read it.

Oh goodness. I will add that I have it pulled up on Amazon right now, which is... it suits me to have already pulled it up on Amazon. Anyways, that sounds wonderful, Joydeep; I'm going to have to give that a try. As always, open floor: Stephen, Nir, any fun open source projects? Have you been reading any good books, any good documentation?
Okay. If you think of something later, let me know. But behind the scenes, I have been working with Nir and Stephen for the past week to get a new build of Submariner ready to go out, and a revision to an old build. So it has been a very busy time for them. I think that segues in; go ahead.

Yeah, so I just got this one. You were talking about not knowing anything about networking. I haven't read it yet, but this is a book about the company I started my career in, 3Com. Everybody's forgotten about it now, but they were a big networking company 20, 30 years ago, founded by one of the inventors of Ethernet.

Okay, I have heard of 3Com, but I haven't heard anything beyond the name, I don't think.

Joydeep, I think I know... 3Com may have been... I started my career at IBM, so it may have been one of my colleagues there, because I know some people from the mainframe division. So that might have been a networking-related product, come to think of it. Anyways, that's really cool. What's the book called?

It's just called 3Com: the 3Com story.

Okay, that's very interesting.

"The unsung saga of a Silicon Valley startup."

Yep. Yeah, one of the first unicorns, before "unicorn" was a term for startups, and it all just went down the sink.

Wow, that sounds like a story.

Stephen, it seems that we must give a refresher about the seven layers or something, to bring folks up to speed. Gurney has never done it, and me, whenever you talk about that, I have to think: okay, what is layer seven, and what is layer four? It doesn't come automatically. So, a refresher like that. This is the first time we're bringing networking into The Cloud Multiplier, and it is important; it is one of the key backbones, come to think of it.

Yeah, it is. Okay.
Give us the networking primer; set the stage for us.

So, if we talk about just the layers: the idea of networking is that you stack layers, and there are different models. TCP/IP has one model; there's the OSI model, which is supposed to be an official standard, and that's the one everybody reasons about. The bottom layer is the physical layer, which in the case of Wi-Fi doesn't actually exist, at least not as a wire. Then there's layer two, which corresponds to MAC addresses; that's sort of the link layer, that's how you get data from one device to another. Then there's layer three, which is interconnecting networks, so you start routing things here, and obviously that's where the Internet starts. "Internet" means inter-network, so it's all these networks that are connected together. Then you add layer four above that, which is when you start to have structure in your communications. This is where TCP lives, so you have long-lived sessions which are more than just... you know, UDP jokes, where you don't know whether anybody gets them; with TCP jokes you actually do know if the recipient gets them, because there's two-way communication with handshakes, and there are whole state diagrams with state transitions, so everybody that's involved in a communication knows where the others are at, sort of thing. And then you add layers on top of that. In OSI, I can't remember what five and six are, and we tend to talk about seven, which is the application layer; this is where things like HTTP come in, where there's actual content that has semantic meaning, let's say. And in the cloud, this is where... well, I wouldn't say most of the interesting things happen, but it's what you ultimately care about. If you want your communications to get through, you need everything to work up to layer seven, and you can figure out most of what's happening just by looking at layer seven,
and you can, in theory, change a lot of what's happening just by acting on layer seven.

Okay. So if we want to get into Submariner: Submariner's specificity, really, is that it's a layer three project.

Interesting, okay. So it gives you that solid foundation where you can go back to not worrying about the layers below you, and start thinking about that nice high-level layer where you're working with full data and more constructs.

Yeah. So it's important because... well, the reason I mentioned that Submariner is layer three is that when you're working at a certain layer, anything that's built on top of that will work transparently. So if you're acting at layer three, then anything at the layers above will work on top of whatever you're doing, without noticing that there's a difference. If you work higher up, then things that rely on specifics below you won't work. That's why, for example, if you've only got a layer seven proxy on a network, say an HTTP proxy, you can't use that for anything below that. So you might have an HTTP proxy, but you can't use it for anything that doesn't go over HTTP; so your games might not work over it, or IRC, or something like that.

Yeah. Okay, okay. That's very good framing; that filled in some gaps for sure. Let's see... so Submariner's at layer three.
So, I guess, to frame the problem better: Submariner, from my very brief reading, seems to be a solution for the problem of: now I've created a heterogeneous, highly scalable, varied, everywhere network landscape, and I need to run an application across more than one of those, and I need to be able to communicate in some reliable way, with a nice low-level interface, or a nice powerful interface. And that's where Submariner steps in, I guess. Here's the point where I'm curious what Submariner's biggest use case is: where do people put Submariner?

Sure, maybe I can take that. So yeah, that's exactly what we're trying to solve. Obviously we are seeing users and customers deploying all sorts of Kubernetes clusters all over the place, right? It could be on-prem, or on each of the major public clouds, or any combination of these, and what we are trying to do is really interconnect them directly. So, as Stephen said, we basically go to this layer three, L3, foundation, and at the infrastructure layer just interconnect the clusters. In terms of use cases, maybe I can share my screen, because I have a bunch of very interesting ones, and there's one that fits in nicely with the Vault story from earlier.

Oh, okay, mention that one. Also, we can get this out of the way really early: we already have the Submariner pronunciation question in chat. Correct me if I'm wrong; I was reading the docs earlier, and it looks like both are fair game, you can pronounce it either way, right?

Exactly, yes. We follow, what is it, the Jon Postel principle: we're liberal in what we accept.

Okay. Yeah, there we go.

Yeah, so I guess the most basic example of our use cases is just to interconnect components of the same application across different clusters, right?
And we actually have a demo of this exact use case later on. So here in this slide, what you can see is cluster A over on the left side, where there's a database component and a front-end component, and then over on cluster B, on the right side, there is just this front-end component. As you can see, in order for the front end on the right side to connect to the database on the left side, you just need some kind of secured, direct, VPN-style connectivity between the clusters, right? So that's the very basic use case.

And then the more interesting one: this is something that we did with CockroachDB, which is a very popular cloud-native database. Here, again, as you can see, we have three different clusters in this particular diagram, and they are in different regions. Think of it like, for example, all of them are running on AWS, but just in different regions of AWS, for availability reasons. CockroachDB, as an application, is composed of different pods, and it's simply a requirement for those pods to be able to talk directly to each other, to communicate with each other, in order to form this massive, distributed, highly available database, right? They need this very secure, low-latency connection between all the components, all the pods, and this is where Submariner comes in, to just offer this interconnectivity, right?

And one other use case, which is quite popular in the context of Submariner, is disaster recovery. This is something that we are actually delivering together with the Red Hat OpenShift Data Foundation team; that's what used to be Red Hat Storage earlier. This is where, over on the storage side, they have all the fancy disaster recovery feature set, with volume replication and whatnot, but in order for this to actually work, they need the underlying infrastructure, the underlying connectivity between the sites, right?
And again, this is where Submariner comes in, to offer this L3 infrastructure, and then the replication just works on top of it, right? So these are the types of use cases we are currently supporting and hearing about most, but there are definitely others.

And Nir, obviously you mentioned that you provide low latency, which is very critical for replication, for, let's say, CockroachDB and stuff like that. So does Submariner give any indication, publish any metrics, on what kind of latencies are being experienced, and stuff like that? Is there any way to back up the claim that "hey, I'm a low-latency connection"?

Yeah. So first of all, one of the benefits of Submariner being an L3 solution is the fact that we rely on the kernel, so we really leverage and benefit from the whole kernel networking stack. I guess from a connectivity perspective, again, we rely on RHEL, and Linux, and the kernel. And then, yes, we do have some health check information that tracks the connectivity between the clusters, and we even show the latency, like the live latency numbers. Stephen, I think you have it in your demo if you want to show it.

Yeah. By the way, we have got a question there.

Yeah, I was literally about to say, we do have a good question.
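The health checks and latency numbers Nir mentions can also be inspected from the command line. A hedged sketch, assuming subctl is installed and the current kubeconfig points at a joined cluster (the exact output columns may vary by version):

```shell
# Show gateway-to-gateway connections for this cluster,
# including status and measured round-trip time:
subctl show connections

# Run the broader diagnostic checks (connectivity, gateways, CNI support):
subctl diagnose all
```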
The first question, and I'll pop it up on screen as well: where is this supported? Can multi-cluster networking work with non-OpenShift, and then, I guess this is a couple of questions: non-OpenShift, non-k8s... so, oh goodness, non-Kubernetes, and pair those with OpenShift?

Yeah. So, from an upstream community perspective, Submariner, the project, supports any Kubernetes clusters. There are some restrictions as to which CNIs can be used, and we don't necessarily test all the possible combinations, because there are far too many. But the only requirement is that there's one shared cluster that everybody can access, and we call that the broker. That's where all the data that's used to synchronize between the clusters lives; it's right there on screen, it's the top cluster there. So all the information that Submariner needs to share across clusters lives there, and any cluster that can access that Kubernetes API endpoint can join what we call the cluster set, which is all the clusters that are working together. And so, just using Submariner upstream, you could have OpenShift on one side and any other Kubernetes implementation on the other, and connect them, and mix and match; you can have more than two clusters, and they can all be different kinds. As long as they can talk to the broker, and there's some way of getting them to talk to each other, it'll work. Now, from a product perspective, a Red Hat product perspective, we ship Submariner inside Advanced Cluster Management for Kubernetes, and that only supports OpenShift clusters currently.

Okay, that's a good answer.
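The broker workflow Stephen describes can be sketched roughly as follows; the kubeconfig file names and cluster IDs here are illustrative assumptions, not taken from the stream:

```shell
# 1. Deploy the broker on the one cluster that every participant can reach:
subctl deploy-broker --kubeconfig broker-kubeconfig

# 2. Join each cluster to the cluster set, using the broker-info.subm
#    file that deploy-broker writes out:
subctl join broker-info.subm --kubeconfig cluster-a-kubeconfig --clusterid cluster-a
subctl join broker-info.subm --kubeconfig cluster-b-kubeconfig --clusterid cluster-b
```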
Thank you for hitting that, Stephen. Now that we've interrupted Nir, we're hopping over to, I think, the demo cluster, because I'm very curious to see this multi-front-end, one-database setup. You also brought up Data Foundation, which is another group of people I need to bother to be on the show, because they have some really cool pieces of tech over there. And we also have the VolSync folks, who I know you've worked with.

Yeah, yeah. So we don't have anything as interesting as that up in the demo, but just to take a look: this is ACM, Advanced Cluster Management for Kubernetes, and on this screen we can see that I've got two clusters that are connected together; one of them is on AWS, the other's on GCP. From this screen we can get all sorts of information about the clusters themselves, how to get to the OpenShift console, and so on. We can also see here that we've joined them together in a cluster set, which is imaginatively called "submariner", and we get a quick health check there, the number of clusters, and we can drill down to get more information here. So we get the connection status between the two clusters, and the status of all the Submariner components that are involved. Everything's green here, so it's not saying very much, but if something was wrong you'd get a pop-up telling you exactly which component was wrong. And the node labels part: this is because Submariner can't know, on its own, which parts of a cluster it can use to communicate with the outside world. So we rely either on the administrator labeling one or more specific nodes that we're going to use as gateways, or we rely on Submariner setting up a specific gateway. That's what's been done here: Submariner is capable of going and talking to AWS, GCP and a few other platform providers to actually go and set up specific gateway nodes.
So if you're not careful, Submariner might run away with it, and you might end up with a surprising bill at the end of the month. No, that's just a joke. So, once you've got the nodes labeled and everything set up correctly... like I said, there's nothing special really running here. One of the underlying principles of Submariner, which it inherits from what's called the Multi-Cluster Services specification, from a Kubernetes SIG which publishes a spec describing, well, not really how to connect clusters together, but how to provide services across multiple clusters; that's the API that Submariner implements, and it's all service-based. So if we go and have a look at the services on my two clusters: I've got my AWS cluster here and my GCP cluster here, and I've created an nginx-test namespace on both, and it doesn't have any services currently. The reason I did that is that one of the basic principles of multi-cluster services is that it's namespace-based: the idea is that a namespace that exists on multiple clusters is expected to offer at least the same services. So what I'm going to do now is set up nginx on one of my clusters only. I'm going to do it here: on my AWS cluster, I create a deployment called nginx, using the nginx-unprivileged image. Next, I create a service for it, using an nginx-svc.yaml file which I prepared earlier, and you can see it appear straight away here on the AWS OpenShift console, but there's nothing on the GCP console yet. We can check using kubectl as well that things have happened: there's a service, it's got a cluster IP, and it's on port 8080. We can ask kubectl to describe it, and we get the same information back. And now this is where Submariner is going to come in.
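The demo steps so far, creating the deployment and the service on the AWS cluster, might look roughly like this; the image tag is an assumption, and the `expose` command stands in for the nginx-svc.yaml file, whose contents aren't shown on stream beyond the port:

```shell
# On the AWS cluster: create the namespace and the nginx deployment.
kubectl create namespace nginx-test
kubectl -n nginx-test create deployment nginx \
    --image=nginxinc/nginx-unprivileged:stable

# Expose it as a ClusterIP service on port 8080
# (the rough equivalent of applying nginx-svc.yaml):
kubectl -n nginx-test expose deployment nginx --port=8080

# Verify the service, its cluster IP, and its port:
kubectl -n nginx-test get service nginx
kubectl -n nginx-test describe service nginx
```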
At least one of the layers of Submariner, that is. There are really two aspects to Submariner. One is the network connectivity, and that's available all the time: as soon as two clusters are connected, their networking is shared, so at the IP layer all the clusters become accessible to each other, and one pod in one cluster can talk to another pod in another cluster using its IP address. But the layer on top of that is the service layer, and nothing happens automatically at this layer. So we need to tell Submariner that we want to export this service. We've got the nginx service in the nginx-test namespace, and we export it using a command called "subctl export service". subctl is a small tool that the Submariner project provides; it's a utility in the kubectl style which simplifies all the Submariner operations, really. You can use it to set up your broker, you can use it to connect clusters together, you can use it to export and unexport services, and you can also use it to run diagnostics, to gather a whole load of debugging information. You can even use it to run all the tests that we run in CI: we package them all up and make them available in subctl, so people can run them on their own setup if they want. And so, the way this works: we export the service, and this creates a new object on top of the Service object that kubectl users will be familiar with. This object is called the ServiceExport. You can see here it's using a multicluster.x-k8s.io API group, so this is not a Submariner-specific object; it's part of the multi-cluster standard. It's a ServiceExport object, it's called nginx, the same as the service, and it's in the same namespace as the service it exports. And you can see here what happened to it. At first,
well, it exists, but it doesn't have a corresponding global IP. Then it gets synchronized to the broker: I created it on the AWS cluster, but it's been sent to the broker, and the last status says it was successfully synced to the broker. So at this point the broker knows about it, and it's been made available to other clusters. And you'll notice here it doesn't use quite the same name in the logs: ServiceImport, not ServiceExport. I created a ServiceExport, but what's actually synced is a ServiceImport, and we can go and look for it on the GCP cluster. I have another kubeconfig file, which points to the GCP cluster, and we can see it shows up in a bunch of different ways. This is how services are exported from one cluster to the other: one cluster exports them using a ServiceExport object, that creates a ServiceImport object which gets propagated to all the clusters that are joined together, and the ServiceImport object is then used by DNS in the receiving clusters to make the service available. We can check that by running a test pod in the GCP cluster. Just to show people that there are still no nginx services on the GCP cluster itself, so nothing's happening locally in GCP; but thanks to Submariner, we can access the nginx service that lives in the AWS cluster, and we can do that using a new domain. People might be familiar with cluster.local; in a multi-cluster service scenario you use clusterset.local instead, and that gives access to a service wherever it lives in the cluster, or rather in the cluster set. We can also retrieve information about it using dig, from DNS, and that tells us it lives on the other cluster. So, trust me on this one:
this is an IP address that maps to the AWS cluster, not the GCP cluster.

Yeah, and it's important to highlight that this is really relying on the underlying connectivity that Submariner provides, right? Without the IP, the layer three connectivity, DNS wouldn't be able to resolve this IP address. So Submariner really provides this base IP connectivity, and then also the implementation of this MCS API, and DNS.

This is actually awesome. So to summarize, if I can play back what both of you stated: if I need to connect services that are running across different clusters, the first thing is to bring those clusters together, establish the cluster set, and make sure they are all connected. Then, as a developer, I create my stuff in one cluster, and somebody, a consumer, or maybe another part of me, creates other stuff on another cluster, business as usual. If they want to talk, I just have to create ServiceExport objects on top, and then each one of them can consume. Fantastic.

Yeah, that's right. And another thing to highlight is the operational model here. The connectivity part is really meant for the admin, right, the SRE type of person: they are responsible for bringing up the cluster set, bringing up the fabric and interconnecting the clusters. And once the cluster set is up, and the network connectivity is there between the clusters, each and every application developer can just export their services. So this is the operational model of Submariner.
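The developer-facing workflow just summarized, export on one cluster, consume from another, can be sketched like so. The subctl command and the multicluster.x-k8s.io ServiceExport object are as shown in the demo; the test-pod image is an assumption:

```shell
# On the exporting (AWS) cluster: publish the service to the cluster set.
subctl export service nginx --namespace nginx-test

# Equivalently, create the MCS ServiceExport object directly:
kubectl apply -f - <<EOF
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx
  namespace: nginx-test
EOF

# On a consuming (GCP) cluster: the service now resolves under
# clusterset.local, even though no nginx service exists locally.
kubectl -n nginx-test run tmp-client --rm -it \
    --image=registry.access.redhat.com/ubi9/ubi -- \
    curl -s http://nginx.nginx-test.svc.clusterset.local:8080
```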
Yeah, it's a very clear separation of responsibility, no ambiguity, and the developer can live within his or her comfort zone and not bother about the underlying things. And it's fast.

I was about to say: I can see this being very potent in a world where you have a complex, microservices-based application. I know people who work in financial tech, for example, and their application front end needs to talk to 12 different services that are hosted in a bunch of geos, on a bunch of different clusters. And this is how you achieve that sort of transparent interaction with a series of other services, for the consumer of multiple services that in themselves provide another service.

Right. Gurney, you mentioned complex microservices, and the thing that came into my mind immediately was service meshes. How do you work with service meshes?

Yeah, that's a complex question. So, Submariner isn't a service mesh; let's get that out of the way first. But it can work with service meshes, although this is an evolving space; it's all moving fairly rapidly. For example Istio, which is perhaps the better-known, let's say, if not the most popular, service mesh implementation, can actually piggyback on top of, well, any MCS provider, so any multi-cluster services provider. Istio on its own can provide multi-cluster connectivity and a service mesh across clusters, but if you wish, you can set up Submariner and tell Istio to delegate its connectivity and service handling to Submariner.

Got it.

I guess one thing to highlight is that the main focus of Submariner is really the reachability, the connectivity aspect. If you look at service meshes, they are really targeting a different feature set, like observability and traceability and routing and load balancing and whatnot.
So We are trying to to like keep submenu really, you know focused And then as stiffen said, um, if you want to run something like istio Across multiple clusters. Uh, we can take care of the connectivity run. So Right, so you can run like istio on top of interconnected kind of clusters. Um And we actually have a blog post showing exactly that so we set up submenu and then running istio on top And istio from our perspective is just an application, right? So we talked about layers before We are the layer tree We we provide this infrastructure foundation and then from our perspective from submenu perspective istio is just an application You know trying to leverage or like connect different endpoints And uh, yeah, I can I have the link to that blog so we can share it I was about to say if if you share that I can I can send it out I think steven you're probably working to answer the question I was about exactly Yeah, louis' question in the chat. So, um, are there any case customer resources related to the engine x deployment in the gcp cluster Or only in the aws cluster. So there are But I guess the underlying question is did the user have to do anything in the gcp cluster To get this to work and the answer to that one is no, but I thought maybe it's worth looking at the objects that are actually involved here Um, so this is on the aws cluster and I mentioned service export and service import and we can see them here the crd so there's Service export here and you can see two service import crds because there's a legacy one from before the multicluster sig Implementation, which we used but we don't no longer use that one. So if we look at the service export So this is what I created manually on the ws side of things We have one of them. 
So, that's the object that I created, and that's all I did. But then a component of Submariner called Lighthouse, which is why this is called "lighthouse" here, saw that I created the ServiceExport and automatically created a matching ServiceImport, and it did so in two different namespaces. It did it in the operator namespace, which is where our own objects live; so ignore these, they're old ones. These two are the ones that were created as a result of my actions during the demo. So, the operator one, and then it got pushed to the broker. And from the broker... I'll go over to the GCP cluster now, down into the CRDs again. We'll just check, first of all, that there are no ServiceExports; this one is in another namespace, and it's a bit older, from other tests that were done on this cluster, but it's not what I was using, and it's got a different name. On the ServiceImport side of things, we have this one here, which is the one resulting from my export; it was automatically imported from the broker, so from the AWS cluster, and that's all that happened. And then the DNS: we've got a CoreDNS plugin that runs on all the clusters that are joined together. The CoreDNS plugin running on the GCP cluster sees this ServiceImport and maps the IP address that corresponds to the import to the service name, and that's how it all works. So I didn't do anything on GCP apart from ensuring that the namespace exists; everything else is taken care of for us by Submariner. That means there's a great deal of flexibility that happens transparently for the user. You just choose to export the service in the cluster that has it, or the clusters that have it, and it becomes available across all the clusters that are joined together, transparently. This means you can also move services around from one cluster to another, or make them available in multiple clusters.
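The objects Stephen walks through can be listed with standard kubectl commands; a hedged sketch, using the namespace from the demo:

```shell
# The MCS CRDs that Submariner relies on:
kubectl get crd | grep multicluster.x-k8s.io

# On the exporting cluster: the ServiceExport the user created...
kubectl -n nginx-test get serviceexports

# ...and, on any joined cluster, the ServiceImports that the
# Lighthouse component created and propagated automatically:
kubectl get serviceimports --all-namespaces
```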
So for example, I could set up the nginx service in GCP as well, and then — when it becomes locally available, Submariner will prefer the local version for latency reasons, but if we had a third cluster in the demo, I could run a test and you would see it round-robin between the two. So there's distribution, and that's hopefully going to improve at some point in the future as well. There's work going on in the SIG, in the community, around all this, so that you can have metrics that will allow you to prefer services in one cluster over another — if, for example, you've got bandwidth costs that vary between clusters, or you have latency requirements for your services. But it also enables things like failover. For example, if you have one service that's available in multiple clusters, Submariner will actually check regularly which ones are available, and if it notices that one of the clusters is no longer reachable, it will automatically stop offering it as an endpoint for the service. So all the other clusters will stop trying to talk to it, and you'll automatically fail over to the clusters that are still available. — That was what I was about to ask: it sounds like this is very applicable in a failover scenario, or allows you to run those services in almost a pseudo-HA way across clusters, across regions, across geos. And this is just because I'm, I guess, the penny counter when it comes to our cloud platform utilization, but I happen to know that different regions of AWS have different costs, and you were talking about differing network costs and different metrics that might in the future affect which service was used more.
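The round-robin and failover behavior described here can be observed from the command line. A rough sketch, assuming the `nginx` service from the demo has been exported from two joined clusters whose cluster IDs (hypothetical names) are `aws-cluster` and `gcp-cluster`, and that `dig` runs from a pod inside one of them:

```shell
# With the same Service exported from multiple clusters, the clusterset
# name resolves across the healthy clusters:
dig +short nginx.demo.svc.clusterset.local

# Lighthouse also lets a client pin a specific cluster by prefixing its
# cluster ID to the service name:
dig +short gcp-cluster.nginx.demo.svc.clusterset.local

# The inter-cluster connection health that drives the automatic failover
# can be inspected with subctl:
subctl show connections
```

When a cluster's gateway stops responding to Submariner's health checks, its endpoints are simply no longer offered for the clusterset name, so clients fail over without any configuration change.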
I assume that's in the same vein — it costs us more to heavily use the service in this region than another, so we're going to bias towards using the cheaper region. — Yeah, and that's one of the use cases that are possible with Submariner. Nir has a nice slide with, I think, eight of the main use cases that we care about, and one of them is expenditure: you can reduce your costs by moving compute to cheaper regions, and you can reduce your costs by reducing the amount of network transfer that you use. And obviously there are also other scenarios: you might have data with legal ramifications — that's important for GDPR in Europe, or equivalent laws in California or Brazil and so on — so you want to ensure that your data stays in one place, but you might be able to build services on top of it that can be made accessible to other regions. — Okay, that makes a lot of sense, and I can also imagine that a lot of people have data warehousing on premises, or to that effect — where we're hosting some bare metal somewhere, we have a huge server running some database, and we're not moving that to one cloud platform. So they might span it across a couple of cloud platforms with CockroachDB, or they might just have everyone connect to that one source. I can imagine Black Friday is probably a very busy time — having been on AWS during Black Friday, I can imagine wanting to scale up. That's incredible. We did have a small pivot question before we forget the service mesh discussion. We had a question: with Submariner, can we avoid setting up service mesh federation?
I don't know — Joydeep, Steven, or Nir, if you can tag-team this one, or if we have a good answer for it. — Again, I'm not a service mesh expert, but I'm not sure you can avoid federation, because that's more for the control plane. But I guess you can avoid using, for example, Istio gateways, or those data path components that interconnect Istio. — Yeah, so you can avoid that layer. — Exactly. And Nir, back to the thing you were stating earlier: service meshes have a different goal in life. They serve a lot of purposes; with Submariner, we are focused only on the network connection — the, what's it called, east-west... I keep forgetting. — Yeah, east-west, that's right. — East-west connectivity. Submariner can provide the underlying layer for that, but federation, if required for other reasons, would still be required. — And Steven, one of the things you mentioned, I didn't catch it actually: what about the CNI inside the cluster? Is Submariner completely impervious to that, or do we have some limitations, or does it generally work? I think you said something and I missed it. — Yeah, so Submariner tries to be CNI-agnostic. There are, for example, other multi-cluster solutions like Cilium — Cilium has lots of other advantages that can be interesting in some scenarios — but one of the big constraints with it is that it is a CNI, so it replaces your CNI, whereas Submariner piggybacks on top — well, it doesn't try, it piggybacks on top — of whatever CNI you're using. Whether it ends up working or not depends on the CNI and whether we've tested it and so on, because there can be specifics. The way that works is that Submariner acts in two different ways. Its first task is to create connectivity between the clusters.
To do so, it opens a tunnel between the chosen gateways in each cluster. Remember I said you had to label gateways inside each cluster — one of those gateways will be chosen in each cluster, and Submariner will open a tunnel between those gateways. You can use a variety of technologies there: it can use IPsec, leveraging Libreswan; it can use VXLAN if you don't want to add encryption — if you trust your underlying network, for example; it can also use WireGuard. And it's got a plugin architecture, which means it would be fairly easy to develop new — what we call cable drivers — for this. Once the tunnel's up, obviously it has to get traffic through the tunnel, and to do so it adds iptables rules on all the nodes inside your cluster. So this is acting beneath the pod networking layer, in Kubernetes terms, so that all the traffic that's generated on one of your Kubernetes nodes and has to go to a node in another cluster ends up going through the tunnel. And because it's using iptables rules and not modifying or configuring the CNI, it can work with any CNI, as long as it fits in with the iptables setup, really. That's where the testing comes in, because obviously if a CNI has an approach that doesn't work with the way we're doing things, then we won't know about it, so we can't handle it. But if we do end up knowing about it, then hopefully we can change Submariner relatively easily to make it work. — Go ahead. — Yeah, to add to what Steven said: first of all, the goal of the project is to really be compatible with as many Kubernetes providers and CNI providers as are out there. I think for CNIs which are based on kube-proxy and iptables, it pretty much just works out of the box, but with other CNIs we may need to do things differently.
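The setup Steven describes — a broker as the shared sync point, labeled gateway nodes, and a tunnel whose technology is chosen per cable driver — roughly corresponds to the `subctl` CLI flow. A minimal sketch, assuming two kubeconfig file names (`cluster-a.kubeconfig`, `cluster-b.kubeconfig` — hypothetical) and default settings otherwise:

```shell
# Deploy the broker (the shared sync point) on one cluster; this writes
# a broker-info.subm file used when joining the other clusters.
subctl deploy-broker --kubeconfig cluster-a.kubeconfig

# Join each cluster to the broker. The cable driver selects the tunnel
# technology: libreswan (IPsec, the default), wireguard, or vxlan
# (unencrypted, for trusted underlying networks).
subctl join --kubeconfig cluster-a.kubeconfig broker-info.subm \
  --clusterid cluster-a --cable-driver libreswan
subctl join --kubeconfig cluster-b.kubeconfig broker-info.subm \
  --clusterid cluster-b --cable-driver libreswan

# Gateway nodes are selected by label; a node can be designated with:
kubectl label node <node-name> submariner.io/gateway=true
```

Everything below the tunnel — the iptables rules on each node that steer cross-cluster traffic into it — is installed by Submariner itself, which is why no CNI reconfiguration is involved.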
And this is where we are listening to our users and trying to make sure that we support what is more popular and what users are asking for. For example, we did add support for Calico explicitly, and also for OVN. — And that kind of lets you tame... rather than the two ways to tame heterogeneity, which in this case seem to be "replace" or "adapt" — this is adapting to what you have running already. You don't need to change anything; we're not going to cause any headaches — hopefully, if we've tested it, we're not going to cause any headaches. That's incredible. — Yeah, but that's a big design, or architecture, choice that we made, because we could have built a new CNI, but then you'd have to use the same CNI across all of your fleet, all of your clusters, in order for this to work. With Submariner, you can actually mix and match different CNIs, and we've heard about this as well from users trying to migrate from one CNI to another, or all sorts of environments — and this is what we really want to support. Also, if you want to interconnect, for example, EKS and AKS, or things like that, you're pretty much running with different CNIs out of the box.
All right, so you need something which is above the CNI layer to solve this problem. — Yeah, once you reach managed Kubernetes, the managed service domain, you have a little bit less choice and control over the CNI that you're using, so you need to be one layer higher. — That is an incredible point, and that also, in many cases, probably helps a lot with on-premises and cloud deployments as well, and some of that translation, right? It's a theme for distributed cloud things, where we're trying to tame the chaos of the heterogeneity. We have, basically, an Intel NUC strapped to a cell tower somewhere, and we need to be able to have that talk to a service that's running on a cluster — this is Joydeep's domain — running on a cluster in, you know, a data center somewhere. That's incredible. Let's see... I had something on my mind and it has gone away. Ah, yes — before I forget, I have to ask the question. I put a link out earlier to the Submariner project and the Submariner project's docs, but the two questions I have to ask are: first, where can we find you, and where can the community participate? I always make sure we highlight that. Is it just the Submariner project site? Is there a chat they should join, or a CNCF page they should go to? — Yeah, so first of all — I guess we haven't said this — Submariner is fully open source. We are on GitHub; all of the code is on GitHub, including the testing and documentation, so everything is completely open. I guess the most common or popular way to reach out to us is on the Kubernetes Slack: we have the Submariner channel, which is linked there on the left side. We also have a user mailing list and a developer mailing list, but I think Slack is just the most popular one. And of course, we welcome contributions.
We are very proud of our user community. We have a bunch of users trying Submariner and using Submariner on different Kubernetes providers, and we are trying to keep direct communication with them, which is awesome. We also have a nice developer community outside of the Red Hat crew, so that's very nice to see, and we encourage everyone to talk to us and reach out with any questions. — Awesome. I just sent the link in chat, so if anyone's curious, the resources should be there, and I know Nir and Steven will be there to say hello. Were either of you there for the naming of Submariner, though? Where did the name come from? I don't know how long you've been working on the project. — Yeah, so we joined the Submariner project after it was created. It was created by a Rancher engineer, and he chose the name following the sort of general Kubernetes nautical theme. But it's not a Greek name, because it relies on a concept that didn't exist back in ancient Greece, which is undersea cables. They don't get laid by submarines, but this was the general idea: Submariner provides the undersea cables to connect clusters together. — Okay, that's clever. I've been curious about that for a while. That's incredible, and also really interesting to hear. Go ahead, Joydeep — sorry, finish your thought. — Oh, I was just going to say, it's really interesting to hear that you joined the project after the name, but you've had to tell the story — you've inherited the story. — Yeah. — Yeah, I was going to say, that's what's great about open source, really: a project isn't necessarily tied to its creator, and lots of different people from different communities and different companies can work on it. — Yeah, amazing. That's wonderful. Joydeep, did you have another question?
I had another question, but before that, talking about names, Gurney — I heard that you gave yourself a name now; I'll remember it: penny watcher. You know, left to us, we would create clusters left, right, and center all over the place and burn all our budget. Gurney is the one who keeps watch: "Hey, you're consuming too much; you have too many clusters there, it's costing this much money." So he is really the penny watcher for us. — You're priming us to have the cost management folks on at some point, because we're three presentations deep and we've taught people how to make a bunch of HyperShift clusters — spoiler alert, hopefully we'll teach people how to make a bunch of MicroShift clusters soon — and then you network them all together and you make them all compliant. But now you have a lot of hardware, so that's a fun and interesting problem. — Yeah, but the question I was going to ask — I was thinking about this: have you run into situations where people have clusters, let's say, in Amazon, and clusters on-prem? Have you gotten into debates where people ask, "Hey, do I use the Amazon VPN solution, or do I use Submariner?" And I do realize the VPN solution would connect many more things than just clusters, right? — Yeah, that's a great question. So the hyperscalers offer their own set of VPN technologies and VPN services, but typically they are limited to the same provider. So if you want to interconnect different regions of AWS, then you can do it with the AWS VPN service, but when you want to connect to a different public cloud, this is where it becomes more challenging. Also, you need to remember that those VPN services are really only concerned with connectivity.
They don't implement the MCS API or the service discovery layer that Steven showed earlier. So we try to provide not only connectivity, but also the service layer. But yeah, we're all about choice, right? So if you can use a different service and it works for you, then go for it. — Yeah, sure. And I guess, especially — like you were saying, Joydeep — when you get to that heterogeneous deployment we talked about earlier, you end up with "I can't use the AWS solution in these three places, so now I need to engineer my own solution, or use something that's more agnostic." That makes perfect sense. But yeah, let's see — oh, the other community question that I had thought of: how did you get to the Submariner project? It sounds like Steven's been in networking land for just a little bit, I would guess, but I don't know how you happened upon it. I'm always curious how you found a community. — Yes, I can't actually remember how we found Submariner itself — it might have been a colleague who came across it — but the story of how, as a team, we ended up working on Submariner is that before that, we used to work on a project called OpenDaylight, which is still sort of alive, but not quite — a big framework to build software-defined networks. We decided, for a variety of reasons, to stop working on that, and so then, as a team, we looked around at what would be an interesting space to start working in, and multi-cluster connectivity came up. So we had a look at everything that was happening in the space at the time — this was the beginning of Istio, really, solutions like that — and Submariner was a brand-new project. If I remember correctly, what we liked about it was that it was technically relatively straightforward.
I mean, there's a fair amount of complexity in what it does — or rather, in how it goes about what it does — but the "what" part, and the services it renders, are fairly simple to explain and fairly simple to understand, we hope. So it was a relatively well-defined project that solved an actual problem, which was connecting clusters together, and so we decided to get involved in it. — That's really interesting. What was the name of the other project again? I'm going to have to look it up, just out of curiosity; I feel like I could learn a lot. — OpenDaylight. The story behind OpenDaylight is quite interesting, not just in terms of networking, but in terms of the whole landscape of open source projects and Kubernetes nowadays, because OpenDaylight was, I think, the first collaborative project in the Linux Foundation after Linux itself. The Linux Foundation was created to provide a home for Linux, the kernel, but it's expanded over the years to encompass a whole variety of things, and one of the big drivers behind the Linux Foundation's activities over the past few years has been to encourage collaboration between companies working on big projects that were designed to change the world. OpenDaylight was one of those — I think it came out of Cisco initially, Cisco and Juniper perhaps — and the Linux Foundation started LF Networking, which was the first host organization for something other than Linux. OpenDaylight became the model for collaborative projects in the Linux Foundation, with all these companies — who were actually competitors — working together at an engineering level on a common project. Obviously each company aimed to have products derived from these projects, with their own competitive advantages and so on, but to get them all working together was quite a feat, really, and that then led to the model behind the CNCF, the Cloud Native Computing Foundation, which serves a similar role for everything around the cloud nowadays. — That's incredible. I should say I did look up OpenDaylight on the side here; I have it open for later. I found the newcomers' guide, and it starts with the text "What is Gerrit?" — which definitely tells me something. I don't think that stuck around for the CNCF, but Gerrit is always a fun place to start. That's amazing; I'm going to have to read up a bit more. Well, we are at the top of the hour, but before we finish up: Steven, Nir, did we miss anything that you wanted to talk about today, in all of our questions? — Well, there are tons of things we could talk about, but I think we've covered the basics — a pretty good introduction for a one-hour show. — Awesome, okay, sounds good. Thanks for coming along. I'll go ahead and splash up the show contact. We've had a good primer — Steven, you hit the nail on the head. We do have a show contact, so if anyone has any questions afterward, this email will be live, and I can loop in Steven and Nir if anyone has any interesting questions or thoughts about Submariner. You can also find them on the Kubernetes Slack in the Submariner channel, so that is a place to get involved and participate. Thanks again for joining the show, Steven and Nir. We'll let Nir head off to sleep — it is very late for him, so I think this is our first fully international guest on the show; we've kept people on late. Thanks, everyone, for coming to The Cloud Multiplier. I do not have an outro — we're going to keep using the intro as the outro — and I'll see everyone in two weeks. I think I actually have something I could tease for next time. Joydeep, should we tease? I think we're talking — is it TALM now? And it stands for, what is it, Topology Aware Lifecycle Manager, or something like that? Okay.
Well, we'll figure that out. But we'll be back in two weeks to talk about topology-aware lifecycle management, and we'll figure out whether it's "manager" or "management" so we don't embarrass ourselves. So thanks for coming, everyone, and we'll see you in two weeks. Thank you, guys.