Yeah, so hi, I'm Iago. I'm a tech lead and co-founder at Kinvolk. I guess you've heard of Kinvolk, given that you're at this conference and there's a logo everywhere. Basically, we're a software consultancy: a team of software developers that helps companies work on their open source projects when they don't have the resources or the specific expertise needed. But recently we've had a bit of a shift: we forked CoreOS Container Linux, we call that Flatcar, and we're offering support for it. Soon we'll be announcing some other products related to Kubernetes, so stay tuned.

Anyway, the plan for this talk is as follows. I'll first introduce the problem we were trying to solve. Then I'll give a high-level description of the program we developed to solve it. I'll talk a bit about the technologies used, Serf, Raft and HTTP/2, and then I'll share some Go implementation details that I found interesting. I'll try to do a demo, hopefully it will work, and then I'll talk about what's missing and give some conclusions.

So the problem is basically this: there are these enterprise systems with a lot of legacy code, running in big monoliths or something like that, and they want to communicate with a Kyma cluster.
Kyma is a product by SAP. It basically gives developers an easy way to deploy their applications in a Kubernetes cluster, and it provides a kind of serverless framework by default, so they can use lambda functions and things like that. But for the purposes of this talk, just think of a regular Kubernetes cluster; I actually haven't even run a Kyma cluster, I'm just dealing with plain Kubernetes.

So they want to connect that cluster with some external services, and they want to do it in a transparent way: transparent to client applications, with reduced network latency between the cluster and the external system, and with high availability and resilience provided in a simple way. You should be able to just deploy your application on the external system without needing Kubernetes or any fancy distributed system; just something simple that provides high availability.

I'll give now a high-level description of the wormhole connector, which is what we call this project. Basically, it's a distributed proxy. Being distributed, it provides high availability: if a node goes down, it can carry on fine. It's based on Serf and Raft, two distributed-systems building blocks which I'll talk about a bit more later. It connects the enterprise system to Kyma and Kyma to the enterprise system, and to do that it uses an HTTP tunnel between the two.

There are two components to this. One is the wormhole connector proper, which runs on the enterprise-system side. It's a basic HTTP proxy, so services running on the enterprise system can talk to the Kyma cluster just by setting the proxy environment variables. All the connections received from the enterprise-system services go through this tunnel to the Kubernetes cluster, and you can find the project at the URL there.

The other component is the one that runs on the actual Kubernetes cluster. What it does is proxy the connections received from the tunnel to the services inside the cluster. But it also works the other way around: it's an HTTP proxy too, so services inside the cluster can talk to the enterprise system. There's a small example implementation inside the same repository.

I have to say that this project was more of a proof of concept, so don't expect finished software. Just a heads up.

I want to talk a bit about the technologies involved, Serf and Raft first. Serf is a gossip-based protocol. It basically gives you cluster membership and lets you pass messages to all the members of your cluster. You can see in this GIF I found on the internet that one node receives some kind of message and it reaches the other nodes, because one node talks to another node, and that node talks to others, so in the end the whole cluster knows the message that's being sent.

We use this in the wormhole connector to add new members: you just start the wormhole connector, tell it "here's another member", and it will join the cluster. It also provides a way to know when a member has disconnected, so we can remove it from the cluster.

Then we use Raft, which is a consensus algorithm.
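To give a flavor of the Serf side before we get into Raft, a minimal membership sketch with the hashicorp/serf library might look like this. The node name, ports, and the peer address passed to Join are made-up placeholders, not the connector's actual configuration:

```go
package main

import (
	"log"

	"github.com/hashicorp/serf/serf"
)

func main() {
	conf := serf.DefaultConfig()
	conf.NodeName = "connector-1"             // placeholder name
	conf.MemberlistConfig.BindAddr = "0.0.0.0"
	conf.MemberlistConfig.BindPort = 7946

	// Member join/leave notifications arrive on this channel.
	events := make(chan serf.Event, 16)
	conf.EventCh = events

	s, err := serf.Create(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Join an existing cluster by pointing at any current member
	// (placeholder address).
	if _, err := s.Join([]string{"10.0.0.2:7946"}, true); err != nil {
		log.Printf("join failed: %v", err)
	}

	for e := range events {
		switch me := e.(type) {
		case serf.MemberEvent:
			// EventMemberJoin / EventMemberLeave / EventMemberFailed
			for _, m := range me.Members {
				log.Printf("%s: %s (%s)", me.Type, m.Name, m.Addr)
			}
		}
	}
}
```

The join, leave, and failure events delivered on the channel are what let the connector add and remove cluster members, as described above.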
It's used by things like etcd and Consul. Basically, with Raft you have some state that you want to keep consistent across your cluster, so if you query different members of the cluster, they all have the same view of the state. It also provides leader election: you have a leader that performs all the write operations, and the others are followers that you can read from and get the same state.

In the wormhole connector we use that to elect a leader, of course, and we store some state in the Raft FSM, which is a finite state machine. For now this is just a simple array, but it has an application in this context, because there's this concept of an event bus: the legacy system can send events to the cluster, and we have to save that state somewhere in the cluster so we can have retries and things like that. In the future it will store those events I was talking about, but for now it's just a simple array of strings because, as I said, it's a proof of concept.

If you want to learn about Raft in more detail, there are a lot of good resources on the internet. There's a page with some nice visualizations and explanations, and of course you can also read the Raft paper, which is pretty good.

As I said, I just implemented a simple queue. You send an event to one of the members, connector one for example, then you can read it from another member and you'll see the same event there. You can also delete events. That's basically the toy API we've implemented.

Moving on to another technology: HTTP/2. These are the main differences between HTTP/1 and HTTP/2. HTTP/2 provides header compression, so things are more efficient, and it uses a binary format, which also improves efficiency. The most important part for us is that it has request/response multiplexing: with a single TCP connection you can have several HTTP exchanges in flight, instead of needing several TCP connections as with HTTP/1. So you have one HTTP/2 connection, which is one TCP connection between the client and the server, and then you have different streams, which can carry totally different data, all reusing that TCP connection. You don't have to do the handshake or the TLS negotiation again; the tunnel just stays there.

Now I'll talk about some implementation details. First of all, the libraries I used are the HashiCorp ones, and they were pretty nice because they're simple to use. For Serf, for example, you take the default config, set some values like the bind address you're listening on, create a Serf instance, and then you just receive the events when a member joins or leaves the cluster. That was pretty cool.

For Raft it's the same kind of thing. You need some store on disk; we chose the BoltDB store for simplicity. You create the store, create a snapshot store, which can be backed by a file for example, choose a transport (we chose plain TCP), provide the finite-state-machine implementation, which I'll talk about on the next slide, and then you just start the Raft object. That's pretty much it: it's easy to use and well documented.

This FSM interface is an abstraction the Raft library provides so you can have a finite state machine over your own data structure. In this example I just have a Go struct with a slice of events, which are simple strings, and a mutex to avoid problems with concurrency. Then you have to implement the interface's operations. You can change this structure and store whatever you want, and the state will be replicated across all the members of your cluster. That was pretty neat, too.

One thing to mention here is that write operations can only be done on the leader, which simplifies the protocol implementation. So what we do is, if a write request arrives at a follower, we redirect it to the leader, and the request gets written there.

This is the basic architecture, the full picture of the project. You see the wormhole connector has several boxes; that's the distributed part. The part on the left is the enterprise system and the part on the right is the Kyma (or Kubernetes) cluster. There are two connections shown. The red one on top is between a client and some service inside the cluster: the client just sets a proxy so it connects through the wormhole connector, the connector has a tunnel to the wormhole dispatcher running in the cluster, and the dispatcher routes the traffic to the services inside the Kubernetes cluster. You also have the other direction: the Kubernetes pod there in blue is configured with a proxy pointing to the wormhole dispatcher, and the connection is routed through the tunnel back to the connector and on to a server in the enterprise system. That's pretty much the architecture diagram.

Some other details: to be able to do all this over only one TCP connection, Go has a very simple way.
You just create your HTTP client. The HTTP client contains a transport, and as long as you use the same HTTP client, your requests will be routed through that one connection. It's a pretty neat API, easy to use, and it just works; we didn't have to care much about how.

And then tunnels. This is a big chunk of code on a slide, so I guess it's not so nice, but I can try to go through it. Basically there are two cases. If the connection going through the proxy is HTTPS, it uses the HTTP CONNECT method and creates a tunnel to the other endpoint. We do that because we don't want the unencrypted data to be visible in the proxy, so we need a tunnel. The other case is plain HTTP without encryption: the proxy just removes some headers, passes the request through to the other side, and reads the response back. That's pretty much it. For HTTPS we create this tunnel, and we use the same client for it, so it reuses the connection.

There are some details there: with HTTP/1 you have to hijack the connection, but with HTTP/2 you can't do that, because if you hijack the whole connection, the other streams on the connection will die. So you do something else, but you get the idea. It was pretty easy.

One other interesting thing we're doing: if I go back to the diagram, there's a connection from the dispatcher to the connector, but we don't want the dispatcher to have to know the address of the connector. So what we did is a kind of reverse tunnel: the connector connects to the dispatcher, and then the roles are inverted, so the dispatcher becomes the client and the connector becomes the server. This is one way to do it in Go. The dispatcher accepts a TCP connection, creates a transport, tells the transport to return the previously established connection, and then creates a client on top of that and can just do GETs and so on. On the other side, the connector dials to the dispatcher, and once it has this connection it sets up a new server and tells it to serve on that TCP connection. So you basically get a reverse tunnel.

Right, so now I'm going to try to do some demos; let's see if they work. Let me change my screen configuration. Hopefully I don't break any video. Maybe I did; can you see anything, Daniel? Okay, there we go.

All right, let's start with this demo. I'm going to start three instances of the wormhole connector, running in rkt containers, just because I develop a lot with rkt and it's easier for me. This is a very long command and it doesn't really render properly, but it basically sets up a bunch of options and starts the wormhole connector process. Now you can see that the node is a follower, and then it gets promoted to leader, because it's the only member in the cluster. Now I'll start a couple more, telling them the address of one of the members. So now we have two followers at the bottom and one leader here.

Now I'm just going to show the API I mentioned, the simple distributed queue. I can post some event, event one, to the first node, the leader, and that works fine. I can get it back from the same leader, and I see event one here, or I can get it from another node and it's the same result. I can post another one, event two, to a different node, say node three, but for that I need to enable redirection, so let's do that. And now let's get the events again from another node.
It still shows the old event, because I haven't deleted it yet, so let's do that. And now if we query again, we see event two. We can delete that too, and it's the same everywhere. So that's one demo of this FSM thing.

Now I want to show another demo of the whole workflow. I have a minikube Kubernetes cluster running, and you can see that it's running this wormhole dispatcher service I was mentioning. So now I can start the wormhole connector. I set up some entries in my /etc/hosts so the certificates work, but basically I'm connecting to the dispatcher. Now you see this "connected to wormhole dispatcher" message, which means the reverse connection was established.

What I'm going to do now: I have an echo server service inside the Kubernetes cluster, which just returns some information about the request you made; a simple ping-pong kind of thing. It's running on the Kubernetes cluster, so in theory I shouldn't be able to access it from my host. But if I set the proxy to be the connector, wormhole.io, which is an address pointing to this wormhole connector, plus some certificates so I can trust the CA, then I can just use this URL, which is the name of a Kubernetes service, with this port, and I get a response. You can see here that there's an outgoing HTTP connection, and it goes to the cluster and connects to the service inside.

I can do this several times, and then if I run netstat, you still see only two connections going to that Kubernetes cluster: one is the reverse tunnel and the other is the forward tunnel.

The other thing I can do is run a pod with kubectl run. So I'm running an Alpine pod, and I'm passing the CA here in an environment variable so the certificate is trusted. Hopefully this will start... it doesn't want to start, still creating. I guess it's not loading the image over the network.
Oh yeah, and this is a nice bug in kubectl, at least in my version: if you already have a deployment of the same name running, it just crashes. That's neat. Okay, for some reason this doesn't work, but what this demo would show is that you can get into a pod inside the Kubernetes cluster, configure the proxy to be the dispatcher, and then it will connect to my host. I was going to show it by just running netcat here, but it doesn't seem to work... oh, it's running. So hopefully I can attach to it. Good. Let me paste some instructions here, because I don't want to type them. Okay, the internet is not really working well. Anyway, you'll have to trust me that this works. And that's it.

Okay, so the missing pieces. One thing you may have noticed is that there were two TCP connections, one on one side and one on the other. I think it should theoretically be possible to use just one TCP connection, and that was the goal in the beginning, but I didn't manage to do it in the time we had to work on this project. I'm not sure whether you can do this with the Go API or whether you have to do some other weird stuff, so if you have any ideas about this, please let me know.

The other thing that's missing is the events support I was mentioning: the enterprise system will send events to the cluster, and we'd store those events and try to deliver them, and if that fails we'd do retries and all that.
So that's not implemented. And even though this is a simple program that you can run to create a cluster on the enterprise side, we also want to support deploying it in Kubernetes, and for that, of course, you don't need Serf or Raft, because Kubernetes already provides the distributed primitives you need. There's a work-in-progress implementation, but we didn't have much time.

So, in conclusion: we implemented two clients of distributed-systems libraries, two proxies and two HTTP tunnels; a lot of twos. We learned a lot about these things, because this is not something we'd worked on before, and we had a lot of fun. I hope you enjoyed the talk. Thanks, and now for any questions.

[Host] We have about five minutes for questions.

[Audience] I assume the proxy pods have to run in host networking?

[Speaker] Sorry?

[Audience] I assume the proxy pods have to run in host networking.

[Speaker] No, they don't. They just connect to this Kubernetes service, they get to that dispatcher thing, and that routes the connections into the cluster.

[Audience] So from the external cluster into the Kubernetes cluster... which cluster did you mean before? The thing outside, contacting that target Kubernetes cluster, is running on the host?

[Speaker] Yeah, that's right. Those services, the servers, are on the host network, talking to the cluster.

[Audience] Okay, I thought you meant the services in the Kubernetes cluster. It's kind of confusing.

[Host] If there are no other questions, I have one. How do the Kubernetes cluster and the enterprise side authenticate?

[Speaker] It's just HTTPS certificates.

[Host] So do you have something additional on top of that?
[Speaker] No, just HTTPS certificates. So basically, the authentication between the proxy and the client application is via HTTPS certificates, and then you have to specify a CA that you trust for the Kubernetes cluster, so the certificate for the Kubernetes cluster endpoint has to be signed by that CA. It's just regular HTTPS.

[Audience] Thank you for the talk. So you have this thing where you do the reverse proxy, and you basically don't accept on a listening socket. How do you make sure that there's a connection ready to be accepted when you need to dial out, essentially?

[Speaker] It's basically just a loop that listens on a port, and if it gets a connection there, it knows from some headers that it's from the wormhole connector, and then it does the logic of reversing the roles. I'm not sure if that makes sense; it's a very simple implementation, we didn't do anything fancy.

[Host] Cool. So thanks, and enjoy the conference.

[Speaker] Thank you.