Okay, hi, I'm Mike Lowe. I work on the Jetstream project at Indiana University, and we've done something a bit weird: we've gotten rid of Layer 2 networking in this deployment, and we've done that by using BGP. I think the best way to show this is to have you follow along as I type while I explain what's going on.

We're on a compute node right now. There are two Mellanox cards; it's got five NICs in this thing. Note these two interfaces, e3 and e4, because they're going to pop up a bunch later. If you look at the routes on this thing, we've got dozens and dozens and dozens of routes, and you can see that they all come in over BGP. We use Free Range Routing (FRR), formerly known as Quagga, to do all of this, and this part is not particularly unusual; you can find it in other places. You slap the address you want on the loopback, bring up the two NICs without IP addresses, set up a BGP router, and that is more or less sufficient, so long as you redistribute your connected interfaces right here. That is enough to get this node up. So that's not particularly unusual or special, but it's just one piece of the puzzle.

I think it was maybe Newton that introduced service types on subnets; I'd never used those before now. What I needed to do was conserve my routable IP addresses for floating IPs. Traditionally, when you use DVR and you have floating IP addresses, your compute nodes each burn a routable IP address, and that's unacceptable: I figure I'll run 1500 VMs on 500 compute nodes, so giving up a third of my IP addresses just so the compute nodes can sit there is not acceptable. So what we've done is use a service type on the public network, so that all of the floating IP agent gateways land in this CIDR block.
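A minimal FRR configuration along the lines described above might look like the following sketch. The ASN, addresses, and router-id here are placeholders, not the actual Jetstream values; the interface names echo the e3/e4 pair mentioned in the demo, and BGP unnumbered peering over those interfaces is one plausible way to wire this up:

```
! /etc/frr/frr.conf -- hypothetical values throughout
interface lo
 ip address 203.0.113.10/32     ! the node's routable address lives on loopback
!
router bgp 64512
 bgp router-id 203.0.113.10
 neighbor e3 interface remote-as external   ! unnumbered peering over the two fabric NICs
 neighbor e4 interface remote-as external
 address-family ipv4 unicast
  redistribute connected        ! advertise the loopback /32 into the fabric
  redistribute kernel           ! also advertise routes added with `ip route add` (used later)
 exit-address-family
```

The `redistribute connected` line is the piece that "gets this node up"; `redistribute kernel` is what the floating IP trick later in the talk relies on.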
And if we look at this guy, one of the other subnets on the public network: due to its service type, it catches the floating IPs and the centralized SNAT gateway that you have on your network node. Okay, so we've got that piece.

If you aren't familiar with the operation of DVR, it breaks up the classic single network namespace on the network node into a bunch of different pieces. The compute nodes each get two: a floating IP gateway network namespace and the router namespace. When you assign a floating IP, the router namespace does the source NAT and dumps the traffic off into that floating IP namespace. The way that's normally supposed to work is that you have a flat network you plug that floating IP gateway into, and there's some real router off somewhere that handles it. We don't have any of that; there is no Layer 2. So we have to tell lies, and big lies, right here: on every single compute node, this provider bridge winds up with the same gateway address. We don't care about east-west traffic; these guys are never going to talk amongst themselves. We only want them to shovel north-south traffic off into the fabric. So we can tell lies, put them all on the same subnet, give them all the same gateway IP address, and nobody's the wiser.

So that's all relatively normal; this all works the way it's supposed to according to the documentation. Outbound traffic goes out through the compute node. But what the docs say is that when you do DVR this way, all of your inbound traffic from the world comes in through the network node, and I don't find that acceptable. So I changed it. What we did was take advantage of a feature of Free Range Routing: you can redistribute into BGP any route the kernel is aware of. If you `ip route add` something, it just gets published into BGP. You can find this right here.
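The service-subnet arrangement described above can be sketched with the standard OpenStack CLI. The subnet names and CIDRs here are made-up placeholders, though the `--service-type` values are the real Neutron ones: one subnet reserved for the per-compute-node DVR floating IP agent gateways, and another that catches the floating IPs themselves plus the router gateway on the network node:

```
# Hypothetical names and CIDRs; only the service types are load-bearing.
# Subnet that the DVR fip agent gateways (one per compute node) draw from:
openstack subnet create fip-agent-gateways \
  --network public \
  --subnet-range 10.0.0.0/22 \
  --service-type network:floatingip_agent_gateway

# Subnet that floating IPs and the centralized SNAT/router gateway draw from:
openstack subnet create public-floating \
  --network public \
  --subnet-range 198.51.100.0/24 \
  --service-type network:floatingip \
  --service-type network:router_gateway
```

With this split, the agent gateways can burn addresses from a non-routable block while the routable block is conserved for actual floating IPs.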
There it is: that is one of my floating IPs, as a static route. On this node it's static; everywhere else it shows up as a BGP route. So all the traffic destined for this guy gets grabbed by the network fabric and shoved onto here, and this node knows what to do with it: it puts it right into the floating IP network namespace, which routes it the rest of the way in. That behavior is not normal, and that is what I patched to make all this go. If you take out the debugging, it's about a four-line patch. All it does is add a kernel route when the floating IP is assigned and remove it when it's gone.

So let's just go ahead and start pinging it from my laptop here, a couple of miles away. And if we yank that floating IP, assuming there's no typo... `server remove floating ip`... there we go, thank you. The ping stops; there's no route from the kernel. Add it back, and it gets published out to the world. This ping goes through Comcast, through the commodity internet, up through Chicago, and back all the way down, and we should see that route show up again.

The major motivation for doing this is that with Manila and native CephFS, we want all of these virtual machines to have full bandwidth, or at least as much as they can get, to the storage cluster. That's why we're trying to cut the network node out of the path to the degree possible. I don't happen to have something that does full bandwidth sitting there running iperf, but at minimum I've got a real 10-gig host sitting on the other side of the machine room, and with protocol overhead it runs clean at 10 gig to a single virtual core. Not bad for untuned, stock iperf.

So, yeah, any questions?
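The patch itself only flashes by on screen, but its shape can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the actual Neutron patch: the helper names are made up, and a real version would hook the DVR agent's floating-IP associate/disassociate path on the compute node (the `fg-` device name follows the DVR fip-namespace gateway convention, but this particular one is invented):

```python
# Hypothetical sketch of the ~4-line patch: publish a kernel route for a
# floating IP so that FRR's `redistribute kernel` advertises it into BGP.
import subprocess

def fip_route_cmd(action, floating_ip, fip_ns_device):
    """Build the `ip route` command to run when a floating IP is
    associated ("add") or disassociated ("del")."""
    assert action in ("add", "del")
    return ["ip", "route", action, f"{floating_ip}/32", "dev", fip_ns_device]

def on_floating_ip_event(action, floating_ip, fip_ns_device="fg-1234abcd"):
    # In a real agent this runs on the compute node hosting the VM; FRR
    # then picks the kernel route up and announces it to the fabric.
    subprocess.run(fip_route_cmd(action, floating_ip, fip_ns_device), check=True)

# Building (not running) the command for an association:
print(fip_route_cmd("add", "198.51.100.25", "fg-1234abcd"))
```

Removing the route on disassociation is the same call with `"del"`, which is what makes the ping in the demo stop the moment the floating IP is yanked.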