Okay, so I wanted to talk about NVMe-oF fabric services and cut to the quick of the presentation. The question I really want to ask is: how does Linux plan to support the increasingly complex landscape of NVMe-oF discovery, especially when it comes to multi-vendor and multi-fabric environments with NVMe/TCP? Personally, I think the NVMe soft target really holds the answer, and that's what I'm going to talk about.

By way of introduction and orientation, there's a whole list of NVMe-oF discovery-related TPs that have been coming out of the FMDS working group at nvmexpress.org. The ones up at the top here have all been recently ratified. You can go read them if you're interested, and believe it or not, they all come together in some interesting ways. There are also upcoming NVMe-oF fabric discovery TPs. There are more than these three, but in particular we're working on TP8016, TP8017, and TP8019, the ones that have recently passed phase one and are now in phase two. I'd recommend to anybody who's interested: as long as your company is a member of nvmexpress.org, you can create a login with your email address and get access to all the draft TPs and TPARs to understand what's going on.

There are also, believe it or not, some IETF draft proposals; that's another subject we're debating at FMDS. So there are a lot of competing ideas and technologies out there for NVMe-oF discovery, and as a systems engineer and technologist I'm a little concerned that it's getting to be big and unwieldy.

What I want to do specifically is show some real examples of what I'm seeing in the lab. All of the names here have been changed to protect the innocent; we're not going to be talking about anybody's particular hardware or vendor.

[Audience: Can you increase the font size? It's too small.] Oh, man, that's going to be hard for me to do; the screen is like 30 feet away in here. I took it out of the theme background, so it's bigger now.

All right, I want to show some examples. Like I said, the names have all been changed. We're not talking about anybody's particular controller; these are examples of things we're seeing in the lab. This is what I'm going to call discovery controller number one. One of the things we talked about yesterday is that we can get discovery controllers that return multiple different types of discovery log page entries, and here's an example: this one just so happens to be serving both FC and TCP log page entries. Then I look at my local configuration, and just by looking at my host I can see that I've got basically two networks: a 172.16.0 network and a 172.16.1 network.
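To make that concrete, here's roughly what that first bit of poking around looks like from the host. This is a hedged sketch rather than the actual lab session: the interface names, the 172.16.x.x addresses, and the discovery controller's address are all stand-ins, and 8009 is just the conventional NVMe/TCP discovery port.

    # Which networks does this host actually sit on?
    ip -br addr show
    #   ens1f0  UP  172.16.0.101/24
    #   ens1f1  UP  172.16.1.101/24

    # Ask the discovery controller for its log page entries; this one
    # happens to return both FC and TCP entries in the same log page.
    nvme discover -t tcp -a 172.16.0.100 -s 8009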
Every time I look at that, what occurs to me is: why didn't we simply add a subnet mask to the TRADDR? Again, I'm just a storage engineer trying to do networks, but any time you're doing anything with a layer-three network, you want to know not only what the IP address is but also what the subnet mask is, because that's really the only way to figure out how these networks fit together. That's something I'm thinking we should actually propose as a change to the NVMe-oF spec.

But let's just take a look here; I'm going to fumble around a bit. Okay, obviously, if I try to reach the 16.0 network from my 16.1 network, I can't get there. That makes total sense, right? Next question: does this discovery controller actually support persistent connections? It's actually hard to figure that out, so I'm just going to try. Well, guess what, there it is: there's my discovery controller, and it's now a persistent discovery controller. You can see that it's using the well-known NQN. So that's what it looks like.

Let's look around a little more. How can I figure out where all the different subsystem ports are? I'll just start poking: I'll filter out all the FC entries and, for my 16.1 network, look at what's left, and that's as much as it tells me. It says I've got these two networks and these two addresses. Now, I wouldn't really know any of this if somebody hadn't already figured it out for me and set up my discovery.conf (a sketch of that file follows below). All by myself, given just the protocol and an IP address or two, I can't figure this out. And it turns out that, sure enough, there's a whole other discovery controller out there. So this discovery controller works the same way we talk about with the soft target: there's a discovery controller on every subsystem port.

So let's look at discovery controller example number two. Okay, well, that one's a real mess. There are some TCP entries... I really consider this an implementation problem. As we discussed yesterday, there's nothing in the spec that prevents it from happening; I just don't understand what the utility of it is, or why a discovery controller would want to do this. Now, this network is a little different. I've actually got two layer-two networks, two bridges, and even though they're two different devices, they're on the same layer-three network; again, I can tell that by the subnet mask. So once more I start looking around: what can I discover, what can I figure out? Well, there are a couple of ports there and a couple of ports here. I don't have a discovery.conf file for this one, and anywhere I look I'm not going to get a connection timeout, because it's all just one big network, apparently. And that's as much as I know.
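As an aside, the discovery.conf I keep leaning on is just a list of "nvme discover" argument sets that nvme connect-all walks through when it isn't given a transport on the command line. A hedged sketch, with the same stand-in addresses as above and the usual nvme-cli default path:

    # /etc/nvme/discovery.conf
    # Each line is a set of arguments to "nvme discover"
    -t tcp -a 172.16.0.100 -s 8009
    -t tcp -a 172.16.1.100 -s 8009

    # Discover and connect to everything listed in the file
    nvme connect-all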
So this is why we invented TP8014. The idea with TP8014 is that there's a new discovery log page entry that actually describes the discovery controller itself. Here's the one made by John Meneghini; there it is, and it has four log page entries. The idea behind TP8014 is not only that we create new log page entries to describe the discovery subsystem, but also that we add the requirement that your discovery subsystem report all subsystem ports and all network addresses. For the sake of brevity I've truncated the output here, but I can assure you that this particular discovery controller, which supports TP8014, is reporting all four IP addresses. So just as I looked at my discovery.conf in the earlier example, this effectively replaces discovery.conf.

There are other little fields here called explicit discovery connections and duplicate discovery information. If you go read TP8014, what duplicate discovery information tells you is that any one of these IP addresses you connect to will return the exact same discovery information. So, like I said, this almost replaces discovery.conf. That's why we created TP8014, and there are working implementations out there. It's all pre-production stuff, but it's in process and coming along. The other thing about TP8014 is that it doesn't pin down specific requirements for what else gets returned: potentially, in log page entries four, five, six, and seven, it could also return the actual NVM subsystem log page entries for the host NQN that connected to the discovery controller. So that's what TP8014 does.

Now let's look at another use case, what I call the discovery controller host NQN ACL. One of the things I hear bandied about a lot is that NVMe over Fabrics has virtual subsystems, and virtual subsystems are important because they act as an ACL. I agree, but that's not necessarily the way everybody has implemented it. To give you a living example: here's my host NQN. I connect with that and I get three discovery log records. Now, what happens if I jig up another host NQN, something ending in c2c0deadbeef, which doesn't exist, and connect with that? This discovery controller says no, there's nobody here for that. This is what we call the ACL capability: you've got a host NQN and a subsystem NQN, and those two things have to be programmed together inside the storage array to allow access to any particular subsystem. However, if I look at discovery controller number two and try the same trick, I see it returns just as many log records. It actually doesn't care what I give it for a host NQN; as long as it matches a reasonable NQN format, it'll let anybody connect and return the same log page records to anybody and everybody. As far as I'm concerned, this is the reason for the CDC. (Incidentally, the host-NQN ACL behavior I just showed is exactly what the Linux soft target can do; see the sketch below.)
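For reference, and as a hedged sketch since the talk doesn't show this part, here is roughly how the soft target enforces a host-NQN ACL through configfs: you turn off allow_any_host on the subsystem and whitelist individual host NQNs. The NQNs, port number, and addresses here are made up:

    # Create a subsystem and close it to unknown hosts
    mkdir /sys/kernel/config/nvmet/subsystems/nqn.2022-01.io.example:subsys1
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2022-01.io.example:subsys1/attr_allow_any_host

    # Whitelist one host NQN; any other host gets no log page entry
    mkdir /sys/kernel/config/nvmet/hosts/nqn.2022-01.io.example:host1
    ln -s /sys/kernel/config/nvmet/hosts/nqn.2022-01.io.example:host1 \
          /sys/kernel/config/nvmet/subsystems/nqn.2022-01.io.example:subsys1/allowed_hosts/

    # Expose the subsystem on an NVMe/TCP port
    mkdir /sys/kernel/config/nvmet/ports/1
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    echo 172.16.0.100 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2022-01.io.example:subsys1 \
          /sys/kernel/config/nvmet/ports/1/subsystems/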
So let's talk about the CDC. TP8013 and TP8010 kind of go together, and it just so happens that I have something that looks like a CDC. I'm not going to say this is a CDC, but again, this is stuff we're working on in the lab, and it's at a different IP address on the same network, on the same fabric. Right now it's saying it doesn't have any discovery log records, but that's mostly my fault, because I haven't gotten as far as getting everything working and configured. Now I'll try the same little deadbeef trick, and okay, it returns records too; but I would kind of expect that from a CDC, because the CDC is supposed to be the centralized clearinghouse. Maybe that's something I'd expect a CDC to do.

So again: how do I figure out whether this thing supports persistent discovery connections? Well, I connect to it, and there it is, the Meneghini CDC, up and running. And how do I tell what it is? I look at the controller type. Then I connect to my other discovery controller, which is on the same fabric, and it's returning a bunch of records. The idea is that eventually the log records reported by that discovery controller will be exchanged with, or handed to, the central discovery controller, and the CDC will then serve those log records. That's the idea behind the CDC. And that other one is controller type zero, right, and it's using the generic NQN. The thing I want to point out with the CDC up here is that when I connected to it, I connected with the well-known discovery NQN, but what it actually returns in the subsystem NQN field of the Identify Controller data structure is the unique discovery service NQN. That comes directly from TP8013. Other controllers wouldn't do that; if you see a controller do that, it's because it has implemented TP8013, which is not to be confused with TP8014. TP8013 and TP8014 are really two almost different ways of solving the discovery problem.

The last thing I want to talk about, which has been stuck in my craw, is this whole idea of authenticating discovery controllers. Hannes has a bunch of patches upstream right now; he's doing a lot of hard work and I appreciate all of it. But I have questions. This picture right here actually came from an NVMe slide deck, and the reason I can share it is that we've all agreed there is nothing in this picture that you cannot already do with what's in the standard. I'm not sharing any information about any working proposal or TPAR in progress; this is just an example of the kinds of things we're talking about in FMDS today. And as I watch these ladder diagrams grow more and more complex, I keep saying to myself: the more complicated this stuff gets, the larger the risk of failure. That's my opinion. What this diagram does is combine the unique discovery NQN with in-band authentication and the TLS profile. This is fully supported today by nvme-cli 2.0; you've got all the bits in there, and if you have a discovery controller that supports these TPs, you can do this today, roughly along the following lines.
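A hedged sketch of what that might look like with nvme-cli 2.x. The addresses and the secret are stand-ins, and the exact spelling of the DH-HMAC-CHAP and TLS options has shifted between nvme-cli versions, so treat the flags as assumptions and check nvme connect --help on your build:

    # Generate a DH-HMAC-CHAP host key (nvme-cli 2.x)
    nvme gen-dhchap-key --nqn nqn.2022-01.io.example:host1

    # Connect to the discovery controller with in-band authentication,
    # using the well-known discovery NQN; --tls additionally asks for
    # the TLS profile on kernels built with NVMe/TCP TLS support.
    nvme connect -t tcp -a 172.16.0.200 -s 8009 \
         -n nqn.2014-08.org.nvmexpress.discovery \
         --dhchap-secret "DHHC-1:00:<base64-secret>:" \
         --tls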
So, questions. What's the utility of doing any kind of DH-HMAC-CHAP authentication with a discovery controller? What's the threat model? And do we want to support this with the Linux NVMe soft target?

That's the end of my presentation, but I want to talk more about these questions. Why do I see the Linux NVMe soft target as key to this? By the way, some of the patches being worked on are actually against the soft target, and there's been lots of going around and around about that. The company I work for has kind of a love-hate relationship with the Linux soft target. We don't want to support it, and we don't want our customers using it, because that's not really the business our distribution is in. But on the other hand, we do support the iSCSI soft target, and the NVMe soft target has saved us many times, just because it provides a software target we can use to implement things, debug things, and bring up new capabilities, features, and technology. Pretty much all the work to bring up ANA was done with the soft target, because at the time we just didn't have a working ANA controller. And as we were discussing yesterday, we don't have to implement any or all of this. Just because something shows up in the spec doesn't mean we have to support it upstream, and just because something is supported upstream doesn't necessarily mean the distributions are going to want to support it. These things all have to be carefully thought out, in my opinion.

So these are the three things I want to talk about, and I guess I'd ask Hannes: I haven't looked at your patches closely. I know you're basically struggling to get unique discovery NQNs into the soft target, and I haven't looked closely enough to understand whether TP8014 would be supported there. What are your ideas, or other people's, about supporting TP8013 and TP8010? And what in the world are we doing with in-band authentication for discovery controllers? My opinion is that I don't understand why we're even doing it. There is actually secure channel capability in the NVMe spec, right? So I would think it would be much easier to say: forget about all this DH-HMAC-CHAP authentication, let's just implement secure channel. Bring up a secure channel and it takes care of everything else. That's the end of what I wanted to say.

[Hannes:] Well, for one, 8014 is the additional discovery log page entry, right? And that's already implemented in the NVMe code, so that really is... [John:] In the target code or in the initiator? [Hannes:] I know it's in the initiator code. And that's true, because it already has all the information there. 8013, though, that's the unique NQN. That's the one currently being discussed, for which I've sent the patch set for target support. For the client support there's really nothing we need to do, because that's just something we get back. It's there. [John:] Right, it's there. [Hannes:] And support for 8013 is already in nvme-cli. So both are more or less done.
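As a hedged aside on "that's just something we get back": from the client side you can see whether a discovery controller implements TP8013 in the Identify Controller data it returns. The device name and address are stand-ins:

    # Create a persistent discovery controller using the well-known NQN
    nvme connect -t tcp -a 172.16.0.200 -s 8009 \
         -n nqn.2014-08.org.nvmexpress.discovery

    # A TP8013 controller reports its unique discovery service NQN in
    # subnqn rather than the well-known NQN we connected with;
    # cntrltype 2 means "discovery controller".
    nvme id-ctrl /dev/nvme0 | grep -E 'subnqn|cntrltype'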
[Hannes:] And the target support... well, I personally think it's a nice patch set, because it actually extends the capabilities of the current target implementation so that you can now, finally, configure the discovery controller and what the discovery controller returns. Currently the discovery controller in Linux is completely internal and you have no way of influencing its behavior. And mind you, that behavior is a very peculiar choice we made when implementing it; the way we implement it in Linux is by no means common. So that part is trivial and mostly done.

The more contentious thing is indeed authentication. But again, there's the patch set which does authentication, and it doesn't really care whether it's a discovery controller or not; it's just figuring out whether authentication needs to be done. And that's also why we need the patch set: authentication is not really something we can influence. We have to start authentication if the connect command returns the status "authentication required", and there is literally nothing we can do to influence that. So not supporting it is not really a choice, because that would mean we couldn't connect to those targets.

[John:] Right, but there's a difference. What I'm asking specifically is: what's the utility of authenticating with a discovery controller, any discovery controller? I can see it for an NVM subsystem controller.

[Hannes:] I fully agree with you that the use case is questionable, but then again, what are we supposed to do if we try to connect to a discovery controller and get back "authentication required"? [John:] Then you can't. [Hannes:] Well, no, rather not. "We just ignore you, we can't connect to you, go reconfigure" is not the user experience your classical user wants to see. Getting customer calls saying "oh, I can't connect to that array", each one a support case costing loads and loads of money, is not really a good argument. And what's more, the implementation itself doesn't really change. Whether the use case is something we would accept as valid is a different story, but that's not really our business. If they think it makes sense, then by all means, let them do it.

8010, on the other hand, is something we really don't need to spend much effort on, because thankfully this is primarily CDC behavior: what the CDC does under which circumstances. From the client side it just means you have to support the specific registration command for the host. That's the command you need to support, and I think support for it already went into nvme-cli. But really, that's all we need to do from the client side. Everything else just...

[John:] Right. I was more asking about the plans for the soft target. Is there any idea of emulating a CDC, something similar?

[Hannes:] The target currently is, and I think will always be, just a DDC, meaning it will continue to support just normal discovery. It will never be anything else, because, well, why should it? That's none of our business. And whether we need to add support for pull and push registration there remains to be seen. I would just wait for some CDC to really show up, and then we can think about whether we need to add the logic there.
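For reference on the host registration command Hannes mentions: TP8010's push registration is carried by the Discovery Information Management command, and nvme-cli grew a "dim" subcommand for it. A hedged sketch; I'm recalling the flag spelling from nvme-cli 2.x, so treat the exact options as assumptions and check nvme dim --help:

    # Register this host with a CDC through an existing persistent
    # discovery controller device (TP8010 push registration)...
    nvme dim --task=register /dev/nvme0

    # ...and deregister it again
    nvme dim --task=deregister /dev/nvme0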
[Hannes:] It's really questionable, because the push model they have is not something we can easily implement. That's actually my main grudge with it: it means you temporarily have to turn the target into a client, connect to the CDC, send over your information, and then turn around and become a target again. It's a daft scenario.

[John:] Yeah. And that's actually one of the main proposals being worked on now, improving pull registration, because after we finished TP8010 we realized there weren't a lot of people who wanted to implement it.

[Hannes:] Yeah, exactly. So if anything, we might be able to support pull registration, and that should be relatively easy, because it just means supporting essentially one or two additional commands. Okay, yeah, we can do that, no big deal. But really, that's about it, nothing else. None of the other things, like this whole mapping stuff and you name it; come on, no, we won't be doing that. And if you want some weird zoning implemented in a daemon that handles this weird zoning, we simply won't be there. Can't be bothered; it's far too complex. [John:] Well, I would agree with you.

Any other questions or comments? All right. As soon as Tritonian gets here, we can get started with copy offload. Awesome. Okay, thanks everybody for listening to me. [Audience:] Thanks, John.