Yeah, reintroducing E1 in OsmoBSC. As most of you know, we started originally with E1-based BTSs — actually Dieter back there, much to his credit. OpenBSC, and later OsmoNITB, started out with E1-based BTSs, with Abis over E1 as the interface towards the base station. And until the NITB split, E1 support remained present and maintained in OsmoNITB, so we could have E1-based BTSs as well as IP-based BTSs. But the other code, the original OsmoBSC implementing SCCPlite, never had E1-BTS support — with a double P, of course, apologies for the typo; nobody looks at the slides until I say, oh, there's a typo. And since the new OsmoBSC is sort of a derivative of the original OsmoBSC, it also has no E1-BTS support. Which means that in the new split-NITB architecture, which is much closer to the GSM specs with all the individual elements of the network, we don't have support for the original BTSs — which is, of course, a bit of a bummer.

So let's look at how things actually looked in the classic OsmoNITB setup with E1-based BTSs. We have a BTS here attached to an antenna, providing the radio interface. We had an E1/T1 line with 64 kbit/s timeslots as the backhaul. Then we have the big box here called OsmoNITB, which had an E1 input module that would provide a single timeslot to what we call a subchannel demultiplexer, which splits the 64 kbit/s timeslot into 16 kbit/s subslots. So we have four 16 kbit/s subslots, and on each of those subslots we could have a TRAU decoder, which would parse the TRAU frames — the format in which voice is transmitted over these subslots — and then hand that to the MNCC interface. And on the MNCC interface we don't only have call control, but also the actual voice frames, called GSM TCH/F frame or GSM EFR frame depending on the codec, to which you attach an external PBX like LCR or other PBX-type software.
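The subchannel split described here can be sketched in a few lines. This is a hedged illustration only, not Osmocom's actual demultiplexer code; in particular, which two bits of each octet belong to which subslot is an assumption made for the sketch:

```python
def demux_subslots(ts_bytes):
    """Split a 64 kbit/s timeslot byte stream into four 16 kbit/s
    subslot bit streams (2 bits per subslot per octet)."""
    subslots = [[], [], [], []]
    for octet in ts_bytes:
        for ss in range(4):
            # assumption: subslot ss occupies bits 2*ss and 2*ss+1
            two_bits = (octet >> (ss * 2)) & 0x3
            subslots[ss].append((two_bits >> 1) & 1)
            subslots[ss].append(two_bits & 1)
    return subslots

# one input octet yields 2 bits for each of the four subslots
bits = demux_subslots(bytes([0b00011011]))
```

At 8000 octets per second, each subslot thus receives 16000 bits per second — exactly the 16 kbit/s that one TRAU frame stream occupies.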
So that's the architecture that we had back then, and the important part is that the MNCC socket would handle both the signaling and the voice in this old setup. Now if we change to RTP, still with an E1-based BTS, basically the entire left-hand side stays the same, if I flip between those two slides. The only difference is that here we have RTP towards the external PBX, but we still have an E1-based interface here: a single timeslot, the subchannel demultiplexer with the four subslots, the TRAU decoder, and here we convert the voice to RTP frames on the right-hand side. And if we take the evolution even further, with an IP-based BTS such as a nanoBTS or OsmoBTS, we then have RTP frames even here on the Abis side: we have an RTP socket here in the NITB, which goes to another RTP socket on the other side, which then goes to an external PBX. So OsmoNITB would just run an RTP proxy, and that RTP proxy was even optional — you could disable it, so the RTP goes directly to the PBX. That's this picture, where OsmoNITB only handles call control messaging, but the RTP goes directly to whatever external application. That's sort of the latest step in user-plane evolution in OsmoNITB, and a very common setup before the NITB split.

Now if we go to the post-NITB-split side and look at this, it looks like this. We have the BTS on the left-hand side, the signaling basically at the top of this diagram, and RTP at the bottom. The Abis signaling goes to the BSC, and the BSC then uses MGCP to control a media gateway associated with it. We have the same over here at the MSC, which again uses MGCP to control another media gateway, and in the end we have an external PBX over here. So that's the post-NITB-split architecture that we have today, where the left and the right-hand media gateway, as we heard earlier today, can be merged into one.
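To make the RTP side concrete: RFC 3551 assigns static payload type 3 to GSM full rate, with one 33-byte codec frame per 20 ms packet. A minimal sketch of that packetization — not the actual OsmoNITB RTP proxy code — looks like this:

```python
import struct

def rtp_gsm_fr_packet(gsm_frame, seq, timestamp, ssrc=0x12345678):
    """Wrap one 33-byte GSM FR codec frame in a minimal RTP header."""
    assert len(gsm_frame) == 33            # GSM FR frame size
    header = struct.pack('!BBHII',
                         0x80,             # V=2, no padding/ext/CSRC
                         3,                # static payload type 3 = GSM
                         seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF,
                         ssrc)
    return header + gsm_frame

pkt = rtp_gsm_fr_packet(bytes(33), seq=1, timestamp=160)
```

The timestamp advances by 160 per packet (160 samples at 8 kHz is 20 ms), which is what an RTP peer such as an external PBX expects.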
If you have a small network, you can run only a single media gateway here to avoid having to run two instances of it — but that's the architecture, and it is all IP-based. So now the question is: how can we attach something E1-like on the left-hand side to this newer network? Well, today we still cannot, but we have a very current feature ticket that we are working on to reintroduce this support. How would it look? Well, basically here we have IP-based communication, and here we then have E1-based communication. So again, the signaling plane uses E1 with RSL and OML, and of course LAPD inside. That terminates in OsmoBSC for the signaling plane, and then we have some other E1 timeslots that would be opened, or attached to, by the Osmo media gateway, which can then run the subchannel demultiplexer from the 64 kbit/s timeslot to the 16 kbit/s subslots and do the RTP conversion, like we did in the NITB before — but this functionality would then be in the media gateway.

For this to work, the E1 driver needs to support that on a single span, a single line of E1, the RSL and OML timeslot — which is LAPD (not LAPDm here) signaling — goes into OsmoBSC on the upper half, while other timeslots can be opened by OsmoMGW at the bottom. With DAHDI, that definitely works; that's the Digium E1 interface driver stack. With mISDN, I'm not entirely sure — I think it should also work, but I would have to study the code again. But for DAHDI, this is definitely an architecture that can work. And actually, the top part should, in theory, still work, because none of that code in OsmoBSC has changed. I don't think anyone has tried it in a long time, but all the code is still there in libosmo-abis to open the E1, and it's the same libosmo-abis that's used by OsmoNITB, so the signaling plane should actually already work today.
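The per-timeslot device model that makes this sharing easy under DAHDI can be sketched as follows. The channel numbers are purely illustrative assumptions, but the one-device-node-per-channel scheme is how DAHDI exposes timeslots:

```python
import os

def dahdi_channel_path(chan):
    """DAHDI exposes each channel (timeslot) as its own device node."""
    return '/dev/dahdi/%d' % chan

def open_timeslot(chan):
    """Open one timeslot read/write, independently of other processes."""
    return os.open(dahdi_channel_path(chan), os.O_RDWR)

# osmo-bsc (one process) could open the signalling timeslot ...
#   fd_sig = open_timeslot(1)      # RSL/OML, LAPD
# ... while osmo-mgw (another process) opens a voice timeslot:
#   fd_voice = open_timeslot(2)    # TRAU subslots
```

Because each timeslot is a separate file, the kernel driver keeps the two descriptors fully independent — which is exactly the property the split BSC/MGW architecture needs.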
But what we need to introduce is this: opening E1 timeslots from OsmoMGW, stacking the subchannel demultiplexer on them, and converting to RTP frames. And this, of course, will also affect the MGCP signaling here, because — if you paid attention to Philipp's talk earlier on — right now we have rtpbridge/something as an endpoint name, because we have RTP on both the left and the right-hand side. Now OsmoBSC basically needs to use an endpoint name which encodes the line number, the timeslot number, and the subslot number. So it would basically be — okay, I don't have it here — here at the bottom, something like E1/line1/TS4/subslot2@MGW. So the endpoint name here would differ from a pure RTP, IP-based BTS setup; the endpoint name has to change.

And that's actually rather simple, because the information about what to put in there is already present in the OsmoBSC configuration file, just like it was in the OsmoNITB configuration file: for every radio interface timeslot you have to specify which E1 timeslot and subslot is mapped to it. We can quickly open an example config file and look at that. Let's make that huge. OsmoBSC — okay, we don't actually have a config file here, but let's look at an old OsmoNITB one, say a BS-11 example, openbsc.cfg. Here you see that for each radio timeslot — here TCH/F as a voice channel — we actually already have this in the VTY, and this part is already present in OsmoBSC, so this should even work today. We say timeslot 1 — this refers to the air interface timeslot of this transceiver — and then e1 line 0, timeslot 2, sub-slot 1. And that's exactly the mapping which is then used by the OML code to instruct the BTS to connect those two.
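Putting the two pieces together — the config mapping and the MGCP endpoint name — might look like this sketch. The endpoint syntax follows the "E1/line/timeslot/subslot@mgw" scheme described in the talk rather than any finalized osmo-mgw format, and the CRCX framing just follows the general MGCP (RFC 3435) message shape:

```python
def e1_endpoint(line, ts, ss, mgw='mgw'):
    """Build an E1-style endpoint name from the per-timeslot mapping
    already present in the BSC/NITB config (illustrative syntax)."""
    return 'e1/line%d/ts%d/ss%d@%s' % (line, ts, ss, mgw)

def crcx(trans_id, endpoint, call_id, mode='recvonly'):
    """Sketch of a CRCX request the BSC would send for that endpoint."""
    return ('CRCX %d %s MGCP 1.0\r\n'
            'C: %s\r\n'
            'M: %s\r\n' % (trans_id, endpoint, call_id, mode))

# mapping from the example config: e1 line 0, timeslot 2, sub-slot 1
msg = crcx(1, e1_endpoint(0, 2, 1), '2F')
```

The point is only that everything needed to name the endpoint — line, timeslot, subslot — is already in the VTY configuration, so the BSC merely has to reuse it.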
So in the BTS it will basically make this connection from a given air interface timeslot, timeslot 1, to a given E1 timeslot and subslot. And this information we need to recycle in OsmoBSC in order to send the MGCP commands here to the OsmoMGW, to a given MGCP endpoint. Once that is in place, we are basically done, and the media gateway can then take these audio frames from the E1 side and send them over RTP into the rest of the network. So it's actually very little work to add. And that was also the plan with the new Osmo media gateway, with MGCP, and with the entire architecture: that we can easily reintroduce the E1 setup.

Now the question is: why would you do that? Well, of course, this is where we're coming from, so there is a historic attachment, or legacy. I don't think there are that many actual real-world users of Osmocom who still use E1-based BTSs. Nevertheless, we have plenty of them around. And as we will see in a couple of the talks following now, there is quite interesting hardware — again, or still, actually more of it again — available very inexpensively with an E1 interface, which makes for low-cost and very powerful BTS hardware that we can use with the Osmocom stack. So I think there is some new interest in E1, or new possibilities in E1, if we reintroduce this. It also means one more user group that we can migrate from OsmoNITB to the new BSC and MSC architecture.

Okay, that was it for the topic of reintroducing E1 and how that will look. If we have questions, we should ask them using a microphone — I'm not sure where it is. Kevin has it in the back, so if you can quickly grab that.

Question from the audience: You were talking about TCH/F, what about TCH/H? Yeah, okay, so it was recorded. Well, it's fundamentally the same. In the E1 system it works the same way: every radio interface timeslot is always connected to a subslot. So whether you have FR or EFR or half-rate in there, you have these subslots.
And we do already have this code in place. So with the NITB, you can have a half-rate setup with a BS-11; that's not really anything different, you just need to configure it. And of course the TRAU frame format is different for half-rate. In OsmoNITB we do have this for EFR, FR and HR. We do not have AMR support for E1 BTSs in OsmoNITB — that's because the BTSs we were using when we did this development, for example the BS-11, do not do AMR; the BS-11 is too old for AMR. But with a modern Ericsson BTS, for example, we could of course also add AMR support and complete that. It's basically parsing the TRAU frames for AMR and converting them to the AMR payload format that we have in RTP, and vice versa. So it's again some bit shifting and transposing, and then we have that too.

Question from the audience: In the previous implementation — I don't know the interface from the process to the actual E1 card at all — is it a problem to have two different processes connect to the same card? That's what I mentioned basically here: the driver needs to support that on a single line, one timeslot can be opened by one process and another timeslot by another process. For DAHDI, the Digium-based interface, that definitely works, because every timeslot is a separate device node that you open as a file. So it's very easy: you can just open TS1 from one process and TS2 from another process. For mISDN-based cards, I'm not sure; I've forgotten how the detailed interface looks. I still remember it's sort of a socket-based interface that they implemented — you have an AF_ISDN socket that you talk to — and I don't remember whether it was one socket per timeslot or not. So there might be some difficulties there. But I think even if we only have DAHDI card support, which is the much more common... For mISDN, I think the only E1 cards you can find are HFC-E1-based cards, which are for PCI slots.
And I think even only for 5V PCI slots, not for 3.3V PCI slots, and no PCIe cards whatsoever. So it's rather difficult to attach those cards. But for DAHDI, you get PCIe, low-profile cards and whatnot. So I think if we have DAHDI support, I would already be happy. I would expect it to also be possible with mISDN; I just don't know. Okay, any other questions? Good. Then that concludes this talk.
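As a footnote to the TRAU frame parsing mentioned in the Q&A: at 16 kbit/s a TRAU frame is 320 bits every 20 ms, and the alignment scheme described in GSM TS 08.60 uses 16 leading zero bits plus a 1 as the first bit of every following 16-bit word. A sketch of just that alignment check (the actual per-codec bit extraction is omitted):

```python
def is_trau_aligned(bits):
    """Check the 16 kbit/s TRAU frame sync pattern on 320 bits."""
    if len(bits) != 320:
        return False
    if any(bits[:16]):                     # first 16 bits must be zero
        return False
    # first bit of each subsequent 16-bit word must be 1
    return all(bits[i] == 1 for i in range(16, 320, 16))

# build a minimal frame that carries only the sync pattern
frame = [0] * 320
for i in range(16, 320, 16):
    frame[i] = 1                           # set the sync bits
assert is_trau_aligned(frame)
```

Once a decoder has this alignment, the remaining work per codec (FR, EFR, HR, AMR) is the bit shifting and transposing into the corresponding RTP payload format mentioned above.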