Okay. That's fine. Okay. So good afternoon, everybody. My name is Raymond Knopp. I'm very happy to be here. Actually, I like this venue a lot. It's much more relaxed than normal presentations. So I work at Eurecom in Sophia Antipolis, which is in the south of France. We have a new name for our lab. It's called the Open 5G Lab. What I'm going to do is talk a little bit about some of the things we're doing in OpenAirInterface for network signal processing. I don't know if everybody here is aware of OpenAirInterface. It's one of the open-source projects that develops both eNodeB and UE implementations that run on USRPs and on other platforms as well. I'm going to talk about some of the stuff that's been going on in the last six months on how we're changing the architecture a little bit to address some of the research goals in 5G systems.

So what do I mean by network signal processing? There's a trend, and I'll show you a picture in a second. It's actually funny, because 2G systems, if you look at them, were already split: you had the BTS and the BSC. Basically, the BTS was running the physical layer and the BSC was running the MAC layer. In 3G, there was a bit more going on in the Node B, but at the beginning of 3G all of the MAC layer was still running in the RNC. Then when 3G started getting faster with HSPA, and we came along to 4G, they decided to put everything into the eNodeB. So basically, you have the entire layer-one and layer-two stack in the eNodeB, and the rest of it is further along in the network. Well, it turns out that now the operators and the vendors would like to start splitting things again, but they're doing it in a different way, a very flexible way.
So this talk is looking at how you take a radio system and either split it, or, there are other terms that are used, slice it, that's more from the perspective of services, or, from a computing perspective, distribute the computing in different parts of the network, both in the radio processing and in the protocol stack. There are different reasons for doing this. One of the reasons is to do spatial signal processing: if you want to centralize the signal processing in a radio node that's connected to many radio heads, that's one reason for doing it. Another one is the trend towards doing a lot more network function virtualization, so some of the protocol stack gets put into a virtualized environment, and more generally, to do processing of both access networks and core networks in a data center environment. That's the reason this is happening. So I'll go over a little bit of the current implementation that we have in OpenAirInterface, and some notes on how we're actually going to try and test this, and then maybe at the end, if there's time, some general information about our community.

So this picture shows more or less the vision of one of the operators. This is actually the vision of China Mobile, the way they see the evolution of radio networks. So basically, they have the concept of what's known as a remote radio system, the RRS. Inside the remote radio system, the base stations have different names. They call them remote radio units. Today in 4G systems, you have remote radio heads, so it's an evolution, if you like, of the remote radio head. Now, there are different kinds of remote radio systems. You have the classical one; this is what we have today in 4G. You have what's called a remote radio system where there's a lot of collaboration between the nodes. So you have big base stations, tri-sector base stations with six or 12 antennas, and then you have smaller picocells. The signals are all collected at this radio node here.
Where this is different from today's networks: these radio sites here will have a fiber-optic connection from the radio head down to a control room at the bottom, which is basically where all the baseband processing is happening. So today you might have at most a 30-40 meter optical link. In the new networks, that 30-40 meters is going to become 10 kilometers. These nodes here, potentially already the ones closer to the central office of the operators, are going to be doing a lot of signal processing for a whole network. So that changes the way you actually do things. It changes the way you architect the signal processing system.

So you would have something like this. You might have others that are quite similar. This would be the indoor system, so this could be in an airport, in a shopping center, something like that. You would also have one radio aggregation unit and several remote radio units. Or you would have the massive MIMO remote radio units or radio aggregation unit. These two things are, if you like, the same; the only difference is that here the antennas are all collected together and here they're distributed in space. But ultimately, this is the way the networks seem to be going if we consider this next-generation fronthaul interface. This part of the network, which is, if you like, where all the signal processing is happening, they're calling the fronthaul network. The network that goes towards the core network is the backhaul network. Those are the new terminologies. You also see something now called an xhaul network, and I'll explain what that is too. But you see there are basically three entities here: you have the radio units or the remote radio units, the aggregation units, and the radio cloud centers. So this is the terminology that we're also using now in OpenAirInterface. So we're trying to take our software, which was written in a monolithic form for an eNodeB, and split it again into pieces.
Now, the thing that becomes interesting is that when you split it into pieces, you have to interconnect those pieces with a network, and then in the end, you're doing signal processing with a network. That's where the terminology came from. So for this splitting in the network, there are different nomenclatures. If you ask the NGFI, which is one of the IEEE groups, they label the split points. This is a protocol stack, say, of a 4G base station. The radio interface, so this is your IQ streams, like you would be handling in UHD. This is the first stage of the input, the transform to the frequency domain, and they give that a name and a number. They call it Interface 4. Then you can go up further in the stack. They define this one, Interface 3, which is just before the bit-level processing, then another one, Interface 2, and then there's that one up there, Interface 1 prime.

From what I'm seeing out there, this interface exists today. 4G uses this. They put a fancy label on it, but that's what the CPRI interface is today. This one is coming. It's actually interesting because here, if this is a networked interface, these operations, the Fourier transforms, are running with the radio, and then this is running somewhere else. That one is definitely going to happen. Now, why is it going to happen? Because basically here, you get a certain amount of compression. I'll show you how that's done later. Another one that will surely happen is this one here, this IF2. Has anybody heard of something called FAPI? It's the Femto API. There was a standardization forum called the Femto Forum, which became the Small Cell Forum. They standardized an API basically for this interface, and at the time it wasn't for a networked system; it was the interface between the system-on-chip, the baseband processor, and the protocol processor.
So, base station vendors that were making small cells could use this specification and interface with another chipset. That's basically what it was for. Now, they're extending this to be networked in this sense. They call that nFAPI. The other one that you're going to see is that one all the way up at the top there, because it's very likely, and I'll show you a picture why, that's what's going to allow different systems, 4G, 5G, Wi-Fi, millimeter-wave Wi-Fi, millimeter-wave 5G, all to interconnect in the same data center. So that IF1 prime is also one. So today in OpenAirInterface, we've already implemented these two, and we're in the process of implementing the one up on the top there. There's also this one that we're discussing with an industrial partner that would like to see it. So, there will be several splits. Now, what's important in these systems is that your software be flexible enough to change the split, not necessarily dynamically, but semi-statically. Depending on the types of deployments, you would use different strategies.

So, I'm going to skip these. It's not really that informative. This is more informative, because this shows you now how you would instantiate these things. So, this is a picture... sometimes when you go to presentations of companies like Nokia and Ericsson, they flash a lot of slides, you don't get the slides, and people are always taking pictures. I took a picture of this one last year, because it helped us understand a little bit where people were going. This shows you the way the big guys see the evolution of the network. So, forgetting about the core network, everything seems to be centered around this PDCP, that's the Packet Data Convergence Protocol. You'll see different kinds of systems eventually connecting to that: LTE, 5G and even Wi-Fi will go in there. Very likely also some of the IoT-based air interfaces.
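To keep those split points straight, here is a small summary sketch in Python. The labels follow the NGFI-style numbering from the talk; the one-line descriptions are my paraphrase of what was said, not normative definitions from any spec:

```python
# Illustrative summary of the fronthaul split points discussed in the talk.
# Descriptions are paraphrases of the talk, not standard definitions.
SPLITS = {
    "IF5":  "time-domain IQ samples (the CPRI-like split; what you handle in UHD)",
    "IF4":  "frequency-domain samples; the Fourier transforms stay with the radio, "
            "which buys some compression",
    "IF3":  "just before the bit-level processing",
    "IF2":  "MAC/PHY boundary; the FAPI interface, networked as nFAPI",
    "IF1'": "PDCP-level split; lets 4G, 5G and Wi-Fi share one data center",
}

for name, what in SPLITS.items():
    print(f"{name:5s} {what}")
```

The further up the stack the split sits, the less fronthaul bandwidth it needs and the more latency it tolerates, which is why different deployments pick different splits.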
But what's important to see is this cloud that they have here. It's a cloud because it's running in essentially the central office of the operator, and if you went to one of the first presentations this morning in the session on NFV and SDN, they were talking about the evolution of the central office to an architecture like this. So, basically, this cloud, which is running radio processing, both protocol and potentially physical layer, will be connected to different types of systems with different types of latency requirements. If you're connecting, for instance, over a 10 kilometer link to a radio site from the central office, so replacing that 100 meter optical fiber by a 10 kilometer optical fiber, that's a very low latency thing, and the physical layer processing is happening here in this data center. The other types of systems are more classical, very similar to what we have today; these would be hundreds of megabits or maybe a gigabit Ethernet link, but with a fairly relaxed latency requirement, because you're interfacing way up in the protocol stack. So, this is the way things are going.

So, how are we approaching this? We define several entities now in our software. We have an entity that we're calling a radio unit. We have entities which we call the eNodeBs, which are separate from the radio units: the physical layer part of the eNodeB and the protocol stack of the eNodeB. Now, we can further subdivide these into things which we're calling instances. A radio or a physical layer instance could have several equivalent base stations inside. I'll show you a picture of why you would want to do that. One very clear example: these could be three sectors of a big base station. They're independent protocol entities in some sense. Or they could be what you would call virtual base stations, if you have an antenna array, if this is acting as an antenna array. And I purposely chose a different number here.
I put six radio units and there are actually four base stations here. So, there are more radio units than base stations. That means that there's some mapping from logical base stations to physical antennas or physical radios. That's also the way the systems are going to be going. For instance, that indoor system I showed you before, inside a building: you might only need three protocol instances, but you might have 20 antennas. The same thing for the massive MIMO base station with 256 antennas. You're not going to drive more than the equivalent of, say, eight or maybe 16 base stations, but you're using a lot of antennas. So, there's a notion of a mapping here from something that has meaning in the protocol stack to something that has meaning in the air. That's what it means. So, that's how we've divided things up.

Now, let me just tell you what a radio unit is. The radio unit is something that's managing a physical antenna, which means that an eNodeB can have either a local radio unit or a remote radio unit. So, it's a two-sided thing. The radio units can talk to each other: one is, let's say, on the protocol side and the other is on the radio side. Implicitly, you're putting a fixed network between the two things. Now, what does it actually do? It performs two operations. It performs what we're calling precoding, and I'll explain what that is. Basically, that's the operation of taking several logical base stations and generating the signals on the antennas. That's the notion of precoding. And it also does OFDM modulation: the conversion from frequency to time on the transmit end and from time to frequency on the receive end. The instance of the protocol stack is a separate set of threads and contexts which implement the procedures of the base station. So, it's a notion of processes. It contains the usual MAC and RLC entities on top of the physical layer.
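The precoding operation described here, mapping K logical base-station signals onto M physical antennas, is at its simplest a matrix multiply per subcarrier. A minimal sketch in plain Python, where the 4-to-6 mapping mirrors the slide; the particular weight matrix is an arbitrary illustration for this sketch, not OAI's actual precoder:

```python
# Sketch: precoding maps K logical eNodeB streams onto M antenna ports.
# Here K=4 logical base stations drive M=6 radio units, as in the slide.
K, M = 4, 6

# W[m][k]: complex weight from logical stream k to antenna m.
# This W is purely illustrative: streams 0..3 map one-to-one, and the
# two extra antennas repeat streams 0 and 1.
W = [[1 if m % K == k else 0 for k in range(K)] for m in range(M)]

def precode(x):
    """One subcarrier: x holds K logical samples; returns M antenna samples."""
    return [sum(W[m][k] * x[k] for k in range(K)) for m in range(M)]

x = [1 + 1j, 2 - 1j, 0.5j, -1.0]   # one sample per logical base station
y = precode(x)
print(y)                            # 6 antenna-port samples
```

In the real system this runs per subcarrier per OFDM symbol, and the weights come from the spatial processing (sectorization, beamforming, massive MIMO), which is why the precoder block gets much more complicated in the 64-antenna case later on.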
The component carrier is the entity that manages the physical layer procedures for a particular carrier, which could be a separate frequency carrier, but it could also be a virtual carrier carried over the same antennas. So, that's what these notions actually mean. I'm going to skip that and just show the pictures.

So, this is an example of the radio unit that implements the IF5 interface. If you remember, IF5 was the one at the bottom of my picture there. That was a time-domain signal, so it's the IQ. This is equivalent to UHD, completely equivalent to that. So, what do we do here? Let's look at the receive path. These are the IQ signals that are coming from the radio unit on the other side, so this is a network input. The first thing we do is decompress the signal. When we transmit IQ samples, if you want to fit them on Ethernet, you compress them a little bit. We actually use A-law compression for this. You get a compression factor of about a half without any signal degradation. So, this is coming from the network; we decompress and then we convert to the frequency domain. So, this is the front end of a base station. In the other direction, when we're transmitting from the protocol context, it goes into a block which does precoding, then the conversion from frequency to time, and then it compresses. So, this is what happens in the radio unit on the side of the protocol.

Now, let's look at the other type of interface, the IF4.5 interface. That's a little bit higher up. Okay. Again, if we look at what's coming from the device, we decompress the signal. But actually here, there are two types of signals, because in a system like 4G, on the receive path, the terminal can transmit two types of signals.
It can transmit the normal uplink signal and control channels, or it can transmit the random-access signal. Those are two different kinds of signals. So, you actually need two kinds of packets to encode those signals, and two different types of processing when you receive them. Again, the data signal part is compressed and the other part is not. If we go in the other direction, we have precoding and compression, but there are no Fourier transforms, because the Fourier transforms are done on the other side. So, you already see with this simple example that what we're doing is moving pieces around. Here, I have them on the side of the network; in the other one, I have them on the side of the remote end.

So, this is the remote end. On one side, we're connected to the network. On the other side, we're connected to an RF device that could be, for instance, the USRP, but it could also be whatever we like. There is a notion here of splitting the processing into the different types of things we have. On the transmit side, we do the modulation, and on the receive side, we do the demodulation, and we do part of the random-access channel demodulation in the radio unit itself. I'm going to skip that, because I'm going to run out of time.

Let's look at another example. The ones I just showed you were the examples of radio units where we were connecting an eNodeB directly to a radio unit. In the first set of slides, there was also an intermediate node. This is an example of what an intermediate node could look like. On one side, you would have a fronthaul interface a little bit higher up in the protocol stack, because this would be a node that might be a couple of kilometers from the radio sites, and it would be connected to a central office maybe 10 kilometers further away. So, you have an optical link there and an optical link here.
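The roughly 2:1 IQ compression mentioned for these fronthaul links is A-law companding applied per sample component. A rough sketch of the idea, using the textbook A-law formula to take a normalized 16-bit-style sample down to 8 bits and back; this shows the principle only, not OAI's exact bit packing:

```python
import math

A = 87.6                       # standard A-law parameter
LN_A1 = 1 + math.log(A)

def alaw_compress(x):
    """Compand one normalized sample x in [-1, 1] to a signed 8-bit code."""
    s = -1 if x < 0 else 1
    ax = abs(x)
    y = (A * ax / LN_A1) if ax < 1 / A else (1 + math.log(A * ax)) / LN_A1
    return s * round(y * 127)  # 8 bits instead of 16: ~2:1 compression

def alaw_expand(code):
    """Inverse companding: signed 8-bit code back to a normalized sample."""
    s = -1 if code < 0 else 1
    y = abs(code) / 127
    ax = y * LN_A1 / A if y < 1 / LN_A1 else math.exp(y * LN_A1 - 1) / A
    return s * ax

sample = 0.5                   # one I or Q component, normalized
restored = alaw_expand(alaw_compress(sample))
print(sample, restored)        # close, at half the bits
```

The logarithmic curve keeps the relative quantization error roughly constant across amplitudes, which is why halving the bit count costs so little signal quality on IQ streams.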
But this signal processing is happening somewhere in the middle of the deployed region. So, in this unit I put two base stations, two logical base stations, two eNodeBs which are running the physical layer signal processing of two base stations, and it's connected to a precoder which is driving four radio units. So, here's a mapping of two base stations to four radio units. That would be, for instance, something you could do in a rural village. You would put maybe four radio units to cover the village, and you would have one central location doing all the signal processing and then sending the protocol information back to the central office. This would be one example.

Another example would be a massive MIMO base station. It's essentially the same thing. On one side you have an Ethernet fronthaul to the radio units, and on the other side you would have a protocol interconnection. This is an example where you have eight base stations, or eight logical base stations, driving 64 antennas. So, this is essentially the same thing, and the key difference with respect to the previous example is that this block, the precoder, is a much more complicated block.

So, let me skip these things and talk a little bit about how the processing is actually done. The radio unit itself, if you think about a 4G system, has to do transmission and reception in parallel. And in a system like LTE, the signal that you're receiving at time N is used to generate the signal that you're transmitting a little bit in the future. So, there's a dependence between the signal that you transmit in the future and the one you're receiving at time N. The fundamental timing in LTE is N plus four milliseconds, and that's what drove the way we decided to implement this.
So, the signal you receive at time N is necessary to generate the signal that you transmit at time N plus four milliseconds. That's the way it works. So, basically every processing thread does the following. It reads from what's below, the south, whether it's a networked interface or an RF interface. It does the processing for subframe N. It wakes up all of the base station processes that are waiting for it and waits for them to complete. Then they have all of the information necessary to generate the signal at time N plus four, and we send it out at N plus four. So, this is quite simple. That's the way it works.

We have another thread that's there just for the random-access channel by itself, because that's a completely independent thing. And then there's also one other thread for receiving on the fronthaul interface. For instance, if you're in a radio unit, the signal you receive from the fronthaul interface is what you transmit. Now, if you're on an Ethernet network, there's a lot of jitter. It's not a real-time thing. It could be real time, but over very long distances there is a significant amount of jitter. So, you need some sort of asynchronous process to handle that. That's why there's that kind of thread. So, this is really the basic mechanism of the thing.

And you can see here the overall picture of the way it works. Up on top there you see the timing of the different subframes, and underneath you see how the processes are scheduled. Yellow means receive, red means transmit. So, we're always receiving and then transmitting, receiving and transmitting. Now, it turns out that you can parallelize this by a factor of two in order to improve the performance. So, if anybody's interested in the inner workings, that's the way it works. So, let me just switch over to the other part. [Switching slides.]
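The subframe loop just described, receive subframe N, process it, and use the result to build what goes out at N+4, can be sketched as a toy simulation. This is single-threaded for clarity; the real implementation uses separate threads with the wake/wait handshake described above:

```python
# Toy model of the N -> N+4 dependency in the LTE processing loop.
# rx[n] stands in for whatever arrives from the "south" interface
# (RF or network); process() stands in for the per-subframe work.
DELAY = 4                       # subframes (4 ms): tx at N+4 depends on rx at N

def process(sample):
    # Placeholder for "wake the eNodeB contexts, wait for them to finish".
    return sample * 10

rx = list(range(20))            # 20 incoming subframes
tx = [None] * (len(rx) + DELAY)

for n, sample in enumerate(rx):
    # 1. read subframe n from below (south interface)
    # 2. do the subframe-n processing
    result = process(sample)
    # 3. the result is what goes out DELAY subframes later
    tx[n + DELAY] = result

print(tx[:8])                   # the first DELAY slots have nothing to send yet
```

The pipelined "parallelize by a factor of two" mentioned above amounts to letting the processing of subframe N+1 start while subframe N's transmit side is still being assembled, which the 4-subframe slack makes possible.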
Yes, that's it. Okay, that should be okay. So, this is basically what we're building here at Eurecom to test this kind of stuff. We have a little data center, which is basically five Xeon servers. These are multi-core servers, and we have 20 cores in each machine. Some of them are for the protocol stacks, some of them are for the physical layer. And then there's a network of switches here, standard Cisco switches, and radio units, so those remote radio units I was talking about. Basically, OpenAirInterface is going to be deployed on this data center, segregated and split, with some of the processing done on the radio units. Just to give you an idea, the radio units themselves are using USRPs, actually the B200mini. We wanted very cheap hardware. Then we have some external RF on top so we can transmit in an indoor environment at up to 15 dBm. And each remote radio unit is driven by an UP board. I don't know if people know what that is. It's a very small Intel-architecture board, the size of a Raspberry Pi. And that's running enough signal processing to implement the radio unit. The rest of the stuff is running in the cloud. So this is something we're in the process of deploying now. We have an order with Ettus for 15 B200minis.

So just very quickly, some general information. For those that are interested, our software now supports the LimeSDR. We're still testing it with the most recent board we got from them, but it will work up to 10 megahertz FDD today. We've stabilized the eNodeB quite a bit in the last year. We have full uplink and downlink throughput at 10 megahertz bandwidth, and full downlink at 20 megahertz bandwidth. We're starting to test our scheduler under load, so loaded with terminals. And soon we'll have the MIMO modes running as well. Another thing interesting for some people here: there's been a lot of development on the terminal side.
So today, OpenAirInterface will run on the terminal at full throughput up to 10 megahertz bandwidth, so that's 35 megabits per second downlink. And we're testing the MIMO modes now as well. And just finally, for the core network, there's work in the community today testing with commercial eNodeBs in order to robustify our control plane. We've integrated dedicated bearer support; for those that know what that is, that's the support for Voice over LTE. That's in there now too. I'm very happy also to say we've defaulted now to the OsmoCom GTP-U module, which is very good. That helped a lot. And now with some of the partners in the community we're integrating some of the missing procedures in the core network, which will make it something much closer to a commercial core. And so that's it. I'm sorry, there was a lot of information here. You will have the slides in any case.

[Q&A] So, four minutes for questions. [Question, partly inaudible, about whether latency will come down to something like one millisecond.] Yeah, well, not in 4G; for 5G, yes. Yes, yes, the evolution, yes, the evolution. [Question about the typical latencies of the optical fiber fronthaul.] Well, okay, there are two things. They want to bring it down, but they also want to relax it at the same time. So there will be some services where it does have to go way down, and in that case, those very low latency fronthauls are going to have to be even lower latency to support the new waveforms. I don't know, I don't have numbers for you here. [Question about the testbed links.] Oh, that's doable. We haven't tested longer than 100 meters; it's all copper. So basically, the way the network goes, I didn't have time to explain it, but these links here are copper links, maximum of 50 meters. So it's standard gigabit Ethernet, right? The throughput that we need is on the order of 200 megabits per second, so it's more than feasible there. And then the link between here and here, that's optical fiber.
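That "order of 200 megabits per second" figure can be sanity-checked with some back-of-the-envelope arithmetic for one 10 MHz LTE carrier. The sample-rate and subcarrier numbers below are standard LTE values; the 2:1 companding factor is the one from the talk, and packet/framing overhead is ignored:

```python
# Back-of-the-envelope fronthaul rates for one 10 MHz LTE carrier.

# Time-domain (IF5 / CPRI-like) split: every IQ sample crosses the link.
fs = 15.36e6                   # standard sample rate for 10 MHz LTE
raw = fs * 2 * 16              # I and Q, 16 bits each
print(f"time-domain raw:        {raw / 1e6:6.1f} Mb/s")
print(f"  with 2:1 companding:  {raw / 2 / 1e6:6.1f} Mb/s")

# Frequency-domain (IF4-style) split: only occupied subcarriers move.
subcarriers = 600              # occupied subcarriers for 10 MHz
symbols_per_ms = 14            # OFDM symbols per 1 ms subframe (normal CP)
freq = subcarriers * symbols_per_ms * 1000 * 2 * 16
print(f"freq-domain raw:        {freq / 1e6:6.1f} Mb/s")
print(f"  with 2:1 companding:  {freq / 2 / 1e6:6.1f} Mb/s")
```

Companded frequency-domain samples come out around 135 Mb/s, which lands in the "order of 200 Mb/s" range once headers and control information are added, and fits comfortably on gigabit Ethernet.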
But we're not gonna be doing 10 kilometers in our lab. Okay, if you go to 10 kilometers, then there are issues. But at the same time, there is one group in our community that has done the same experiment with a 20 kilometer fiber, and the only thing they had to do was adjust the timing a little bit in the radio unit itself. It was an FPGA-based radio unit. They had to advance the signal a little bit in order to handle the latency of the fiber over 20 kilometers. So it's feasible, but you have to tune it. Now, the question for 5G is another one. We need to stop there; we can continue by email. Yeah.
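As an aside, the fiber latency that the 20 kilometer experiment had to tune out can be ballparked from the speed of light in glass; the refractive index below is a typical value for single-mode fiber, not a measured one from that experiment:

```python
# Rough one-way propagation delay over optical fiber.
C = 299_792_458        # speed of light in vacuum, m/s
N_FIBER = 1.47         # typical group index of single-mode fiber (assumed)
v = C / N_FIBER        # ~2.04e8 m/s in the fiber

for km in (0.1, 10, 20):
    delay_us = km * 1000 / v * 1e6
    print(f"{km:5.1f} km fronthaul: ~{delay_us:6.1f} us one way")
```

At 20 kilometers the one-way delay is roughly 100 microseconds, a tenth of an LTE subframe, which is why advancing the transmit timing in the radio unit was enough to compensate.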