Okay, so thank you for staying all the way to the last presentation. This year it's not going to be as technical or as mathematical as last year or the previous years. It's going to be a bit of a summary of what we're using, why we're using software-defined radio in the lab, and especially how it enables experiments that you would not be able to do without software-defined radio. So that's roughly the outline. At the moment, if you look a bit in the literature — I was just looking at the latest articles in the Review of Scientific Instruments — it seems like everyone who's doing software-defined radio has to publish either a network analyzer or a lock-in amplifier or whatever you want. Basically, at the moment you see more and more software-defined radios getting into some of these experiments, actually some quite fancy experiments. So what kind of scientific instruments would I address with software-defined radio? Of course this is not exhaustive, it's just the topics that we address in our lab: everything that's related to radio frequency signals. So that's spectrum analyzers, vector network analyzers. These are very basic instruments, but what I'm going to tell you later is: whenever you use one of these lab instruments, if you leave it on an experiment for too long, some colleague will say, this is not used, I can take it. You know that at some point your experiment is going to be dismantled because someone needs the instrument. With software-defined radio, you're the only one who knows how to use your stuff, so no one's going to steal it from you. But more seriously: lock-in detectors, that's an improvement of bandwidth. I don't know if you read scientific journals like Physics Today, but every issue of Physics Today has these advertisements up front.
So Stanford Research Systems saying, we have an analog lock-in amplifier, and just on the page after, you've got Zurich Instruments saying, we've got this fully digital lock-in amplifier, and everyone of course is saying, mine is better: Stanford says, we have a lower noise level, Zurich says, we have higher bandwidth. So it depends on what you need. The beautiful thing with software-defined radio is that you tune your hardware to your needs. Then I'm going to show you a bit about basic physics and scanning probe microscopy. I couldn't find the reference — I know there is a published magnetic force microscope that's been done with a USRP, but I couldn't find the reference when preparing this. So actually, as I was starting this presentation — my original presentation, you remember, maybe a couple of years ago, I presented a passive radar. When I was visiting Professor Sato's laboratory in Sendai, Japan, I discovered how you can use an oscilloscope for doing radar measurements, because you've got a broadband radio frequency receiver. Most oscilloscopes now will have 2.5 gigahertz bandwidth; this Rohde & Schwarz must be a 5 gigahertz bandwidth instrument. Initially I was playing with my colleagues trying to write some software for acquisition, but it took me a bit of time to make a usable GNU Radio interface for oscilloscopes. So what we're looking at is collecting data from the oscilloscope. You know that the scheduler of GNU Radio works with kilobyte-sized packets. All of these oscilloscopes have megasample memory depth. So what we're doing is collecting a big chunk of data. Of course, for these experiments you don't claim to be continuously streaming data; you just need chunks of data, but if you have a high-bandwidth measurement, you still can do quite a few things. The obvious application beyond radar is time-of-flight measurement. Time-of-flight measurement is measuring the length of a coaxial cable. So here we have our software-defined radio source.
Of course you cannot really see it here, but we've got the four channels of the oscilloscope, and these four channels are fed by a noise generator. This noise generator is the reference signal, and if you correlate this noise source with the received copies on the other channels, you get the time delay. So you see here 177 nanoseconds, 187 nanoseconds, 202 nanoseconds, and that's the time of flight in my two-meter or three-meter coaxial cables. The range resolution is very easy: the propagation velocity in coaxial cable, 200 meters per microsecond, divided by twice the bandwidth. If we have something like a gigahertz of bandwidth, we have 15 centimeter resolution; if we have five gigahertz of bandwidth, we've got three centimeter resolution. So that's one of the classical ways of locating defects in an optical fiber or a coaxial cable. Another obvious application which does not need continuous streaming is displaying the spectrum. When you're just displaying the spectrum, you want to know who is sending information on the channel; you don't necessarily need to decode this information. And here, the upper limit is given by the bandwidth of your oscilloscope, and the lower limit is given by the sampling size. So for example, if you have a 2.5 gigasample per second oscilloscope and you have an 8000-point Fourier transform, then you start at around 600 kilohertz. So that's the kind of application that's easily accessible. Of course, the drawback of these radio frequency oscilloscopes is the very low number of bits. They claim 12-bit resolution; if you can get 10-bit ENOB, you're already good. And this limits the ratio of the strongest signal to the weakest signal that you can detect. So that's one of the big drawbacks of this kind of software-defined radio with respect to an analog front end. So this work was started a couple of years ago, and in the meantime, as I was getting ready to publish some of this, Marcus released GNU Radio 3.8.
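[Editor's aside: the time-of-flight estimation just described — correlating the noise reference against a received copy — can be sketched numerically. The sample rate, delay, and bandwidth below are illustrative values, not the actual instrument settings from the talk.]

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 5e9        # 5 GS/s sampling rate (illustrative)
v = 2e8         # ~200 m/us propagation velocity in coaxial cable
n = 1 << 14

ref = rng.standard_normal(n)                          # noise reference channel
true_delay = 885                                      # delay in samples (177 ns at 5 GS/s)
rx = np.concatenate([np.zeros(true_delay), ref])[:n]  # delayed copy on another channel

# cross-correlate via FFT; the peak index gives the time delay
xc = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
est = int(np.argmax(np.abs(xc)))
tof = est / fs                                        # time of flight, ~177 ns here

# range resolution: velocity divided by twice the bandwidth
res = v / (2 * 1e9)                                   # 0.1 m with these illustrative values
```

The same peak search works with real captured channels; with noisy data the correlation peak simply sits on a noise floor instead of zeros.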
So everything that I did on 3.7 had to be started again, and it took another six months. Basically, what this gr-oscilloscope out-of-tree module that we released is, is a demonstration, an example, of how to interface GNU Radio with another communication protocol — in this example I'm using VXI-11, which is GPIB over Ethernet. The user-configurable oscilloscope source allows you to tune the IP address, sample rate, measurement duration, and a variable number of sources. So basically, I take this as a tutorial. First of all, a tutorial on how to write an OOT block — it's actually just been published, for the French-reading audience, in the latest issue of Linux Magazine. I take it also as a tutorial on how to port a GNU Radio 3.7 out-of-tree module to GNU Radio 3.8. As I was about to upload the English translation of the article, I realized that I had not updated it to 3.8, so if anyone is interested, I'll be happy to translate it; at the moment on the FOSDEM site you only have the French version of the manuscript. So why would you even bother looking at an oscilloscope as a radio frequency source? This is actually an experiment that happened to us as we were playing with the oscilloscope in Japan. At the time we were not using a noise generator, we were just using a sine wave, and the sine wave was collected by two channels. And you'd think that when you buy one of these high-grade oscilloscopes, if they claim that the two channels are synchronous, well, they must be synchronous. This is absolutely not the case. If you look at the phase between these two channels — this is an 800 megahertz signal fed to a 10 gigahertz oscilloscope, so we have about a one-to-ten ratio between carrier frequency and sampling frequency — well, sometimes you see some of these jumps. So even someone as clever as Agilent has some missing packets, some missing data, between the multiple channels.
So actually, the funny thing is we realized we did not have these kinds of jumps between channel one and channel four, but between channel one and channel two we were losing some samples. Don't ask me why, but that's the way it is. So this is a 10 megahertz — gigahertz, sorry, I must have mistyped on the slide — this is a 10 gigahertz sampling oscilloscope with an 800 megahertz sine wave input. These are experimental data. This is one jump that I captured, but this is the kind of thing that you see jumping: if you trigger on the phase, you see this jumping about once every second. It's not periodic, but you see, if you're doing radar and you're doing coherent measurements where you want direction of arrival, if your phase is jumping every second, of course you're going to have trouble collecting this kind of information. So again, this is the kind of data that we collected with the oscilloscope source. Either you do the Hilbert transform to get the complex output, which gives you phase and magnitude, or you take the Fourier transform and multiply by the conjugate to correlate. In both cases you see consistent phase jumps, because it's really, physically, your oscilloscope that is missing some samples. So that was the first step. And then I thought, okay, let's try to show a little bit what we're doing with other front ends. Doesn't this look to you like a coherent four-channel lock-in detector? I take two B210s, I use the 1 PPS to lock their inputs, and as long as I keep continuously streaming the data, so as not to lose the phase relation, then I have here a four-channel lock-in detector — with the added advantage that I don't have to go through VXI-11 or GPIB to collect data, I have streaming data. What I'm going to show you later concerns the communication bandwidth: of course, in a lock-in I am doing a very narrowband measurement, but because the data are streamed quite quickly, I can do some nice measurements.
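[Editor's aside: the phase comparison just described — Hilbert transform to get the analytic signal, then conjugate multiplication between channels — can be reproduced in a few lines. The fault is simulated here by dropping three samples from one channel; the actual fault mechanism inside the oscilloscope is unknown, this is only a sketch of how such a jump shows up.]

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10e9, 800e6                       # 10 GS/s scope, 800 MHz sine, as in the talk
n = 4000
t = np.arange(n) / fs
sig = np.cos(2 * np.pi * f0 * t)
ch1 = sig[:n - 3]
ch2 = np.delete(sig, [2000, 2001, 2002])   # channel 2 loses 3 samples (simulated fault)

# analytic signals, then conjugate multiplication -> inter-channel phase
phase = np.angle(hilbert(ch1) * np.conj(hilbert(ch2)))

# before the fault the two channels agree (phase ~ 0); afterwards the phase
# has jumped by 3 samples * 2*pi*f0/fs ~ 1.5 rad
before = phase[500:1500].mean()
after = phase[2500:3500].mean()
```

Averaging well away from the fault and from the Hilbert-transform edge artifacts makes the step size easy to read off.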
Of course, I've got the flexibility — I don't need to convince this audience — to tune the functionalities of my instrument to my needs. And here is the example of the PlutoSDR. This is something that we're using in the lab: in our lab we're doing some time transfer, or sorry, I should say some frequency transfer, between high-stability oscillators. If you try to create feedback loops using a crappy temperature-compensated crystal oscillator while you're trying to lock two atomic clocks, that's not going to work very well. So in this example we can clock the PlutoSDR from a hydrogen maser. Unfortunately, hydrogen masers generate 10 and 100 megahertz signals, and when you plot the phase noise of your PlutoSDR, of your AD9363, you realize that you need to feed it something above 40 megahertz — the maximum seems to be 60 megahertz, I don't know the specs exactly, maybe 80 megahertz tops. It starts working at 10 megahertz, but between 10 and 40 megahertz you have very poor, degraded phase noise. So you cannot just take the hydrogen maser output directly; you need a little bit of frequency synthesis trickery to feed it with a 40 megahertz input, and then you can do some sort of feedback loop on your clock control. This is the kind of thing that software-defined radio allows you to do. If I take just an oscilloscope — or actually any Rohde & Schwarz instrument; actually Rohde & Schwarz is a good example because I can feed their instruments a 10 megahertz external clock — but an oscilloscope, I've never seen an oscilloscope to which I could give an external clock to clock the A/D converters. So here is an example, a practical demonstration. I'm not the one doing the frequency transfer. You are all using quartz.
Quartz is a little piece of material that sets the rate at which your microprocessor is clocked. It's kind of funny that 300 years ago the time of a second was defined by the motion of your clock's pendulum, and today your frequency is still given by the thickness of a piece of quartz between your two electrodes. Everyone can go to Farnell, buy a quartz, and say, if I just put the right capacitor, it will oscillate at eight megahertz. I just want to show you here that it oscillates at eight or 16 megahertz because there's been a huge body of work in tuning the shape of these surfaces, in tuning the electrodes. This is just a device that a colleague of mine gave me to characterize the acoustic modes. This is the kind of thing I do as research: you can map the acoustic field. So remember, from your perspective it's an electrical component — it's just an electrical dipole — but the physics is that it's actually a piece of quartz vibrating. I'm going to show you later how you measure this, but this is the electrode, and if I look at the displacement field, I can detect the magnitude of the displacement and the phase of the displacement. And if you're not careful in the way you pattern the electrodes on your piece of quartz — on top here you've got the spectrum, that's the real part of the admittance, and all these peaks are the various modes. So if you were just to put this in a microprocessor or microcontroller feedback loop, well, you just wouldn't know whether it starts at 4.45 or at 4.65 megahertz; it would just start on one of these modes. And actually I like this picture because it shows you that the low frequency modes are not really very exciting — you've got this shape, you've got this shape — but when you start looking at higher frequency modes — sorry, this axis is frequency, so this is five megahertz, there you go to eight megahertz — you start seeing that this is a different propagation mode.
But then when you go to 12 megahertz you start seeing some beautiful patterns. These are all the standing waves that you can find on a piece of quartz. And of course, the higher the frequency, the higher the modes and the nicer the patterns. So how do you collect this kind of data? That's your basics of scanning probe microscopy — actually we've got an expert on scanning probe microscopy here in France. Scanning probe microscopy is the idea that you have a single probe, and for each position of your sample you collect a measurement; then you move your sample a little bit and collect new information. So you only have a single probe, and it's because you move the sample under your probe that you can map the physical quantity. In this particular case, the physical quantity that we're interested in is the out-of-plane motion of a piece of quartz. And how do you measure an out-of-plane motion? You make an interferometer. An interferometer is: you send a laser, you split the laser, you illuminate your sample, and you compare against the length of the reference arm. Actually, in this particular case the reference arm is given by the reflection of the laser on the front facet of this helium-neon laser — it's not the right way of doing it, but that's how we do it. Then you look at the reflected signal, and this reflected signal will be at a phase which depends on the path difference between this part and this part. Anyone who's done basic physics courses and has assembled a Michelson interferometer knows that this fringe pattern is extremely sensitive to the environment: if you just heat the air a bit, if you just blow a little hot air in front of the reference arm, you're going to see the fringes moving. So if we were to do this kind of measurement with a basic Michelson interferometer, we would have a varying static phase. So what we do is frequency shift from baseband to the radio frequency band using an acousto-optic modulator.
This acousto-optic modulator is clocked at 80 megahertz; it's a narrowband device. So we send the laser, the laser is modulated at 80 megahertz, it goes to the sample, comes back through the acousto-optic modulator — so two times 80 megahertz, 160 megahertz — and this is what goes to the radio frequency detector, which is a high-frequency photodiode. This is the basics of the heterodyne approach. If you don't follow the detailed physics of what I just mentioned, just remember that what you get here is 160 megahertz — twice the acousto-optic modulator frequency — plus the contribution of your acoustic device here, and typically this will be a quartz at 16 megahertz, for example. So you intrinsically have a radio frequency measurement here, and what you want is to collect these two channels — the reference channel and the measurement channel — using one of these signal processing systems. We've heard about ZeroMQ multiple times today. I just want to emphasize: I fell in love with ZeroMQ when I was in Japan, because I could separate what GNU Radio is for, which is collecting streaming radio frequency data, from what something else does — in this case it will be GNU Octave — which is the asynchronous operations. In this case, for example, I want to steer an antenna, or I want to move the position of the sample; that's not the job of GNU Radio. GNU Radio is streaming radio frequency data; it has other things to do than to program a positioner or tell it to move to a new position. So what we're doing here is we have a B210. This B210 is collecting radio frequency signals on two channels. We want the ratio of the measurement channel to the reference channel, so we divide these, we low-pass filter because we have far too many samples per second, and we stream this data to a first ZeroMQ sink. And then I would like to make sure that my experiment is working properly.
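[Editor's aside: a minimal numerical sketch of the channel-ratio idea, not the actual GNU Radio flowgraph. The common fringe drift appears in both channels, so dividing the measurement channel by the reference channel cancels it and leaves the probe's magnitude and phase. All values below are made up for illustration.]

```python
import numpy as np

fs, n = 1e6, 100_000
t = np.arange(n) / fs
f_if = 100e3                               # IF after downconversion (illustrative)
drift = 0.5 * np.sin(2 * np.pi * 3 * t)    # common-mode fringe drift on both channels

probe_mag, probe_phase = 0.8, 0.2          # the quantities we want to recover
ref = np.exp(1j * (2 * np.pi * f_if * t + drift))
meas = probe_mag * np.exp(1j * (2 * np.pi * f_if * t + drift + probe_phase))

# dividing by a unit-magnitude reference == multiplying by its conjugate;
# carrier and drift cancel, leaving the probe contribution only
ratio = meas * np.conj(ref)
# low-pass + decimate (a block average stands in for the decimating filter)
lp = ratio.reshape(-1, 1000).mean(axis=1)

mag, ph = np.abs(lp.mean()), np.angle(lp.mean())
```

With real data the cancellation is only as good as the channel matching, but the structure — conjugate multiply, then low-pass and decimate — is the same.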
So I want to get the magnitude of each of these channels, just to make sure that my signal-to-noise ratio is sufficient. I take these two magnitudes, low-pass again, float to complex — of course it's not really real and imaginary parts, it's two magnitudes — and I stream to a second ZeroMQ sink. These two ZeroMQ streams are sent to GNU Octave, and GNU Octave makes the best, on the one hand, of ZeroMQ and, on the other hand, of instrument control: GNU Octave provides functionalities for communicating over RS232, GPIB, all these common communication modes. In this particular case, my instrument control talks through a serial port to this positioner, the positioning system that moves my sample under the laser probe. So I can just send messages, and again, I think it's not the job of GNU Radio to stop its streaming activity to send commands to a positioner that is moved once every second. Then, once GNU Octave has moved the positioner, I open the ZeroMQ socket — actually, since I have two streams I have to open two ZeroMQ sockets — in a publish-subscribe mode that is analogous to UDP: either the data are received, good, I process them, or they're just lost, but there is no TCP/IP transaction. So I subscribe to my socket and I collect data. Because ZeroMQ is sending kilobyte-sized messages, if you want more than a few kilobytes you need to collect multiple packets. And the only way I found to make sure that I'm synchronous between my positioner motion and my ZeroMQ stream is to close the socket and open it again when I'm at a new position. So in this example, for each position X and Y, I collect 20 radio frequency samples. And what's really important to me is that I never stop the radio frequency stream, because with two B210s the streams are synchronous, but you don't know what the phase offset is. So if at some point I stop the stream and start it again, I start with a new phase offset and I cannot compensate for it.
So in this case I keep streaming the radio frequency data, and as long as I calibrate my phase offset, I know it's the same throughout the data I collect. This is the kind of experimental setup we have; here is what it looked like initially. You had this positioner — what I call the positioner is this little sliding stage; you've got one in X, one in Y, and you want to position the sample with sub-micrometer accuracy. It actually takes a bit of time for this servo to settle — what you realize is that it takes about 10 milliseconds. So with this positioning system you've got this laser beam, and the laser beam is illuminating the sample through this optical microscope lens here. Now, if you use general purpose instruments — remember, what I needed is two magnitudes, to make sure that my signal-to-noise ratio is good, and one phase information from a lock-in amplifier — if you do this with general purpose lab-grade instruments, you need four GPIB communications. That takes about one second per sample to stream the data, and if you have about 100 by 100 samples, that's 10,000 seconds. So you just wait for about three hours, because your GPIB communication is so slow — even over VXI-11, even over Ethernet, it's intrinsic: you have to trigger the measurement, collect the measurement, fetch the measurement, and start again. So you see, here we start with 10,000 seconds for one measurement. If you go to streaming with GNU Radio, with software-defined radio, you remove that bottleneck; you're only at 10 milliseconds per sample. So you go from 10,000 seconds to about 15 minutes. This is where software-defined radio enables something that could not be done previously: high-throughput data. But then you see that these 10 milliseconds are not really related to streaming the data; it's just because I'm wasting time waiting for the positioner to settle.
So what about removing the positioner and moving the optical beam instead? What we did is we removed the positioner that takes 10 milliseconds to settle and we put in two rotating mirrors, and these rotating mirrors have very small inertia, so they will move at 100 hertz: instead of 10 milliseconds for settling each position, I can get a whole line in 10 milliseconds. And there you take the best out of the streaming of software-defined radio, because now you're not wasting time; you're just limited by the data stream from the software-defined radio. Notice that we started with 10,000 seconds per measurement and we reached two seconds per measurement. This is what I call what you could not do without software-defined radio. With 10,000-second measurements, my colleagues were starting an experiment, leaving the lab in the evening, and collecting the data the next day. Now I can tune my experiment, I can focus, I can change the focal length here, just by looking at the output images, because the image is refreshed every two seconds. So this is what I'm calling enabling experiments that would not be done otherwise. And again, you see here one of the vibration modes of your quartz, where you've got one node and two maxima of displacement amplitude which are out of phase — so it's actually a motion that looks like this on my piece of quartz. And this is what you gain from software-defined radio: in the initial experiment — actually the lock-in amplifier had already been stolen by someone — you had two spectrum analyzers, you had a network analyzer, and all this stuff is replaced by an E310 here, because I wanted to be completely autonomous, and a couple of passive radio frequency components. So Gwen presented the OscimpDigital framework to you. All this has been uploaded to OscimpDigital, and what we upload as our custom firmware is the triggering of the measurement from the E310.
So the stream is triggered by an external signal, and this is the novelty: we changed the original firmware to have an external trigger for the data stream. To conclude this presentation, another thing that we're doing is pulse radar. I'll go through it very quickly, but again, here software-defined radio allows you to sequence pulses using an updated firmware — in this case on a Red Pitaya. The Red Pitaya takes care of sequencing all the pulses needed for the radar signal. Transposition to the radio frequency band is done by feeding a switch with a voltage-controlled oscillator and an IQ demodulator; the Red Pitaya of course only takes care of baseband. This is the kind of processing chain we have in the Red Pitaya, generating the various start offsets and stop offsets to generate the pulses that make up the pulse radar stream. So in this example we can have a radar system with a pulse repetition rate of about 250 kilohertz, and this again is the kind of update rate that would not be feasible with an oscilloscope collecting data to be processed. Why would you need such a high pulse repetition rate? In this particular case we are probing acoustic sensors, so these are radio frequency sensors acting as cooperative targets. What kind of sensor would need 250 kilosamples per second? Well, a vibration sensor, if you're looking at a strain gauge. In this case we have a strain gauge glued to a tuning fork, and if you hit the tuning fork with a hammer, of course you've got the 440 hertz main mode, but thanks to the high sampling frequency you can see modes up to 40 kilohertz; our bandwidth goes up to 125 kilohertz at 250 kilosamples per second. Now, I'm not saying that pulse radar is the right way of doing radar — of course frequency-swept radars are much more elegant and much more beautiful — but a frequency-swept radar, or a correlation radar with a noise source, will never achieve a 250 kilohertz update rate; only pulse radar, to my knowledge, will achieve such a high pulse repetition rate.
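[Editor's aside: why a 250 kilohertz pulse repetition rate gives a 125 kilohertz vibration bandwidth — each radar pulse yields one sample of the sensor response, so the vibration is effectively sampled at the PRF and Nyquist gives half of that. A sketch with illustrative tuning-fork tones:]

```python
import numpy as np

prf = 250e3                     # each radar pulse = one sample of the vibration
n = 4096
t = np.arange(n) / prf

# tuning-fork response: 440 Hz fundamental plus a weak 40 kHz overtone
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 40e3 * t)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / prf)

bandwidth = prf / 2                                   # Nyquist limit: 125 kHz
overtone = freqs[np.argmax(spec * (freqs > 20e3))]    # peak found near 40 kHz
```

A swept or correlation radar that needs many pulses per update could not resolve the 40 kilohertz mode, which is the point being made above.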
So in this particular case, where we wanted high measurement bandwidth, we went for this pulse radar approach. And finally, the last demonstration is phase noise measurement. Phase noise is the characterization of stability: if you remember that phase is the integral of frequency, then if you look at the phase fluctuations of your oscillator over a given bandwidth — that's the phase fluctuation in a one hertz bin — you characterize how stable your oscillator is. So let's say you have a reference oscillator and a device under test. You don't care about the carrier, you just want to know what is happening around the carrier, so you need to mix them, and because this one is going to continuously drift with respect to that one, you need a feedback loop to keep your reference oscillator tracking the device under test. This is your analog approach to cancelling the carrier, and it's got a lot of problems — my colleagues are experts in this — since you've got fluctuations due to the mixer here. So we do a digital approach. It's exactly the same story: we collect A, we collect B, no feedback loop; after A-to-D conversion everything is digital, so no bias, no drift. We have the mixer with a local oscillator in a digital implementation: decimation, arctangent, unwrapping, linear regression, Fourier transform, and we get the phase fluctuations. This is actually what we do with the E310. The X310, unfortunately — the X310 has two times two A-to-D converters but can only stream complex values, so we need two X310s to do this kind of measurement. And here again is what we implement in the X310 to get this kind of phase measurement. The curve here looks smooth; it's actually very consistent with professional hardware from Agilent or Rohde & Schwarz. What you have here is the phase fluctuation as a function of the offset from the carrier frequency. So again, this is the kind of measurement where a 50k instrument from Rohde & Schwarz or Agilent is pretty well matched by
two USRPs here. So, just to show you a little bit of what we're doing with our custom hardware, what I would like to emphasize is all these spurs here — we gave a presentation last year at the International Frequency Control Symposium. If you look at the Rohde & Schwarz system, it will hide all of these from you: whatever these spikes are, it considers them noise and will just smooth them out for you. Agilent is not as bad, because they have a mode in which they still give you the raw data, but the displayed data has been smoothed. So you always see these very beautiful charts, but phase noise always carries this spurious information. That concludes my presentation. I wanted to show a little bit how SDR can be used beyond radio frequency communication, especially digital radio frequency communication; I wanted to show how you can use readily available hardware, radio frequency oscilloscopes; and I wanted to show you how we can enable measurements that would not be feasible without software-defined radio hardware. All these topics will be expanded — a bit of advertisement, although there's been plenty of it during this session — during the European GNU Radio Days, which will be held in Poitiers June 22nd and 23rd. Call for contributions: April 1st. Registration is mandatory and free of charge, for organization purposes, and the registration deadline is May 1st. The keynote speech, which was given last year by Marcus on the internals of the GNU Radio scheduler, will this year be given by the author of GNSS-SDR, Carles Fernández from the Polytechnic University of Catalonia in Spain. GNSS-SDR is a complex software package for global navigation satellite system processing, heavily relying on GNU Radio, so if you're interested... OscimpDigital was already presented, as was GNU Radio; gr-oscilloscope and its 3.8 port; and for the French-speaking audience, you can find the details in the article in the current Linux Magazine. And with that, I thank you for your attention.
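[Editor's aside, as an appendix to the phase noise discussion: the digital pipeline listed in the talk — mix with a digital LO, decimate, arctangent, unwrap, remove the trend by linear regression, Fourier transform — can be sketched end to end on synthetic data. Everything here (sample rate, carrier, noise level, the crude boxcar decimator) is illustrative and is not the E310/X310 firmware.]

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1e6, 1 << 18
t = np.arange(n) / fs
f0 = 123456.0                                    # carrier under test (illustrative)
pm = np.cumsum(rng.standard_normal(n)) * 1e-5    # random-walk phase noise (rad)
x = np.cos(2 * np.pi * f0 * t + pm)              # simulated A/D converter output

# digital LO mix, then a crude low-pass/decimate-by-16 (boxcar average);
# the residual image of the 2*f0 term survives as a small spur, which is
# exactly why real implementations use proper decimating filters
bb = (x * np.exp(-2j * np.pi * f0 * t)).reshape(-1, 16).mean(axis=1)

phase = np.unwrap(np.angle(bb))                  # arctangent + unwrapping
td = np.arange(phase.size) * 16 / fs
slope, offset = np.polyfit(td, phase, 1)         # linear regression removes the
resid = phase - (slope * td + offset)            # residual frequency offset/drift

# one-sided PSD of the phase fluctuations, in rad^2/Hz
psd = 2 * np.abs(np.fft.rfft(resid)) ** 2 / (resid.size * fs / 16)
```

Plotting `psd` against the corresponding offset frequencies gives the kind of phase-fluctuation-versus-offset curve shown in the talk, spurs included.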