So we're starting our next presentation. It's a whole new area, we're now in use cases, and we have two speakers; Nick will start. Please give a warm welcome to Nick.

Hello all. First of all, I'm Nick, he's George, and we're members of the University of Crete's radio station. It's a team of around 20 people, mostly students and alumni of the university, but we also have people from outside the university community. Roles in the team range from hosting live shows and playing music to library management, and of course we also have engineers who do hardware and software work.

So why do we do this? First of all, it's fun. It brings us together, and it's very interesting both socially and technically. It's a social experiment, a challenge for all these people to work together on different things, and it's also a technical challenge. We didn't expect this to be such an interesting project, because radio, FM and so on, is an old technology. It turns out it's very interesting and there is a lot of room for development and improvement. We also want to play, to experiment, and to learn from each other: from the different music genres, the different technologies, and the different knowledge that each one brings to the table. We also want to use it as a medium for groups to express themselves, for freedom of speech. There are groups in our city that are not represented by mainstream media, so we wanted to give them a medium to express themselves. Since we operate in the university, one of our goals is also to bridge the gap between scientific knowledge and the community of the city, so we want shows that promote science or explain things. And through all of this, in the end, we want to make a difference.

So how do we do it? We have some rules. Most radio and TV stations have advertisements; that's how they make money. We have no advertisements and no sponsors. This is a choice: we want to be independent. The building and the rent have always been covered by the university, and the rest of the money, which we use for maintaining and upgrading our hardware, comes from organizing events such as live shows, DJ sets and parties, and from donations from our listeners, not from any company. There are no elite members; we are all equal. We have a general assembly; there is no board of directors or anything like that. And we maintain everything ourselves. We operate on the principles of do-it-yourself and open source: we build and maintain our hardware ourselves, we teach each other how to do it, and we write our own software when we need to. So, off to George.

So, yeah, this is our studio. It's a quick-and-dirty approach to making a studio: we bought some cheap carpet to put on the walls, and whatever else we could find, and it pretty much does the job. We have an old analog console for radio stations over there, with a lot of channels, and we bought a computer and the microphones. Because we have an analog console, we wanted to do the mixing on it, so we bought a sound card with a lot of channels, to be able to send different outputs to different faders of the console. Here is a diagram of how the audio gets from the studio, from the microphones and the computer, to our listeners. So we have, as I said, the analog console.
Audio goes out from that analog console to a PC we call Mastering, where we post-process the audio, encode it, and make it ready for streaming. We stream in two ways: one is the FM broadcast, and the other is on the web. Part of the setup is also the network storage where we keep our music library, and the computers that let us interact with the library without actually being inside the studio.

We use a lot of open source technologies. As we said, we are in the spirit of doing everything ourselves, and the best way to do that is to reuse what is already out there. We use Gentoo Linux as the base distribution for all of the computers in the station, and we maintain our own Gentoo overlay of packages for the extra software that we write. We use JACK for audio routing inside each computer, and a lot of GStreamer in the software we write ourselves. There is also GNOME, Firefox, VLC as a player, and Mixxx, a very nice DJ-like audio player. We use MusicBrainz Picard for audio tagging, and we also contribute back to the MusicBrainz database.

Let's now see what's under the hood, where we do the interesting stuff. As George said, in this diagram, after the sound comes from the console it goes to this guy, the mastering PC. This is where most of the magic happens. This is what you would see on the mastering PC; this is what happens there.

First we get the audio. It comes in at different gain levels, different volume levels, but when someone listens to music or a show, they want to hear a consistent gain range. You do not want it to be very loud one moment and very quiet the next; you want to maintain a specific dynamic range. To do that we use what's called compression. Not file compression: audio compression, dynamic range compression. For that we use Calf. Calf Studio Gear is a set of tools for post-processing and sound mastering, really cool stuff. The software does the job and has a really nice GUI; it took us some time to calibrate it properly to make it behave the way we want.

So, post-processing chains. One chain is for the web, because on the web we can afford better quality and a wider dynamic range. On FM, because the signal goes out in the air as analog, there is a lot of noise, so you want the audio to be louder, and you have a smaller dynamic range available there. So we have a different chain for preparing the sound that goes to FM.

Once the audio is processed and ready to be broadcast, it goes to another stage to be sent out over Icecast. We had DarkIce initially, which didn't work for us: it's not maintained anymore, it used a lot of memory and resources, and it didn't do the job as well as GStreamer did in a few lines of code. So now we have Icestreamer, our own DarkIce alternative, which takes the audio coming out of the main post-processing chain and sends it out to the web. The sketch below shows roughly what such a pipeline looks like.
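To make that concrete, here is a minimal sketch of the kind of pipeline meant here, assuming GStreamer's Python bindings and a local Icecast server; the mount point and password are placeholders, not the station's actual configuration:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    # Take audio from JACK, encode to Ogg/Vorbis, push to Icecast.
    pipeline = Gst.parse_launch(
        "jackaudiosrc ! audioconvert ! audioresample "
        "! vorbisenc ! oggmux "
        "! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/live.ogg"
    )
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()  # stream until interrupted

This is the sense in which GStreamer does the job in a few lines of code.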
Another place we send the audio after it comes out of the main chain is our audio logger. When you have a radio station, you are required by law to keep an archive of everything you broadcast, so that in case of disputes they can come and ask you for it. We couldn't find a proper audio logger, some simple recorder to just record our shows, so we wrote our own. It's called Audio Coffin, and we use it to bury our stuff deep in the archives, properly. It supports different qualities, it sits directly on JACK, and it uses libsndfile to do the encoding. A small sketch of the idea comes at the end of this section.

Okay, enough about the web and the good-quality side. What about FM? Once you have your audio signal, it's not enough to just feed it to the exciter, the device that modulates your signal onto the RF, that does the frequency modulation. You have to encode it somehow. On its own it's just one channel, and one channel is mono. To encode stereo, you need a device that generates another signal, called the FM multiplex, or MPX. This guy takes the audio and encodes it into a signal that looks like this. Here is left plus right, which keeps it compatible with mono receivers. Then you have to encode the stereo part. We can only hear up to roughly 20 kHz, which is right here, so they put the difference channel higher than that, on a 38 kHz subcarrier: you cannot hear it, but your radio receiver can, and it can decode it and retrieve the other channel. So here we have left plus right, and here we have left minus right; add them and subtract them and you recover left and right, and that is how you get stereo. To keep these two in sync, you also need a pilot tone, this guy over here at 19 kHz, which is used to phase-lock the two signals. It's a bit complicated. On top of that you also want RDS. RDS is the text you see on your receiver, and it is also encoded into this MPX thing: it's this guy over here, at 57 kHz.

Usually this is done in hardware, and the hardware is pretty expensive: it goes up to a couple of thousands of euros. Because we don't have that money, and because these things can be done in software, we wrote it in software ourselves. This is the GUI of our application, which is called JMPXRDS. It's a boring name, I know. It takes the audio and generates this multiplex signal. The multiplex signal is one audio channel, and since it only extends up to 57 kHz, you can output it from a sound card that runs at 192 kHz. So if you have a sound card that can do 192 kHz, you can use our program to broadcast FM: you take the output, feed it to the exciter, and there you go. We also have RDS support. Here you see our radio station's name, which is what you see on the receiver in your car, and this is the radio text, which some receivers also support, and where we put the title of the song. It works just fine. We also support different modes of operation; there you see SSB, single sideband, which takes advantage of the fact that this signal is symmetric. From the exciter it hopefully goes to an RF amplifier so we can cover the whole city. For now we run at around 20 watts, with just the exciter, which is decent but cannot penetrate inside your home or into a basement, most of the time.
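Since the multiplex is just a baseband waveform, the spectrum described above can be synthesized sample by sample. Here is a minimal numpy sketch, with test tones standing in for real program audio and a bare 57 kHz tone standing in for proper RDS modulation; the amplitudes are illustrative:

    import numpy as np

    FS = 192_000              # sound card rate; Nyquist 96 kHz > 57 kHz
    t = np.arange(FS) / FS    # one second of samples

    left = 0.5 * np.sin(2 * np.pi * 440 * t)    # placeholder program audio
    right = 0.5 * np.sin(2 * np.pi * 660 * t)

    pilot = 0.09 * np.sin(2 * np.pi * 19_000 * t)   # 19 kHz pilot tone
    sub38 = np.sin(2 * np.pi * 38_000 * t)          # 2x pilot, phase-locked
    rds = 0.05 * np.sin(2 * np.pi * 57_000 * t)     # 3x pilot; real RDS is
                                                    # data modulated onto this
    mpx = 0.45 * (left + right)           # mono-compatible L+R baseband
    mpx += 0.45 * (left - right) * sub38  # L-R on the 38 kHz subcarrier
    mpx += pilot + rds                    # one channel, ready for the exciter

JMPXRDS does this in real time, plus the SSB mode and the actual RDS encoding mentioned above.

And here is the promised sketch of the Audio Coffin idea, assuming the JACK-Client and soundfile Python modules (soundfile wraps libsndfile); the port and file names are placeholders, and a real logger would hand samples to a writer thread instead of touching the disk inside the JACK callback:

    import jack
    import soundfile as sf

    client = jack.Client("audiologger")
    port = client.inports.register("in_1")
    # libsndfile (via soundfile) handles the Ogg/Vorbis encoding.
    outfile = sf.SoundFile("show.ogg", mode="w",
                           samplerate=client.samplerate, channels=1,
                           format="OGG", subtype="VORBIS")

    @client.set_process_callback
    def process(frames):
        outfile.write(port.get_array())  # this period's samples

    with client:
        # Connect the mastering chain's output to audiologger:in_1,
        # then record until stopped.
        input("Recording; press Return to stop\n")
    outfile.close()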
So, Nick told you what's going on with the FM transmission; I'm going to talk about what happens in the studio, the other interesting stuff there.

Another thing we wanted our station to have is music playing 24/7, and we want to play interesting music that isn't played by others. Maintaining playlists by hand, to be played by a player like VLC for example, is a bit hard. So we thought, why not write our own scheduler? Well, not exactly: we first tried the pre-made solutions. We tried Airtime first, and it didn't really work for us, so after that we wrote our own. We wrote something simple, built around the concept of zones. Each zone is a time slot in the day, and each zone has playlists that are edited by hand: people put music into the zone. For example, we have a jazz zone; there is a playlist file with jazz songs in it, and we tell the scheduler to read music from this file and shuffle it. We also have intermediate playlists that can be inserted inside a zone: every few minutes the scheduler picks up a song from an intermediate playlist. That is how we support spots and jingles, because adding spots and jingles throughout the day is hard to do by hand. I forgot to say that this also uses GStreamer as its playback engine. A small sketch of the zone logic follows at the end of this section.

Then we have the problem of propagating the metadata from the players to the listeners: we want listeners to be able to see which song is playing right now. So we are working on a system, not fully done yet, a work in progress, that uses Crossbar.io and WebSockets. There are several metadata producers that run in different places inside the system: one producer takes metadata from the audio scheduler, another takes metadata from whichever player is currently playing, through its MPRIS D-Bus interface. All of these producers send the metadata to a server running Crossbar, which propagates it to the clients through WebSockets. At the moment we support web clients, meaning the website, but we also want to add support for mobile apps, for the RDS client, and for Icecast as well. There is a sketch of a producer after this section too.

Another interesting thing is the music library. We have a big music library, around 2,500 artists with their music. One of the things we try to do is promote local bands, so we take a lot of music from them and put it in our library, and as we said, we use MusicBrainz for the tagging; for local bands we also contribute the metadata to the MusicBrainz database.

And of course we are not done yet; we are always working on it, and there is always some work to do. Some things we want to do in the future: we want to develop applications for better user interaction, like a desktop application that lets users run a show more easily, letting them select, for example, which metadata source should be used, or receive messages from listeners via a plug-in on the website. We also want a desktop application for managing the audio scheduler, which is currently done by hand with configuration files; we want to put the zones and their information in a database. And we want to make the website a proper website, because the one we have is just a static page with a player, nothing fancy.
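Here is the promised sketch of the zone logic, in Python; the file names and the interval are hypothetical, and the real scheduler plays through GStreamer rather than returning paths:

    import random
    import time

    class Zone:
        def __init__(self, playlist, intermediate, every_minutes):
            self.playlist = list(playlist)     # hand-edited main playlist
            self.pool = list(playlist)
            random.shuffle(self.pool)
            self.intermediate = intermediate   # spots and jingles
            self.every = every_minutes * 60
            self.last = time.monotonic()

        def next_track(self):
            now = time.monotonic()
            # Every X minutes, slip in a track from the intermediate list.
            if self.intermediate and now - self.last >= self.every:
                self.last = now
                return random.choice(self.intermediate)
            if not self.pool:                  # reshuffle when exhausted
                self.pool = list(self.playlist)
                random.shuffle(self.pool)
            return self.pool.pop()

    jazz = Zone(playlist=["jazz/a.ogg", "jazz/b.ogg"],
                intermediate=["jingles/station-id.ogg"],
                every_minutes=30)
    print(jazz.next_track())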
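And here is a sketch of what one of the metadata producers can look like, assuming the autobahn Python client, a Crossbar.io router at ws://localhost:8080/ws on realm "realm1", and a hypothetical topic name:

    from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

    class NowPlayingProducer(ApplicationSession):
        async def onJoin(self, details):
            # In the real system the metadata would come from the
            # scheduler or from the current player's MPRIS D-Bus interface.
            self.publish("radio.nowplaying",
                         {"artist": "Example Artist", "title": "Example Song"})

    if __name__ == "__main__":
        runner = ApplicationRunner("ws://localhost:8080/ws", "realm1")
        runner.run(NowPlayingProducer)

Subscribers, such as the website, receive each publication over WebSockets without the producers needing to know who is listening.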
One thing we would also like to do, one idea, is to let listeners contribute to the playlists through the website. Ideally, listeners should be able to see the songs that are in a certain zone, comment on them, and send us a message saying: maybe you can add this artist, you may not know him, but he's nice, he plays that kind of music, and you should add him to that playlist.

Another thing we would like to do is, at some point, ditch the analog console and do everything in software. That is going to cost a lot of money, because we want to have physical faders, so we would like to buy a MIDI console for it. As soon as we find that money from events, which, as I said, are our only funding, we will try to do that. Another thing we would like to add to Calf is a real-time declipper, because right now that is not supported.

That's all I had. Thank you. You can find information about us at these links, and on our GitHub page you can find the software we wrote, like the FM encoder, the audio scheduler, and everything else. Any questions?

Hi there, sounds very cool. What license are you using for the projects? It's all GPL, yes.

Other questions? Over there. Hi, you mentioned you wrote an RDS client; is that based on an open standard, and was it easy to implement? So, the RDS part is part of the modulator, the tool that creates the multiplex signal: it also generates the RDS signal, it encodes the RDS. It's open source and it's based on an open standard; RDS is an open standard. You can get it, though you will not find it straight away: you might have to send a mail to the consortium, but they will give it to you without a fee. We just do what the standard says and it works just fine; we tested it on multiple receivers.

A question in the back. Just one question: there is a project called Rivendell, for automation, for autopilot. Do you also use Rivendell or something like it? No. What we want is, when music plays on autopilot, to insert spots and jingles at specific times. When you broadcast, you are required by law to play a spot for your radio station at least every 30 minutes. None of the automation software did that for us: they let you schedule live shows, but they weren't flexible enough, and that's why we wrote our own. They were also very over-engineered; most of these tools are over-engineered. That's why we introduced the concept of the intermediate playlist that plays every X minutes, which is configurable.

Last question? Oh, yes, I'm coming; I'll be very fit by the end of today. Quick question: do you plan to do DAB+, digital radio, DAB? So, DAB is digital audio broadcasting. It's another standard, it's not FM, and it operates on another band. We could do it in software with SDR, with software-defined radio, but it would need different hardware for the output, because the bandwidth of the channel is not a few tens of kilohertz like what you have here, which you can output from a sound card; it's a lot wider. I haven't tried it yet. It might be possible with the 384 kHz sample rates some sound cards support, but you would probably need something like a HackRF, something with a wider output channel. And the problem is that nobody, no receiver, I mean, in real life, in our city, there is nobody with a receiver that does DAB, so... Okay, thank you very much. Thank you.