Hi. Am I audible? Yeah, I guess so. I'm Atul, and this is my second FOSDEM, and I just got lucky enough to get the opportunity to do a lightning talk. Earlier in the day, Ron gave a fascinating talk about Gobot, where he mentioned the reasons for using Go: concurrency, performance, and much more. I work for MinIO, and for us, choosing Go as the language of choice was very easy because of you all and the vibrant Go community. We got all the support, help, and love we needed to build this project. I would like to give a big round of applause to the organizers of the Go dev room for making this event go so well. I mean, can we give a round of applause to our organizers for this? OK. So what I'm going to do in the next 30 to 40 seconds is walk you through the list of associated projects, written in Go, that power MinIO. Concert is a console-based certificate generation tool for Let's Encrypt. We had a community of users using Let's Encrypt, and that gave us a reason to write a tool called Concert. Then there are dsync, blake2b-simd, sha256-simd, and asm2plan9s. I won't say much about those, because my colleague Frank already spoke about them earlier in the day. Similarly, we also have the projects MinFS and s3verify. MinFS is a mountable file system for MinIO as well as Amazon S3. I don't know if you know, but MinIO is an open-source, AWS S3-compatible object storage solution. So this is pretty much my lightning talk. As I said, I'd be done in 30 seconds, and I think I'm done. Thank you, everyone. If you have any questions, we have a Slack channel; feel free to join in and ask. I'll be here throughout the day, so just ask any question you have, and I'll happily answer it. Thank you.

Hello, yeah. So basically, I just wanted to do a very quick talk about keeping packages backwards compatible. This is going to be just a few tips, with the examples we're going to see in the source code.
I'll be tweeting the slides later if you want to go through them. The main thing in terms of keeping backwards compatibility is that you want your packages to have very small APIs, because anything you export, any interface, any variable, any function, you are committing to it. It's something you can never change, because changing it will break the code of people who use your API. I maintain Testify, and I've sometimes broken the tests of the AWS SDK, which is not fun. So you need to be very careful with these things. One of these things is exporting fewer interfaces. I see many people do something like this: they have a package, say example, with a New function that returns a Service, where Service is an interface whose methods are implemented by an unexported type. The problem is that if you later want to add something new to your package, adding a method to that interface will break any other package that implemented it, because those implementations no longer satisfy your interface. So, as a rule of thumb, receive interfaces but return structs. If you return interfaces, you are committing to never adding more functionality to that interface. In this example, instead, Service is a struct rather than an interface, and we return a pointer to the struct. If you want to encapsulate things, don't export fields and make sure that works properly, but don't try to encapsulate your code by returning an interface. On the other hand, you should be exporting your input interfaces, the interfaces you accept. I sometimes see people do something like this: a function takes an input whose type is an interface that is not exported. Which means that finding out what that interface actually is becomes a bit of a Where's Waldo.
You go to the godoc and wonder: OK, so what does this interface need in order to be satisfied? You don't know. You need to go to the source code and figure it out. So instead of doing that, you should export the interfaces that your exported functions accept, so that from the documentation people know what they need to implement to satisfy those interfaces. But sometimes you are not ready to commit to that interface: you want to keep it to yourself, and you don't want other people implementing it for their inputs. One example is in the standard library, in the testing package. You have two structs, T and B; T is used for tests and B for benchmarks. But they also have a TB interface, which basically has all the methods, Error, Fail, and all that stuff, and they are exporting this interface. As we said, once you export an interface, you can no longer add anything to it, because that breaks backwards compatibility. So if you really need to do something like this, one thing they do is declare the interface with all the methods plus a private method, which is not exported, which means that only their package is able to satisfy that interface. You can go through the source code of the standard library and see why you might need to do something like this. But if you need it, there's a way to do it. With that, you're saying: OK, this is an interface I'm exporting, and you can document what that interface requires, but you are not committing to never making changes to that interface, to never adding more methods.
One place where I've seen some of these things play together really nicely is in doing nice optional params. This is a pattern I've seen in gRPC. For example, we have a crawler here that you initialize, and you can crawl a website and call a function on every page you crawl. You can call it without any options, and then you can say: only crawl websites with this host, or only crawl up to a max depth of four. So you have optional parameters. The way gRPC, or this crawler, does it is that New is a variadic function that takes options. An Option is nothing but a function that takes an unexported options struct and returns an error. And options is unexported because you are the only one who is going to be able to implement those configuration options. So WithMaxDepth is as simple as this: you return a function that does all the things. You are the only one who can implement these options, yet you still have everything documented in godoc: you have Option, and despite callers not being able to implement their own options, you have the documentation for all the options you provide. So: be backwards compatible. I'll be tweeting the slides now, and if you need anything, just ping me.

OK, so: SQL shenanigans in Go. For those who are not aware, shenanigans just means doing crazy things. This is part of a bigger system that we built at work. What we want to do, essentially, is SQL-like querying over a stream of JSON objects. We have JSON objects coming in, and we want to be able to filter them. Sometimes you would do that kind of thing with systems like Storm, but we just wanted to build our own. So we said, why not? SQL-like querying: fortunately, there is this wonderful library written by the YouTube guys called Vitess, and it has a really nice SQL parser.
So suddenly you can parse SQL, which is a daunting task, but it's done for us. Over a stream, we can read data from, say, NATS, a WebSocket, a Go channel, it doesn't really matter, and get JSON objects, say something that looks like this. So how do we define SQL over streams? Well, there's an Apache project for that, why not? The Calcite project has defined a syntax for doing aggregations and operations over streams. In this case, it would mean: over an interval of one hour, run that query against the data that has come in in between. So how do we build it? Well, we take a string, it doesn't really matter where from, and we parse it into a map of string to empty interface. Horrible. Then we apply our newly created SQL engine, and we return something that is again a map of string to empty interface. The only thing you would change if, instead of JSON, you wanted protobufs, is that you're not taking strings; you take something else and convert it into a map of string to empty interface. And then, demo time. I've created something that generates a person object every five milliseconds, marshals it, and dumps it onto a NATS server. Then, in another process, I take the byte array, convert it to a string, unmarshal it into a map of string to empty interface, do the SQL querying, and return a channel of my filtered results. So in this window I have a process that is generating all those objects and dumping them into NATS, and here is where we are hopefully going to see it. So what does the code look like? OK. We connect to NATS. We define the new query: it selects star from, and we'll forget what "from" means, because it doesn't have a meaning in this particular example. Then we have a channel where we dump our input strings, and we return them. We evaluate our queries, and this is how we dump the data from NATS into our query parser. Let's see it working.
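The pipeline described above (decode into map[string]interface{}, evaluate, return a channel of filtered results) might be sketched like this; note the hand-written predicate stands in for the engine built on the Vitess parser, which in the real system would come from a parsed WHERE clause:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// evaluate stands in for the SQL engine: it consumes decoded events
// and emits only those matching the predicate, on a result channel.
func evaluate(in <-chan map[string]interface{},
	match func(map[string]interface{}) bool) <-chan map[string]interface{} {
	out := make(chan map[string]interface{})
	go func() {
		defer close(out)
		for ev := range in {
			if match(ev) {
				out <- ev
			}
		}
	}()
	return out
}

func main() {
	in := make(chan map[string]interface{})
	// Producer: decode JSON strings into map[string]interface{}.
	// Only this decoding step would change for protobuf input.
	go func() {
		defer close(in)
		for _, raw := range []string{
			`{"name":"ana","address":{"country":"ES"}}`,
			`{"name":"bob","address":{"country":"FR"}}`,
		} {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(raw), &ev); err == nil {
				in <- ev
			}
		}
	}()
	// Equivalent of: SELECT * WHERE address.country = 'ES'
	es := evaluate(in, func(ev map[string]interface{}) bool {
		addr, _ := ev["address"].(map[string]interface{})
		return addr["country"] == "ES"
	})
	for ev := range es {
		fmt.Println(ev["name"])
	}
}
```

Swapping the string source for a NATS subscription or a WebSocket only changes the producer goroutine; the evaluation stage is untouched.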
Yes. So we query with everything, and there are all our messages. We can query by filtering on a field called address.country, and suddenly we have a flood of data, but it's a map with the country, whatever we want. But then we can do fancy things, like select address.country as country, count star, group by country, and emit a result set every second. So if I run this, and I saved before, every second we have a bunch of things going out. And just for the finale, before they get me off the stage, we can also do order by, say, descending, and limit to five results. Then we can run it, and every second we will be seeing five, well, not the first time, but you see we only have five results. And they happen to be quite high numbers, instead of the ones you would see before, so it's also sorting. This is going to be open sourced; I just had it ready. If you follow me on Twitter, I will tweet when it's available. Thank you.

OK. OK, that sounds good. All right, so I'm quickly talking a little bit about voice over IP, audio, low-latency communication, and Go. I'll give you a quick background, because this is not the typical Skype kind of clone; this is for a very particular use case. My big hobby is amateur radio, and together with some friends I built up this station in Spain. We have about 2,000 to 3,000 kilograms of aluminum and steel in the air, and inside the shack we have plenty of technology: lots of computers, radios, amplifiers, and so on. Unfortunately, my radio station is about 150 kilometers away from Madrid, where I live, and it's a little pain in the ass driving there every now and then. So the idea obviously came up: hey, why don't we automate and remote-control the entire station? One of the fundamental ingredients for that is obviously the audio. So this is a basic schematic: I have, on the left side in Madrid, a small remote panel. I did some breakout boxes for the radio control.
I wanted to have just a small Raspberry Pi, and on the other side, again a little simplified, another Raspberry Pi doing some stuff and then feeding the audio and the control into the radio. I will skip the remote control stuff and talk purely about how to push the audio from left to right and right to left. The basic requirements: we need very low latency, because we're controlling a radio and each millisecond counts. It's not like talking to somebody, where you already know they have to think before responding; here you expect an immediate response. So latency was probably the main concern. It had to be multi-user capable: we wanted several people to be able to listen to the radio, though obviously only one can transmit. I wanted the software to be operational both in server mode, on the machine connected to the radio, and in client mode, which is where the operators have it at home. It should be firewall-friendly; I'll talk a little later about that. It should be a command-line application, with a low CPU and memory profile, because it has to run on a Raspberry Pi. The application is already done; the first version is out, I have several guys testing it, and so far the results are good. This is a little bit of the software stack, just to give you a quick overview; there are really great libraries. Among my favorites are spf13's Viper and Cobra, which help you enormously to bootstrap your source code, get started, and have nice flag handling and config file handling. The serialization is done with protocol buffers. And the transport layer is not UDP, which is what everybody tells you to use; it is actually MQTT. I was very afraid this might not work, but we actually have fantastic results. The audio encoding is done with Opus, and obviously it all runs on Raspbian. So this is a quick overview.
On the left side, you see a small web interface where you can adjust the volume and open and close the streams. On the right side, you see plenty of configuration parameters; you can use Opus or PCM and whatever combination you want. So basically, the audio flows from left to right: we take the audio, push it through Opus to reduce the bandwidth, serialize it with protocol buffers, put a little bit of metadata in it, and push it to MQTT. On the other side, we basically extract everything again. The reason we took MQTT is that it gives us this pub/sub feature: several people can subscribe to the different topics, which in this case makes streaming audio to several people super easy. How well does it perform? This is a recent snapshot taken by a friend of mine from the Czech Republic. He has a professional setup with two Cisco phones, and he connected remoteAudio, the application, with the Raspberry Pi in parallel. He took his oscilloscope and just triggered a message, and here you see it was actually 20 milliseconds faster than even the Cisco phones. So that's it. The source code is there; if you're interested, check it out. Thank you.

Can I use the blackboard? Can I use the blackboard? Yeah. There you go. Is there any chalk? Hi. Well, I'm not that prepared, but I wanted to talk about securing your RESTful API. So let's say you have this web application. It's awesome; it does all the stuff you wanted. Now you want to secure it. One Google search and you know about OAuth 2; one more and you know about OpenID Connect. So how do you enable OpenID Connect in your web application? OpenID Connect is just a bunch of endpoints that you would have to implement. But there's this project called Hydra that you can use: you connect your web application to Hydra and make your clients also talk to Hydra, and Hydra is going to handle all the token exchange stuff. So how does it work?
Does your mobile client, let's say, talk to Hydra directly? No, it doesn't. The client talks to your web application. Your web application gets a token from the client, a token that was granted by Hydra, and your web application asks Hydra whether the token is OK, whether it is valid. It does that via a RESTful API that Hydra provides. The last thing you have to implement when you work with Hydra is an identity provider: when your user logs in to your web application, or whatever, they go to Hydra, then Hydra redirects to your identity provider, this guy authenticates the user via LDAP or a log-in form, and then it redirects the user back to Hydra. Now Hydra knows it can grant access and can give back the token. Yeah. You can Google all this stuff. OK. Sorry. One Google search and you know everything.

So, normally we had another talk that is not going to happen, so, Alex, sorry. But if you're curious about it: I built a video game in Go, Flappy Gopher, using just Go and SDL2, and it's fun. And he did the same, and it's fun. If you want to know more, come talk to us. And again, thank you so much for organizing this. I don't know whether I should say anything else. Thank you, everybody, for coming to the Go room. I hope you enjoyed the talks we had, and hopefully see you next year again.