Hello. Cool. So hi, everybody. I'm Francesc Campoy. I'm a developer advocate at Google and I work with the Go team. This is my fourth year in a row at FOSDEM, and this room keeps on getting more crowded, which is cool. So if anyone from the organization is listening: we want a bigger room. Thank you. Anyway, I'm going to be talking about the state of Go. This is the talk we give every single year, and every single year we're about to release a new version. This year it's Go 1.8. Go 1.6 is already one year old, and Go 1.7, the version that added things like the context package to the standard library, is already six months old. Go 1.8 will be released soon. We've released Release Candidate 1, then 2, and now 3 last week, and probably next week we'll have either Release Candidate 4 or Go 1.8 itself. In either case, you should be testing them. We need people to test these releases and make sure there are no bugs, because there might be bugs. The slides are online, but if you run them online they might not work: they use the Playground, and the Playground still runs Go 1.7, while what I'm talking about are the new things, which by definition don't work there yet. So no need to file issues about that. We're going to cover a bunch of things. I have a lot of slides and not that much time, so I won't go into too much detail; I just want to give you an idea of the cool things that are coming: the changes to the language, then the standard library, the runtime — there are a lot of cool things going on in the runtime — the tooling, and finally the community. So, changes to the language. There's only one, but I really like it. I don't know how many of you have written code like this, where you have, say, a type Person with three fields — name, age, and social security number — and you want to parse that from a JSON object.
And the JSON contains differently named fields, so you define struct tags — that's what they're called. Then you want to return those values as a Person, so you do a conversion there. You write this, right? And that is not my favorite piece of code. Now you can write this instead. The change is that we keep checking that the fields are exactly the same — same types, same order, all of that — but we now ignore the struct tags. In the language specification, we went from saying that a value can be converted to a type T if it's assignable, or if they have identical underlying types (and the same for pointers), and what we added is "ignoring struct tags". That's it. That is the only change to the language, and it makes your code a little bit simpler, which is always nice. No more changes to the language. Then there are ports to new platforms, and I'm sure many of you know these platforms better than I do, so I won't go into much detail. We now support big-endian and little-endian 32-bit MIPS. Dave Cheney told me I had to say that little-endian MIPS requires a floating-point unit, and if you don't have one you should enable the kernel's floating-point emulation. If you understand what that means, you should do it. Plan 9 also works better now, and OpenBSD and DragonFly BSD require newer OS versions. Also — and I think this is the part that will impact the most people soon, not today — Go 1.8 supports OS X 10.8, but Go 1.9 will not; Go 1.9 will require a newer version of OS X. So if you're not able to migrate to a newer version of OS X, you may have issues with the next release. For now, you're good. Similarly for ARM, we will stop supporting two processors: ARMv5 and ARMv6 will not be supported in Go 1.9.
To know whether the ARM system you're running on will still be supported, there's actually a tool for that — go tool dist with a check flag — and if it complains, you have a problem; if not, you're good. Cool, let's talk about tools, and there are a lot of really cool things here. There's a new recipe for go fix. How many of you have used go fix? OK, mostly no one, which is normal. We used go fix quite often back when Go wasn't stable yet. Basically, you can say "change this thing into this other thing", and the pattern matching it does is actually pretty powerful. If you change your own API and you want everybody in your company to update automatically, you could write something like that. We did: if you import golang.org/x/net/context and you run go fix, it will rewrite that to import context. You could also do it with sed if you know how, but go fix will do it for you. go vet keeps getting better. go vet is the tool we use for compiler warnings — they're not really compiler warnings, because the compiler doesn't have warnings, but it tells you about things that are slightly wrong. Does anyone see what's wrong in this code? We're deferring res.Body.Close() before we check the error. So if the error is not nil, res will be nil, and that deferred call will panic — and it will panic only when the request fails, so failures fail even worse. Not a good thing to have. If you run go vet on that, it tells you that you're using res before checking for errors, which is pretty nice. I wrote that piece of code — not the one with the bug, the check that catches it — and that's why this slide is here. Also, we now have SSA everywhere. I won't go into detail about what SSA — static single assignment — is; it's just a form for representing code, and we use that form to compile Go. Now all our backends use SSA, which means the generated code is more compact.
SSA-generated code is also faster, and the SSA form makes it easier to implement a bunch of optimization algorithms — dead code elimination, for instance — so the compiler is going to keep getting better quite fast now that we have SSA everywhere. If you were running on 32-bit ARM, you were not using SSA until now; now you are, and you will see speedups of up to 30%, which is really nice. If you were on an SSA backend already, things are still a little faster simply because it's Go 1.8. The Go compiler itself is faster, too: running go build on a bunch of really big projects is way faster than with Go 1.7 — though not as fast as Go 1.4, and we'd like to get back there one day. We keep saying "oh, the Go compiler is so fast", and we were losing that a little, so we're working on it. Then there's the default GOPATH. GOPATH is something that, once you've been writing Go for a couple of months, feels obvious, but on the first day it's kind of hard. So we removed the friction of having to define that variable: if GOPATH is not defined, there's now a default directory. That's it. Then there's go bug. It's very simple: you run it, and it opens a bug report in your browser — Wi-Fi permitting. Oh, also, you need to be logged in. That's sad. But I'm logged in. Anyway, if you were able to see that, it generates a bug report and, importantly, at the end it includes all the information about what platform you're running on, your Go-related environment variables, and so on — nothing private, don't worry. It makes it much easier for the Go maintainers to fix your bugs when we have all that information. So if you have bugs to file, use go bug. I'm going really fast, but that's fine, because I have a lot of cool things to say. Cool, OK, let's talk about the runtime now. There are a couple of things.
In Go 1.6, we made the runtime detect some races even when you don't ask for it. If you're accessing a map in an unsafe way — like here, where a bunch of different goroutines are all adding entries to the same map, with no mutex or anything — that's a bad thing, something that should fail. And if you run it, it does fail, with "concurrent map read and map write", which is basically the same output you get from the race detector. The change in Go 1.8 is that this detection is better and catches more cases, which means you might get more panics. That's because your code is wrong. You should be running your tests with race detection enabled, basically every single time — that's my piece of advice. And then something really cool: mutex contention profiling. When you run go test -bench to run your benchmarks, you can now add -mutexprofile. We had -cpuprofile and -memprofile; now there's -mutexprofile, which generates a report of how often your goroutines block on a mutex. There are other ways to set this up, but I think using it with benchmarks is the simplest, and it works really well. For now it doesn't work with read-write mutexes, but it will at some point. So let's write some code. Imagine I ask you to factorize all the numbers from 2 to n and count how many times every factor appears. If you go from 2 to 10, you compute the factors and say, well, 5 appears here and here, so it appears twice. I wrote that code, and at some point you have to decide: when I want to access the map, I have two options. Either I lock once, add all my factors, and unlock; or, for every single factor, I lock, add, and unlock. Who thinks the one on top is faster? Who thinks the one on the bottom is faster?
I like it, because it's about half and half and nobody's sure — and that's exactly why this matters. The one on top is incredibly slower — way, way slower. And this is because of contention: you're actually calling Lock and Unlock fewer times, so technically it should be faster, but you're holding the lock, and creating contention, for far longer. When you run go test -bench with -mutexprofile=mutex.out, you'll see that everything is a little slower — that's normal, and it's why you should use this only when profiling contention, not all the time. Then go tool pprof gives you the picture: the version that locks for a shorter time has five seconds of contention, while the other one has almost seven. That might not sound like much, but it's actually a huge amount of contention. So, if there's so much contention, why not use fewer goroutines? Fewer CPUs should mean less contention, because fewer goroutines are fighting for the mutex. So I compared with the sequential version too — the yellow line is sequential. With one CPU, the sequential and the wide versions are about the same; beyond that, the narrow version is way faster. The interesting thing is that sequential is actually better than the wide version — the one that locks, does the whole batch, and unlocks. So: profiling is important. Sometimes you think your code is good, and sometimes you're wrong. The other interesting thing is that you might say "I'll use the narrow version for everything", which would also be wrong, because up to around 750 elements the other one is faster. So do use these kinds of tools — performance analysis in Go is actually quite powerful now. The graphics, by the way, are Google Sheets. There's probably something better.
Talking about performance, let's continue a little more: the garbage collector. We talk about this every single version, and every version I have two slides: the official benchmarks, and what people say on the internet. This time I'm dropping the official numbers and going straight to this guy who has been tweeting, for every single Go version, how his servers behave. He started with Go 1.5, and — taking into account that these servers use around one gigabyte of RAM, which is a decent server — the pauses went from around 300 milliseconds to 40, which is quite impressive. With Go 1.6 they went from 40 milliseconds to around four. Go 1.7 went from four milliseconds to around two, and you'd think Go 1.8 cannot go lower, but it does. The expected pause for the garbage collector is now around 100 microseconds, which is really short. So, yeah, that's nice. And the CPU cost of that change is not much: around one to two percent more CPU for a much shorter garbage collection pause. That's also compensated by the fact that most other things are faster too, so your program will still be faster overall. Defer is faster as well. That doesn't mean you should use defer in the hot path of your program, in some crazy loop with a bunch of defers — probably not. Defer helps your code a lot, but it's the first thing I'd remove if I had a performance issue; I'd move it somewhere else. But now it's faster: depending on the case, between 11 and 34 percent faster than before, which is nice. Still, as I was saying, defer is not the fastest thing you can do. And we found a place where we were using defer all the time, which turns out to be cgo. How many of you have used cgo? Or any binding, to be honest — if you've used a binding, you've used cgo.
So, if you're doing that, you'll see that the function calls have a cost associated with them, because with cgo you're essentially crossing a boundary, like a system call. We removed that defer, and now the calls take about half the time they used to, which is pretty good. And if you look at the change, it is literally removing the defer statement and calling the function explicitly in the places where it should run. So if you have defers in code that executes millions of times per second, think about that. All of this, by the way, comes from Dave Cheney, who is awesome and wrote a very good blog post; the links on the slides work. Cool. Changes to the standard library now. Sorting. How many of you have found sorting a slice slightly painful? Yeah. Imagine you have a slice of Person — the same type as before, with name, age, and social security number — and I tell you to sort it by name, or by age, or by social security number. So you write the code — sort by name — and you're done. Almost: you still need to write this, right? Your slice type has to satisfy the sort interface, which has three methods, so you need to define all three. What we did is add a new function to the sort package, called sort.Slice. You pass it a slice and a function that is the equivalent of the Less method — basically, how to compare things. So now your code can simply be this. Let me make it a little bigger, because I think this is kind of cool: you call sort.Slice with the slice p and a function that compares names, or compares ages, or compares social security numbers. My first reaction was: yes, but this is reflection — how slow is it?
And I did benchmarks, and — I work for Google Cloud, by the way, but this is completely unrelated — I used Google Sheets again. It is slower. But if you play with the statistics you can show whatever you want, and you'll see it's actually not that much slower. There's a bit of overhead, but it's not crazy. So in code where performance doesn't really matter — the same kind of place where a defer would be totally fine — I would use it; but if performance actually matters for that specific sorting, I'd probably still define the types myself. And then we have plugins, and for plugins I'm actually going to do a little demo, because we have time — lots of time, that's cool. OK. Plugins in Go basically let you define a package like this — package main, with two exported identifiers, V and F — and then run go build -buildmode=plugin. That generates a .so file, a shared object, which you can then load. So that's one of the new things: -buildmode=plugin is new, and it only works on Linux; you cannot use it on Mac or Windows yet, but that will come at some point. Along with this, we also added a new package to the standard library called plugin. It gives you plugin.Open which, given the path of an .so file, loads it; then you can look up symbols — look up identifiers — and, with a type assertion to whatever you know the symbol is, you can call it. So with this in mind, I thought: wouldn't it be awesome to implement hot code swapping in Go? I just have my code in production, I save my file, and production changes. That is an awful idea, by the way. That is not a good idea.
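For reference, the plugin pattern just described can be sketched as a two-file example (Linux only; the file names, V, and F are illustrative, so this is a sketch rather than something to paste into production):

```go
// plugin.go — built with: go build -buildmode=plugin -o plug.so plugin.go
//
//	package main
//
//	import "fmt"
//
//	var V int
//
//	func F() { fmt.Printf("Hello, number %d\n", V) }
//
// main.go — loads the shared object and calls into it:
package main

import "plugin"

func main() {
	p, err := plugin.Open("plug.so")
	if err != nil {
		panic(err)
	}
	v, err := p.Lookup("V") // exported variable
	if err != nil {
		panic(err)
	}
	f, err := p.Lookup("F") // exported function
	if err != nil {
		panic(err)
	}
	// Lookup returns a plugin.Symbol (an interface{}); type
	// assertions recover the concrete types we know they have.
	*v.(*int) = 7
	f.(func())()
}
```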
But I tried to do it — I did it yesterday — so I'm going to show it real quick. The code does a bunch of things, and it's open source, so you can go see it on GitHub (the golang-plugins repo); if you want to see the code, everything is there. So, you go run main.go, and there's this plugins directory with a bunch of Go files; it compiles them — that's why it's pretty slow; loading and executing is actually quite fast. Every second, it goes and compiles everything and calls it. So I go to that plugins directory, and I come here and I'm like, whoop — I'm going to change this to... this one. OK, I save — go run main.go isn't doing anything magic, it's just recompiling and reloading the plugin — I come back here, and boom. So now what you have is — I never remember the actual name — hot code swapping in Go, which is an awful idea for many things in production, but if you're doing something like video games, where you need to load extra things at runtime, it could be really cool. So, there you go. Thank you. So yeah, that's the demo you just saw, I tweeted about it, and that's the repo if you want to go see it. It is experimental; don't use it in production and then send me an issue — please don't do that, it's an awful idea. OK, one more thing we added: HTTP server shutdown. You've probably all used the http package: you call http.ListenAndServe, and that starts the server. If you want to stop it, it's hard — doable, but hard. What we want is this thing called lame-duck mode, where a server stops accepting new requests — if you send a new request it says "sorry, no" — but it doesn't just die and drop the requests that are still in flight. And there's a new way of doing this that is actually much simpler.
So in this code here, I'm listening for Ctrl-C — that's how you do that in Go — and as soon as I get one, I call the Shutdown method on the server I defined. The plain http.ListenAndServe helper won't work for this: you need to define an http.Server value, and then call its ListenAndServe method, which returns an error. The important call is srv.Shutdown, and you pass a context if you want — you can also just pass context.Background(). On the other side, where you call ListenAndServe: until now you could wrap it in log.Fatal, because if ListenAndServe returned, it was always a bad error. That has changed slightly, because now the error can be http.ErrServerClosed, which is what you get when the server stopped because you told it to. It's a small difference, but pretty cool: if you have something in production and you really care about your SLIs and SLOs, this is a good way to nudge them up a little. Then, HTTP/2. HTTP/2 is amazing — let me show you the little demo at http2.golang.org, the gopher tiles. I don't know if you've seen this demo, but the cool thing is — and I don't know if I even need to simulate latency from here, but I'm going to — this is with 200 milliseconds of latency over HTTP/1: I'm getting each request, each image, one after the other. With HTTP/2, you can just say "give me all the things" and start receiving them right away. So with the same latency, HTTP/2 looks like this. Pretty good. And there's another really cool thing: HTTP/2 lets the server push things. If I'm an HTTP server and you ask me for home.html, I can say: here's home.html — and here's home.css and scripts.js too, because you're going to need them later.
The idea is that you don't have to wait and ask for them: I know you're going to need them, so I give them to you. That's exactly what push does. There's a new interface, http.Pusher, and when you call Push with a URL, it essentially fakes an HTTP request for it on the server side. So if your server is already serving the CSS and the JavaScript, you don't need to add anything else; it just works. The only thing you need is this check — a type assertion, not a type conversion. Basically you're asking whether your ResponseWriter also acts as a Pusher, and that may or may not be the case. Over HTTP/1 it won't be; if you're using a slightly different setup it won't be; but when you have HTTP/2, the ResponseWriter is a Pusher, you call Push, and that's it. So I run HTTP/2 and HTTP/1 at the same time from the same server — let's see how that looks. No, wait, I've changed too many things. That's sad; let me fix it in a second. Did I refresh this? "Cannot find package"... OK, let me do something: state-of-go, http2, go build, and run it here. What? OK, every single time I come to a conference I break something, so that's not shocking. I shouldn't be doing this, but whatever. OK, so that was the problem — state-of-go, http2, let's try again. Yay, it works. So now I come here, refresh, run it — and you didn't see anything. Cool. So if I load localhost — there we go. Look at my CSS skills. The cool thing is that in the network tab, when you refresh, you can see localhost and style.css — those are the two requests we care about. Over HTTP/1 it works like this: first we load localhost, which is the HTML; then we parse it; then we realize, oh, we need the CSS; so then we send that request, and then we get it.
The same waterfall over HTTP/2 is actually quite different — wait, let me refresh. You can see "push" here: that's the server telling the browser, this is for you already — not "go get that", but "here it is, you're going to need it later". This is faster, because we finish receiving the CSS before we even finish parsing the HTML. On a bigger webpage you'd really see the difference: if you're importing something like AngularJS and you have a lot of CSS, being able to push it up front makes your site much faster. So, that was it — in case the demo hadn't worked, which it almost didn't. Then there's all the context support. The context package was added to the standard library in Go 1.7, along with support in the net and net/http packages and os/exec. Now with Go 1.8 we added it to server shutdown, which we just saw — you can pass a context there — but also to the database/sql package, so you can cancel ongoing queries and things like that, and to net.Resolver. And there are a couple more changes: this slide is actually one of three, all in this tiny font, so there are a lot of different changes going on. If you do anything with Go, you should read the release notes, because maybe something I consider not that important will make your life much easier. It's definitely worth going there, searching for whatever you use — SQL, whatever — and seeing if anything new impacts you. And finally, the community. First, one of my favorite organizations, Women Who Go, keeps growing all around the world, which is awesome; they now have 16 chapters, all around the world. If you're interested, they're awesome, and — I was going to say there isn't one in Brussels, but there might be, I don't know.
I can't tell from here. But if you're considering it, it's very easy to create a new chapter, and you get a lot of support from all the other chapters. So think about it. We also have Go meetups. There's a webpage I maintain myself, and I added a map — and I'm not good at HTML, as you already saw. In my first iteration I had a little pin per meetup, and it was so ugly you couldn't see the world; there are a lot of meetups everywhere. You can play with it at go-meetups.appspot.com — wow, the Wi-Fi, there you go. You can click around and see that around here in Belgium there's only Antwerp, no Brussels. So if you're from Brussels, you should create a Go meetup. There you go. And finally, all the conferences. We keep having a bunch of conferences. There's this one, FOSDEM — you might have heard of it; it's pretty good. After this, we have GopherCon India. Is anyone going to GopherCon India? You are — you're helping organize it, so that's normal. I will be there too. We also have GopherCon in Denver. Anyone going? Cool, one, two people. I'm going there as well; I hope my talk gets accepted, and if not, I'll still go. Then there's Golang UK, which is closer — just a train away. And then dotGo, in Paris. dotGo is a very cool conference; they do something fun, which is that the talks are only 18 minutes, so the speakers can't ramble like I do. And finally, we're celebrating Go 1.8. Regardless of whether the release happens on time, we're going to celebrate it — we've decided on the dates. There are release parties happening all around the world; there's a slide later with the closest ones around here. Go to one and have fun. You'll probably learn basically the same things you learned here.
But there are extra talks and lots of cool people, so go see them. And with that, that's all. Thank you.