My name is Stefan Wintermeyer; my Twitter handle is here. First of all, thank you for the invitation. It's a great conference, and thanks for having me. Today I'm going to talk about three main topics: Vutuf; web performance, which is very connected to Vutuf; and Phoenix for newbies, which relates a little bit to Chris's last talk, where I'd like to give my input about stuff that is, in my opinion, important.

Vutuf is a business network, so think of it as a LinkedIn, but open source, free, fast, and with less annoying emails. Chris here — please stand up — is responsible for the beautiful code. If you see ugly code, that was me. So if you need to hire a good Elixir developer, he's the guy you want to talk to. Free is a big challenge for us, because free means as little hardware as possible, and right now our growth rate is better than expected. Within the first three months we got 200,000 accounts, and after this conference I hope for — I don't know how many people are here — let's say 130 more. James already gave me feedback yesterday that it's his favorite web page ever.

There are many different ways of using Phoenix, and there are two extremes. Chris is using all the fancy stuff: two million connections on a single server, message pushing, JavaScript, heavy load, speed. We use no message pushing, no JavaScript — heavy load, speed, and no money. We differentiate ourselves from LinkedIn and Xing, which is big in Germany, mainly by speed.

Web performance. These are typical pages: this is the index page, if you haven't logged in yet, and this is a typical profile page. Both pages will load within 700 milliseconds in Germany if you use DSL. Once you are not in Germany, this is the time you get. Here's a recording: Vutuf against Xing against LinkedIn. Because our servers are in Germany, it takes a little bit longer here than in Germany. Why? I'll describe that in a couple of slides.
We are still faster than LinkedIn, whose servers are — I don't know, where are they? Does anybody know? Probably on this continent.

So why is web performance important? These are old numbers, from 1993. It's an experiment about how humans react to user interfaces, and you can see that everything below one second feels fast. So you want to be below one second. Everything above 10 seconds, for a web page or for any other interaction, is like suicide.

Google did a very interesting experiment. They can take groups of their users, without asking them, and run experiments on them. So they created a group of users and said: why not make search slower for them and see what happens. And I'm not talking about seconds here — they tried it with 100 milliseconds, 200 milliseconds, 400 milliseconds. This is the time it took to see the difference, and this is the difference. Even with 100 milliseconds, which is nothing, you see a decrease in daily searches. But it takes time for humans to adjust. So if your own web page gets slower tomorrow, you will not see a difference in Google Analytics the day after. But after four, six, eight weeks, you will — a decreased conversion rate.

And these conversion rate numbers are pretty impressive. Here are three examples of companies who ran that experiment. For Walmart, one second faster means a 2% increase in conversion rate, which at that scale is a huge amount of money. For Staples it's even more aggressive: one second of improvement meant a 10% increase in conversion rate. So we are talking about numbers which really make a difference in your revenue, or in the revenue of your customers.

For mobile users these numbers matter too: 74% leave a page when it's not loaded within five seconds, and we all know that feeling. In every hotel, the Wi-Fi is normally shitty, and it's a nightmare — because latency is the biggest problem. Why? It's because of TCP. We are using HTTP.
HTTP uses TCP, and TCP is a very old protocol. Its inventors had to find a way to get the maximum out of the bandwidth, but they didn't know the bandwidth. So they said: okay, we start small, and then we double every round trip. That's called slow start.

Here's a typical scenario: downloading a 58-kilobyte file in Frankfurt from a US East Coast server. First you see the three-way handshake — the server and the client communicating via fiber through the Atlantic. After the three-way handshake is done, the client says: okay, I want the index.html file. The server says: no worries, I'll send you the first 10 segments, which is about 14K. Once the client receives that, it acknowledges, and the server says: okay, let's double that. And so on. So for a roughly 50K file, which is nothing, it takes 320 milliseconds to transfer.

If you take one thing from this talk, it's this: bandwidth is totally unimportant for normal web pages. It takes the same time whether I have fiber or analog; the problem is not the bandwidth, it's the latency of the connection. It's a different story if you're watching Netflix — that's a bandwidth thing. But for a normal web page, latency is the problem.

To give you an idea of the numbers: in the first round trip we can transfer 14K, then 28, 57, 114. The problem is that today everybody is using SSL, and SSL takes away the first round trip, because we need it for the certificate exchange. So once that's done, we effectively start with 28K. And for Vutuf we want everything above the fold — everything you see on the screen without scrolling — within the first 28K. Again: latency is king, bandwidth is not important for this.

To show you the difference, here's the same page — the Vutuf page — on the left side in Frankfurt, on the right side in Sydney, Australia. Same page, same browser, everything the same. Just Australia is a little bit farther away.
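The slow-start arithmetic above can be sketched in a few lines of shell — a model, assuming the common initial congestion window of 10 segments (about 14K) that doubles every round trip; the slide's 14/28/57/114 uses ~14.6K segments, so my rounded numbers drift slightly:

```shell
# Model TCP slow start: start at ~14K (10 segments) and double per RTT.
# Prints the window per round trip and the cumulative bytes delivered.
kb=14
total=0
for rtt in 1 2 3 4; do
  total=$((total + kb))
  echo "RTT $rtt: window ${kb}K, cumulative ${total}K"
  kb=$((kb * 2))
done
```

At roughly 80 milliseconds per transatlantic round trip, the handshake plus the three data round trips a ~50K file needs add up to about the 320 milliseconds on the slide.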
That's the main reason why web surfing in Australia is less fun than here. Same thing with a 3G connection, which is even worse. Again, it's not the bandwidth — it's not that 3G has less bandwidth than DSL. The problem is that the connection between the cell tower and the mobile phone, that last mile, has very high latency. And that adds up on top of the rest.

So how do we tackle web performance? The first thing is you have to set a time budget. Our time budget is one second within Germany. So this is the waterfall, this is the time where the page starts to render, and this is the time where the document is complete. You see it's about 600 milliseconds.

It's important to accept that you cannot control the network. There is no way you can control the network — don't even try. But we can control other things. We can control the transfer protocol: obviously, you want to use HTTP/2. We can control the compression: gzip, Zopfli, or Brotli. We can control the number of files, which was even more important with HTTP/1 but still matters with HTTP/2. The file size, obviously — 100 MB takes longer than 1 MB. And the time the server needs to generate the HTML — that's the part where Phoenix comes in. Obviously, the content matters too. JavaScript, for example, takes time to compute in the browser, and mobile devices have less CPU power than desktops. So always keep in mind that JavaScript is a performance killer initially.

So, the waterfall. This is the time spent on the server — that's the time we can control with good programming. And this is the time the file needs to be transferred from the server to the client.

The server: we're using nginx, MySQL, and Phoenix, obviously, on bare metal with Debian Linux — everybody knows it. Phoenix is about 10 times faster, and yes, the very first two versions of Vutuf I wrote in Ruby on Rails, because I've used Rails forever.
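If you want to see where your own time budget goes, curl can split a single request into its phases — the URL here is just a placeholder:

```shell
# Break one request into DNS, TCP connect, TLS, time-to-first-byte, total.
curl -o /dev/null -s -w \
  'dns %{time_namelookup}s tcp %{time_connect}s tls %{time_appconnect}s ttfb %{time_starttransfer}s total %{time_total}s\n' \
  https://example.com/
```

The gap between `tcp` and `tls` is the extra round trip the certificate exchange costs; `ttfb` minus `tls` is roughly the server-side generation time you control with your code.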
And it was pretty soon foreseeable that I couldn't do this with Rails, simply because of the money problem. But just using a fast programming language is not good enough. We avoid serving freshly generated HTML whenever possible. Hard drive space is much cheaper than CPU.

So nginx is our gatekeeper. It checks for cookies and routes accordingly. I played around with Varnish, but again: hard drive is cheaper than RAM. So what do we do? Take the index page. Obviously it's a page rendered by Phoenix, but it's a page which doesn't change very often. So we just save that page in the file system of the server, and we tell nginx: if somebody connects to the page and doesn't have a cookie already, he's a new user, and we can give him the static copy of the page. You just have to take care of the CSRF token, because we are using a form here. It's a little bit tricky, but it's no magic. Same here: if somebody comes through a search engine and is not logged in, so doesn't have a cookie, he gets a static copy of the file.

Many people ask how much space we need for that. Because we are using very optimized HTML, a file is normally about 28K. So it's an easy calculation: let's assume we had one million users — that's about 28 gigabytes. That's nothing today. So it's not a problem at all to do this, but you have to have small files to stay within the limitations.

What do we get from that? Rendering a user profile takes about 15 milliseconds. Serving the static file takes way less than one millisecond. That makes it understandable why this is such a big benefit for us. By the way, 15 milliseconds — a couple of years ago, that alone would have been enough.

The avatars. Of course we are using Arc to store the avatars and to handle them, but it's a little bit more complicated. It's a social network, so avatars are very important — here, here, here, here; we have avatars everywhere. An avatar is a circle image.
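The gatekeeper idea can be sketched in nginx configuration. This is an illustration, not the literal Vutuf config — the cookie name, cache path, and Phoenix port are all assumptions:

```nginx
# Visitors without a session cookie get the pre-rendered static copy;
# everybody else is proxied through to Phoenix.
map $http_cookie $has_session {
    default            0;
    "~_vutuf_session"  1;   # cookie name is an assumption
}

server {
    listen 443 ssl http2;

    location / {
        # jump to the named location via a spare status code
        error_page 418 = @phoenix;
        if ($has_session) { return 418; }      # logged in -> dynamic page
        root /var/cache/vutuf;                 # pre-rendered HTML lives here
        try_files $uri $uri/index.html @phoenix;
    }

    location @phoenix {
        proxy_pass http://127.0.0.1:4000;      # Phoenix endpoint
    }
}
```

The `error_page`/`return` pair is a common nginx idiom for branching into a named location from inside an `if` block.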
It's a JPEG, and unfortunately there are no circular JPEGs, so it's a square one. We have a square JPEG on the server, we deliver a square JPEG, and the CSS masks it into a circle. So we are wasting this part here — about 20%.

If you do a little bit of research about JPEG, you see that one JPEG is divided into many squares, and the compression of each square can be set separately. So you can say: I want higher compression here and here, and in the middle I want high quality — high quality here, lower quality there. We do that with these two lines of code. Guetzli is a new compression tool from Google for JPEGs, and you can use ImageMagick's convert, too. We create a maximum-compression version and a normal-compression version, and we merge them with convert. That way we save about 15% to 20% of the file size, which is a big deal for us.

Of course we use Arc for the upload and the initial work, because that has to be done fast. Then, in times where the system is just idling around, we have a cron job which does the optimization. The problem is that one avatar optimization takes more than one second, which for us is a very long period of time. So we really have to run that off peak times.

Compression. You already saw that we save static files whenever possible. We don't just save the static files; we also do the compression right away, so there's always a gzip version there and nginx doesn't even have to do that job — all to maximize the delivery speed of the page. Before that, we optimize the HTML. The interesting part is that we sort the attributes and the class names, because of how gzip works: if you have the same pattern repeated in the HTML file, it compresses better. We try to do that in our code too, but we are just humans, so we do this additionally to be on the safe side. Plus, we use Zopfli instead of gzip. It compresses about 5% better than gzip, but it's gzip-compatible. It's a Google tool.
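The two-pass avatar trick might look roughly like this — a sketch with made-up filenames and sizes, assuming Guetzli and ImageMagick are installed; the circular mask keeps high quality inside the circle the CSS actually shows and heavy compression in the wasted corners:

```shell
# High-quality pass (Guetzli's minimum quality is 84) and a cheap pass.
guetzli --quality 84 avatar.png avatar_hq.jpg
convert avatar.png -quality 30 avatar_lq.jpg

# Merge: the third image acts as a mask -- white (the circle) takes the
# high-quality pixels, black (the corners) keeps the cheap ones.
convert avatar_lq.jpg avatar_hq.jpg \
  \( -size 256x256 xc:black -fill white -draw "circle 128,128 128,4" \) \
  -composite avatar_final.jpg
```

Since one of these runs takes over a second per avatar, this is exactly the kind of work the off-peak cron job is for.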
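The precompression step can be sketched with plain gzip (swapping in `zopfli --gzip` is a drop-in that saves roughly 5% more), and the repeated, consistently ordered class attributes show why the sorting helps deflate:

```shell
# Build a page with repeated, consistently ordered class attributes --
# exactly the kind of repetition the deflate dictionary compresses well.
for i in $(seq 200); do
  printf '<div class="a b c">row %s</div>\n' "$i"
done > page.html

# Precompress at deploy time; with gzip_static enabled, nginx then serves
# page.html.gz directly instead of compressing on every request.
gzip -9 -c page.html > page.html.gz
ls -l page.html page.html.gz
```

In nginx the matching directive is `gzip_static on;` for the location serving these files.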
Again, we are doing all of this offline, in non-peak times. For stuff that comes fresh from Phoenix, you should use Brotli, which is a better compression than gzip but can be used for live pages.

Inlining. HTTP caching is very good — I'm a big, big fan of it — but on our time budget it's not good enough. So we have to inline, and we inline a lot. Obviously, we inline the CSS. That means we can't use Twitter Bootstrap, because that alone is bigger than our 28K cap; we have highly optimized CSS. We inline many images. For example, on this page: this is inlined, this is inlined, this is inlined — and guess what, the background is inlined too, all within the 28K. Same here: every red cross means an inlined image. These two are not inlined, because they are above the 28K. So you always want to fill the gaps in the steps. This step covers our above-the-fold content. Once we are at the edge of that step, it doesn't make sense to inline more content. Because we are using HTTP/2, we can just deliver additional files in the same stream and then use HTTP caching. So you always have to think in these steps.

When we can, we try to prefetch. Prefetching is telling the browser: if you have time, fetch this stuff for the future. So you want to analyze how your users move around, and then do stuff like this — this is more or less a copy of our code — where, in the header of the HTML file, we tell the browser: okay, we probably need that file in the future, go and fetch it. You see it here — unfortunately it's the German version, sorry — on the index page. And this page here is the list of the 100 most popular users, a page our users visit very often. In the waterfall of the index HTML, you see the time where the browser is busy rendering the page. That's lost time — but the network connection is free. So we use that time to prefetch that file. If you then click on it, it's there right away.
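Both tricks — inlining and prefetching — live in the document `<head>`. A minimal sketch, with made-up paths and the image bytes elided:

```html
<head>
  <!-- critical CSS inlined instead of an external stylesheet -->
  <style>body{font:16px/1.4 sans-serif}.avatar{border-radius:50%}</style>

  <!-- small images inlined as data URIs (bytes elided here) -->
  <!-- <img src="data:image/png;base64,..." alt=""> -->

  <!-- tell the browser: when idle, fetch the page users click next -->
  <link rel="prefetch" href="/top100">
</head>
```

`rel="prefetch"` is a low-priority hint, so it only uses the connection while the browser would otherwise be idle — exactly the render gap in the waterfall.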
And again, we are talking about small file sizes, so it's not wasting a lot of bandwidth or storage on the client either.

So now I'd like to talk about Phoenix. As a Rails developer, it took me forever to even get the idea of Phoenix. Phoenix does not offer quick rewards for newbies. And I hope that by now you understand why it's important to get quick responses. Humans are wired such that you need quick feedback to develop a positive feeling for something. If you train animals — like a dog — you want to give the reward within the first 700 milliseconds, otherwise it's just a treat; there's no connection. If you want to know how they do it with dolphins: they use a bridge, those devices where they click. You can extend the link with that bridge. We are talking about the same idea here: we need fast, positive feedback to like something. It's just human.

Rails offers these quick rewards. Let's create a mini blog application on macOS: rails new blog, cd blog, rails generate scaffold post with a few fields, then rails db:migrate and rails server. And we're done. That's quick. Somebody who has a break and wants to learn something new gets a very fast positive feeling. This video took 30 seconds, and it's the same commands on all operating systems — I did it on a Mac, but you could do the same thing on Linux.

Let's compare that with Phoenix. Same application. I'm not talking about this time here — I'm okay that this takes a little bit longer; that would be a cheap shot. I'm talking about the why: why do 99% of people have to enter a Y there? But it's getting better. Okay, here we are. Then cd into the directory and set up the database connection. Easy as pie? Not every newbie knows what to do here. Probably we need to install the database — but that's quick too. After that we have to google how to create the user account. First start Postgres, do the googling in a different window, create the user.
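For reference, the Rails sequence from the demo — the field names are my guess, since the transcript doesn't show them:

```shell
rails new blog
cd blog
rails generate scaffold post title:string body:text
rails db:migrate
rails server   # the scaffolded app is up at http://localhost:3000/posts
```

Five commands, no database setup, no manual routing — that's the quick reward.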
Don't forget that the user needs the right privileges to create the database. Set the password — everything more or less intuitive. Create the database. And again, I'm not talking about the time; that's fine for me. Now I do the scaffolding. Edit the routes — obviously; again, 99% of people will need that. Run the migration. And then start the server. That took about three minutes thirty.

The big problem is that things like creating the user, creating the database, setting up the database initially — for any newbie who maybe has an hour, or two, or even half a day — are a major issue. I truly believe that we are losing the majority of newbies right here, because they don't get to the next step. Once you are hooked, you take the effort to dive deeper into everything; you maybe even buy a book or read more documentation. But this stuff here — editing the router, setting up the database, et cetera — is just too complicated for the average user. On a different operating system you obviously have to install the database differently, you probably have a different editor, and so on, which makes it even more complicated to write one how-to for newbies.

Rails just uses SQLite. Obviously that's not a performant production database — but who cares? We're talking about the very first step. In my opinion, Phoenix should do the same. And Rails sets up the routes when scaffolding. We are talking about scaffolding: again, 99% of people will need the routes set up the same way. We can just give them a hand and do it for them. In my opinion, this should be it — like six lines of code. Thank you.
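For comparison, the Phoenix path described above looks roughly like this — task names from Phoenix 1.x, details vary by version, and the router edit is exactly the manual step being criticized:

```shell
mix phx.new blog             # answer Y to fetch and install dependencies
cd blog
# edit config/dev.exs with your Postgres username/password first
mix ecto.create
mix phx.gen.html Blog Post posts title:string body:text
# manually add `resources "/posts", PostController` to lib/blog_web/router.ex
mix ecto.migrate
mix phx.server
```

The two comment lines in the middle are the friction: database credentials and the router line are the steps a newbie has to figure out on their own.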