So, good morning. I hope all of you have coffee and are ready for the second day. Welcome to my presentation. It's a case study, or a showcase, of what we've done for a national TV station. I could easily talk about it for four to ten hours, and I don't know exactly what you're interested in: it could be more tech, more business, more front-end, more hardware, whatever. So I'm going to go through it; it will probably take half an hour, and then we can do a bit of an extended Q&A, because I really want to answer the questions you have. It's a whole project, and as I said, I could talk a lot about it. Maybe first about me: I'm Michael, or people call me Schnitzel. I'm the guy that yelled at you yesterday morning to wave at me, because I took the group picture. I'm head of technology at MSU Labs. We are a company in Zurich and Austin, Texas: 20 people in Zurich and five in Austin. We try to stay rather small. There are a lot of big agencies in Switzerland with two, three, four hundred people, but we keep small to stay agile, and that's also one of the reasons we actually got chosen for this project. So, who's the client? The client was Schweizer Radio und Fernsehen, the Swiss national television and radio station. The project, which is actually multiple ones, is something you probably know: they do white-labeling solutions for The Voice of Switzerland and Die grössten Schweizer Talente, which is like Switzerland's Got Talent, a talent show. And these sites have pretty special requirements. First of all, each has to be a news site: through the week before a show happens, they post news, updates, what's happening, what's going to happen, interviews, stuff on the side. But then, during the actual show, the second screen turns on, and that's what the site is actually built for.
There are roughly six to ten shows which are pre-recorded: they are captured in a studio, shot, everything, and during broadcast somebody just presses play while the show is running, while we sit in the same room measuring everything on the website. But the last three shows are actually live. People are calling in and voting for contestants, and the live shows are much harder because you cannot really prepare for them; you don't know who is getting into the next round and so on. Also, the website is basically built for 13 times two hours of show. So we built a site that really only works for 26 hours, but it has to run during those 26 hours. If you lose one show because you deploy something wrong, you're basically screwed; there's no way to go back and fix it. Of course, if something breaks during the week, fine, there's no traffic then. So it's completely different from any website we had built before, because it's so narrow on these time slots, and everything just has to work during that time. So, what do we mean by second screen? It has to be mobile, because people are sitting with their iPads and their phones in front of the TV. The first screen is the TV, and the second screen is the device in your hand. The idea is that people are tweeting and Facebooking anyway during the show, so why not provide them an experience, while the show is running, that the TV screen cannot deliver? It also has to be really fast, because we have a lot of data and a lot of people watching the TV show at the same time. So it's rather important that the site performs, and you have to push: you don't want people to reload all the time.
What we basically want is that people go to the site while they watch the TV show, and the device updates automatically during the show. There's no need to press any button for the whole show; like a second screen of the TV, it updates by itself. All these things make it a bit special, a bit different. The mobile requirement is actually pretty easy: it's Drupal, it's all responsive, no problem. We actually did it mobile-first, because during the show we have more mobile traffic than desktop traffic. 75% of the traffic is mobile, and from a design point of view, why would you design a desktop website when most people are watching on mobile? So we did a mobile-only design and just adapted it for desktop, which was actually the first time we as an agency were able to do that, because we could argue that people are looking at it on a mobile screen anyway. That was easy. We do that with Omega 4. We don't use any of the layout systems of Omega 4; we basically just use it as a reset to get clean HTML5 markup, plus Sass with Compass and Susy for the grid system. Nothing special here in terms of the technologies we use, so the mobile part is not something crazy for us; we do responsive web design all the time. What is interesting is the live mode. The live mode is the second screen mode that we enable while the live show is running, and we didn't want to send people to some URL like somesite.com/live or so; we wanted people to just go to the home page they already know. So we had to have a home page that adapts based on whether the show is currently on or not. We did that with Panels: you just have a panel with a lot of different panes, and some of the panes are either active or not based on a variable.
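The adaptive home page idea can be sketched like this (a simplified JavaScript illustration; the pane names and the flag are hypothetical, the real site does this with Drupal Panels and a variable on the PHP side):

```javascript
// Hypothetical sketch of the adaptive home page: which panes render
// depends on a single "live mode" flag, the same idea the site
// implements with Drupal Panels and a variable.

const panes = [
  { name: 'news-stream',  showWhen: 'always' },
  { name: 'live-stream',  showWhen: 'live' },   // TV live stream embed
  { name: 'vote-widget',  showWhen: 'live' },   // like/dislike voting
  { name: 'poll-widget',  showWhen: 'live' },
  { name: 'show-archive', showWhen: 'offair' }, // only between shows
];

// Return the pane names that should be visible for the given mode.
function visiblePanes(liveMode) {
  return panes
    .filter(p => p.showWhen === 'always' ||
                 p.showWhen === (liveMode ? 'live' : 'offair'))
    .map(p => p.name);
}
```

Visitors always hit the same URL; flipping the one flag is what switches the whole page between news mode and second-screen mode.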
Pretty straightforward and easy, but people just have to go to the URL they already know, and based on whether there's a show or not, we show them slightly different content, which worked pretty well in the end. And we have an admin interface that is stripped down to only what the editors need during the show. You can imagine: we're updating the site about every 60 seconds, because the home page changes as the show goes on. So you need an admin interface that is reduced to exactly what people need at that specific moment. Before we actually started, we did a lot of testing with the editors: we let them use the site and watched how they moved around, what they needed, what they had to change at the same time. It was really important for us to have an admin interface that is super slick and super easy; we can look at it later. Node ID, node ID, node ID. There was a big discussion about this. Multiple people work on the site at the same time, and in the past there was a lot of "can you upload the poll, you know, the one about that person", and so on. So there was a lot of discussion about how we talk about Drupal internals, and we realized the easiest handle is the node ID. Everywhere we need to select something, or delete something, or whatever, it's the node ID. The editors know it: "can you put node ID 110 on the homepage", because there are people talking to each other while using the website at the same time. It was just easiest to use the node ID, so we adapted all the views in Drupal (we use Admin Views to have an actual view for the content overview) and added the node ID there, which is also unusual. Normally we don't really tell the client what the node ID is, but here you just need a unique identifier for whatever you have on the site, and the node ID was the easiest to use.
So, as I said, we want people to go to the website, just visit the page, and basically get a stream of information. Our initial idea was to use an Ajax page reload module from Drupal.org that basically just refreshes from the client side every 10 seconds or so, and then you change the content and it updates. We realized that creates huge traffic on your site, and 10 seconds in a TV show can be a long time, so we needed something else, and we found a way to push. Basically, whenever the editors change something on their site and save it, we push it to all clients immediately. It takes about half a second, and we use PubNub for that. It's a service, it's not cheap, but it works really well. You can build it on your own with WebSockets and Socket.IO and so on, but what we really like about PubNub is that they handle almost everything for you. You can use it from a lot of different systems, they handle latency for you, and if you close the browser on your phone and open it again, it will automatically reconnect and catch up on what changed in the meantime. You get statistics about it, and from a programmer's point of view it's super easy. You have message streams: on the server side you just push something onto the message stream, and on the client side, if you use the JavaScript client, whenever something is pushed you get a message and you can handle it. So it's super easy communication, it's fast, you don't need to do much yourself, which we ended up liking a lot.
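The message-stream pattern can be sketched like this (a minimal in-memory sketch of the publish/subscribe idea, not the actual PubNub SDK; class and message names are hypothetical):

```javascript
// Minimal sketch of the publish/subscribe pattern the site relies on.
// In the real setup the channel is a PubNub message stream: Drupal
// publishes on save, and every connected browser has a subscriber
// that patches the DOM when a message arrives.

class Channel {
  constructor() { this.listeners = []; }

  // Browsers register a handler for incoming messages.
  subscribe(handler) { this.listeners.push(handler); }

  // The server pushes a message to every connected client.
  publish(message) { this.listeners.forEach(fn => fn(message)); }
}

const liveChannel = new Channel();

// Client side: react to pushes, e.g. swap the active poll.
let activePollNid = null;
liveChannel.subscribe(msg => {
  if (msg.type === 'poll-change') activePollNid = msg.nid;
});

// Server side: editor saved a node, push it to all devices.
liveChannel.publish({ type: 'poll-change', nid: 912 });
```

The key property is that clients never poll: they hold a subscription open and only act when a message arrives, which is why an editor's save reaches every device in about half a second.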
So yes, the browser subscribes to the messages, and whenever there is a message, we have our own JavaScript that replaces the necessary parts. It's custom JavaScript that knows, for instance, "we just updated the top-right text", so we replace elements in the DOM of the browser. We talked a lot about using Angular for that, because it would basically be perfect for this, but the problem is that we built the first site three and a half years ago, and there was no Angular yet. So we're still a bit stuck with custom JavaScript; we just rebuilt, or refactored, the site, and we unfortunately didn't have the budget to move completely to Angular. We're now talking about using an Angular front end for the next shows, because it would be much easier to just replace and update things. Then we wanted to have people voting; that was the main thing. Whenever a talent is on the TV screen, you can vote on your phone whether you like the performance or not, and it immediately shows you the percentages of that vote. We had to figure out a lot of things about how to do that, and we ended up using the core Poll module, which we had never used in any other project, but it worked. The problem is that you need session cookies for polls, so there is a module called Poll Anon that basically allows you to vote on polls anonymously; instead of session cookies it creates a voting cookie. For every vote you make, it creates a cookie that says you voted for poll ID 77, and the next time, when you refresh, instead of the voting form it shows you the results. The problem is that the whole thing is hackable; it's completely outside of any real protection. We never had a case where that happened, because we log IP addresses and the whole site is only on for two hours, so people would have to be fast, but I gave it to people who had enough time, and they said it was easy. We actually used it for load testing: I have a small script on my Mac that I can run that does one million votes in ten minutes. The client knew this; we told the client that this would happen. The votes themselves didn't count, it was more for the entertainment of the viewers, to sit there and vote. But during the live shows the results were also shown back on the TV screen, so on the first screen they could actually show the results of the second screen. And what we saw: during the live shows there is a real vote via telephone, where people call in to decide whether a person gets to the next round, and we ran exactly the same polls online, and the results were exactly the same. But because it is hackable, we said, OK, we have to find something else. What is also important is that every time somebody votes, it pushes the new results to all devices. I have some videos later where you see how it works: we wanted real-time stats, so when somebody clicks the vote button, yes or no, I like it or I don't, it pushes the newest results to all the devices again, and with that you get some really cool stats, which we'll see later. But as I said, we had to find another way to vote, a vote that could be used for getting people to the next round. They wanted online voting: during the live shows, and also the shows before, the jury would choose two talents that were about equally good, and then they let the online viewers vote who should go into the next round. We did that with SMS, or text voting, and that's basically unhackable, because you only have one device per phone number. We use a service, websms, which is like any other SMS gateway; we just really liked their APIs. It basically works like this: the user submits the poll choice he likes, and then the user receives a text
SMS with a code. So he also enters his phone number and receives a code; the user enters the code on the website and submits it. If he tries to vote again with the same phone number, we know his number, so we can tell him: sorry, you already voted, you cannot. Of course, if somebody has multiple phone numbers, that works, but the number of phone numbers per person is not that crazy, so maybe people can vote two or three times, and that actually worked really well. We didn't run that during the show; you don't want to overload viewers with "now I have to get an SMS and enter a code". We did it after the show: for the next 48 hours people could go to the website and vote, and that worked really well. During the next show we announced who won the online voting, and that person actually went into the finals of the show, so that was really cool. I talked about fast: we had at peak 8,000 requests per second, and that's not easy to handle if you don't have the infrastructure behind it. This is basically what we used. First of all, we use Redis for caching, so all the internal Drupal caches are not in the database, they're completely in Redis. It's an in-memory key-value store, which lets you load caches much faster. Then Varnish, which is basically a reverse proxy; it handles 99.99% of all traffic, and it's freaking fast if you configure it correctly. And then there is monitoring, which is one of the most important parts: you can have all the servers you want in front, but you have to look at the stats to see what is really happening. That was the hardware we had: two Varnish servers, two NFS servers, and two Redis servers, and you see everything is there twice. We didn't want a single point of failure, because if hardware dies during the show, the show has to continue; there was no way we could accept any downtime. The only thing that could have taken us down was the whole data center blowing up; we didn't have a second data center, it was all in the same room, so if there's a fire in the data center, well, we would have shut down the site. Beside that, we said: whatever the hardware issue, broken power supply, hard drives going crazy, memory exploding, we want the whole thing to keep running. So we had to build a cluster that supports automatic failover and so on, and luckily we never had to use it during a show, but it was just good to know that whenever MySQL dies, there is another one that takes over. Then Redis, one of our learnings. You can configure Redis with, say, 10 gigabytes of memory, and whenever you fill it with more than 10 gigabytes, it automatically deletes old keys. We have a lot of caches of old polls; a poll whose whole HTML was cached and sent to users an hour ago is not shown anymore, so we want it removed, and instead of Drupal going in and removing stuff from the cache so it doesn't overflow, you can configure Redis to check which keys have not been used recently and evict them itself. The problem, which we learned, is that this is pretty costly. During the voting you get a lot of cache_form entries in the cache, and if Redis is already full at that moment, it takes Redis time to first figure out "I have to clear some space before I can accept new data". Because that happened right when the show started, it sometimes took two to three seconds to get new data into Redis, because it first had to clear its tables and free things up. So we ended up simply flushing Redis beforehand, which was easy, because during the show itself it never filled those 10 gigabytes. But because we had Redis running through the whole week, it slowly filled up, and then during the show it
was full, and we had the issue. So that was one of the things we definitely learned, but beside that it works super fast; the sites are about 30% faster just from using Redis as the cache backend. Then Varnish. Another thing we learned: usually you configure Varnish so that whenever you change something, the Varnish module in Drupal connects to Varnish and tells it "purge these URLs". The problem is that we have so many requests per second to the front page. When you purge, say, the slash, the front page, and you then have 400 requests at the same time that want to visit the front page, Varnish prevents sending all 400 requests to the backend; it only sends one, and the others basically wait in a loop for the answer. When the site is under heavy load, rebuilding the front page can take a couple of seconds, because you have a lot of people voting. So basically whenever you purge, you stall traffic for four to five seconds, and as I said, that's a long time: people notice the site is not updating, tweets come in that the website is broken, and so on. So instead of purging we started refreshing, and we used that for this specific client. It basically works like this; that's the Varnish VCL. You have a normal request, but instead of GET or POST you define a new request method, REFRESH. If the client IP is not part of the purge ACL (those are our IP addresses), you're not allowed; if it is in the ACL, you change the request type to GET and you set hash_always_miss, which means that request goes through Varnish as if it were a miss, goes to the backend, regenerates the page, comes back to Varnish, and Varnish updates its cache. So the cached page gets updated by that request, while at the same time all normal browsers that do not send this REFRESH, which are the normal users, still get the cached page. With that, you basically just create a cron job that runs every two, or in our case three, seconds and refreshes the home page automatically for you. If one of those requests takes five seconds, it doesn't matter, because during those five seconds all normal clients are served by Varnish; the request passes, the cache is updated, and it's there. With that we could also handle cases where the site goes down and so on; no Drupal telling Varnish "now please purge", which generates extra load on the backend. But as I said, that is specifically for cases like this. Then we monitored a lot of different things in Varnish. First of all, you want to see all the client requests that come in, and you also want to see all the backend requests, the requests that actually go through to the backend; we just watched those. We also monitored the most requested URLs in the last 60 seconds, because you can spot strange stuff there. Usually that should be the home page, the slash, and if it's not the home page, you realize maybe something is wrong. Then we had overall stats, the hits per second coming in and that kind of thing. The servers we just monitored with htop, so you see the load, you see spikes happening. And then we watched Drupal itself: we use syslog instead of the watchdog table in the database, and Drupal throws its log there; there shouldn't be any error, but if there is one, you can see it immediately. We also have New Relic running on the servers, which gives you nice stats on whether the site suddenly gets slower or faster. And that's how it looked: on the left side we have the actual site that just updates, here we have htop for the six servers, and then we have an iPad with all the stats that update
in real time, and that was basically the monitoring. We were actually in the studio, we couldn't do it from the office, so we had to be a bit mobile with the whole setup. It's a lot of just looking at the data flowing through and trying to see whether something is wrong or not. So, what learnings did we have? First of all, a really interesting thing: when the Drupal image cache generates an image, that image is sent without cache headers, even though caching is enabled. What that means: if the first request for an image-cache picture goes through Varnish, Varnish gets that image without cache headers, and internally Varnish marks it as hit-for-pass, which means that for the next 60 seconds that image is sent to the backend every time, without looking at the cache. That's an internal Varnish thing, but it means that for those 60 seconds Drupal, or Apache, or whatever web server you use, delivers all those images. So we posted a new picture on the homepage, the derivative had never been generated in the image cache, the first normal browser triggered generating it, and after that we had, for 60 seconds, a huge amount of traffic on the backend; after the 60 seconds it was gone. What we did: we actually had to patch core to send generated images with cache headers too. It's now fixed in Drupal 8; there is a patch for Drupal 7, I don't know if it will ever get in, but that's one of the things we had to learn on the tech side. Then we had a lot of Ajax stuff, like views load-more and Ajax things that load regularly. Everything Ajax in Drupal is a POST request, and Varnish doesn't like POST requests; Varnish says "POST, I don't care", but you can make them cacheable. There is a module, something like Ajax GET, that basically replaces the POST with GET. The fun part: the most commonly used Varnish VCL, the one on Drupal.org that has been worked on by a lot of different people, actually excludes /ajax from caching. So even if you add the Ajax GET module and you have GET requests, the URL is still /ajax, and Varnish will again say "I'm not caching that". So we had to rewrite the Varnish VCL to make that work. If you use Ajax GET and you're wondering why Varnish is not caching, that's probably the reason: the stock configuration is not the best. But the thing that almost took down our site is 404 and 403. Usually you don't cache them, because if there is a 404, maybe in the future that URL won't be a 404 anymore. The problem: we had editors who deleted images that were still on the home page; the picture should have been updated, but the file was not available on the backend server anymore. Suddenly Varnish says "I don't have the picture, I'll ask the backend", and the backend says "404, I don't have it", and you generate a huge load on your backend, because every client asks for a picture that doesn't exist. That almost killed one of our sites at one point. Or we had an unpublished node that was sent out in a tweet, and again huge backend traffic. So what we do now is cache them, not for long, but for about 20 seconds. During that time, if somebody requests something that doesn't exist, Varnish replies with a cached 404, which keeps your backend servers from dying. That was, I guess, during the second show: suddenly all my servers were burning, you ask the editors, and they tell you, "I just deleted a picture, is that bad?" So maybe don't delete anything during the show. Learnings about process. One really big learning: be there. We were in the studio at the same time the whole thing was broadcast or recorded, because there is no better thing than to actually be there with the people that
use it. We tried to do it remotely, because we had to travel an hour or two to get there, and it just doesn't work in situations like this, where the site has to run for two hours and really has to run. Trying to do Google Hangouts? No, it doesn't work; you have to sit in front of the screen and actually look at it. You have to be very flexible. We had the dress rehearsal on Saturday at 3 o'clock, it took two hours, so until 5, and the show started at 8 in the evening. If we realized we had to change something, we had about three hours to implement it, test it, and deploy it, to be ready by 7:30, because everything had to be ready for 8 o'clock. And sometimes we realized during the rehearsal that we had completely forgotten something; we forgot what's shown during the commercial breaks, that happened in the first show. So we had backend developers and front-end developers sitting right there, which goes against any project methodology, but what can you do: you only know three hours before you go live what your requirements are. Be proactive. As I said, we did a lot of monitoring, proactively looking at things, looking at stats, trying to learn from the stats whether something is going differently than last time. All the issues I told you about before, we basically caught because we noticed the requests changing behavior. If you're not proactive, your servers will be dead, and then first you're screwed, and second you don't really have the data anymore; it's better to be proactive. Then prepare as much as possible. As I said, the first six to ten shows were pre-recorded, so we knew exactly what would happen in the show. We created all the polls, all the nodes, all the images were cropped, everything was done beforehand, so during the show we only had to press a button and say publish, put on homepage, put there, and so on; without that it wouldn't really have been possible. And at the end: testing, testing, testing. As I said, we were on site, we had different test devices at the place, so we tested on a lot of different devices, and we used BrowserStack to create screenshots in a lot of different browsers, because if you change the homepage, you don't have a lot of time within three hours to test. You need a testing system that gives you screenshots of all the different browsers in all the different versions, to see whether the change you just made still works responsively or not. So, let's look at some results; let me show you the site. By the way, that's not the live site, that's our testing site. Let me disable this: from the dashboard you can really easily enable or disable the second screen, so I say second screen mode is off, and with that, let me go back, that was the normal site. We have a huge picture on top, like a live stream of some stage; that's how the site looked during the week. And you see here that we have a views load-more that just loads more, and you can scroll down forever to get new things; it's a pretty standard news site, nothing special. And then you have the second screen mode, where you can say, OK, I want to add the polls, and now it asks which polls you would like to show. We had two places to show polls, and you see here again the node ID stuff; we use the node IDs a lot, because if you try to find something based on the text, you have no chance. The people knew: at the beginning of the show I have to push node ID 888, or whatever. So we save that, and here you see how it changes: on the left side we have a vote, on the right side we have a poll, and up here is the livestream, the livestream of the television. And if I now change something, I can show you that here: on the left side I'm the admin
on the right side I'm logged out, and it will now push over there. So I can select, I don't know, 912, and I save it, and it will automatically, without me doing anything, update on the right side and load the picture. The users don't have to do anything at all; I just change it on the admin side, it updates, and it also updates on a mobile device. I have an iOS emulator here, I just save, and you see both of them update automatically. That's the push through PubNub: the devices are subscribed via PubNub, they get a JavaScript message, and they replace whatever needs to change. That was the special part. And then you can vote. If I vote here on some question and say, OK, I'm for that, it's now 33%, and now I vote on my iPhone, and you should watch what happens with that number here: when I tap here, it automatically updates. When I clicked, it sent the vote to Drupal, Drupal realized there is a new poll result, and it pushed the results to everybody at the same time; that's basically the whole real-time voting system. And now I'll show you how to hack my site. In here you have a cookie, the poll anon cookie for poll 893. When I reload now, you see that I still see the results, so I cannot vote anymore. If I remove that cookie, I just removed it, and I reload again, if the internet wants to work with me, now I can vote again. It's that easy, but as I said, that never actually happened, and we had the specification that we should also deliver something where people can vote for real. That's the SMS voting system. As an admin, they're all prepared, so I say, I don't know, I take that one, and for the demo I have to refresh the page, but as I said, that wasn't necessary when the site was in live mode. So now you see the SMS voting, and I can say, OK, I want to vote for Swiss Domino, or I want to vote for the other one. So I vote for them, it opens an Ajax popup, and now I have to enter my phone number. I enter my phone number here, and hopefully I should get a text, if it makes it from Switzerland to here. Yes, I got a text, so I enter that number. Let me enter it wrong for a second, so you can see that it tells me it's wrong. And when I enter the right one again, demo effect, let me get a new number, I enter that, I finish. OK, it doesn't work. Ha, it worked before. Last try. No, doesn't work; well, doesn't matter. Basically it would tell me, yes, you voted, and when I go back and enter my phone number again, it will tell me, you cannot vote. That was the secure voting, but in the end it's the same Drupal core poll system underneath. So that's basically the site. It was pretty easy from a user-interface point of view; we did a lot of Ajax, a lot of SEO work around the Ajax stuff and such. But I think what would be interesting is to actually see how people used it, so I have some videos of how we used the site. In the first one, you see I'm going into the studio: the people, the talents are here, the whole crowd is sitting here, there's a spider-cam that captures the whole thing. We were actually sitting in the studio and could hear everything that was happening, and we had our special place, where I'm going now, through some doors, and you see the whole team sitting here. It's a team of about eight people working at the same time; everybody has a screen in front of them, we see all the things, and now I'm trying to push a button with my finger, and now you see how it updates. You see we had a lot of different devices there that we tried at the same time, and you see here all the poll Ajax votes; these are the actual votes coming in. You see them coming in, and that was all captured during the first show, so there's not a lot of traffic
unfortunately, because during the high-traffic moments we didn't really have time to take videos; it was rather, okay, now it's on. So that was one of them. Then another one, which I unfortunately shot in portrait mode. What you see is the Voice show. What's happening is they announce who wins, who the winner of the show is. And we're sitting there not knowing ourselves, because even though we were part of the backstage team, they didn't tell us who wins; they were worried we might tweet it or whatever. So the moderator announces the winner, people go crazy in the studio, and now we turn around, and you see, because at that point she had just found out, she saves a poll, goes to the admin screen we saw before, switches to the poll she just created, and I'm on my screen. She pushes that onto the site, I can vote, and you can see the votes jumping in. That's basically what we did the whole time, and you can see people voting and tweeting, oh no, that's all going to turn out differently, and so on. So there was a lot going on.

Then we have some footage of the voting backend. I'm on the poll edit screen and I refresh, and you can see the votes coming in. So we knew exactly how many votes came in, but on the site itself you only see percentages; you don't see exactly how many votes there are. And they slowly update. In the second show we actually only had percentages, which wasn't a lot of fun; the bars were much more fun, because you have a graphical element. We had the crazy idea of making the buttons grow and shrink with the votes, but yeah, as if we had time for that. Then another one we did: here you see how it updates. So I'm voting,
because that was the first vote, I had 100%, but then it slowly starts coming in, and you see the show is actually running at the same time. We also had a TV screen there, because of the fun with delays these days; I'll get to that. Now we see the polls coming in, and that's the user traffic, which is a bit crazy; you see the amount of traffic we had at that point, the requests per second. The fun part is that these days all the TV distribution goes via IPTV and HTTP live streaming and such, so it's not real real-time anymore; the signal has a delay of about 20 seconds. In the past with analog TV it arrived at the viewers immediately, but now there are so many encoders and so on that it's about 20 seconds later. So what actually happened during the first show: we announced who won earlier online than people could see it on television, which generated some interesting discussions. I just said, well, it's cool, no? They didn't like it at all. So what we did was use a TV screen that had that 20-second delay. We could hear downstairs what was happening, because we were in the studio, but we used the delayed TV to know what people at home were probably seeing, because when we push, it takes about a tenth of a second to reach the devices. So the problem was basically that our website was too fast. Well, it's not a problem I'm unhappy to have.

Then some server stats: that's Varnish. We see a lot of different requests; I don't know, can we get a better view? We see how many cache hits, how many client requests, how many misses, all per second; you definitely want to make sure the misses stay low, and things like that. On the left side, the crazy part is the raw requests coming in, every single request, which is pretty crazy to watch updating, and on the right side we have the backend requests.
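Those counters boil down to a cache hit ratio, and at the traffic levels mentioned in the talk, even a small miss rate matters. A minimal sketch of the arithmetic (the counter names here are modeled on varnishstat-style output, not taken from the talk):

```python
def hit_ratio(cache_hit, cache_miss):
    """Fraction of cacheable requests Varnish answered without touching the backend."""
    total = cache_hit + cache_miss
    return cache_hit / total if total else 0.0

# With roughly 4000 req/s at peak (the figure mentioned later in the talk),
# each percentage point of misses is about 40 req/s landing on the backend:
peak_rps = 4000
ratio = hit_ratio(cache_hit=3960, cache_miss=40)   # illustrative counter values
backend_rps = peak_rps * (1 - ratio)

print(f"hit ratio: {ratio:.1%}, backend sees roughly {backend_rps:.0f} req/s")
```

That is why the dashboard split the "requests Varnish answers itself" from the backend requests: the left column can spike freely, the right one is the early-warning signal.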
So the left ones are the requests Varnish can answer itself, and the right ones are the ones that actually go to the backend. You want to make sure those stay low, because if they go crazy, your backend servers will probably have an issue. You can see that only some AJAX requests go through, and the load is something like 0.3, and these are eight-core servers, so they have no load at all; we were just prepared for the worst that could happen. So it's basically us sitting in front of screens and looking at stats, which is actually pretty cool. You see the Varnish box here, and it has a load of 0.32 even though it was doing about four thousand requests per second at that point. Without Varnish we couldn't have survived.

Yes, so basically these are the results. We've done it three times now, and we're going to continue, because they really like that second-screen environment; they actually call it the first screen now, because people look more at the device than at the TV screen, which is interesting, how that shifts. We're also thinking of using it at real events, because you're not fixed to the setting of sitting in front of a TV. You could do it at an event, at a DrupalCon or wherever, and run the voting during a session. There are a lot of different ways of using that system, and we're actively talking to a lot of different people about how we could use it in other environments. Yes, that's it. If you have any questions, we have about 15 minutes. Can you go to the microphone, because of the recording? Thank you.

Did people in the actual live audience during the live taping have access to the site to vote as well?

You mean the audience of the show? Yes, but the problem was that the wifi there wasn't really
handling that many people, and the cellular network was also a bit overwhelmed by the crowd. So I don't really know whether people in the studio were using it or not.

And what happened when the show ended? Did the site push to a new state, or did the user have to refresh?

We kept the live part of the site running for another ten minutes, just to do some closing votes, how did you like it and so on. To get back to the normal state, we first thought we would have to push, but we did an analysis of sessions, and people go away with their devices anyway. So no: if somebody had stayed the whole week and never refreshed the page, they would have seen the last second of the second screen the whole time, but nobody actually did that.

Did you have to deal with the show airing at different times in different areas of the country?

No, we luckily only have one time zone.

That's something we deal with in second-screen experiences here in the US, where something might be live on the East Coast and tape-delayed on the West Coast. It's something you may want to consider as you build out this platform, if you're planning on supporting other countries.

Good point. We actually had a related issue: the whole thing is geo-blocked, so you cannot watch the live stream from outside Switzerland because of licensing issues. So we were lucky, but yes, it could definitely become interesting if you need to know which time zone your users are in.

And finally, where did you host it? Was it self-hosted, or hosted with the broadcaster?

We hosted everything ourselves, because at that stage I need so much control over the servers and everything else that I just have a hoster who makes sure the hardware works, and we do the whole environment setup ourselves. We have our own DevOps people,
so we can do that. And also, none of the existing Drupal hosting providers have servers in Switzerland, and the client told us it had to be hosted in Switzerland, so that was a requirement nobody else could meet anyway.

Well, congratulations, it was great. So, a technical question: you were talking about the image cache not being cached because of the missing headers. Did you look at adding those headers in the VCL rather than doing it from code, and is there a specific reason why you didn't do that?

Sorry, I didn't quite understand: add the headers in the VCL?

Yes, have the VCL detect that it's an image cache request and just add the header at that point, rather than in code.

Of course that would also be a possibility. My view is rather that it's broken Drupal behavior, so I'm going to fix it in Drupal. But of course, in the VCL you can do whatever you want. Okay, thanks.

So why did you use Redis instead of memcache?

There is one big problem with memcache: Drupal's caches use wildcard flushes. Let's say you run drush cc all; it clears all caches, and the request to the cache backend for the page cache is basically cache_page:*, meaning: clear all cache entries whose key starts with cache_page. Memcache cannot search through its existing keys; you cannot do the equivalent of SQL's DELETE ... WHERE key LIKE 'cache_page%'. So what Drupal does is create a semaphore with the timestamp of the last wildcard flush. When you then request a cache entry that was purged, it's still loaded from memcache, because memcache had no way to remove it; after loading, Drupal also loads the semaphore, compares timestamps, realizes the entry is older than the flush, delivers you an empty result, and deletes the stale entry from memcache. So Drupal has to work around needing a cache backend with wildcard searchability in the
cache keys, which memcache simply does not have. Redis does: if you tell Redis, delete everything that starts with cache_page, it goes through its keys and really removes them, so you don't need any semaphores. And we would just run drush cc all maybe ten minutes before the show to reset everything. We saw a lot of memcache traffic, so we switched everything to Redis. It's maybe one or two percent faster overall, but if you do things like wildcard flushes, you will definitely see the difference; in New Relic you could see the switch from memcache to Redis.

I'm just curious about PubNub and how it handles things on the client side. Is it a polling setup, where you just add some JavaScript that polls PubNub?

If you look at it, they do a lot of different things. If you don't have a lot of updates, they do a long poll, I guess it's called: you have a request that basically never ends, and when there's actually something, it ends and reconnects. But they also use WebSockets at some points, depending on the device, the environment, and how many streams you've subscribed to. They do their own thing, so you don't know exactly what they do, but they do it really well. Awesome, all right.

Hi, by the way, thanks for a great talk. Thinking about it, it seems like the voting aspect is the one that is...

Which aspect?
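The workaround described above can be illustrated with two toy cache backends. This is a simulation of the idea only (a logical counter stands in for real timestamps), not the actual Drupal memcache or Redis modules:

```python
import itertools

_clock = itertools.count()  # logical clock standing in for real flush timestamps

class RedisLikeCache:
    """Backend that can enumerate its keys: a wildcard flush deletes entries for real."""
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

    def flush_prefix(self, prefix):
        # The equivalent of telling Redis: delete everything starting with cache_page.
        for key in [k for k in self.store if k.startswith(prefix)]:
            del self.store[key]

class MemcacheLikeCache:
    """Backend that cannot enumerate keys: a wildcard flush only records a
    'semaphore' tick, and stale entries are filtered out (and evicted) on read."""
    def __init__(self):
        self.store = {}    # key -> (value, created_at)
        self.flushes = {}  # prefix -> tick of the last wildcard flush

    def set(self, key, value):
        self.store[key] = (value, next(_clock))

    def flush_prefix(self, prefix):
        self.flushes[prefix] = next(_clock)  # can't touch the entries themselves

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, created_at = entry
        for prefix, flushed_at in self.flushes.items():
            if key.startswith(prefix) and created_at < flushed_at:
                del self.store[key]  # evict the entry the flush couldn't reach
                return None          # and behave as a cache miss
        return value
```

The Redis-style backend honors cache_page:* directly, while the memcache-style one has to load every stale entry once more, check it against the semaphore, and only then discard it, which is exactly the extra cache traffic mentioned above.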
The voting part, the polls, would account for a lot of the traffic, right? I would assume you're not caching those requests in Varnish, so how exactly do you manage the load?

So the actual voting has to go to Drupal: the POST request from clicking a button goes to Drupal and has to do a full Drupal bootstrap, so you cannot handle that with Varnish alone, because it has to write data into MySQL. And after voting, the request also reloads the newest results, and that result is automatically pushed to all the devices again as an update. So yes, you have one POST per vote, and that's why we have six servers: during TV commercial breaks the most traffic actually happens, because people don't like watching commercials, so they look at their phone and vote. During those breaks we had a lot of traffic, and that's what we had six servers for; there's no other way to handle it.

Is that one of the aspects where you guys use Stratis?

Correct, correct.

How did you get this client? Did they already want Drupal, or did you have to respond to an RFP?

We had built things for them before, so it's a longer relationship. The interesting part is that they have their own internal CMS team, but that team is so tied up building their own websites with their CMS that they asked us if we could do something, and because we're so small and can move really fast, that's why they chose us. And they didn't come with all these requirements from the beginning; they evolved over years of working with them, and of building trust. Before this we had never done any SMS voting, because they said, we don't trust that yet. So it's a lot of trust-building, trying things out, and suggesting things; it's definitely not something you can establish in one day of discussion. Thank you.

And so, why PubNub instead of something like Socket.io?
It was just easier. We actually tried Socket.io first, but then we found PubNub and tried it out, and it shines especially in the edge cases: say you're on a train with a poor connection, it checks whether it maybe missed a message and catches up. Also on the phone, if you close the app and open it again, it reloads and reconnects to the server. There's a lot that happens around the edges that Socket.io isn't really built for; Socket.io assumes, okay, I have a stable connection. That's just what we liked in the end, but from a technical point of view there's no hard requirement to do it that way. Thank you.

I'm working on a very similar project for telemundo.com, with a lot of crossover with what you're talking about, and I was interested to know a little more about the editorial testing you did with the editors, because that's one of our huge pain points.

Basically, we just let them work with the system and watched how they worked. The first shows were all pre-captured rehearsals, so we would sit in a room, let the show run, and they would do everything as if it were the real thing, and me and other people from our team would sit with them and watch: okay, what are you doing? And then you notice things. For example, they created all the polls without pictures, because the photographers hadn't provided the pictures yet, so they used dummy pictures, since the picture was a mandatory field. We saw that and made the field optional, but then they had the problem that they pushed a poll onto the live show without a picture. So then we built a small view that showed them which polls had no picture yet. It's basically about being with them at the shows and working together with them, as a Drupal site builder who can make things easier for them. But there was no
real formal process; we just worked with them, and most of the things we learned ourselves. Okay, thank you.

A good follow-up question to that: how did you manage the taped shows? Was everything still done live, or did you schedule things out and watch it go, hands off?

Which shows, the ones that were taped in advance? Yes, the content editors were still pushing content live during the broadcast. We were sitting in the room watching the TV feed, and while it ran we were pushing buttons based on what came in. We could have orchestrated everything beforehand; you could do crazy scheduling and just sit there. But even when the whole thing was pre-captured, sometimes you sit there and think, this feels kind of slow right now, let's create another poll, and somebody would create a poll and put it live.

Okay. Did you find that people did that a lot? And did you take any data from social media to feed into that?

Yes, we had people monitoring Twitter, people monitoring Facebook, and sometimes we had a Disqus chat on the page itself where people were chatting. So there are a lot of things that influence what you do next and which polls you create. I would say about 95% of all the polls were pre-created and 5% we did ad hoc, on the fly. Thank you.

How were you able to calculate the number of web servers you needed? You had six, as opposed to, say, ten.

The whole cluster we use is not specific to that client; we host other things on it as well, and we basically just looked at the load. Even six was too much in the end. The problem is, if you ask these clients how many people they expect, the answer is, I don't know, and "I don't know" can mean twenty or twenty thousand. So we aimed way too big, and then we looked at the load during the first show, and
then we started thinking about removing servers. The problem is that over the course of the season people discover that the website exists, because the show tells them, go to the website now and vote, so the traffic grows from show to show. So we actually ended up keeping the six servers: at the beginning we thought it was too much and considered downscaling, then said, okay, let's wait two or three more days and a few more shows, and then we saw that it was actually about right. So it's guessing; it's a lot of that. Thank you.

Okay, I think we're over time anyway. Thanks a lot; if you have more questions, I'm around. Yeah, hit me up. Thank you.