Hey everyone, good evening. My name is Shyam and I work with the Mozilla IT team, and we're going to take a look at how Mozilla handles our infrastructure. Feel free to stop me for questions in between, or if you want to hold them until the end, that's fine too. That's the TL;DR of this talk. A lot of the stuff the previous speaker covered (I'm sorry, I didn't catch your name) is pretty similar to what we do, and the tools we use are pretty similar, so I thought this would be apt.

A little history, a little snippet about Mozilla. We help make Firefox, which used to be the second most used browser; we're now somewhere between second and third, I guess. We've got about 500 million users as of late last year or early this year.

I'll switch over to the history of the IT team, just to give you some sense of scale. Back in 2005, when the Mozilla Corporation was actually formed, they had one sysadmin, who's still with us today; he goes by justdave, and he was the only guy running all of Mozilla IT. When I joined Mozilla in June 2009, I was the 10th member and the first one outside of the US to be part of the team. Today we have 45 people, and there's another colleague of mine here, Ashish, who's based in India. So we're 45 people handling all of the IT stuff.

What are the various teams, and how do we split up responsibilities? Desktop handles all the office support: the day-to-day stuff, imaging laptops, handing out equipment, email, and fixing "hey, my disk died" or "my laptop got stolen" kinds of issues. Systems is the team I'm part of, and it's primarily responsible for pretty much all the website and web front-end operations of Mozilla. We're also involved in a lot of the architectural decisions on how to scale websites, databases, and so on. Operations contains a bunch of our site reliability engineers, who are basically the on-call guys; they work the bug queue and make sure everything gets fixed if it's broken. NetOps, as the name suggests, is network operations: a team of four or five people who handle the network across our seven data centers and multiple offices. Services Operations was spun off from the Systems team to handle Sync; that's an entire team that looks after Sync, and now they're also starting to look after services like BrowserID and such. And Special Operations (we're a little jealous that they picked that name) interfaces with the release engineering team; they're primarily responsible for helping build Firefox and making sure there's enough hardware to test on, things like that.

What exactly do we do? Quite a bit, it seems. We have a pretty large and complex environment, and we're responsible for getting all of that set up and running without any issues. These are where we have our data centers. San Jose was, I think, our first data center; we still have two cages of equipment there, on two different floors. Phoenix is the data center we moved into at the beginning of 2010. That was to give us a failover location in case San Jose went down. The little funny story there: I don't know if any of you know Market Post Tower; it's a pretty famous data center building in San Jose. Every time the city of San Jose had a function or a public festival or something of that sort, the pressure in the water mains dropped below a certain level, so they couldn't pump enough water up to the 16th floor, and all the servers would overheat and shut down, one after another.
And this happened, I think, at least twice in a year. And we said, okay, we've had enough of this, we're going to move out of there. That's how Phoenix came along.

Santa Clara: we have two data centers in operation there and one being built. The two in operation are almost solely responsible for all of the build stuff. To branch off a little into what building Firefox means: every time there's a check-in to the Firefox repository, it gets picked up and sent to a build slave, and it gets built across multiple platforms, starting from Mac, Windows, Linux, 32-bit, 64-bit. Fedora too, I think they want to, but I'm not sure about that. A bunch of these builds happen on Mac minis, the reason being that you can't legally run Mac OS on anything else, whereas you can run other operating systems on a Mac. I unfortunately forgot to put in pictures. So you've got a bunch of builds happening, and then you've got performance tests that run on these machines, and developers get an email if the current build is X percent slower than the previous build, so they know that their check-in actually caused a performance issue. A bunch of those machines sit in the two data centers in Santa Clara. The second data center is actually all of Sync; a bunch of that Sync stuff is in Santa Clara and Phoenix. And the third data center, which will go live next year, is our biggest so far; that's probably when we move completely out of San Jose into that place.

Amsterdam is basically a proxy location for us to serve users in Europe, so it's not much. Beijing exists because we have to serve content to China from within China. It's pretty expensive; bandwidth is on the order of about $250 per Mbps. But you don't really have a choice, you have to serve content from there. These are also the offices that we support. There was only Mountain View, Beijing, and Paris until last year, and Vancouver as well; then Toronto moved into a new place. We have this new concept called Spaces, where we give a portion of the office to the community: you can come in and work on open source projects or on Mozilla projects. That's still in the works.

A quick question? Sure. Did you say that all your build machines are a cluster of Mac minis? Yeah, not exactly a cluster. Interesting, how many of them? Yeah, I'll answer that. Okay, so Jason's question was: you said your build machines are Mac minis; how many of them, and what is the performance like? Off the top of my head, I'd say we have somewhere between 150 and 200 Mac minis. I think I have a picture on my phone, actually. Yes, I definitely have a picture on my phone; I'll show it to you after this. And when you say performance, what exactly are you talking about? If you're using Mac minis to build for Windows and Linux, are you getting the equivalent performance of using a server? So, I think the reason they do that is because it's the same hardware, so the baseline is the same. Which means if you have a dual-core machine with that processor and that much RAM across three operating systems, you have a baseline for performance. It might not be super accurate, but it's one way of measuring it.
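To make the performance-email idea concrete, here's a minimal sketch of that kind of check, assuming invented names, thresholds, and addresses; the talk doesn't describe the actual implementation, only that developers get mailed when a build is some percentage slower than the previous one.

```python
#!/usr/bin/env python
"""Hedged sketch of a build-performance regression alert.

Everything here (threshold, addresses, function names) is hypothetical;
the talk only says developers get an email when a build is X percent
slower than the previous build.
"""
import smtplib
from email.mime.text import MIMEText

THRESHOLD_PCT = 5.0  # alert if the new build is this much slower

def check_regression(prev_ms, curr_ms, changeset, recipients):
    """Compare two performance-test timings; mail developers on a regression."""
    slowdown = (curr_ms - prev_ms) / prev_ms * 100.0
    if slowdown <= THRESHOLD_PCT:
        return False
    body = ("Changeset %s is %.1f%% slower than the previous build "
            "(%.0f ms -> %.0f ms)." % (changeset, slowdown, prev_ms, curr_ms))
    msg = MIMEText(body)
    msg["Subject"] = "Performance regression: %s" % changeset
    msg["From"] = "buildbot@example.org"
    msg["To"] = ", ".join(recipients)
    smtplib.SMTP("localhost").sendmail(msg["From"], recipients, msg.as_string())
    return True
```

Because the minis share identical hardware, a relative comparison like this stays meaningful across operating systems even if the absolute numbers are not.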
So, a bunch of numbers, just to throw some numbers in. We definitely have over 4,000 machines. A good majority of these are hardware, and this also includes the minis and the VMs that we have. HP is what most of our hardware is. We also have an interesting thing called the SeaMicro, which is basically a 10U box; we've got one of them in production, and it's basically 512 Atom processors that can all be used as individual machines. So a bunch of our websites are on 100 Atom nodes: we just assign 100 nodes and say, you serve these 30 websites, and it does a pretty good job. The websites might be a little slow, but for sites that won't attract that much traffic, that works just fine. iX: we have a few machines from them that are used for release engineering work. We also have VMs, a bunch of those on VMware, and we also started a new VM cluster in Phoenix; a bunch of that is KVM-based, so we do both. And of course the Mac minis are for the release engineering stuff. And just randomly: we do about 40,000 Nagios checks, hosts and services included, across the data centers. I'm not sure offices are included in that number, so it might not be super accurate.

Our operating system is RHEL, as much as we don't like RPMs. The biggest reason for going with RHEL was hardware support, in case we run into issues. I think we've run into maybe one issue, one and a half, in the last five or six years where we really had to lean on HP to get a fix. One of them was related to a network driver on one of the load balancers, so it does help at times to have that kind of support. That's the reason we stick with it. Most of our machines are on 6 already, and the rest of them are on 5. There's a smattering of machines on other distributions and a few CentOS machines; I think a lot of the Sync stuff runs CentOS, but I'm not really sure.

So what are all these machines? Why do we need so many? A bunch of these machines are used as web heads. Add-ons is right up there on the list, because that's the site that gets the most traffic for us. We get about 25,000 hits per second on the site on a normal day, and this goes up when we have Firefox releases. It's served by a cluster of about 30 machines in Phoenix, if I remember right, fronted by seven load balancers; I'll talk about the load balancers in a later slide and what exactly we do with them. Bugzilla is another pretty high-traffic site for us. It gets visited a lot, and a lot of the community interactions happen on Bugzilla. That is, I think, four web heads. So just to give you a scale comparison: it's pretty much the same hardware on both, but Bugzilla doesn't see that much traffic. Support is, again, I think three or four web heads. Input is a beta product for feedback; there's a URL right in the product, so people can just click and submit stuff. We also have, ooh, there's a typo: webtools, not webtoons. MXR is this interesting thing; it stands for Mozilla Cross Reference, and it basically indexes every line of code we have. I think they want it to index Chromium code as well, and other code that's available. It also indexes all of the add-ons code, and if you go to mxr.mozilla.org, you can enter a string that appears in the code and it'll give you all the source files that contain it. So you can quickly look up, hey, where is this function called, across which properties; if you make a change, you can quickly see what it affects.
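Just to illustrate the idea behind a cross-reference like MXR, here's a toy sketch that maps identifiers to the files containing them. The real MXR is of course far more sophisticated; the file extensions and the repository path below are made up.

```python
"""Toy sketch of an MXR-style cross-reference index.

Maps each identifier to the set of files that mention it; purely
illustrative, not how MXR itself is implemented.
"""
import os
import re
from collections import defaultdict

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def build_index(root):
    """Walk a source tree and record which files use which identifiers."""
    index = defaultdict(set)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".cpp", ".h", ".js", ".py")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for line in f:
                    for ident in IDENT.findall(line):
                        index[ident].add(path)
    return index

# index = build_index("mozilla-central")        # hypothetical checkout
# print(sorted(index.get("SomeFunction", [])))  # files mentioning it
```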
Another important tool that we use is Socorro; that's what it's called internally. It's the tool that pops up when your Firefox crashes. The internal joke in IT is that we wouldn't have to spend so much money on it if we built a product that didn't crash, but we all know how that goes. It's a pretty awesome site, and it's easily the thing we have the most hardware behind. We have about 70 Hadoop nodes behind this. (He asked if this was for crash-stats: yes, crash-stats.) A lot of the crashes are stored on HDFS, then pulled out for processing and pushed back in. We also have our biggest database cluster there: 24-core, 72 GB machines, two of those, running Postgres for Socorro. And the web heads: I think they've got six of them, each a 12 GB RAM machine; they're beasts. The main thing we watch is that we don't lose any crashes. It doesn't matter if the processing stops, it doesn't matter if something else fails, but if a user is not able to submit a crash report to us, that's a big problem. So we even have Nagios checks automatically submitting crashes; if something doesn't submit, we get alerted and look at it.
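Here's a minimal sketch of what a check like that might look like, in the usual Nagios plugin style (exit 0 for OK, exit 2 for CRITICAL). The submission URL and the multipart form field are invented; the talk only says the checks submit crashes and alert when submission fails.

```python
#!/usr/bin/env python
"""Sketch of a Nagios-style check that submits a synthetic crash report.

The endpoint and payload are hypothetical. Nagios plugin conventions:
exit 0 means OK, exit 2 means CRITICAL.
"""
import sys
import urllib.request

SUBMIT_URL = "https://crash-reports.example.org/submit"  # hypothetical

def main():
    # A tiny fake minidump upload; a real check would send a canned dump.
    body = (b"--x\r\n"
            b'Content-Disposition: form-data; name="upload_file_minidump"; '
            b'filename="test.dmp"\r\n\r\n'
            b"SYNTHETIC-TEST-DUMP\r\n"
            b"--x--\r\n")
    req = urllib.request.Request(
        SUBMIT_URL, data=body,
        headers={"Content-Type": "multipart/form-data; boundary=x"})
    try:
        resp = urllib.request.urlopen(req, timeout=30)
        if resp.getcode() == 200:
            print("OK - crash submission accepted")
            sys.exit(0)
        print("CRITICAL - unexpected status %d" % resp.getcode())
    except Exception as exc:
        print("CRITICAL - submission failed: %s" % exc)
    sys.exit(2)

if __name__ == "__main__":
    main()
```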
Most of our web servers run Apache; I'd say 99.99% is Apache. Our webdev teams started, somewhere in the middle of last year, or the year before, early 2010, moving away from PHP to Django and Python, so we use a lot of mod_wsgi. There's some mod_perl and FastCGI in there as well, and it all pretty much just works. MXR and some of the webtools are still PHP, and add-ons, the website, is still running half Django, half PHP; it's still in the process of being ported over to Python, so that's why we still have mod_php. nginx is there because of hg.mozilla.org, our Mercurial repository: all the hits on hg.mozilla.org go directly to nginx. It was just set up that way; it'll probably change, and we may or may not keep nginx in the future when we re-architect that.

On load balancers, I'll go from the bottom here. We used to use NetScalers when I joined; we quickly moved away from those to Zeus. Well, it's now called Stingray, which we don't like; Zeus is way cooler. The reason for the move was that Zeus basically runs on Linux, so you put it on stock hardware, unlike the NetScaler, where you had to buy the appliance. You put Zeus on stock hardware, and as you scale the hardware, you scale Zeus; it performs pretty well. The initial installs of Zeus needed a Cisco ACE in front, because Zeus had some issues with connections and how it handled them, so we had to put the Application Control Engine from Cisco in front of it. We had enough bugs with that as well; it would randomly fail over, there were a lot of issues. And we recently (is there a question over there?) had issues with Zeus as well. That was because of the hardware we were running it on: we hit limits on the network interfaces, so we figured out that we need 10-gig links and such. Zeus is pretty nice to configure; it's pretty simple, and it pretty much just works most of the time.

I just lumped LDAP in here because it was easier to talk about it. A lot of things, company information, logins, et cetera, depend on LDAP, so LDAP is something that's available in all our data centers and offices. MySQL is what most of our websites use; I'd say at this point more than 95% of them. We've got various clusters with different SLAs. Right now they're all pretty much master-slave configurations, but we recently hired Sheeri Cabral, who's a pretty good person with MySQL, so we're going to start moving stuff to master-master setups and things like that. PostgreSQL: our internal Confluence wiki uses PostgreSQL, and of course Socorro, which I mentioned earlier, uses PostgreSQL. And we have Hadoop. I think our metrics team uses Hadoop for log analysis; again, I think our usage of Hadoop is limited to HDFS, but I'm not really sure about that, so don't hold me to it. So it's them and the crash-stats team that use Hadoop.

Right, this is another favorite topic. As much as we hate having a bunch of source control systems, the developers just love having different ones. So yes, we have one machine, an IT infra box, that uses RCS. Yes, still. Who uses RCS? Well, someone said it out loud. Exactly: who even remembers what RCS is? I think it's a simple mail server with an aliases file that's checked into RCS for some reason. It's just been that way, and it never changed. Then we have CVS. We still have two repositories in CVS that are active, and we're trying to get them moved to something else so we can shut CVS down. We have Subversion, which actually still has a lot going on in it; a lot of the site code and such lives in Subversion. It's slowly moving to Git. Mercurial is where the main Mozilla source resides, so that's not going anywhere. Although some would like to move it to Git, there's a good faction that says we want Mercurial and a good faction that says move it to Git, so until they fight that out, we're happy with what we have. And there's one odd project out that lives in Bazaar, for some reason. So yeah, we have all of this, and if anyone comes up with yet another source control system, the developers will be happy to use that too.

So, standard data center infrastructure. DNS is plain BIND; we run 9.7 plus, for DNSSEC. That's the only reason: I think the default BIND in RHEL 5 is still not 9.7, but 6 already has 9.7, so we run that. DHCP is a standard ISC DHCP install, but what we have done, for the Phoenix data center and everything else moving forward, is hook it up to our inventory, so you don't have to hand-edit anything when you add a box; I'll probably speak about the inventory in the next slide, actually, I'm not sure, so let me just say it here. We have a little inventory system that keeps track of all our servers and such. When you add a new machine to inventory, you assign an IP to it, and it does all the sanity checks to make sure everything's okay. Then the DHCP config gets committed into source control, the DHCP servers pull from source control, and everything just works like magic. You don't have to hand-edit anything.
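As a rough illustration of that flow, here's a sketch that renders dhcpd.conf host entries from an inventory database. The schema, table, and column names are hypothetical; the talk only describes the idea of generating the config and committing it to source control.

```python
#!/usr/bin/env python
"""Sketch: generate dhcpd.conf host blocks from an inventory database.

Schema and names are invented; sqlite3 stands in for the real
inventory's MySQL database. The output would be committed to source
control for the DHCP servers to pull.
"""
import sqlite3

HOST_TEMPLATE = """host %(hostname)s {
    hardware ethernet %(mac)s;
    fixed-address %(ip)s;
}
"""

def generate_dhcp_config(db_path):
    """Emit one host block per machine recorded in inventory."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT hostname, mac, ip FROM systems ORDER BY hostname")
    return "\n".join(HOST_TEMPLATE % {"hostname": h, "mac": m, "ip": i}
                     for h, m, i in rows)

if __name__ == "__main__":
    print(generate_dhcp_config("inventory.db"))
```

The nice property is that the running DHCP servers never diverge from inventory, because the config is regenerated rather than edited by hand.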
Everything else there is pretty standard. VPN hosts: we use OpenVPN, again. Our jump hosts have SSH access on them. TACACS+ for the network equipment, which is a mix of Cisco and Juniper. We've moved away from Cisco for all our core routing stuff, or that'll be the case in the near future; Cisco simply wasn't scaling fast enough for our needs, or for the price we were willing to pay, so it's all Juniper at this point. We also have NetApp for storage. I'm not sure about the total size in production; it's probably somewhere around 15 terabytes per data center, if I remember right, but I'm not sure. We also have Dell EqualLogic, but I don't think we're adding any more to that.

The office infrastructure is pretty much the same; the main difference is around LDAP and wireless. We have Aruba wireless controllers in the offices, so as soon as you try to log on to the wireless, it talks to the RADIUS server, which then talks to LDAP to authenticate you and give you access. That's pretty much the only difference. We use Zimbra for our corporate email, and December has not been a fun month, because of massive email outages we had for two days; if you're interested, I can talk about that a little later. The rest of it is pretty standard: DNS slaves, DHCP, LDAP.

So, managing all of this. We kickstart our machines, since most of them are Red Hat. We've got a custom boot menu that displays a matrix of hardware and OS, so you just pick, say, number 5 to install RHEL 6 for a 32-bit machine, and you can specify other information. The inventory I spoke about is basically there to be the one source of truth. All the information is in inventory: if you see a machine on the network and it's not in inventory, netops will yank that machine off the network, because we've run into problems with that. The DHCP information goes into inventory as well, and Puppet checks inventory to make sure a machine should exist before it does anything. If the machine is not in inventory, Puppet's not going to do anything; it'll error out and say, I can't find this machine. And once you've added the machine, Puppet is pretty much what we use to manage everything. We've written over 100 modules; the three things there on the slide are just the tip of the iceberg. It does package management, configs, users, et cetera, and pretty much what we aim for is to manage everything with Puppet. All the new stuff that we bring up, we write Puppet manifests for; nothing gets done by hand. The big advantage we've found with that is for when you have a new sysadmin trying to clean up, say, a log directory, and he runs a find that starts deleting from slash. If you have everything configured in Puppet, it's very simple: you bring up a new machine, you run Puppet, and you've got another host there within 20 minutes, doing its stuff.

(How do you check whether the machine is in inventory or not?) So, Jason's question was: how do you check whether the machine is in inventory or not? Inventory has its own MySQL database, and, I forget what the feature is called, Puppet is allowed to run a script when it starts. We have a script that runs a SQL query against that database and sees if it can find the hostname we're trying to puppetize. If it doesn't find it, it bails out. I forget the exact technical term; there's something Puppet runs first, every time, and you can ask it to run this script first.
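The mechanism he's reaching for sounds like Puppet's external node classifier (an ENC): a script the puppet master runs per node, where a non-zero exit marks the node as unknown. Assuming that's the hook, here's a minimal sketch; the schema and database are hypothetical.

```python
#!/usr/bin/env python
"""Sketch of an inventory gate written as a Puppet external node
classifier (ENC). Puppet invokes the script with the node name; a
non-zero exit tells Puppet the node is unknown, so nothing gets
configured. Table and column names are invented.
"""
import sys
import sqlite3  # stand-in for the inventory's MySQL database

def main():
    hostname = sys.argv[1]
    conn = sqlite3.connect("inventory.db")
    row = conn.execute(
        "SELECT 1 FROM systems WHERE hostname = ?", (hostname,)).fetchone()
    if row is None:
        # Not in inventory: bail out so Puppet refuses to touch it.
        sys.stderr.write("host %s not in inventory\n" % hostname)
        sys.exit(1)
    # Known machine: emit a minimal YAML classification for Puppet.
    print("classes: {}")
    sys.exit(0)

if __name__ == "__main__":
    main()
```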
(You don't have a problem with rogue machines on your network, right?) He said: you don't have problems with rogue machines on your network. That is correct; it is a private data center, but sometimes people make mistakes, or plug in machines they're not supposed to and take an IP address without assigning it in DNS, at which point netops might have a problem on their hands when they're doing stuff. It's happened before, which is why we have these checks. (How do they figure out it's a rogue machine?) I assume it took them an hour and a half, but they had to go look for the machine. They looked at the cables and could see that there was a machine, and another machine that was trying to claim the same IP, so I think that's how they finally tracked it down.

So, monitoring. We've set all this up, and of course you have to have people, and automated monitoring, to keep it all running. We've got the operations team, the site reliability engineers, each of whom is on call 24/7. Thanks to them, the rest of us can sleep and they carry the pager. (Not true.) Here, Ashish usually takes on-call for us for 12 hours while he's up, and then hands off to the rest of the team, who are having their day and cover the next 12 hours.

We pretty much use Nagios for the rest of it. The current install we have is sort of a hack: it's got what we call a master install, and satellite installs in the other data centers. We can see the status of San Jose and Phoenix on websites, but China and such get lumped in with San Jose; I won't go into the details. We're moving to a better setup, managed by Puppet, so if you lose the Nagios master, you don't lose anything: you just pick up another machine, puppetize it, and everything comes back.

Ganglia is pretty awesome, especially for trends, because when you're looking at what happened last week, or how much traffic there was last week, or whether you're experiencing a denial of service attack, or you're running out of RAM or CPU, it's a great way to actually look at it visually, and you immediately spot the problem. You look at the graph and go, okay, that looks suspicious. That's pretty hard to do with just raw SAR data, for example; it's just easier visually.

Graphite is something the application devs like to use. It's plugged into most of the Django stuff, so it tells them how much time a page load, or a specific operation, takes. So that's another thing: if a deployment suddenly increases page load time, they'll notice it in the graphs and tell us to roll back, or whatnot.
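For flavor, this is roughly how an application can feed a timing into Graphite: carbon's plaintext listener takes lines of "metric value timestamp" on TCP port 2003 by default. The host and metric names below are made up, and the real apps presumably go through a client library rather than raw sockets.

```python
#!/usr/bin/env python
"""Minimal sketch of reporting a page-load timing to Graphite.

Uses carbon's plaintext protocol (one "metric value timestamp" line per
sample). Host and metric names are hypothetical.
"""
import socket
import time

CARBON_HOST = "graphite.example.org"  # hypothetical
CARBON_PORT = 2003                    # carbon's default plaintext port

def send_timing(metric, milliseconds):
    """Ship one timing sample to Graphite."""
    line = "%s %f %d\n" % (metric, milliseconds, int(time.time()))
    sock = socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5)
    sock.sendall(line.encode("ascii"))
    sock.close()

# e.g. time a request handler and report it:
# start = time.time()
# ... render the page ...
# send_timing("webapp.addons.page_load_ms", (time.time() - start) * 1000)
```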
Monitoring, continued: Cacti. NetOps likes Cacti, so they still use it for all our bandwidth monitoring, and our PDUs in the data centers, which are monitored mostly for temperature, are on Cacti as well. Infrasec uses the next two tools, OSSEC and auditd. We're rolling out auditd to make sure we've got better logs of what happened on our servers. And more monitoring, because you can't have enough monitoring: we've got external monitors as well. We use WatchMouse, which is basically what's behind status.mozilla.com, I think, and Gomez, which alerts us in case it can't load a specific website. WatchMouse covers a lot of our properties; Gomez is limited to specific stuff like add-ons, and it sends email to on-call saying this website is taking more than 15 seconds to load, so please go look at it. And it's pretty interesting, because Gomez gives you a pretty nice breakdown of the location, how much time each request took, and how long the entire page took to complete.
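In spirit, a check like that reduces to: fetch the page, time it, and wake someone up past a threshold. A hedged sketch follows; the 15-second threshold and the add-ons URL come from the talk, while the addresses and everything else are invented.

```python
#!/usr/bin/env python
"""Sketch of a Gomez-style external page-load check.

Fetches a URL, times it, and mails on-call if it exceeds a threshold
or fails outright. Addresses are hypothetical.
"""
import smtplib
import time
import urllib.request
from email.mime.text import MIMEText

URL = "https://addons.mozilla.org/"
THRESHOLD_S = 15.0
ONCALL = "oncall@example.org"  # hypothetical

def check():
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=60).read()
        elapsed = time.time() - start
        if elapsed <= THRESHOLD_S:
            return  # fast enough, nothing to do
        problem = "%s took %.1f seconds to load" % (URL, elapsed)
    except Exception as exc:
        problem = "%s failed to load: %s" % (URL, exc)
    msg = MIMEText(problem)
    msg["Subject"] = "Page-load alert: %s" % URL
    msg["From"] = "monitor@example.org"
    msg["To"] = ONCALL
    smtplib.SMTP("localhost").sendmail(msg["From"], [ONCALL], msg.as_string())

if __name__ == "__main__":
    check()
```

The commercial services add what a home-grown script can't easily get: measurements from many geographic locations and a per-request waterfall breakdown.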
We also do backups, because we've had to depend on them over the months. NetBackup is what we use. We have a local backup machine in each data center; all the servers that need to be backed up get backed up to that machine, and from there things get transferred to tape. We also back up MySQL: we've got a backup server that runs as a slave to another, or, not to be too confusing, it's just replication. Once it's replicated, we take a SQL dump as well and back that up to tape too.

Now for a little bit of a plug. At Mozilla, if you've noticed, all of our website code, not just a lot of it, all of it, is pretty much open; there's nothing that's not open. And we'd like our IT to be the same. We'd like to get community involvement in IT; we'd like community sysadmins. You can volunteer whatever time you have, if you feel like it. These are some of the URLs you can look at. We all hang out in #it on irc.mozilla.org, so come ask us if you have any questions, or if you'd like help with your setup, or whatever it is. We also have, I think, a couple of volunteer roles open at that URL, so you could look at that. If you have any questions, or if you want more details, or if you'd like to look at some of our Ganglia graphs, I can probably show you some of that too, whatever you're interested in. Pictures: let me see if I can quickly get those from my phone. We're hiring as well, in case you're interested. There's some swag too, some stickers, and that Mozilla hiring sheet is just a job description for something we have open right now.

So, that's the SCL1 data center, and that's my colleague Xander. Let me just quickly back up. Where he's standing, that whole row: that's seven rows of Mac minis, and to the left, all these machines are iX machines. And that's the networking equipment: the uplink from the provider, the Juniper SRX firewalls and such. And here we are, sitting and working. That's not our equipment; that's just some other random stuff that he shot, I have no idea about it. Okay, let's skip ahead. That's the back side, if you can see it: that's all the power supplies of all the minis, and I think at this point we were trying to debug why something wasn't working on that particular machine. Any more videos?

So that's pretty much SCL2. Those are the minis, and if you notice these dongles here: the Mac mini will not boot up without a monitor connected to it, so we have a VGA cable with the correct pins shorted out with the required resistors, and we fool the mini into thinking it's plugged into a monitor, and it boots up. And in fact, all these racks that you see, 1, 2, 3, 4, 5, 6, 7, 8, there are 8 racks by the way, we spent pretty much 8 hours completely stripping them down, to look nice and also to be ordered. The first time they were set up, they were set up in a rush, so the first rack, say, would have all of the Win64 machines, and if you lost that rack, you'd lose all of your Win64 machines, which wasn't a good thing. So we then split it across the rows: the first row would be one platform, the second another, the third Windows, and so on, so every rack has a mix, and you won't lose all of one platform if you lose a rack. So that's pretty much it; I don't think I have any more data center pictures on this one. Any questions?

(Why did you use these and not the Xserve machines? Apple makes rack servers, or at least they used to.) So Jason's question is: why did you use these and not the Xserves? I think the primary reason was cost. The minis are $499, $599 at most, not sure, but yeah, that's a major factor; Xserves are nowhere close to that. (So you're saying you can get a cluster of minis and they actually give you better performance than Xserves?) Each of the minis is probably doing one thing at a specific point in time: it opens up Firefox, runs automated tests to see what the performance is like, and that's it. You could totally max out the cores to run whatever you need, quit, and restart again for the next build with something else. So no, I don't think you really need that kind of performance; I don't think it really makes an impact.

From here, I'll show you some trending stuff. That, for example, would be the load balancer cluster, and as you can see, one, two, three, four, five, six: all of these machines are blade servers with a one-gig network interface on them, whereas 07 is a beast. It's got four 10-gig NICs, and it's basically doing pretty much nothing; I think 08 is the same. We ordered new servers to replace all of these, so the blade servers with one-gig NICs are the ones seeing issues at this point; they're at a higher-than-normal load average. But yeah, that's pretty much the reason. Let's see what add-ons looks like. (You have some custom database checks that you could probably show?) I don't think it's on this one; it's probably on the San Jose cluster, let me see if I have that. That one came online only mid this year or so. And that's pretty much the amount of memory you throw at this. Any other questions? (What is the last graph?) What he asked about: the last graph was memory usage across all the AMO web heads, the add-ons cluster, so total memory usage across about 30 machines. That early flat part is because July and August were when these machines were installed; the cluster was commissioned in September, and you can see that jump, and then the traffic went up, which is why you see the rest. So this is the very initial install, the no-traffic testing phase, and this is the all-production-traffic phase. And I think those two spikes there were pretty much attacks, as far as I can tell, unless AMO has a bug that I'm not aware of.
Let's look at something a little smaller, say last month, and see if it makes more sense. If you notice, you'll see correlations: Mondays, and people coming back from holidays, and there are spikes as they open up Firefox and it hits add-ons to check for new versions of their add-ons and such. So you can actually see that as well.

Just to give you a look: that's crash-stats, the crash stats website, and for the last seven days, these are the number of crashes per 100 daily active users. And if you ever want to see your own individual crashes, you can go to about:crashes and select a crash report, which then looks up the corresponding report and opens it. So that's what your actual crash report looks like. This is all public information, although if you've submitted identifying information, like your email, that's not public here; only the developers with specific approved access can actually see your email, so they can contact you. So this is the whole stack trace, and there's the raw dump: the modules that were involved and what exactly happened. And if your crash is common enough (as you can see here: last crash, more than three months ago, and 1.9 days since this version was installed), over here you'll actually see a link to bugs that are similar. This one doesn't have any; if it were reported along with other crashes, you'd probably see them there.

(The question is: what do you do with all the crashes?) Basically, the ones that are critical, in terms of crashing hundreds of browsers, developers need to look at, especially if it's on Aurora. We now have three channels, I don't know if you're aware of that. We've got Nightly, which is just pure development stuff; then stuff from Nightly goes into something called Aurora, which is what I'm running, this is basically Aurora. And if there are problems with stuff on Aurora, they will definitely look at it. Developers look at it; they have triage sessions. This is a very important tool for developers to see where the browser is crashing, and it's very easy to spot trends: if they make a fix and the crashes suddenly spike, they go look at it. And trust me, as much as people believe nothing happens, people actually look at it. Not all the crashes, because there are just way too many of them, but the people who fix issues do look at them. So yeah, that's pretty much it for crash-stats.

(What about Fennec? Do you test it on the same machines?)
No, Fennec gets tested on devices. Unfortunately... actually, I do have a picture of that as well, somewhere on the laptop. Initially, when we built Fennec, it used to run on the N810, and we used to have an array of N810 devices that would automatically keep running Fennec and performing stuff, in our Mountain View office. We also have, we call it the Faraday cage, a room which has all the mobile devices; I don't think we still use the 810s. Let me try to pull up the N810 picture; I'm sure it's on my laptop, so I can pull that out. That was the very first N810 test rack we had. It's really funny seeing all of them reboot, start up Fennec, run a bunch of tests, shut down, reboot, run stuff again. But this was, as you can see, April 2009. So there's a bunch of stuff similar to this; all the mobile stuff sits on IKEA racks inside that room. Unfortunately, I don't think I've taken a picture of that. So that's pretty much how it works: these things get tested on devices, not on the minis; it's only the desktop stuff that gets tested on the Mac minis.

(That doesn't look like a Faraday cage.) Well, no, that's not the Faraday cage; that was made before the Faraday cage. Let me actually Google it; I'm sure someone has put up a picture. That is actually the sign outside the thing, so not the inside of it, but that's definitely the outside. And it's a pretty heavy door, and you can run out of air inside, so there's a big emergency button on the other side that says: if you're stuck, hit the button. It will basically blow out the hinges, so the door just falls down and you can walk out. If I were allowed to show what's inside, I would have happily shown it to you, but let me see if I can find a better picture.

(How often do you take backups?) How often do we take backups: it's daily incrementals, mostly, and weekly fulls. To tape, I think, it's once a week. Anything else? No more questions for me?

(I was awake, and I want a sticker.) The rest of you weren't listening; it's a nice dark place, it's late at night, and my voice is not that pleasing. (It's quite soothing.) Yes. Thanks, all.