So, thank you for coming. For those who don't know, this is being taped and webcast, so anything you say can and will be used against you in court. Just be aware of that. So, welcome. We're going to start: Jim Gettys is going to give a talk about embedded systems. I met Jim at the RSA conference this year; no, at the IETF last summer. That's right, we actually sat down here last fall, sometime in the past, and talked a lot about these issues, which is really why I wanted to bring him here. He's actually been around for way longer than that. He sent me his bio; I'll just mention some highlights. Formerly of Bell Labs, VP of software for the One Laptop per Child project, one of the developers of the X Window System when he was at MIT, worked at the W3C, editor of HTTP/1.1, which was a long time ago, and he's been working on systems and embedded systems, I think, since they were invented. More recently, which is how I got to know him, he's been looking at routers and buffers: not the computers you're using, but the computers all around the computers you're using. And that's what he's going to talk about today. I'm excited about this. Okay. Thanks, Bruce. One other thing I worked on was sort of the direct ancestor of these things we carry around; some of you may remember the iPAQ handhelds way back when. That's where I got my feet wet with embedded systems. In fact, internally they're not much different from what's in your home router today in terms of the amount of memory and so on. I have a bunch of pointers here, and I need to gauge the audience. The first two of these I instigated deliberately. The article Bruce wrote that came out in January in Wired, which he did a wonderful job on, is called "The Internet of Things Is Wildly Insecure," and the title goes on and on from there.
Also Dan Geer; some of you who deal with security know Dan, an old friend from Project Athena days. He also wrote about this problem in May, in the Lawfare blog, and I recommend that those of you interested in this problem go look at it. There's also a recent piece which is quite amusing, though unfortunately the real nightmares are much worse: "Nightmare on Connected Home Street," by Mat Honan, just this month. How many of you are familiar with what our friends at the NSA were doing in 2008 in terms of doing interesting things to devices of all sorts, including routers? Remember, that's 2008. There are many other people out there besides them, and they've had a long time since then. And then there's a final pointer, a paper which, when I gave a presentation at the MIT security seminar, I was quite surprised to find nobody in the audience knew. It's called "Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities," and I'll talk more specifically about it, because I think its results are pretty profound and need to affect how we think about engineering software in the long term. There are links in the slides, in the PDFs, which I believe are online. We're used to throwing computers away. Your phone often breaks before it really stops being useful. Your laptop maybe lasts slightly longer. Your desktops tend to last longer still, but we still basically throw them away. We've learned, through very great pain over the last several decades, that we have to keep them up to date. But now we're building these very long-lived systems, everything from our houses to automobiles to the network inside your house, and we're embedding these computers into them. And in the installation or replacement cost of these, the labor often dominates by far.
The box itself might be a $10, $20, $50 box, right? But getting someone capable of plugging it in and making sure it works may involve a truck roll and/or service calls that cost hundreds of dollars. So it's really a fundamentally different game. These things are no longer easily replaceable; they're built into our environment. And for some of them, the potential lifetimes are decades. I have two Nest thermostats in my house. What am I supposed to do if and when Google stops issuing updates for them? I installed new heating equipment about five years ago that probably has a 30-year lifetime; I had to rewire things to get it to plug in. These timescales are long relative to human organizations. That is fundamental here. It means you can't presume that what you'll need sometime later will even be possible to do anymore. The engineering group may have dissolved; the company may not even exist. What do you do? And yet we've presumed we can just forget about these boxes. Is this safe? How many of you know what the acronym SCADA means? Some of you know that certain industries are going through a huge disaster around that right now. This is a much bigger, much more widespread mess; that's just one instance of a much more generic problem. I just about hugged Sandy Clark and Matt Blaze when they told me about this paper. It's called "Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities." It was published in 2010 but doesn't seem to have gotten much coverage. This is figure one out of that paper. It seems solid as far as I can tell; certainly some of the people involved are very well respected in the security area, and Bruce can speak to that better than I can. The left curve is the classic Mythical Man-Month curve of how we expect to discover general-purpose bugs.
The reason it tends to go back up after a bit is that you've started to ship product, so now there are 10 or 100 or 10,000 times as many people poking at it. You see some additional bugs, and at some point you've discovered most of what's out there. However, vulnerabilities don't work this way. They studied, I think, three or four hundred different software packages, both open and closed source, looking at when vulnerabilities are discovered. It turns out there's a honeymoon period during which people typically don't discover anything evil. But once you start finding one, the rate of discovery unfortunately increases. This leaves me with some interesting takeaways: for those of us who read The Mythical Man-Month, we've missed something entirely fundamental about software engineering. There's a really cruel master here. If this is true, and I believe it is, then you can't leave software and devices unmaintained; updates are fundamental for the life of these things. These are very complex systems: a five- or ten-dollar part can have an entire operating system and user space embedded on top of it, millions of lines of code. We don't know how to make bug-free software; that's just fundamentally true. So for anything that is network-connected, or indirectly connected via people dropping USB sticks in parking lots, you've got to have secure update for the life of the device. And we don't, ubiquitously. We do for our end systems, though of course Microsoft would like to stop having to maintain Windows XP; they actually still maintain it, for embedded use only, they claim. But this has some very interesting consequences.
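The "secure update" requirement can be sketched in a few lines. This is a simplified, hypothetical illustration, not any vendor's actual mechanism: real firmware-update schemes use asymmetric signatures, where the device holds only a public key. An HMAC with a shared key stands in here so the sketch is self-contained with the standard library; the key and image contents are made up.

```python
import hashlib
import hmac

VENDOR_KEY = b"demo-key-not-a-real-secret"  # stand-in for real key material

def sign_image(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """What the vendor's release process would attach to a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_apply(image: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """What the device would do before flashing: refuse anything unsigned."""
    if not hmac.compare_digest(sign_image(image, key), tag):
        return False  # tampered or corrupt image: do not flash
    # ... here a real bootloader would write the image to flash ...
    return True

firmware = b"\x7fELF...new kernel and root filesystem..."
good_tag = sign_image(firmware)
assert verify_and_apply(firmware, good_tag)             # genuine image accepted
assert not verify_and_apply(firmware + b"!", good_tag)  # modified image rejected
```

The point of the sketch is the shape of the check, not the crypto: the device must be able to tell a genuine update from anything else, for its entire service life.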
If you're somebody building systems like this, you'd better be selecting components that can be maintained, and if you're buying stuff, you want to worry about whether these products can be maintained; that brings up a whole other kettle of fish. And the owner has got to have ultimate control. You've got to be able to fix these things even when the network's busted. Many people are locking these systems down. I think Google got it right with Android, where at least the Nexus devices can be unlocked, and then you can do what you will with them. There are two reasons for this. In an emergency, you need to be able to take action yourself, and you need a community of people maintaining and fixing code around these devices. If new code can't easily be tested on a device, you don't get a community of alternative software you can run on it. So despite the fact that I'm saying you've got to have secure firmware and the like, you also have to be able to unlock the things. So how long are these things going to be in the ecosystem? Home routers often sit in people's houses for seven, ten years or more; there's often no reason to touch them. Us geeks tend to throw them away quickly, right? But your grandmother doesn't. Now, this also brings up an issue of trust. Who do you trust? Think about this from the humanity side, and think about the TAO catalog: people deliberately putting things into these devices. Why should the Chinese trust firmware that comes from the United States, and vice versa? Why should I trust my ISP's firmware? They don't understand security either. British Telecom is a great example of this, and Bruce is no longer at BT. As some of you may know, they put a back door into each and every home router that they gave to each and every one of their customers.
That means if you break their management network, you've broken all of the routers. Well, there are so many other ways to get in that it almost doesn't matter; they just made it particularly easy. But over these long timescales, community maintenance might possibly succeed. The X Window System, which Ron and I worked on 30 years ago (we celebrated the 30th anniversary last week) has for the last 25 years been upward compatible and maintained. So there are software artifacts that have been maintained over human-lifetime scales. But as far as I know, they're pretty much all open-source artifacts. There are a very small number of non-open-source systems like that out there; you could probably count them on the fingers of one hand, maybe. But here's a problem: binary blobs may leave you helpless and vulnerable forever. You can have vulnerabilities in those too, and you don't have control of those bits. You can't rebuild them. Boy, you've got a problem. I've focused mostly on home routers and modems, but most of what I'm saying is true for all of these embedded systems. Most of them are now Linux boxes, so we have close to a monoculture there. I focused on home routers and modems because I'm worried about fixing bufferbloat, so I was interested in understanding how long it would take to get fixes for bufferbloat into your hands. About three years ago I did a thought experiment: from the time we get a fix into Linux, how long until you can go buy it at Amazon or Best Buy or wherever? And I was pretty horrified to discover that these devices start out with four-year-old code on day one, on a brand-new device.
A year, a year and a half, might be what you'd consider acceptable, but no, it's much older than that. In fact, on day one these devices are almost never on the last stable maintained kernel that kernel.org publishes. And then they just sit and rot in your house thereafter; there's no maintenance on that or any of the other pieces. What's more, if somebody patches them, the patches almost certainly can't be merged upstream, because the code is so far out of date that they won't apply to the new version. That kind of downstream maintenance work is why you pay Red Hat and the like big bucks for enterprise systems. Most embedded devices are no different from routers, except they're not on your path to the rest of the world, which is the most critical thing, right? More and more of us are depending on our internet service. In fact, it's only a matter of when the conventional POTS copper system gets turned off; its economic model is so far gone it's not funny. They'll have to unplug it because they can't afford to keep it running. That means your phone service is coming in over that internet service too. And/or over these guys, our phones. Those have much the same problems, but we replace them every n years. They have binary blobs too, despite being either Linux-based, i.e. Android, or iOS, usually in the communications drivers. So they're a similar problem: shorter-lifetime hardware, but this will bite someone. Okay. So there have been a number of wake-up calls, which have mostly gone ignored. A number of researchers, at DEF CON and elsewhere, have shown single vulnerabilities that affected things like half of the home routers they happened to test. DNSChanger, which some of you may remember, was also attacking home routers as well as hosts, usually just because the password security in these things is often non-existent.
There are 4.5 million DSL routers in Brazil on which people are doing financial fraud. Not in this country, but the scale is beginning to get interesting. This spring there was an interesting worm called TheMoon (there should be a space in the name, I guess) that the SANS folks looked into. It's mostly benign, except that it doesn't leave you with a warm and fuzzy feeling: it's using home routers to hunt for other vulnerable home routers, and it's mostly Linksys routers. You're probably familiar with Heartbleed; if you aren't, you should be. It's really a matter of when, rather than if, we have a really big problem. In fact, we probably already have one and don't know about it yet; somebody's just waiting. So there's this recent amusing "Nightmare on Connected Home Street," but it's really only a minor bad dream from my point of view. The real nightmare, just as likely as what that author describes, is that things just stop working one day. Not necessarily just yours: everybody nearby, or at whatever scale you care to name. You end up with no internet at all. You can't access bits; even if fixes were available or became available, you'd have no way of getting them. You might even need to replace the devices. I've lived this nightmare, or something close to it. I've looked over the edge into the abyss. In our particular case, and I'm not going to say much more than I say in the slide, the root cause was a binary blob in otherwise non-upgradable devices that were all over the place. In fact, I still think, even after the TAO catalog was published, that what we saw was a bug. But it doesn't matter whether it's an implant or malware or a bug; it's still a disaster. I've already looked into that abyss. We don't want to go there, guys. Unfortunately, there are people with motives who wouldn't mind it.
There are a lot of historical causes for this, and none of them are great. The first flash file system for Linux happened to elide the user and group IDs. This means that on most of these devices everything is running as root, which radically increases the number of vulnerabilities that are serious. It's a simple matter of configuration to fix, but somebody's got to do it in the code bases from which these vendors derive their code. At the time the iPAQ came along, in the year 2000, we didn't have much flash or RAM; we had 16 megabytes of flash. Most of these devices today still have only four or eight megabytes of flash. That was way too small for the conventional Linux distributions from which you would get an update stream, so they've usually been using some embedded distribution of Linux with stuff on top. Occasionally BSD, but it's almost all Linux. At some point the chip costs will cross over: the much bigger flash chips are made at much higher volumes than the old ones, so there won't be any savings on the bill of materials. Then we have the economic disconnect, which I'll talk more about, between who bears the costs and who gets the benefit of the work. The BOM costs drove the bootloader into unprotected flash, typically. Then we've got the binary blob disaster, which is my next topic. You have to understand how this ecosystem works. You've got a bunch of very big companies building silicon, who shop their silicon around. They typically freeze on some random snapshot of the code while developing that silicon, debug a pretty bad device driver into existence, and then shop their sample design around to the so-called original design manufacturers, or ODMs. Those drivers are often a binary blob in certain cases: communications devices are one, GPUs are another, sometimes others.
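On the everything-runs-as-root problem mentioned above: the conventional fix is for each service on the device to do its one privileged operation and then irreversibly drop to an unprivileged user. A minimal sketch, assuming a generic Unix-like service; the uid/gid 65534 ("nobody") and the injectable setters are illustrative, the latter only so the sketch can run without root:

```python
import os

def drop_privileges(uid: int, gid: int,
                    setuid=os.setuid, setgid=os.setgid) -> None:
    """Irreversibly drop root after privileged setup (e.g. binding port 80).
    Order matters: group first, because once the uid is non-zero the process
    no longer has permission to change its gid."""
    setgid(gid)
    setuid(uid)

# Exercised with recording stand-ins so this runs unprivileged:
calls = []
drop_privileges(65534, 65534,
                setuid=lambda u: calls.append(("setuid", u)),
                setgid=lambda g: calls.append(("setgid", g)))
assert calls == [("setgid", 65534), ("setuid", 65534)]
```

None of this is exotic; as the talk says, it's configuration work that simply was never done in the code bases these vendors copy from.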
Binary blobs don't happen on servers, for lots of reasons, but in these markets they do. Sometimes these chips and modules have their own processor, which means they have their own code: another binary blob. It's turtles all the way down. One of the hardest problems we had to debug at OLPC was in fact the bootloader for the Marvell wireless module, which had an ARM9 processor, and there was a bug in it. About one in ten upgrades was failing, so we couldn't even upgrade the module until we fixed it. That one cost me a lot of sleep, and it only got fixed because someone was able to look over somebody else's shoulder and find the line of code. After six months of struggling, it was fixed in five minutes once the code was visible. So the ODMs take this pile of Linux and applications and ship it with these binary blobs, and nobody has an incentive for updates. The silicon vendor has made their sale: when they got the design win, they locked that ODM into their chip. So their incentive is to do nothing; anything more just costs them money. So there's no update stream, and the ecosystem is frozen. The ODMs can't do anything about it; they're stuck with the cheapest chips, which come with these binary blobs attached. So you don't see a culture of "yes, we will upgrade this thing over time," the way we've learned, the hard way, that we have to with conventional machines. And these poor ODMs have no real financial incentive to update either, because they've made their money once you've bought the device. You having problems is another potential sale; it might affect whom you buy from next time, but that's it. And remember, the vulnerabilities occur much, much later. So the silicon provider doesn't see the vulnerabilities, and the ODMs don't see the vulnerabilities.
The ODMs might give you new firmware for of order a year, usually just to fix crash bugs, because they don't want to take the service calls and the returns from upset customers. They really don't have the security expertise anyway; they aren't able to develop it, and they don't pool their effort. Whatever fixes they do make are against code too old to possibly be integrated into the upstream maintained versions of the software. And most of these devices still require manual updates by end users, which means updates don't happen unless the thing crashes. So we have a fundamental problem here. The consumers of these devices are you, or your ISP on your behalf. The ISPs sometimes buy these boxes, sometimes not, and there's no real competence in this kind of security there either, certainly not in how embedded systems get put together. You don't have viable choice out there, because there's at least one binary blob in nearly every one of these devices currently on the market. When we were working on bufferbloat, the number of different routers we could run our code on was very small, because we knew we needed to be able to change code nearly anywhere, and we deliberately selected a device for which, with one slight exception, we had no binary blobs to contend with. And you can't differentiate: there's nothing seriously differentiable except bandwidth in the market right now. So what do you do? It's entirely a race to the bottom: how cheaply can you make something work? These things have no place to stand for security. Even on devices that supposedly lock the bootloader, it turns out the flash chips can usually be unlocked if you know which register to go poke, and you can usually do that if you just get root, which is easy to do.
Doing it properly depends on whether you're willing to trust the flash chips, and a friend of mine who looked into this apparently couldn't find any flash chips that actually had a one-way lock on flash sectors. The cost could be zero; it's 28 cents to do it right if you have to do it explicitly, with a D flip-flop and a separate serial flash chip. We have, courtesy of OLPC, a bootloader that actually does all the real crypto and does secure update, and it even does it in multicast fashion. I once upgraded, with one other person helping me, over 100 OLPCs over an 802.11g channel in an hour, at a gigabyte each; that was 100 gigabytes updated. So what have we got to do? We have to forklift-upgrade the edge of the internet. These devices are all bad; the only question is how bad. I'm talking about the commercial devices. OpenWrt, which is an open-source router distribution, is actually pretty secure and pretty up to date. It doesn't yet have everything we need, but it's a starting point. We have to secure the bootloader. There's a bunch of existing technology that's straightforward to apply in Linux. This is not a highly technical audience, so I'll stay away from the details, but none of it is being used, and the separate user and group IDs, which some of you will understand, aren't there in most of these devices. We could have a disaster in the meanwhile, but if we don't change our business and software-engineering practices, it won't matter: we can fix it once and we'll just have to fix it again. This is a really hard problem. This ecosystem, and how we think about building software over the long term, needs to change. That's what the honeymoon-effect paper tells me, at least. We have a tragedy-of-the-commons problem going on here. There's lots of money (you pay every month for your internet service), but the money doesn't go to the right place, nor do those intermediaries even know where the software comes from.
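A quick sanity check on the OLPC multicast-upgrade anecdote above: multicast sends the image over the air once, no matter how many laptops are listening. The effective throughput figure here is an assumption on my part (802.11g is 54 Mbit/s nominal, far less in practice), not a number from the talk:

```python
# Back-of-the-envelope: one 1 GiB image, 100 devices, one radio channel.
image_bits = 8 * 1024**3      # 1 GiB image, in bits
throughput = 20e6             # assumed effective 802.11g rate, bits/second
devices = 100

multicast_seconds = image_bits / throughput    # image transmitted once
unicast_seconds = devices * multicast_seconds  # versus once per device

print(round(multicast_seconds / 60), "min multicast")   # minutes, not hours
print(round(unicast_seconds / 3600), "h serial unicast")
```

With these assumptions the multicast transfer takes on the order of seven minutes, comfortably inside the hour reported, while serial unicast of the same 100 gigabytes would take roughly half a day.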
How do we get a funding model and the organizations in place? We have some decent upstream stuff like OpenWrt, but there's a gulf in the middle, and that gulf has been caused by the binary blob disaster. How do we avoid monoculture? Linux has become close to a monoculture for embedded systems. This is not safe. I'm a Linux bigot; I like it very much, I like its development style, but it doesn't help me sleep at night to know that so many of these embedded devices are all running the same code. How do we change this proprietary-information mindset and deal with the binary blob problem? Some very big clue bats have to be applied. The blob vendors are usually, technically, already in copyright violation, but it's not a battle the Linux community has been willing to fight, and legal fights are probably not the way to get this solved anyway. How do we get this code out? If the code for a piece of silicon with 100 million units all over the world isn't available ten years from now, when you need to rebuild it, what are you going to do? And whom do you trust to do it? There are lots of reasons you may not be willing to trust the manufacturer or the ISP. You don't want a monoculture even in the builds of this stuff. Those of you who are computer scientists and have read "Reflections on Trusting Trust," Ken Thompson's Turing Award lecture, know we probably ought to have multiple builds from multiple locations. How do we do this? How do we organize ourselves worldwide to face this problem? We have not been thinking about software in the right way. I'm quite surprised to find myself now closer to Richard Stallman than I could possibly have imagined. The irony is that the MIT license was defined for the X Window System, and it's sort of the prototypical do-what-you-like-with-this-code license. Now, I'm not sure this is really properly enforceable by copyright means, but Richard also missed the fact that our lives now depend on this software.
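The "multiple builds from multiple locations" idea reduces, in practice, to reproducible builds: independent parties compile the same source and compare digests, so no single compromised toolchain can slip a trojan in unnoticed. A minimal sketch of the comparison step; the build outputs here are stand-in byte strings, not real firmware:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content hash of a build output; identical bits give identical hashes."""
    return hashlib.sha256(artifact).hexdigest()

# Three hypothetical independent rebuilds of the "same" source tree:
build_site_a = b"firmware image, version 1.2"
build_site_b = b"firmware image, version 1.2"   # bit-identical rebuild
build_site_c = b"firmware image, version 1.2."  # one byte off somewhere

assert digest(build_site_a) == digest(build_site_b)  # consensus: publish
assert digest(build_site_a) != digest(build_site_c)  # outlier: investigate
```

Getting real toolchains to produce bit-identical output (no embedded timestamps, paths, or randomness) is the hard part; the comparison itself is this simple.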
It isn't just liberty and the pursuit of happiness, which is where Richard is coming from. This is life stuff. If your home network stops working, we're very close to having human lives at stake; some people are probably already in a position where they depend on it continuing to function. We will certainly be there in the next five or ten years, where people will die when it fails. So, you know, I'm quite surprised to be close to Richard these days. Is there hope? I don't want to terrify you too much. The Linux Foundation has various efforts in embedded systems of various sorts, as does the Debian project, which has a thing called Emdebian. We're very fond of OpenWrt, which is a Linux distribution for more than 150 different models of home routers. There are various other community efforts, too numerous to list, but these are all currently limited, in what they can run on and in how much synergy they can have among themselves, by these stupid binary blobs. And the blobs come back to bite people. You may have noticed that the first of the Google Nexus phones went out of support; I believe the cause was the graphics driver. The chip vendor decided they weren't going to be in the cell-phone business and basically blew up the development group that was maintaining the driver. So, a small amount of egg on Google's face from that. This is a generic problem. There are some ISPs who are aware of just how badly this market serves them. They get the phone calls when things don't work well. It costs them 10 or 15 bucks just to pick up the phone, and if they have to roll a truck it must be more than 100 bucks, directly out of their bottom line, when these things malfunction. So OpenWrt is the most interesting of the open-source router projects, from my perspective at least, because they try to keep up to date.
That doesn't mean there aren't lots of things that still need to be done, but this is seriously interesting software that all the community networking groups, in Europe at least, are using, and it's being used commercially by a number of vendors. But they're typically having to freeze on old versions, again because of the binary blobs, if they're OpenWrt-based; they typically can't maintain and upgrade it. So it's roughly four years ahead of what you can see commercially. The home router I run has real IPv6 support, support for modern networking, DNSSEC, and other good stuff. But the soonest you can hope for something like that commercially is about four years out. Not good. There's a bunch of policy questions. I'm mostly going to let you read through these and leave them for discussion, but I want to provoke people to think about what this means in terms of policy, because I know that's what the Berkman Center is about in large part. I believe this means we should really be thinking about how we develop software over the long haul, remembering that we now have real issues with Windows XP as an example on the commercial side. I'll point out that code without a community is worthless. This is why, when arguing with Dan Geer, I think his ideas about code escrow, or that any abandoned code should go open, don't go far enough: at that point you don't have a community able to maintain the code. What you've got is abandonware, with nobody around who even understands how to build it anymore. That is not a good situation. So I don't believe in code escrow and the like. How do you even know you got it all, or which version you've got? Again, this is because a lot of software has a lifetime much longer than human organizations. And monocultures are really dangerous.
As anybody who has studied biology is aware (Dan Geer has a biology background). And this is true for a pile of things: Linux is just the operating-system kernel, and we have a bunch of stuff on top of it. It turns out OpenSSL, which we're all very aware of after Heartbleed, actually had multiple alternatives, and it was painful enough as only a partial monoculture. Something can be a problem even if it's just a good fraction of what's deployed. How do we encourage alternatives to Linux? The poor BSD guys have been fighting the everything-must-be-completely-open fight for years, and often they can't run on the hardware because of a binary blob someplace. So how are they ever going to achieve the critical mass to compete with Linux? Really tough. Think about aftermarket upgrades. Should the fact that you've bought something from a vendor who may not even exist in five or ten years (who knows if Tesla will be here in 10 years when you buy one of their cars?) mean you can't update that wonderful computer and user interface built into the car, which is even capable of adjusting the car's ride height? We're all thinking about shiny and new, but let's think a little further downstream. Should that whole computer be replaceable? By whom? Maybe we have to take the position that the internal interfaces should be open, so there can be competition for the computer in that car, so that ten years out you can buy a replacement, because that's better than being stuck with the old hardware. How do we solve this tragedy-of-the-commons problem that Heartbleed made obvious, with OpenSSL just one of a pile of projects out there that are much more important than people have appreciated? How do I get proper support for the OpenWrt guys? I want a somewhat secure router today, and that's where I'll point you. And they've been starving for decades. You know, not decades.
Their projects are only of order ten years old. But the point is that even the vendors don't throw money at them. They don't feed the penguins; they have not been supporting that project properly. Linux on servers has organized itself so that its ecosystem exists, but in this embedded space it does not. The vendors don't support it; often they're working on very tight margins while other people, like the ISPs, get most of the revenue. How do we make this happen? Critical infrastructure is any hardware or software that's widely used in large quantity. You can't know in advance what's going to become critical infrastructure. You can't wait and say, "Oh, that one actually shipped 100 million units; now I care." If you can't regenerate that blob, you're too late. What role should government have in pushing this market in a safer direction, without stifling the market, because that's not good either? So I'll mostly close with the meme to spread: friends don't let friends run factory firmware. And remember, unfortunately you probably can't change your modem to open source, but at least the router you can. It's got that radio; it can listen to anything nearby, and it's between you and the rest of the world. Wonderful, right? So that's pretty much all I'd like to say. It's really now time for questions. I want to ask the first question. Let's say you're giving this talk to a company like Symantec, who says: this is fantastic, it's an opportunity, we can sell something to every home. What can a third party do, in general, about this problem? This is tough. There's very little margin in the hardware; these are all commodity components being made in the millions. But I'm assuming Symantec can sell a $100 security thing that every home needs, that they'll install on your network, and it'll magically fix the problem. Is there something a third party can do without anybody else's help or approval?
How do they get past your router? Between you and the rest of the world, that's your man in the middle, and you have the problem until you've replaced that man in the middle. Remember, you've got to replace these things; it's at least one forklift upgrade of the thing on your network path to get out of this mess. Think of it in terms of your network path. That's why I started worrying about the routers. And in terms of the ability to manage things: there are 14 different devices in my house at this point, and if my network doesn't work, I can't update any of them. So the reality is that the network connection matters the most, for lots of reasons. But the general problems are endemic in this market.

Do you want to field the questions? All right, David.

The carbon monoxide detector in my apartment has an explicit seven-year lifespan, and after those seven years are up it not only stops working but keeps making annoying beeping noises so you know it needs to be replaced, and you have to buy another one. Do you think this is a possible model?

Well, that's certainly one of Dan Geer's opinions, but I don't think it's always feasible. In particular, making a device stop working entirely may make your network stop working. You can't do that for anything that's actually involved in the network itself, because then you've made it impossible to update things. Then you beg a whole lot of other security issues, like: how does it beep? And the vendor gets new money out of it, so they have an incentive. Well, these things are now getting so cheap that they're getting built into physical devices, like probably your furnace, and so on; who knows. Will those be available in 10 or 20 years? The question is: what's the lifetime of these things?
If you want there to be an aftermarket, and to protect yourself against a vendor going out of business, it needs to be possible for other people to build the things that get built into those devices later. So how do we enable an aftermarket? That's why I had that up there as an issue.

I was thinking of Kickstarter, or some form of community organizing: let's build a router that can actually be built, with no opaque binary blobs, from the start. I think you could sell that for a hundred bucks if you could get actual buyers.

Here's the problem: we've been considering doing exactly that. Dave Täht runs the CeroWrt project; an advanced build of OpenWrt is a way to think about what CeroWrt is. We've been doing the bufferbloat work there, among other things, and we've been looking for the next platform. We found one that was satisfactorily open for the previous generation, the WNDR3800. The problem is that all the chips for 802.11ac have binary blobs in them, at a minimum in firmware that they run, but often in the device drivers as well. So even identifying one suitable platform is a problem right now. The last generation, the Atheros ath9k series of chips, had only a tiny binary blob, whose source was available to the Linux device driver maintainer, and the driver itself was open. So we knew the code was there, even though there was still a minor blob that couldn't be distributed in source form. These silicon vendors have caused a freeze in the ecosystem, to the point where it's really tough right now even to identify hardware on which to take that step for 802.11ac, which is coming into the market now.

A question about redundancy: is redundancy a possibility, sourcing components from more than one vendor? It always is, but the redundancy is sometimes much harder to come by than you naively think. Broadcom, for example, has a near monopoly, 70 to 80 percent I'm told, of DSL chips.
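A side note on identifying blob-free hardware: whether a firmware image or driver tree actually contains opaque binary content can be roughly estimated mechanically. Here is a minimal sketch (my illustration, not a CeroWrt tool; the 7.5-bit threshold and the 64 KiB sample size are assumptions) that flags high-entropy files, since compiled or compressed blobs tend toward near-random byte statistics while source text sits much lower:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte, from 0.0 (uniform run) to 8.0 (random)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def looks_like_blob(path: str, threshold: float = 7.5,
                    sample: int = 64 * 1024) -> bool:
    """Heuristic: sample the head of the file, compare against the threshold."""
    with open(path, "rb") as f:
        return shannon_entropy(f.read(sample)) > threshold
```

In practice C source measures around 4 to 5 bits per byte while a packed blob sits near 8, so even this crude check separates the two, though compressed archives and media files will trip it as well.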
So most of the DSL routers out there will have a particular device driver, and finding something else isn't obvious from the outside. How can you get that redundancy? Right? You can have hidden monocultures: you didn't know that monoculture, or near monoculture, existed. It's close enough that from a biological point of view it is a monoculture, or near to one.

I think you're overlooking the role of the professions by jumping straight from vendors to communities. The perspective from healthcare is that the profession that's supposed to be watching over this drift from open source to binary blobs controlled by vendors is asleep at the switch. And though your talk isn't primarily about things like the embedded medical devices I've talked to Richard about, which he's worried about, I think the same thing could apply to the routers and to the more fundamental issue. We traditionally used to have a professional class to mediate these community issues.

I understand. I started talking with people like Vint Cerf because I believe very much that there's an educational component to the profession's role in these problems. We've been asleep at the switch, in essence. And so part of why I'm giving this talk, and will be giving another one on Thursday with a slightly different bent, is to begin to raise awareness: if you're designing one of these things, try very hard to avoid those blobs. Not only are they your biggest schedule risks, but downstream they're the disaster no company wants to face, where you need to recall them all, or things like that.

You said, I think, that retrospectively the MIT license was a mistake.

I did not say that. And I'm not sure the issue is enforceable by the GPL anyway; history shows it hasn't succeeded in this area. I understand Richard's point of view on that. I'm saying there are more fundamental reasons than the license why you should care a lot about there being source available for these things. Okay.
There are also consequences of the GPL around patents and so on that may be unacceptable and make it harder for people to adopt it. I'm not saying the MIT license was a mistake; I'm saying the license is not the really big reason for the code being open in the first place. And Richard didn't realize it either: this whole thing of vulnerabilities occurring way downstream. That piece of information is really a new result as far as I'm concerned, one that seems to have slipped by since its publication, but it's an important one.

I want to draw the distinction between social and legal issues versus technical issues, and I'm wondering whether there's a purely technical solution or whether you need something based either on government regulation or on standards organizations. One idea: we currently have benchmarks for performance in databases, TPC and so on. Should we also have some kind of benchmark standards for maintainability and security? Would that help, along with some role for standards organizations in enforcing some degree of openness?

This is a very multifaceted problem, so the answer would be: of course. There are lots of roles for that. Knowing that the insides of a device, its source code, will be available to you indefinitely into the future is something I'd like to know when I go buy the physical box, but that needs a certification mark. That's done by various kinds of standards organizations and/or government. So yes, I think there's a role for that sort of thing. Sometimes it doesn't work out all for the positive, but at the moment we're just in a bad place. And who is liable for all this is an interesting question; I'm sure there are things that can be learned there, but it's not an area I've dabbled in, so I'm not sure I'm the best person to answer your question.
And see, we also don't want to stifle innovation as a piece of this. The really interesting observation, from my point of view, is that we don't know in advance what's going to become critical. What's really happening here is that it's critical only when it reaches high scale. Nothing bad happens if a tenth of one percent, or a hundredth of one percent, of something goes bad; when we have the same vulnerability all over the place, that's when it's a big deal. And we can't predict in advance what that will be. So I'm not sure that a lot of the typical reactions, the ways we have thought about quality, necessarily apply to that situation. But it does imply responsibilities in how we engineer these systems, so that downstream, for the things that succeed, we are able to maintain them.

Two comments, and a wish. The first comment has to do with the fact that we will see an asymmetry: very heavily maintained, very expensive systems on one side, while price pressure will push out systems that are not maintained at all on the other. The second is that what I read in your talk is that the community needs to be active and support these systems; people have to self-organize to support their property, basically. Those are the two comments, and I don't see any easy way of getting away from that. And very much in the spirit of the comment about setting standards for having the code with you, my wish would be: may the source be with you. Thank you.

Yeah, I just want to push back a little bit on the premise that these things are landing in the home and rotting. The Nest is something that is actually connected and, as far as I know, is getting updated. So I think we are moving toward a standard; some of these devices are way better than others at this. But the point I want to make: I just took the audio receiver that I bought in college to the dump. Okay? That's 30 years or more of a lifetime.
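An aside to make the earlier scale point concrete (the numbers are my illustration, not figures from the talk): the same failure rate that is a non-event for a niche product becomes a mass incident once a device ships in the hundred millions.

```python
# The same defect rate, a hundredth of one percent, applied across
# installed bases of different sizes.
rate = 0.0001  # one hundredth of one percent

for installed_base in (10_000, 1_000_000, 100_000_000):
    affected = round(installed_base * rate)
    print(f"{installed_base:>11,} units shipped -> {affected:>6,} affected")
```

At a ten-thousand-unit run this rate touches a single device; at a hundred million units it touches ten thousand, and only then does the product become, in effect, critical infrastructure.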
What do we do? How do you make sure that a device like that can't cause a problem for 30 years? Until it broke, it was perfectly usable. We are now beginning to engineer devices that have very long lifetimes, and there is nothing inherent about having a processor in them that shortens those lifetimes. These are silicon devices; they can last a very long time. And we've already seen the example of Google getting bitten very quickly: the first Nexus phone is now unsupported. Again, if there hadn't been a binary blob that they had no control over, they probably would have continued to issue updates for it, at least for a while longer. So this goes back to this nexus of blobs not under your control, even if you don't worry about the NSA or whomever doing bad things in there. And as embedded devices go, that's pretty expensive: a $200 device, not a $50 or $15 or $10 one. It's more along the lines of a cell phone.

It also might be sold with your house, so the person who now has it didn't even know. My expectation is that my heating system should continue to work for the next 10 years; I expect it to last a while. And these things are now getting built inside such systems. People want to be able to turn dampers on and off, and the only way to do that is networking. If you look at what the ZigBee guys are doing, they're trying to get networking into all sorts of places inside the home to control things like that, and that's what they're trying to do over the next year.

I had to replace the entire motherboard of my refrigerator. I had no idea we had one, but we did, and they just pulled out the board and stuck in a new one. The device lasts, but the computerization...

That's right, and that's obviously probably the thing to do with the car as well. My frustration with cars has always been that I buy a new car every 15 years or so and want to be able to update it occasionally, but the car vendors have not been willing to encourage a viable aftermarket for these things. Maybe we should, because that's a way to help this problem, so we don't have to maintain that code base forever. Any other questions?

So there are things you can do. Okay? CeroWrt, the version of OpenWrt that we work on for bufferbloat, has real IPv6 support, which by the way our friends at Comcast have turned on in the Boston area, so you can have native dual-stack IPv6 these days. CeroWrt differs from what you're used to in that it routes rather than bridges, it's up to date, more or less, and it has goodies like that. You know about the OpenWrt guys in general; in that sense we're part of the OpenWrt community, a sort of bleeding-edge build on particular hardware.

What about the Firefox and EFF router project, to share open connections securely?

There are a number of different such projects, and they don't communicate as well as they should. In fact, the most interesting stuff I've seen is often going on in Europe, in the community networking area. I'm a little bit aware of the EFF project; for various reasons I haven't been able to pay much attention to it recently. It's around the edge: one particular piece of hardware and one particular configuration, very non-general. So it'll be a nice marginal thing, but it's really just a marginal thing. There's work going into OpenWrt to get that sort of behavior just out of the box. By the way, you really need to fix bufferbloat properly to do that, because you don't want the other guy to wreck your performance, and right now bufferbloat's got you badly. If you haven't fixed that, sharing your connection is something you often don't want to do.

It's very much the case that these projects need support, and there isn't good support. The closest thing to a community meeting for people working in this area is the one Sascha Meinrath does in Europe, which brings the community networking people together. But again, funding would help; there are more people who can't afford to go, often poor starving student types. How do we get the right people to those meetings, even though they're very inexpensive? They have this meeting called Battlemesh every year, where they test out mesh networks in Europe, and I've been to one of those. But how do you get the right people there? This whole thing is not properly supported. There's no margin, and the vendors are clueless; they don't understand that they should be supporting that work the way it deserves. This is back to the tragedy of the commons. The server guys have tons of money, relatively; there tends to be less starvation there. Any last questions?

Some companies, for example Microsoft, are pushing software as a service. Do you think we're being pushed into that? We seem to have two possible directions to go: one where open source is more prevalent, and the other where vendors force users into software as a service, where instead of buying something outright they're essentially paying a license to keep it working.

At the beginning, we have to have the network working in the first place, so we have to solve this problem. Outside your house, that's the ISP's responsibility; inside your house, it's arguably yours. If bits are coming out of the spigot, how do those bits get maintained? How do you get your bits around the house? I also don't think the centralization thing is a good idea. Once you've properly secured devices like this, I'd much prefer to have my email coming to my own server than to Google. I have my mail go to Google right now because I know I'm incompetent to keep a mail server secure and up to date, given the situation with spam. I'd like to have a box where that just worked, under my control, and if the upstream source for those bits stopped doing a good job, I could switch to somebody else who is. I'd be willing to pay for a service like that, but you've got to secure that device. We have to have a place to stand in the home; a box like that is a logical place for people's security certificates, but it's got to be secured, and they're not today. A lot of people are moving more things to the cloud; there are a lot of vulnerabilities involved with that, and the wisdom of it is subject to a long discussion.

Well, thank you all for coming. Thank you very much.