A couple of years ago, some of you may remember, I gave a talk on hacking voting systems. And it was while we were hacking the voting systems that we came across a question for which we just didn't have an answer. The question is this: if these voting systems are such crap and they're so easy to exploit, if everywhere you throw a dart there's another exploitable vulnerability, why hasn't there been a single documented case of voter fraud resulting from somebody exploiting one of these vulnerabilities? There isn't one case, and we looked. So we started thinking about this, because this isn't right. It's not like U.S. elections don't matter. It's not like there isn't a hell of a lot of money going into this. If you had $100,000, would you pay for advertising, or hire somebody to make the election go your way? And nobody's done it so far. So the only clue that we had was this: the only difference between a voting computer and any other type of computer is that voting computers are available to the public about three days a year. In other words, people haven't had time to learn them. So this makes a great hypothesis, but I'm a scientist, and you have to test a hypothesis. What I'm going to talk to you about today are the results of my trying to test whether or not this hypothesis was actually correct. Now, I don't know how many of you write software or how many of you are software engineers, but the whole idea of software reliability, finding bugs and fixing bugs, has been at the center of software engineering pretty much since the field began. The bug lifecycle is highly predictable. This graph is from The Mythical Man-Month, and if you know anything about software engineering, this is the Bible. In 1975, when this book was published, Brooks came up with the idea that when you release a piece of software, it's got a lot of bugs in it. And some of those bugs are easy to find, and those easy-to-find bugs get found really, really early.
And then you fix them, and the remaining bugs get a little harder to find. You fix those, and they get a little harder to find. Eventually the curve gets pretty low, and at that point you know your software is fairly reliable, right? Bugs are overwhelmingly discovered immediately after the software is released, and then the curve drops. And this is remarkably stable over time. This graph is from 2008, decades after The Mythical Man-Month was published, and it shows exactly the same phenomenon. The actual numbers of bugs found were higher than Brooks predicted they would be, but the curve is exactly the same. So even if we can't fix all the bugs, we do know that this aspect of software defects is well understood. But you know, we're security people, right? We don't really care about bugs per se unless they become vulnerabilities that we can exploit. So here's a list of some conventional wisdom, some assumptions that we all make. The first is that vulnerabilities are bugs, right? They just have really, really nasty side effects. The second really common assumption is that if we reuse software, software that's been out for a while, software that's been tested and has had a lot of its bugs removed, that makes the system more secure, right? It's new software that introduces new bugs. And the third assumption we make is that if we can just improve the quality of our software, then we'll make it less vulnerable. If we can use the perfect type-safe programming language, and if we can find the perfect compiler, if we can just build those, then we'll have secure software, right? Well, no. This is what we found: at least in the early part of the lifecycle of a product, or a release of a product, this doesn't seem to apply. So here's what we did. We looked at a bunch of vulnerabilities, and we looked at just the early lifecycle after each one. And in a nutshell, what we found is that there's this honeymoon period.
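The two curves in play here can be caricatured in a few lines of code. This is a toy numerical sketch with invented numbers, not data from Brooks or from the study: the classic reliability curve decays after release, while the honeymoon curve does the opposite, starting low and speeding up.

```python
import math

# Toy sketch (all parameters invented for illustration):
# the classic reliability curve decays after release, while the
# vulnerability-discovery curve starts low and accelerates.

def bug_reports(month, initial=100.0, decay=0.5):
    """Brooks-style defect curve: most bugs surface right after release."""
    return initial * math.exp(-decay * month)

def vuln_discoveries(month, ceiling=10.0, ramp=0.5):
    """Honeymoon-style curve: discoveries speed up as attackers learn."""
    return ceiling * (1 - math.exp(-ramp * month))

bugs = [round(bug_reports(m), 1) for m in range(6)]
vulns = [round(vuln_discoveries(m), 1) for m in range(6)]
print(bugs)   # monotonically falling
print(vulns)  # monotonically rising
```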
Immediately after release, software has this little bit of time that protects it. And this is going to have some really important implications that I'll discuss later on. So it actually looks like this. Instead of the way the bug lifecycle looks, with the curve starting high and then dropping down, the curve actually starts really low and speeds up. Low-hanging fruit or not, vulnerabilities are not easy to find early on. Software seems to enjoy some sort of protection from attack in the time immediately following its release, and that's what we're calling the honeymoon effect. So now I'll get into a few of the details for you. This is what we did. We collected 30,000 vulnerabilities and correlated them with all the major vulnerability trackers: Bugtraq, Secunia, the NIST NVD, OWASP. Then we looked at a list of the most popular programs, whether they were operating systems or server apps or user apps, and we ended up with 700 different releases. Then all we did, no fancy statistical analysis at all, all we did was count the number of days from the official release of a product, or a version of a product, to the day of its first vulnerability, its second vulnerability, its third vulnerability, and its fourth vulnerability. And what we discovered is that the earliest part of the lifecycle seems to be the safest. So when I'm talking about a honeymoon period, this is what I mean. I can't see your laser pointer. Could you use a brighter one? By the way, this is one of the co-authors of this work. This is Matt Blaze, professor at the University of Pennsylvania. So the honeymoon period is the time from the initial release to the first vulnerability. But there was a rather high variance, because different products and different developers release at different rates and fix at different rates. So in order to normalize the data, we use a ratio. We compare the time from release to the first vulnerability against the time from the first vulnerability to the second.
And if the time from release to v0 is greater than the time from v0 to v1, that's what we're calling a positive honeymoon. I'll be using the terms positive honeymoon and negative honeymoon a lot. If the honeymoon period is shorter than the time from v0 to v1, that's a negative honeymoon. Now, it's also important to realize that we're not comparing version 1 against version 1.5. We're only looking at the vulnerabilities within version 1 and comparing them against each other, and we're only comparing the vulnerabilities within version 1.5 against each other. Each release is looked at independently of the others, and this is important because of software quality issues. So here's what we discovered. In 62% of the releases we looked at, the honeymoons are positive. That means it takes longer to find the first vulnerability than it takes to find the second. This is exactly the opposite of what is true for software reliability. And not only does it take longer to find the first one than the second, it takes almost twice as long. The median overall honeymoon ratio is 1.54. Okay, so there's a honeymoon effect. Big deal, what does it mean? Well, at first we thought that this was a result of software getting better. There's absolutely no denying that things like secure design initiatives, the use of type-safe languages, fuzz testing, any number of things, make software written in 2010 of a much better quality than software written in 2001, right? So we broke it down by year, and this is what we found. It didn't matter. It doesn't matter what we look at, we have consistently positive honeymoons. And remember that our software reliability models tell us that we should find mostly negative honeymoons.
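To make the ratio concrete, here's the counting as a few lines of Python. The dates are invented for illustration; the real study counted days from the vulnerability databases across 700 releases.

```python
from datetime import date

def honeymoon_ratio(release, v0, v1):
    """Ratio of (release -> first vuln) to (first vuln -> second vuln).
    Greater than 1 is a positive honeymoon; less than 1 is negative."""
    return (v0 - release).days / (v1 - v0).days

# Hypothetical release: first vuln after 90 days, second 60 days later.
r = honeymoon_ratio(date(2010, 1, 1), date(2010, 4, 1), date(2010, 5, 31))
print(r)  # 1.5 -> positive honeymoon, close to the study's median of 1.54
```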
We shouldn't find 50% or greater positive honeymoons no matter how we look at it. So we broke it down every single way we could think of. We looked at just operating systems: consistently positive honeymoons. We looked at just server apps: same thing. User apps: same thing. We compared open source code to closed source code, and it didn't matter. It takes longer to find the first vulnerability than it takes to find the second vulnerability. So just for a second, let's consider the implications of this. The fact is, every software release is likely to enjoy a honeymoon from attack in the period immediately after it's released, even though not a single one of the bugs it's ever going to have has been fixed yet. This is contrary to our expectations based on what we know about bugs. We'd expect that newly released software is at its weakest. Every single bug it's ever going to have, even the easy ones, even the low-hanging fruit, is still there. Yet it's precisely at this time, precisely when it's at its weakest, that it's least likely to have an exploit discovered. Okay, so this got us a bit confused, and we tried to figure out why. And what we discovered is this. Intrinsic properties are the things that make a difference to software quality: how the initial product is designed, what type of language you choose to write it in. The skill of the programmer matters to the intrinsic properties of the software. But intrinsic properties are not what determine whether or not it's vulnerable. That is related to the extrinsic properties, the things that the programmers and the developers themselves cannot control. And the extrinsic properties are legion. They're everything from the operating system it's running on, to the network interfaces, to what other applications or products are running on the system at the time. They have to do with the black market and the economics: what does a vulnerability for this particular product sell for?
There are so many of them that we don't know what they all are, and we don't know how to measure them. But we do know that a lot of these extrinsic properties are getting worse faster than we can measure. So if these extrinsic properties were responsible for the honeymoon period, we'd see the honeymoon period drop to zero, because they are getting worse. But the honeymoon period is still consistently at 50% or greater. So most of the properties that are extrinsic to whether or not your software is going to be exploited don't matter to the honeymoon period at all. What does matter? We got our first clue when we compared major releases to minor releases. Major releases of a product are intended to sell a new product. They contain the most new features; they have the most new code. This is the primary difference, because minor releases are mostly bug fixes. And while they occur with a lot more frequency than major releases, they don't have nearly as much new code. This implies that there's some sort of learning curve going on here. Then our second clue came when we compared open source code to closed source code. What's really interesting here is that open source code has a longer honeymoon period, but a smaller honeymoon ratio. What that means is that it takes longer to find the first vulnerability, but it also takes relatively longer to find the second one. And what that seems to imply is that in open source code, the attackers are not climbing the learning curve quite as quickly. We think this is because open source projects release versions much more rapidly, and they're changing the code more rapidly. But we're not sure about that, and we're going to be running some other tests to see what we can find. But this is our hypothesis. This is what we think is affecting the learning curve: the longer it takes the attacker to climb your learning curve, the longer you've got. That's your honeymoon.
So the reason we think there's a learning curve, the reason we think the rate speeds up after the first vulnerability is found, is that the knowledge gained from attacking something and successfully weaponizing an exploit allows you to build a set of tools that you can reuse. It allows you to find similar vulnerabilities without expending nearly as many resources. And we think there may also be something like a blood-in-the-water effect going on here. If someone announces that they found an exploit in product Z, everybody else starts looking at product Z, because they want to find an exploit there too. So that's what we think is going on. The last convincing piece of evidence that made us think the honeymoon effect is a learning curve came when we realized that that very first vulnerability, the one that destroys your honeymoon, actually comes in two types. Most people forget this, and I'm not sure people pay much attention to it. We as engineers, as coders, as developers tend to believe that we introduce new bugs with new code. We're human, we write buggy code. We know that. So we expect that every time we write something new, there's going to be a problem with it, and then once we fix it, it will be okay. A vulnerability that is introduced in a new piece of code, that's what we're calling a progressive vulnerability. But the second kind of vulnerability that can be the very first vulnerability in a brand-new release of a product is something that we're calling a regressive vulnerability. That is, maybe the product's at version 4 and a new vulnerability is found, but it also affects version 3 and version 2 and version 1. That means the code that introduced that bug existed in version 1, but the bug itself lay dormant until it was discovered in version 4. We call this a regressive vulnerability.
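That distinction can be sketched mechanically. This is a minimal illustration, assuming the version data looks roughly like NVD's affected-versions lists; the tuple encoding and the helper name are my own, not from the paper.

```python
# Classify a release's first vulnerability as "regressive" (the flawed
# code was inherited from an earlier version) or "progressive" (the
# flaw lives in this release's new code). Versions are (major, minor)
# tuples, a simplification of real affected-version records.

def classify_first_vuln(release_version, affected_versions):
    if any(v < release_version for v in affected_versions):
        return "regressive"
    return "progressive"

# Hypothetical vulnerability reported against 4.0 but present since 1.0:
print(classify_first_vuln((4, 0), [(1, 0), (2, 0), (3, 0), (4, 0)]))  # regressive
# Hypothetical vulnerability only in the brand-new 4.0 code:
print(classify_first_vuln((4, 0), [(4, 0)]))  # progressive
```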
Now, because we're trained to believe that new software has bugs, and from our own experience, I mean, software I write has bugs in it, one would expect that most of the first vulnerabilities found would be those easy-to-find, low-hanging-fruit, progressive vulnerabilities. But that is not what we found. What we found is that 72% of the first vulnerabilities ever found in a brand-new release were not a result of new code: 83% for open source, 59% for closed source. And what's really interesting here is that even when the vulnerable code has been around for 3 versions or 10 versions, there's still a positive honeymoon: that vulnerability still takes longer to find than the one after it. So I think what I want to say here is that new code is better than old code, even if it's crap. Even if it introduces new vulnerabilities, you get a grace period, a safety period, and a significantly long one, twice as long. New code is a driver. It forces attackers to relearn your system. It makes their tools no longer work. It makes them expend resources. It costs them time. It costs them money. It's exactly what you need to do to up your side of the arms race. So in a product, extrinsic properties matter more than the quality of the software itself. And more importantly, the discovery of a new vulnerability appears to depend on how familiar the attacker is with the product. Software, even weak software, even insecure programs, are protected by this learning curve. So why aren't bugs and vulnerabilities the same? Because their extrinsic properties are different. The extrinsic properties of a software bug are static. They are fixed at the time the software is released: it needs this operating system, it needs this much RAM, it needs this CPU. They are so well defined and well understood that they are printed on the box when you buy the software.
The extrinsic properties that relate to vulnerabilities, I don't even know what they all are yet. They are not well defined. Hell, they are not well understood. The security community hasn't even delineated all of them yet. All we do is say: we know this is a problem, we know this is a problem, we know this is a problem. But we don't know what they are yet, or how to measure them. So what makes the honeymoon effect important, and this is why I'm glad you're here today, is that it gives us a way to tell people: this is the lifecycle, here's what you can expect, and here's how you can protect yourself. In other words, we need metrics. I don't know if you've read Matt's paper on safecracking. When he goes and buys safes, and our lab is full of them, we know that when a safe has a particular UL rating of 30 minutes, that safe is secure for 30 minutes against anything you can put on it, from an arc torch to a hammer and chisel to acid. I know that if it's got a UL 30 rating, it is safe from attack for 30 minutes. So I build my security by having my guard walk around every 29 minutes. We don't have those kinds of metrics for software yet, and that's what we think the honeymoon effect might help us develop. Recognizing that this honeymoon effect is a result of familiarity means asking: what can we do to steepen the attacker's learning curve? One thing to do would be to rewrite your code every three months. The minute you ship it, throw it all out and start over from scratch. Okay, that is not going to happen. But we can do other things. We can do some form of code obfuscation. There are definitely problems with that. But it did work for qmail, and qmail, as far as I know, has no publicly released exploitable vulnerability. There are also things like dynamic execution. Go ahead, Matt. So I just want to put the implications of Sandy's research, which I'm enormously proud of, in terms of the actual message of what we saw in the data.
And, you know, we can't explain why this is true. But I want to emphasize a single point here: when you go to software engineering school, the first thing they tell you is that code reuse is good. Reuse your code. Bugs are bad. And what we've discovered in practice is that the software engineering wisdom actually makes software less secure. It's that code reuse and the emphasis on the bug lifecycle that actually pushes the vulnerabilities out in front in new releases. So essentially the "so what" that isn't on Sandy's slide is that we have to rethink how we teach software engineering for security purposes. The basic precepts of that field seem not to be working at all for security. So, sorry for that interjection. Save your voice. My advisor, I'm proud. Anyway, the other thing we can do is develop techniques that build into software anything that can increase the attacker's learning curve. I've been looking at research into dynamic execution, and I think that looks interesting. Can you make binaries that are functionally the same but executionally different? That's a possibility. What I'm also looking at now is adaptability rates, and I hope to have a position paper coming out on this in a week or two, on how fast you, the defender, need to release your changes in order to stay ahead of the game. So look for that soon. At this point I'm going to turn the time over to RenderMan. He's been looking at things from the attacker's side, and he's got some interesting ideas and examples of what he thinks are practical demonstrations of the honeymoon in action. So we're going to switch here. First thing, I have to switch my hat here. I owe Scott Moulton a favor, so I decided to wear his hat in the talk. Apparently speaker notes don't translate from OpenOffice to Microsoft PowerPoint. Apparently the microphone's working now.
When I first heard about this paper from Mouse, I started thinking about it, like I do everything she tells me, because she's always got something cool. I started looking at practical examples of this in my own life. I deal a great deal with wireless, everybody knows it's my thing, but the problem was that attacker tools are not well documented. She was pulling CVE reports, NIST, all those other sites. With a lot of the attacker tools, you don't know when they specifically implemented things, because it's the underground economy. It's not the whole formal process, because it wasn't some lab that came up with it. So it's really hard to pin down some of these initial dates when things happened. But even with a rough timeline, where you can make approximate guesses about when things came in, you can start to see this honeymoon effect in real life, in everyday things. So, I'm not an academic, but I play one on TV. I'm a rubber-meets-the-road kind of guy. So often we'll read an academic paper and go, okay, how does this actually affect me? What's the real-world implication of this? And I actually found a whole bunch of it within my own life, and I thought it was really neat.
So I started looking at WEP. A quick timeline here for those of you: it was originally ratified in September of 1999 and used the RC4 cipher. The first major cryptanalysis was in 2001. In August of 2001, AirSnort was first released; this is what most people consider the first "practical" WEP attack. Practical is in quotes, because really it needed something like 10 million packets to be collected, a whole day's worth of traffic, and even then it was kind of a shot in the dark. It was never anything that I ever found truly practical, but it was the first implementation of the cryptanalysis. Where the rubber really hit the road was in 2004-2005, when Christophe Devine first released Aircrack. I can't find an exact date for its first release, but archive.org says it was around December 2004-2005. Now, the interesting thing was that the Apple AirPort, released in July of 1999, had a really high price tag; it was over $500 when it first came out. The Linksys WAP11, which came out in 2002, had a much cheaper price tag, I think about $100 at the start, and it's now down to $50. And now you can get these things dirt cheap anywhere. I'm thinking these are the wrong slides that you loaded up; I think I might be missing some slides here. So once Aircrack was released, features were added very easily: ARP injection, chopchop. You had a framework to work with. So now attackers looking at WEP, poking and prodding at this new thing that's starting to permeate all of our lives, development picks up. It's easy to now just add a new feature. Aircrack-ng picked up development after Christophe Devine just vanished. There's an extrinsic factor for you: if your author just suddenly disappears, what are you going to do about it? I'm not sure where you got these other slides. Oh, okay. But by 2007, WEP was fairly
idiot-proof to break. I mean, SpoonWEP came out; there's a whole bunch of point-and-click WEP-cracking apps now. You can literally teach your mother to do it; many of you probably have taught your mother to do it. But it's interesting: by 2003, WPA had completely superseded WEP. The IEEE had said, WEP, we know it's got vulnerabilities, we've got problems, don't use it anymore. This is even before Aircrack came out, even before the tools, the framework, started coming around. It was fully deprecated in 2004. But as anybody who was doing wardriving or any wireless analysis knows, WEP stuck around. It's the thing that would not die, for the longest time. The PCI standard only started banning WEP as late as 2009. So you've got this almost five-year gap between when vulnerabilities were found and when it was finally dealt with by industry, when somebody said you must change out this protocol or else you can't process payment cards. TJX was the exploitation that just kept on giving for the wireless community. It was initially penetrated in March of 2005; this is just after Aircrack came out, but long after the initial problems had been shown. They didn't start their conversion until almost six months later, in October. So there's this weird thing where tools were out there, industry knew they were out there, and they never did anything. This is what I'm basically calling the divorce period. You've got your honeymoon: the initial vulnerability comes out, okay, there's a problem, things aren't the greatest. But if you assume a positive honeymoon, you know your best days are behind you. The next thing is going to come along and probably put the nail in the coffin, and it's going to be on a much shorter timeline. This is where you need to start divorcing yourself from this problem software, thinking about maybe upgrading to the next version, or maybe even just seeking counseling, some way of mitigating this problem.
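The gap in that WEP timeline can be computed directly. I've guessed at months wherever the talk only gives a year, so treat these as rough figures:

```python
from datetime import date

# Rough WEP milestones from the talk (month precision is a guess
# wherever only a year was given).
ratified = date(1999, 9, 1)       # 802.11 WEP ratified
cryptanalysis = date(2001, 8, 1)  # first major cryptanalysis / AirSnort
aircrack = date(2004, 12, 1)      # Aircrack release, per archive.org
pci_ban = date(2009, 6, 1)        # PCI finally bans WEP

def years_between(a, b):
    return round((b - a).days / 365.25, 1)

print(years_between(ratified, cryptanalysis))  # honeymoon: ~1.9 years
print(years_between(cryptanalysis, pci_ban))   # "divorce period": ~7.8 years
```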
Yeah, always buy your IT staff beer and pizza. I say this at so many different conferences, but they're the guys who know that this is the system that's three versions behind, that's out of date, that needs to be dealt with. So there's this long honeymoon for practical attacks, and I'm talking on the order of years. Some of your data, you were saying when we were talking before, only had a resolution of one day; for Microsoft stuff you could probably count the minutes. But on the protocol level, how do you change out a billion access points if suddenly the protocol is broken? You can't do that easily. I spoke with the chairman of the IEEE group when they were implementing WPA2, and they were absolutely terrified of mass obsolescence. That's the reason the first version, WPA, the interim standard, used RC4: it was already implemented on the hardware, it was something they could just graft onto the existing stuff. You can't just suddenly take a billion routers and dump them in the trash; that'd be stupid. But it is something you need to think about: get ahead of that curve. If you know that by the time that first exploit hits, it's only going to get worse, well, assume the worst and keep going. The cost of equipment is a huge external factor. If it's a $500 device, I'm probably not going to buy it and tear it apart and see how it works. But if it's a $100 device, that's easier to get past your budget, you know, throw it on the lab bench and take it apart. I mean, we've all bought stuff, torn it apart, voided the warranty, and wondered where the instructions were to put it back together. But the framework: once you've got an existing framework like Aircrack, development just starts snowballing. At the same time, you've got increases in adoption, increases in interesting places that start implementing wireless: bigger businesses, retail outlets, stuff like that. You just know things are going to get worse. From 1999 to 2005, WEP was considered suitable, though weak. You
know, they admitted it. But post-Aircrack, the attack vectors multiplied and the ease increased; WEP was just no longer suitable. So you can look at other things, like code reuse in these protocols. Because 802.11i is just an addendum to the whole wireless protocol standard, they're still using underlying pieces, so things like deauthentication attacks still work even when you're using a WPA2 network. The WPA TKIP attacks reuse the chopchop attack, which was originally developed for WEP. You see, one thing helps another, because you now already have an exploitation framework. The other thing, and again I don't know what happened with the slides, but what I had also found was that as you go along, you start looking at: okay, here's when things were ratified, here's when things were released, here's where Aircrack came out. There were other things, like the monitor-mode drivers that AirSnort introduced, and the AirJack injection drivers that were around then. You had all these little pieces that added up, and people started integrating them into these frameworks, and then you had all the pieces necessary for a complete tool to break WEP in 60 seconds. Other external factors, in other things we're all familiar with: the PlayStation 3. Sony comes out with this wonderful thing, and they've got this Install Other OS option and everything. They seem to be playing nice, with people wanting to use this thing for research. How many universities bought these things by the pallet to set up cheap supercomputers? I know that the U.S.
Navy has like 1,700 of these things that they're using for parallel processing. So in March of 2010, GeoHot found that, wait, I might be able to use this Install Other OS thing to get direct access to the hardware to do other cool stuff. Sony's reaction was to pull the Install Other OS option through a firmware update. So either you didn't update your machine and you couldn't play online, or you lost this option. The unit had been out for several years by this point, and nobody had any real practical attacks on it; suddenly they pull the rug out from under developers who'd been using these things. I wonder what happens with the Navy when they have to send these things in for warranty. Do they get back a reflashed one with the new firmware that doesn't support this? By December of 2010, only about nine months later, initial weaknesses were shown at 27C3 in Europe. And by January of 2011, GeoHot had basically found the master keys that allowed you to sign any software you wanted to run on these things, and he ran with it. So a hell of an extrinsic factor is: don't piss off your customer base. If you've got a whole bunch of people really pissed off that you changed their product out from under them, you're painting a big target on the back of your head. And how many companies do you see that do contests where it's like, oh, our product's hack-proof, we'll put it out there for testing and everything like that? You are just asking for things to happen. I mean, Sony, LulzSec and others, they have painted the biggest target on their heads that I can possibly imagine. But it's a matter of thinking: okay, you've got this initial release, and it's going to be a while before the tools come about to exploit it. When they start coming together, you know that the best times are behind you. You've got to figure out, okay, how do we mitigate this? Do we need to upgrade? Do we need to flash a new firmware? Whatever the upgrade protocol or procedure is, or
something. But understanding when that is likely to happen, before it gets really bad, before you can teach your mother to own the device, that's where a lot of the value of this research came in for me. I've been doing some work with Aruba Networks. They're implementing Suite B encryption on their products for doing secret-level stuff over wireless; it's basically an NSA-developed and -supported set of configurations for wireless gear. So suddenly you can have wireless on base and use an iPad to access things that are, theoretically, secret. I'm willing to go out and say that because this stuff is only being implemented at the high end, you've got a several-thousand-dollar price point that you're going to have to break through, and that is going to be a major barrier to entry. So for several years, X years, you're going to see no major attacks against this. But as soon as there's an open source implementation, when I can load Suite-B-level encryption on a Linksys router or something like that and interact with it at home, suddenly, I'm sure, within X minus 1 years, however long it takes before that open source implementation comes out, it's going to be a much shorter period. Meaning there's going to be a positive honeymoon, but it's going to be a shorter period of time, and there's going to be a way to manipulate or abuse this. So research like this, for me, proved that maybe all those academic papers I was ignoring, because I didn't think they were applicable, actually do apply. And I would hope the rest of you would look at your environments and think: okay, how can I apply this? What do these numbers tell me? Can I plan ahead and get an idea of when things will go bad? That brings up some interesting points, and there's something I'd like to conclude with, by calling to mind: how many of you have played around with smart cards? The smart card honeymoon is just about over, and the reason it's over is because the hardware is cheap now. I have with me a 16-channel logic analyzer right here that I can plug all sorts
of wonderful little wires into, and stick the rest of them onto whatever device, whatever bus I'm trying to snoop, and suddenly I have complete access to the communication. I'm a grad student and I can afford this. I'm a drunk hacker and I can afford that. Now that hardware tools are cheap enough, the hardware honeymoon is over. No one can rely on cost or obscurity anymore. The entire economic environment, the ecology if you want to think about it in biological terms, has changed. It's no longer disgruntled teenagers in their parents' basements. It's highly financed organized crime. It's nation states training up their children to be tools, or to be weapons, or to know how to use the tools and weapons. And that changes the honeymoon as well. These are extrinsic to the quality of anything, whether it's hardware, firmware, or software. And that's scary. That keeps me awake at night, because I don't know how to solve this problem yet. What I'm hoping you'll take away from this is that the way we fix software now is broken. We cannot rely on the patch-and-pray cycle anymore. As security people, we have two models for how we protect ourselves. One is the Centers for Disease Control model: you inoculate yourself against everything you know is out there, then you stick yourself out there and wait to be attacked, and when you are, you clean yourself off and go into detox for a while, and then you do it again. The only other model we've got is the military model, the castle and moat: you dig a huge moat around yourself, you put up really high walls, you carefully control the ingress and you carefully control the egress, and you stick yourself out there and wait until you're attacked. And that's it. We can't do that anymore. We're losing this arms race. Every piece of research that I could find from anybody, from Symantec to OWASP to Brian Krebs, anyone that watches vulnerabilities and that
watches exploits and watches the marketplace: the attackers are winning this game. And it's because we patch our software and fix our hardware the same way we fix our bugs, and they're learning from that. And that's what we're going to have to change. So I really appreciate your attention, and I'll take any questions. Thank you. Yes? The honeymoon paper was published at ACSAC in 2010, so it's available on their website. The position paper about adaptability should be out on crypto.com within about two weeks. All of our data is publicly available; it's from the NIST NVD database, OWASP, Bugtraq, Secunia. So you can just run the numbers the same way we did. In a way, I am, or at least, run as fast as they do. So his comment was that it sounded like we're advocating a third model, which is to outrun the attacker. And that may be our only defense, because you can't win a defensive war anyway, right? All you can do, the best you can do, is maintain the status quo, or at least stay inside your enemy's OODA loop, if any of you are military people and know that term. So, thank you very much.