Good afternoon, and welcome to Remote Access, the APT — the advanced persistent threat talk. Thanks for coming; I know you've got a lot of choices. The key messages for this session: your current security architecture is flawed, now. I've published everything you need to know — it's all on the website, from first principles to demonstrations, full code releases and the proof of concept, including a test framework for at least the first set of strategies. People are welcome to take photos of this and film this if they want to. The impact is going to be significant: there are no constraints on data theft for remote workers or offshore partners today, and there are no easy answers, but the paper has some suggestions.

As for my job — look, I've been both red team and blue team; I'm currently on the blue team in my career. In my spare time, this is one of my hobbies that I'm presenting today. This has nothing to do with any of the companies that I currently work for or have worked for. I laugh at my daughter's Barbie car: my poor daughter has sacrificed time for each of these projects, and her Barbie car remains outstanding — we'll have a look at that this Christmas. She shouted out "Barbie car!" in the middle of the Kiwicon presentation, which was pretty funny. She's a cutie.

I want to give credit to the researchers in this field. There's actually a whole heap of technology here that's been reinvented, completely independently, a number of times. There's a short list here of the people most directly related to technologies that I've also reinvented — I discovered their work afterwards. Some of what I'm going to present is completely distinct from these; some of it overlaps with other projects. But I wanted to give credit to these people; there's a much bigger list on the website.

So let's start with the problem space. First principles.
My assertion is that any user-controlled bit is a communications channel. Any user-controlled bit is a communications channel. The validation for this is that the screen transmits large volumes of user-controlled bits. I want you to imagine the screen as a fibre-optic cable that's been cut through: huge amounts of data are being pumped out into the room. So the question is, can the screen be transformed into an uncontrolled binary transfer interface?

"I've heard this guy exists. So — we have a tradition at DEF CON: first-time speakers do a shot. This is our first-time speaker; it's very hard to get accepted. Give him a big round of applause." "I got three." "We'll all get one. Yes, we do shots in every track. He only does one." "Thank you very much." "Hold up a second — this also goes out to all of our new attendees. Cheers."

Thanks, Mike. That's the one part of the talk I didn't practise. So: engineering a proof of concept. Let's rejoin the talk and go back in time.

Terminal printing goes back as far as 1984. Printing was a switch in the software: data that was being sent to the virtual terminal was now sent to a printer device — not really sending data out of the screen. Same as we did with X-, Y- and ZMODEM: we switch the data from coming to the screen to going to a file. Not literally out of the screen.

From '92 to '96 there was a VHS tape backup solution, which I stumbled across in the spare-parts bin of my local electronics store. The way this worked was that data was sent out of a video port and captured to a VHS recorder. You record that data as a chunk of greyscale blocks that could then be played back, as video out from the VHS player, back to your computer. Literally backed up as a visual signal, but not literally downloaded through the display. Pretty close.

The first real screen data extraction that we get is the Timex Datalink watch, which was a Microsoft project back in 1994. Some of you may have even owned one of these.
The way it worked was that there was an EEPROM inside the watch, exposed through a window in its face, and actual lines drawn on the CRT sent signals that programmed it through the face. It had to work on a CRT — there are a couple of open-source projects that I've referenced there that had to use an LED, because it didn't work through an LCD display. Twenty seconds to transfer 70 phone numbers. And here is that high-quality ad: 20 years ago, the first computer watch revolution. Windows 95 had a tool where you could manage your phone numbers and actually export them to the watch. The good old days. There he goes — down through the CRT into the face of the watch.

Now, moving into machine recognition: come 1994, we had QR codes. I'm not going to go into the complete background of this, because this is a much more technical audience than I've spoken to before, but the features that I want to take out of this are the highly distinguishable codes — the fact that they're easily machine-recognizable — and 360-degree scanning: I don't actually have to line the code up. Quick Response codes were formalized in 2000. They now support rapid scanning, automatic reorientation of the image, inherent error correction and native binary support. The features I really wanted were that error correction, binary support and reorientation support. They later supported deformed and distorted codes. Really recognizable. Large capacities too, but you'll see in this demo that we don't need the larger capacities.

So, the Zen moment: if we consider the QR code as an optical packet sitting within the ether of the display device, then what it now represents is a datagram at OSI layer 3. To get beyond the packet boundary, what we want to do is replace one code with another, so I've got multiple codes going past the viewer. The receiver then uses video instead of a photo — we don't want to take one picture and then exit.
We want to take a video and keep processing. This creates a number of layer-4 problems. It's a unidirectional interface — data is coming out of the screen and there's no way to signal the sender — so I've got no synchronization and no flow control. It requires oversampling, because it's a picture: like any other waveform, I have to sample the screen at two to three times the rate to make sure I actually captured each image at least once. But oversampling creates duplicates, which requires de-duplication — and duplication may have been intentional, because it may have been part of the layer-7 protocol: I may have had multiple copies of the same data because of what I was transferring. We're now at the point where we need a transport protocol.

To create the transport data flow, we take the first octet of the packet. The smallest packet we have in QR code is version 1, which has 14 bytes of capacity at 15% error correction. By taking one byte we create a header, which means I now have the choice of framing up this protocol as I like. I've separated it into a control frame and a data frame. The data frame has the control byte, which is the header: a flag to tell me what type of packet it is, and then a counter, so I know where I am in the stream — at least enough to detect those duplicates. The payload is simply the data, modulo the actual packet size. So the packet contains the data. In the control frame, all we've got is the flag to say whether we're control or data, and then a major type and a subtype. You can see the types here — just as an example; this is a protocol I've thrown together for a proof of concept: file name, file size, QR code version, FPS and bytes, and a stop code, for example, that gives a CRC.
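Since the framing just described is concrete enough to sketch, here is a minimal Python illustration of the idea. The field layout, constants and function names are my own invention for illustration — they are not the released protocol definition:

```python
# Hypothetical sketch of the framing described above -- the field layout and
# names are illustrative, not the released protocol definition.
FLAG_CONTROL = 0x80            # top bit of the header: control vs data frame

CAPACITY = 14                  # QR version 1 at 15% EC carries 14 bytes
PAYLOAD = CAPACITY - 1         # one byte is stolen for the header

def data_frame(counter: int, chunk: bytes) -> bytes:
    """Data frame: header = data flag + 7-bit rolling counter, then payload."""
    assert len(chunk) <= PAYLOAD
    return bytes([counter % 128]) + chunk

def control_frame(major: int, sub: int, payload: bytes = b"") -> bytes:
    """Control frame: control flag, then a major type, subtype and contents."""
    return bytes([FLAG_CONTROL]) + bytes([major, sub]) + payload

def frames_for(data: bytes):
    """Chop a byte stream into numbered data frames for display as QR codes."""
    for i in range(0, len(data), PAYLOAD):
        yield data_frame(i // PAYLOAD, data[i:i + PAYLOAD])
```

The rolling counter is what lets the receiver discard the duplicates that oversampling creates, and detect where it is in the stream.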
The payload is the contents of that control message, and most of these messages are simply designed to give me good user interactivity — a good user interface, as you'll see in a moment. Now, this is a one-way transfer between two or more peers. Don't forget: two devices can see the one screen, so I can have multiple receivers off one sender.

The features at layers 4 through 7: I've got high latency — I have no choice but to support high latency, because I can't tell the sender anything — and I support interrupted transfers, because I know my position in the file based on how many packets I've received. It includes error detection both within the packet and end to end: I've got a control message with a CRC, so I know whether or not I've got the whole stream.

At layer 3 I've picked a number of specs just to make sure that we've got good sampling without making it complicated: one, two, five, eight or ten frames per second — because, assuming I've got a commodity 30-frames-per-second camera, ten is probably the most I can display — a range of QR code versions (you'll see why I've chosen the smallest one), binary encoding, and error correction. What does this look like? Well, most of this has no real impact on the protocol other than the MTU that we've specified. Because of the ECC compression, the frame will actually spill over to a larger size of frame with some types of data if you push it up to full capacity. So what I've done is selected an arbitrary, reliable frame size that makes sure it doesn't spill over to larger frames, which would interrupt the flow of the stream — the recognition on the receiver side. For reference, the smallest reliable frame capacity there is ten bytes, which means the rest of the protocol has been shaped around that.

As a quick example, here's our "hello world" — I'm going to send a hello world file out to the room now. That is: control start, file name "hello world"; then start control, file size, saying it's a 13-byte file.
There's the start control QR code byte saying how many bytes I've got per packet. Start control FPS: I'm setting 5 frames per second, so now my client can tell the user how long it's going to take to receive the file. There's my data, with a counter of zero, saying "hello world", and then I'm going to send a stop frame that says this file is complete, with this CRC, so the receiver can validate it.

What does that look like? This is what you can see from a transfer — this is a PDF that's being uploaded to the room now. To give you a quick feel for data rates: if we apply the frames per second to the packet sizes, you'll see that we've got a minimum of 80 bits per second and a maximum of 32 kilobits per second, entirely limited by the receiver — a better receiver would be able to process much higher rates of transfer.

This is an example of the PDF that I was showing you before, stored on YouTube, being downloaded by an Android phone in flight mode, in real time. The PDF is an open letter that I sent to the Office of the Australian Information Commissioner, advising him that the distinction made in 2014 between use and disclosure in the Privacy Act was actually not valid. If I can see it on the screen, I can download it. You'll see at the top the icon for flight mode, and at the bottom a yellow status bar that shows that I'm storing this data in real time. I've almost received that file... and there's a message to say that it was successfully retrieved. You can pull that app down for Android and Apple as a proof of concept now, from their stores.

I picked that ridiculously low QR code version 1 — a native resolution of 21 by 21 pixels — because we know that an 80-by-25 terminal will contain 21 by 21 characters. What you're looking at here is the same program outputting a QR code flow using just the space character, with ANSI codes for white-on-black and black-on-white. We'll see why that's important when we get to the architecture. So what have we got at this point?
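The data-rate envelope just quoted follows directly from frames-per-second times payload size. A quick sanity check of the arithmetic — note the ~400-byte figure for the top end is my back-calculation from the quoted 32 kbit/s, assuming a much larger QR version than the v1 demo:

```python
def channel_bps(fps: int, payload_bytes: int) -> int:
    """Optical channel throughput: frames/second x payload bytes x 8 bits."""
    return fps * payload_bytes * 8

# Floor from the talk: the 10-byte reliable v1 payload at 1 frame/second.
floor = channel_bps(1, 10)        # 80 bits/second
# Ceiling from the talk: 32 kbit/s implies roughly 400 payload bytes per
# frame at 10 frames/second (a larger QR version, not the v1 demo).
ceiling = channel_bps(10, 400)    # 32,000 bits/second
```

Everything in between — for example, 5 fps of 13-byte v1 frames — scales linearly with those two parameters.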
At this point, if the transmit software were on my laptop here at the podium, I'd be able to exfiltrate any file I want out of this computer to a device you can't see — to a camera in my hand. The question is: how did I get that transmit software onto the laptop in the first place?

If any user-controlled bit is a communications channel, and I've got a keyboard... The Arduino Leonardo comes with USB HID support, and USB HID support has been available to us for 20, 25 years. That means no drivers are required on the target system for this to be recognized as a keyboard, mouse or joystick. I'm going to use it as a keyboard. The top one is the Digispark, which was a community project — that's got six kilobytes of flash. The bottom one is the LeoStick, with 32 kilobytes of flash. That's about 25 kilobytes of space that I can use to upload a file.

The question is: what do we upload? The sensible thing would be text — source code — because I can type it in as text. But that's hard, because then I need a compiler on the target system. What I'm going to do instead is gzip a transmit binary, turn it into hex, let the device type the hex into the target system in script form — wrapped up as a Perl or bash script — and let that script output the binary on the target system.

This is an HP thin client with XP Embedded that my wife ordered from eBay; I have no idea what the administrative credentials for this box are. I've used PuTTY to log on to a Linux system. What you'll see in a moment: I'm opening a text editor so I can save the data, and then I'm going to plug in the Leonardo. When the Leonardo plugs in — there it is; beautiful hand modelling — Windows wants drivers for an Arduino Leonardo. I don't have rights to install those, so I'm going to cancel that, but it will also pop up with the USB HID keyboard. The Leonardo's USB HID ID can also be programmed, so this could look exactly like an HP Chicony keyboard, for example.
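The gzip-to-hex-to-typed-script trick can be sketched in a few lines. This is a hypothetical Python rendering of both ends of it — the actual tooling is an Arduino sketch typing a Perl/bash wrapper, not this code:

```python
import binascii
import gzip

def to_typeable(binary: bytes) -> str:
    """Attacker side: compress the payload and render it as hex text that a
    USB HID 'keyboard' can type into any shell or editor on the target."""
    return binascii.hexlify(gzip.compress(binary)).decode("ascii")

def rebuild(typed_hex: str) -> bytes:
    """Target side: what the typed wrapper script does -- turn the hex text
    back into bytes and gunzip it to recover the original binary."""
    return gzip.decompress(binascii.unhexlify(typed_hex))

payload = b"\x7fELF...imagine a real transmit binary here..."
assert rebuild(to_typeable(payload)) == payload
```

The point of the hex stage is that the keyboard interface can only produce keystrokes, so the binary has to survive a text-only hop; gzip claws back some of the 2x expansion that hex encoding costs.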
Now it's typing in the script, which is the payload that I want to output onto the target system... as it types, and types, and types. We save that script and change the permissions on it. When I run that script, it outputs the gzipped binary, which I capture to a file. Gunzip that file, change the permissions on the payload, and run it. That's a 64-bit Linux payload that just got uploaded through a thin client.

Technology checkpoint 2. What have we done at this stage? There's now no barrier to getting software in, and we've obviously got data off the system, which means at this point I've got a bi-directional data flow.

So let's look at the USB HID interface. It's an interface polled by the system: it comes up once every millisecond. Typical implementations send a packet full of keys and then a clear packet. Unfortunately, it's a small packet — it contains only six keyboard key codes — which means it's non-binary. It's also an automatically de-duping interface: if it sees the same key twice, it'll strip it out. That means at this stage we have the same problem we had before: I need a transport protocol for the keyboard. The interface is still unidirectional, going inbound to the computer. When I originally wrote the paper I hadn't seen it, but there's an implementation — referenced on the site — where someone had done exfiltration of data through Scroll Lock, Caps Lock and Num Lock at up to 10 kilobits per second; you can use the status lights, which I haven't done.

You create a binary payload, again, by using hexadecimal. That brings us down to three bytes per packet per millisecond. I've retained the key-clear packet, which gives me three bytes per packet per two milliseconds, and we need to correct for the de-duplication, so I've done my own compression and re-hydration, which is all in the paper that you can find online.
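Those figures multiply out as follows — a small worked example using the packet-timing assumptions just described:

```python
POLL_MS = 1             # the host polls a USB HID keyboard every millisecond
RAW_BYTES = 3           # 6 key slots, hex-encoded -> 3 binary bytes per report
REPORTS_PER_PACKET = 2  # every key report is followed by a key-clear report

# One useful packet every 2 ms -> 500 packets/second of 3 raw bytes each.
packets_per_second = 1000 // (POLL_MS * REPORTS_PER_PACKET)
bps = RAW_BYTES * 8 * packets_per_second
assert bps == 12_000    # the ~12 kbit/s keyboard-side figure quoted shortly
```

Dropping the key-clear report would double this in theory, but then the interface's automatic de-duplication starts eating repeated bytes — hence the compression and re-hydration scheme in the paper.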
Again, the packet is tiny, so we don't want to steal a byte for a header; instead I bookend a stream of these packets rather than putting a header in each one. And we'll ignore everything to do with file-based transfers, because what I really want is not to be limited to that 32K chip. At the top there I've still got the Leonardo; at the bottom I've got a USB serial adapter, so the attacker, on his computer, sees a serial port. The binary data going out of that serial port goes into the keyboard device and gets converted to typed keys. Combined, I've called these a keyboard stuffer. I've exposed a number of internal controls of the framework to make it faster, and now it's a native binary interface for the attacker.

Before we continue, I also augmented TGXF to strip out all of its file controls, so now I've got a stream for TGXF and a stream for TKXF, and we're going to join them together as a single console application. This is what we've got: on the attacker's computer, on the left, you'll find a TCP socket listening on that system. Anything received through that TCP socket is sent out of a USB serial port towards the keyboard stuffer; the keyboard stuffer types it in; whatever is typed is received on the organization side, decoded, and sent out of a TCP socket on that side, inside the organization. Whatever comes back out of the organization is encoded and rendered to the screen; that's received by a camera, decoded, and output from the socket on the attacker's device. A native TCP socket, through a screen and a keyboard. The reference implementation is limited to the example protocols: I've got 12 kilobits per second up on the keyboard side and 32 kilobits per second down on the screen side. There are ways, which I've suggested, that you can improve the performance.
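The socket-to-channel plumbing on each side boils down to a byte pump. Here's a minimal, hypothetical Python sketch — the real framework adds the encode/decode stages and flow control, and `pump` plus the socketpair stand-ins are my own illustration:

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket, chunk: int = 1024) -> None:
    """Copy bytes from a TCP socket into the covert channel until EOF.
    'dst' stands in for the serial port feeding the keyboard stuffer on the
    attacker side, or the screen encoder on the organization side."""
    while True:
        data = src.recv(chunk)
        if not data:
            break
        dst.sendall(data)

# Demo: a local socketpair stands in for each hop of the covert channel.
app, sock_in = socket.socketpair()       # application <-> listening socket
channel_out, channel_in = socket.socketpair()  # the screen/keyboard channel
worker = threading.Thread(target=pump, args=(sock_in, channel_in))
worker.start()
app.sendall(b"GET / HTTP/1.0\r\n\r\n")   # anything the app writes...
app.close()
worker.join()
channel_in.close()
assert channel_out.recv(1024) == b"GET / HTTP/1.0\r\n\r\n"  # ...crosses over
```

Chain two of these pumps back to back — one per direction — and any TCP application rides over the screen and keyboard without knowing it.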
Now, at this stage we've got a bi-directional, binary-clear serial connection with a native socket interface, insane portability, and massive vulnerability in the environment.

The ESA context. Coming back to enterprise security architecture: TGXF, TKXF and TCXF are a storage-based covert channel attack — some people have referred to it as an overt channel, because it's so in-your-face. But where's the enterprise in all this? So far we've been working from a local computer; I gave you one example where it ran over a thin client and over the network. But in the enterprise we abstract the screen and keyboard, and we stretch that abstraction throughout the organization until it looks something like this. If I'm an offshore user today — I'm in that managed IT service provider, offshore — then what I see, after I've VPNed in, Citrix VDI'd, SSH'd, and gone through every single one of your gates, all the way to the deepest part of your organization, is this: the keyboard keystrokes I type here go through all those tunnels to the back end, and the pixels rendered at the back end come all the way out to me, offshore. It's a completely clear tunnel through the organization. This is console abstraction.

In practical terms — on the bottom of this picture, if you can see it, is an attacker on the left and the enterprise on the right — this means the attacker's device isn't the end-user computing device that you gave me offshore. This PC that you gave me offshore is the one — or perhaps the VDI — where you put the DLP, the AV, the anti-malware; this is where you've got all your controls. I'm not going to attack this device. I'm going to plug a keyboard into it and point a camera at it. The attacker's device is in my hand, and it's not connected to the network inside your organization. That was on the left. On the right, in the deepest part of your organization, where you've given me access to manage your infrastructure, is the other end of this client — which is right next to my goal, and where you don't have anti-malware
detection. Right — an example. On the left, in red, is the attacker's device, with no network connectivity whatsoever. In the green-yellow tags we've got that HP thin client, which is my end-user computing device, and next to it an application server that I've SSH'd to. You'll see I've got the keyboard stuffer plugged in, and a camera and a couple of Pringles cans pointing at the screen. At this stage I've run PPP over that TCP socket, and we've just negotiated an IP address — so my attacker PC, with no network connection, is now sitting on the same IP network as the application server. Now I'm running SSH over that IP connection. Apologies for the blurriness on this — I'm not a very good hand model; it'll come clear in a moment. You'll see all the negotiation: there's the request on the left, on the attacker's screen, asking whether I want to accept that SSH key; I say yes; you'll see another few packets come and go — that's the request for the password; I type the password, and that's the login. So the attacker's PC, which has no network connectivity at all, has just SSH'd into the application server.

Solution 2 — new for Christmas 2014. When you present these things and people blog about them, they say, "It's interesting, but I can stop QR codes." I know; that's what I put in the paper. So when I went to Kiwicon I released something new, an ASCII version, because I believe this is an unsolvable problem, and this was another variation to demonstrate that. At this stage TGXF is again the transport at layer 4, and I've still got my datagram at layer 3, but I'm changing from a QR code to an ASCII character — to text. It could have been graphics — I threatened to do pixels, because they'd obviously be significantly faster. It could have been images — I'd love to see an organization out there trying to filter 14,500 logos; the lawsuits would be exciting. It could be letters, words, phrases, whatever you choose; I can adapt. In this particular case I've chosen ASCII characters just to prove it was possible. This is
clientless, because at this stage I no longer need a substantial client — it works from 300 bytes of bash, and I'll show you that on the next slide — with minimal server-side indicators of compromise. What you're looking for now is not a landed binary but simply some bash script — or it could be Perl, or PHP; it doesn't matter — and it demonstrates the futility of QR code detection. There's the bash code: all I need to do is display a counter and some data, and I can make it run. I've got a particular set of fonts and colours because I'm using optical character recognition — that's just for the proof of concept; you could train that away.

I've switched from a camera to the AVerMedia Game Capture 2 device. For anyone who doesn't know these devices: they're sold specifically so you can plug your Xbox HDMI cable into them; the device man-in-the-middles the video and captures it for your YouTube uploads, for your replays — but it saves to a USB key. That's the tiny little orange USB key at the front of that picture. This example is designed to capture data at one kilobit per second — we'll go into speeds in a little while. I've got a 1920x1080 display at 30 frames per second; these devices don't actually run that fast, and I'll show you an example of that shortly. My recovery runs a lot slower, but it doesn't matter: what I've stolen it at is the kilobit per second. So I'm going to recover this from an MP4 file, on Linux.

Now, the red room. Last year at Black Hat, at a bar late one night, a gentleman pulled me aside while I was telling him about this, and he said: look, seriously, that's cute and all, but we see this offshore. The red room is the room that has the secret sauce — it's got the special recipe; it's the place you have to go to access certain data assets offshore. We tend to have rooms that are classified to a certain specification, and we put certain physical controls around them, with variable success. Anyway, he was focused on the red room. The rules for the red room are that a device can enter the
red room, but it's got to be formatted — everything except the firmware — to bring the technology in; and the device can leave, but it's got to be blanked again, everything except the firmware. So his question was: how am I going to get that USB mass storage out with the MP4 file on it? And my response to him was: well, Captain Koons says be creative. If you don't know the reference, you'll have to watch the movie.

This is an example of that bash upload — file descriptor 3 is being given /etc/passwd, just as a piece of content to send. I've put that bash script on the key, so it just types it in for me. What you can see on the left is clearly a counter, in binary 0s and 1s, going from 0 to 255; on the right is the data, also in binary — so I'm getting one byte per packet, effectively. When we decode that, the MD5 does work out; you can have a look on YouTube. Now, what I've got is a Linux system; I've opened the video and I'm processing it one frame at a time, doing optical character recognition on each frame. If you can see it — I don't know how clear it is on these screens — you'll see little rainbow-coloured boxes floating around the letters; that's where it's recognized characters and is attempting to process them. This is in debug mode, so for every line of /etc/passwd that comes out, on the left you'll see another line of it appear on the screen — the data is coming out line by line as it processes the video.

Now, new for today. You release something like that and you think people will be impressed, but then they say, "It's so slow." I don't care — that wasn't the point. So at Christmas I got bored — I was watching Deep Space Nine, I think for the third time — and I went for the pixel threat. I assumed it wouldn't be too hard, and certainly the encoder wasn't very difficult at all. There is now a pixel at layer 3. I'm using HTML5 canvas and JavaScript, so I've left text mode behind — if I had that VDI in the environment, or a web browser presented in XenApp, for example, I can now encode the data visually and send it back out. It uses about 20K — actually, it's now about 30K — of JavaScript to enable a clientless mode. That feels a little big to be clientless, but I can plug in a key and upload the whole thing again. Minimal server-side IOCs, and again it demonstrates the futility of targeting a specific implementation.

On the same box I now get 1.3 megabits per second out, using two frames per second and one bit per pixel — so this is simply black or white — on that $120 box, the AVerMedia, at 1280x720 and 60 frames per second. As you'll see, we recover it the same way, with slightly different encoding. That's me plugging in the key and typing in the client. This is a web browser — at the moment it's Firefox, but it works in Chrome — with F11 mode, so it's full screen. I'm doing a local file upload to the browser itself, so the JavaScript can process the file, and that's what the data looks like in black and white — it looks just like static on a TV, right? I'm going to let this one run so you can see the progress bar; it actually counts up the speed as well. The content of this file is the 5.5-megabyte white paper that I wrote last year on TGXF. That file has been uploaded, and that was 1.37 megabits per second. Easy enough to do.

Downloading is the problem. Here I've got the same program framework, only I've distilled out all the optical character recognition. On the left you can see the line-by-line marking — that's each individual frame of this video and what I've taken away from it; this is debug output. The first thing you'll see when I upload the file is the big red box. The big red box allows my software to locate the region of the screen that contains the packet, so I can find layer 3 — and there, it's found it. Then there's a whole heap of control messages going past that we can't see at the moment — full screens full of data. Now, there's a CRC in this protocol, and you can see there are two or three lines before a
successful frame, where we've miscalculated — where we haven't got the full data as the AVerMedia captures it. About 50% of the way through this transfer, you'll see that the picture starts to resolve; it's like it takes 10 or 12 frames to completely capture — I think there's an internal bit rate encoded in this device — and you'll see loads and loads of CRC errors before we get the one frame that works. In the bottom corner you can see the PDF slowly being restored from this file transfer. Now I'm getting more errors — loads of errors; I'm going almost a full updated frame, a full updated packet, before I get a valid packet. If I push this one frame per second faster, it's not successful. Ticking, ticking, ticking — that's transferring very close to not successfully recovering each individual frame. And that's almost complete; the last packet will be the CRC32. That's successful — there's a big list of CRC32 validations on the file, and there's the PDF.

But that's not good enough for DEF CON. That's what I had when I submitted to DEF CON, and it's pitiful; I'll show you why it should be substantially higher. So I bought a better card. For $30 more you can get a professional capture card. Unfortunately, I didn't read the fine print: this one is a YUV capture card, even though it's an RGB data source, so the best I could do here was still one bit per pixel without getting a whole lot of mess. However, being a better card, I can now do 8 frames per second, which works out to 4.7 megabits per second. For the low, low price of 10 times that much, you can buy the DeckLink 4K Extreme 12G. This thing is designed to capture real-time, 60-frames-per-second 4K video; it will capture the next couple of generations of whatever your VPN users are going to use. Same resolution, but now I'm doing 3 bits per pixel and 10 frames per second, so I'm up to 300 kilobytes per packet and a total of 12.1 megabits per second in the demo. The only reason why I'm not showing you
a 1-gigabit transfer today is because I couldn't parse — properly parse — the AVI file that it made. FFmpeg came the closest to converting the file, and I was able to get the 3 bits per pixel reliably, but I couldn't get 10 bits per pixel reliably — which this card will capture, but I couldn't convert. So this is where I've left it. That's the same file, with this card capturing it. Let's recover that file. You can see I've already captured the frame — I can't even resize this picture fast enough. There's the control. You'll note that there are only two CRC errors — only two times I didn't correctly get the frame the first time with this capture card. That's done; that was 12 megabits per second.

So, architecture. Look — you can leave out the PPP example. The PPP example is not part of the solution, because it requires privilege: you require privilege to set up an interface on a system. So leave that aside; before we had that, we already had a TCP socket working between two nodes — I was just having a bit of fun. The important thing to note is that the technologies I've shown you do nothing for privilege. They can only do exactly what your users can do today — what you can type and read. There's no privilege involved at all. The distinct properties of the delta seem to be along the lines of volume, accuracy, structure and utility, and the paper goes into a few views on that, and on the cat-and-mouse games you could play there.

The problem we have is that in the Australian Privacy Act — and also in HIPAA and, I believe, FISMA — there's a distinction drawn between use and disclosure. It's considered "use" when a user signs into your environment and works with the data in your systems; in the offshore case, that data is considered to remain onshore — it's not offshore, even though the screen is displaying it there. That's use: it hasn't left your system. Disclosure, however, is when the data is taken from that system — taken offshore — and the user can do whatever they want with it. Now, obviously, the tools I've
presented today are designed to completely destroy that barrier — and I haven't needed any privilege to do it. In the Privacy Act, if the data is taken offshore, the Australian entity is actually liable for that data going offshore if they didn't take reasonable steps, and the only one of those steps that seems to make any sense in this context is monitoring. So the question is: what is reasonable monitoring?

Butler Lampson, in 1973, wrote "A Note on the Confinement Problem" — brilliant, brilliant work. His problem was leakage between levels of clearance: a high-level user must not be able to leak data to a low-level user. His conclusion was that it was probably cheaper, if it was possible at all, to just accept the risk for this type of problem. His work was rolled up into the TCSEC specification for B2 and B3 trusted systems. The conclusion that document came to was that a leak of 100 bits per second was considered a high-bandwidth covert channel, because 100 bits per second was the speed of a valid terminal — so if you had a channel leaking at the speed of a valid terminal, the system couldn't possibly be secure. Now, of all the examples I've given you today, not one ran under 100 bits per second — not one, including the text one that will run through your SSH service. HDMI at 1920 by 1080, by 24 frames per second, by 24 bits per pixel, is faster than gigabit. In terms of acceptability, the TCSEC spec said that the maximum bandwidth accepted for a covert channel would be one bit per second, and any covert channel above one bit in ten seconds had to be auditable in your environment. The question I put to you today is: do you have the ability to see every single key change, Caps Lock light change, pixel change — any delta in your environment that runs faster than a tenth of a bit per second? Not in any organization I've met.

The business impact. I'm going to refer to an example from April this year, here in the US. The FCC went after AT&T after their offshore centres in Mexico, Colombia and the Philippines lost 280,000 records. The lawsuits settled at 25 million dollars, which was then reported as the fine. That works out, in rough numbers, to about $89 a record — $89 per personal record lost. Now take one of those users, working today with an A4 page of whole records — I'm saying here 2 kilobytes per record, full 8-bit bytes. A thousand words a day works out to about 5 kilobytes a day, so the worst damage I could do to you in four business days would be 10 records. It's a bit of a contrived example; multiply it by 10 and we're still talking less than $10,000, still talking less than 100 people's records stolen in four business days — assuming the FCC doesn't give bulk discounts. But what we've done in the last 45 minutes is take that to 12.1 megabits per second. I'm now moving, in the same period of time, 87 million records, with a cost to the US organization of almost $8 billion in fines. And I don't have to work in business days — they're just 8-hour days. I can work in 24-hour days, because in that same timeframe I can start the transfer at 9 o'clock this morning and pick up the results at 5 o'clock tomorrow afternoon when I go home. In terms of 24-hour days, we're now talking about one fifth of the US population being downloadable per 24-hour day, at a fine of around $6 billion per day. That would be the entire US pinched in one week, or Australia in 8 hours.

So the punchline: effectively, there is no difference between use and disclosure. If you're operating under that type of framework — or HIPAA, or FISMA — you'll need to pay attention to these rules, but there is no pragmatic difference: once it's been displayed, it's been uploaded to the room. As far as off-shoring, right-sourcing, best-shoring — whatever name you want to use for remote access for untrusted users to trusted data onshore — if you want your data to be yours and yours alone, then
this is not currently safe, and is unlikely ever to be safe. I would like you all to consider: how many bits per second of data loss is too many to accept? Thank you very much.
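As a footnote, the impact arithmetic above does check out in rough numbers — all base figures are as quoted in the talk; the rounding and tolerance bounds are mine:

```python
FINE = 25_000_000           # reported AT&T settlement (USD)
RECORDS = 280_000           # records lost through the offshore centres
per_record = FINE / RECORDS
assert round(per_record) == 89          # ~$89 per personal record

RATE_BPS = 12.1e6           # screen channel from the capture-card demo
RECORD_BYTES = 2 * 1024     # ~2 KB assumed per personal record

four_business_days = 4 * 8 * 3600       # seconds
records_4_days = RATE_BPS / 8 * four_business_days / RECORD_BYTES
assert 80e6 < records_4_days < 90e6     # the "87 million records" figure

day = 24 * 3600
records_per_day = RATE_BPS / 8 * day / RECORD_BYTES
assert 60e6 < records_per_day < 70e6    # ~64M records/day, ~1/5 of the US
assert 5e9 < records_per_day * per_record < 7e9  # ~$6B in fines per day
```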