Continuing on the hardware track, we're going to bring on Bunny Huang, the CTO of Chibitronics. I'll just let him set up. I'd like to introduce Bunny, who is best known for his work hacking the Microsoft Xbox. He's also known for his efforts in designing and manufacturing open source hardware, including the Chumby, Chibitronics, and the Novena DIY laptop, a home-assembly laptop with all the blueprints and schematics published publicly. Today Bunny will be speaking on security and highlighting the challenges of handling sensitive data. This is obviously a big topic these days, especially in Singapore, around security, data privacy, and data protection. His talk today introduces betrusted, a device that might be a possible remedy: a physically separate, trustable display and keyboard directly connected to a secure enclave. Excellent. Bunny, over to you. Round of applause.

Let's see if I can make this work. I don't get the little cheat view where I can see my next slide coming up, so we'll see if I actually remember this presentation well enough to give it from the front. Sorry for the delay, and thanks everyone for coming to hear this. It was a really good series of talks earlier, and this is a little bit of a non sequitur, I guess, but I've been thinking a lot about the mobile space lately, and I find it a little bit crazy that we use our phones for absolutely everything and just take it for granted that everything is trustworthy. I think Harish was talking a bit earlier about the issue of trustworthiness: people feel like they can look at pictures of cats on any random web page on one screen, then flip to another one showing their Google Authenticator tokens, and it all just stays in the right place, right? And as a hardware guy myself, I find people's blind faith in these abstractions disconcerting.
So how many people here are familiar with Spectre? Have heard about it? Okay, most people have heard about it. It's a vulnerability that has been disclosed, and actually more and more keeps being disclosed about it, where a fundamental assumption built into virtually every computer, every piece of hardware I'm looking at in this room, turns out to be flawed: hidden machine state that's used to accelerate your computation can be used as a side channel to leak secure data out. Other processes can read what should be in a secure area. And the bottom line is that there's a choice you have to make: you either get speed, or you get safety. There's an interesting paper I reference on the right, where a group of engineers at Google argue that this may simply not be fixable in the near future. Combine this with a very strong demand for exploits. This chart is from a website called Zerodium, which you can use to check the current market price of an exploit. At the very top, an iPhone remote jailbreak with persistence will land you $2 million; that's the market price today for one of those exploits. It goes all the way down to about $100,000 for a Wi-Fi remote exploit, with a bunch of other exploits in between. Some people say, well, this is because we've gotten so good at security that the price is going up. Others say the demand for exploits is so high that we just can't crank them out fast enough to meet it. I think it's more that there's very strong demand for exploits in the market right now, which is why the price is high, and not so much that we've solved the security problem and made exploits incredibly expensive to find.
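The side-channel mechanism described above can be sketched as a toy model. This is a simulation of the idea only, not a real exploit: real Spectre attacks measure actual cache-line access times after a mispredicted speculative access, whereas here the "cache" is just a Python set, and all names and constants are invented for illustration.

```python
# Toy model of a Spectre-style cache side channel (simulation, not a real exploit).
# The "speculative" victim code touches a probe line indexed by the secret; the
# architectural result is rolled back, but the cache fill survives. The attacker
# then probes every line and infers the secret from which line is already warm.

STRIDE = 64          # one cache line per possible secret value
NUM_VALUES = 256     # a secret byte can take 256 values

def run_victim(secret: int, cache: set) -> None:
    """The access is architecturally rolled back, but the cache fill remains."""
    cache.add((secret * STRIDE) // STRIDE)  # warmed cache-line index

def attacker_recover(cache: set) -> int:
    """Probe every line; a 'fast' (already cached) line reveals the secret."""
    for value in range(NUM_VALUES):
        if (value * STRIDE) // STRIDE in cache:
            return value
    return -1

cache = set()                      # attacker flushed all probe lines first
run_victim(secret=42, cache=cache)
print(attacker_recover(cache))     # recovers 42 without ever reading the secret
```

The point of the model is the one the talk makes: no permission check is ever violated at the architectural level, yet hidden microarchitectural state still carries the secret out.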
And the other thing is that there are now open markets for sharing the cost of these exploits. The reason people can pay so much for them is not that there's a single person in the world willing to buy one for that much; it's that once someone buys an exploit for a couple of million dollars, they can shop it around to a dozen different governments, each of which will buy it, and really make the money back. So exploits aren't going away, which is why you're starting to see more and more people turn to hardware, to physically split sensitive data out of the place where it could be exploited and into separate tokens. There are a bunch of different names for these: secure enclaves in mobile, trusted platform modules, secure elements, U2F 2FA dongles. People are putting separate pieces of hardware inside the machine, or handing you a second, separate fob or token, to try to isolate the things you want to keep very secure. The problem is that even though you've taken your core secret and moved it into a separate enclave, the input and output are still insecure. How many people here have a two-factor authentication token for banking or the like? You've all had to plug in another set of numbers, in addition to your password, to clear a banking transaction, right? The problem is that there are attacks like man-in-the-browser, where you enter your username and your password, and then once you enter your token, malware inside the browser forwards it unchanged while modifying things like the amount of money you're trying to transfer, or the account it's going to.
So that token in your hand just proves that you're physically present in front of your computer; it does nothing to ensure that you're actually doing the transaction you think you're doing. The token itself has very limited value because it has no trusted I/O of its own. Just to dig into this a little more, because this is the foundation of why I'm interested in doing what I'm doing these days: look at this very abstract view of a phone. Imagine this is your phone. You have a CPU, you have a screen, you have a keyboard (they're probably virtual), and you have a secure enclave that holds some keys. And then there's the internet, which is somewhere out there, which has hackers. We don't all wear hoodies, but apparently that's what the icon is these days. So you enter your data in plaintext. It goes to your CPU, because you're a human and you read plaintext. It's sent on to the secure enclave. It comes back encrypted to the CPU. It goes to the internet. The hackers can't get it. Everyone's great. Data comes back from the internet encrypted, it's verified by the secure enclave, and it appears in plaintext on your screen. Everything's great, right? Now say a hacker gets into your CPU and goes after your secrets. Well, the good news is they can't get to your keys, right? Your keys are in a separate enclave; they can't do anything with those. But the hacker can still, for example, see what's on your screen, and can still log what you type on your keyboard. So even though they can't get to your private keys, they can still get to your private matters at the end of the day. A local attacker inside your device can make your phone show that it's in airplane mode when it's really not, or change what you see on your screen for a website.
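The gap between "proves presence" and "proves the transaction" can be made concrete with a small sketch. This is an illustrative model, not a real banking protocol: the key, field names, and 6-digit truncation are all made up. It contrasts a plain event-based OTP, which depends only on a counter, with a MAC that is bound to the transaction details the user actually approved.

```python
import hashlib
import hmac

KEY = b"shared-device-secret"  # hypothetical key shared with the token

def presence_otp(counter: int) -> str:
    # Classic event-based OTP: depends only on the counter, not on what
    # transaction it authorizes.
    return hmac.new(KEY, counter.to_bytes(8, "big"), hashlib.sha256).hexdigest()[:6]

def transaction_mac(counter: int, amount: str, payee: str) -> str:
    # "What you see is what you sign": the MAC covers the actual details.
    msg = counter.to_bytes(8, "big") + amount.encode() + payee.encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()[:6]

otp = presence_otp(7)
# Man-in-the-browser changes $10 -> $10,000 after the user enters the OTP:
# the OTP still verifies, because it never covered the amount.
print(otp == presence_otp(7))                              # True
# A transaction-bound MAC breaks as soon as the details are tampered with.
good = transaction_mac(7, "10.00", "alice")
print(good == transaction_mac(7, "10000.00", "mallory"))   # False
```

But, as the talk points out, transaction binding only helps if the details the token displays and signs reach the user over a trustable screen; if the same compromised CPU renders the confirmation, you're back where you started.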
Even though your keyboard shows star-star-star-star when you type your password, the password is actually handled in plaintext. People freak out when they accidentally type their password where it's visible on screen: oh my God, my password is now visible to the world. But your password is exactly as visible to the computer whether it shows stars or the actual characters; the stars are only the very last bit that's shown to you. Same thing for your screens, and for swipes and taps on the screen. So even though your keys may be secure, at the end of the day what you do with the keys is not. A key thing to remember is that private keys are not the same thing as your private matters. Even though your keys are safe, your secret chats may not be secret. And this has caused me to stay up at night a little bit. Much worried, such hack. So here's my proposed solution, and this is what I'm going to talk about now. Part of the reason I'm bringing it up here is that I want to gather feedback, get ideas, and make sure I'm not approaching this the wrong way. It's a solution I call betrusted. The idea is to build a machine that is basically a secure enclave chip directly connected to a simple, trustable screen and a real, trustable keyboard. What do I mean by a simple, trustable screen? I've gone and canvassed a bunch of screens, and I found, for example, a simple black-and-white memory LCD from Sharp. It has no silicon chips in the screen; all the electronics are built into the glass itself. If you hold it under a microscope at 40x magnification, you can actually see the flip-flops and the gates that route the data to the screen. There are no silicon chips.
There's no place to hide something, so there's nothing unknown sitting between the secure enclave and the pixels going to my eyes. I can actually inspect it optically. When I say a trustable keyboard, I mean a physical keyboard. A lot of people love virtual keyboards and things like that, but how do you know what you're tapping on and where it's going? When you build a physical keyboard, there are real wires; they go straight to the enclave, with nothing in between. If you're really paranoid, you can look at an X-ray and make sure there's nothing going on: no silicon chips other than the silicon you expect to be there. So the idea is to create a secure enclave connected as directly as possible to things that I, as a human being, can individually evaluate, things that can relate human-relevant information to me, not just cryptographic keys. The idea is then to split off your most important private matters and stick them into this enclave. And this is where things get a little bit tricky, because it's a balancing act between the human-computer interface and complexity. Today's 2FA tokens and the like are far too far on the simplistic side of things. They have a very simple, inflexible interface; it's all hardware, so it's very hard to attack, a very minimal attack surface, but they don't really do enough, I think, to secure things. On the other hand, you have web browsers, which are great: powerful, full-featured, but as far as I'm concerned an intractable attack surface. People just keep finding exploit after exploit in all the different features that get packed into these things. And so right in the middle is where I want to balance this device: just enough and no more, with a securable attack surface.
And so focus is going to be very important, and this is where I'm trying to figure out: am I covering the primary use cases that really need to be covered? At the moment, the straw man is text-based chat, a Signal-like, text-only chat, but, very importantly, with Unicode support and a multilingual IME. Not everyone speaks English, right? Not everyone reads left to right. Not everyone uses Latin characters. So it's going to need some support for bitmaps. And even within this limited scope, this is a very, very large project, I think. On top of that, not everyone in the world reads and writes. There are a lot of people who I think want security but may be illiterate, or find it inconvenient to type on a phone, so I think voice chat is also going to be an important thing to incorporate: asynchronous messaging, where people talk, leave a message, and listen, back and forth, WeChat-style. That's going to be the minimum viable product. Real-time voice calls could also be cool. Other people have suggested banking and cryptocurrency, and I don't see any reason why you couldn't do that sort of thing here; it probably fits in the enclave, inside my MVP. But, for example: no browser, no games, no video, no social media, no app store. That all introduces too much attack surface to lock down. Importantly, betrusted is open software and open hardware, and one thing I'd like to try to take on is open silicon, to enable a new level of transparency for finding and mitigating vulnerabilities. As I talked about earlier, we have the issue of Spectre, and part of the reason Spectre is so intractable is that all the CPUs we rely upon are fully closed source.
So even when people think they've found a way to patch around the branch-prediction vulnerabilities, there's all this hidden machine state that you can't find, and an attacker can dig through that hidden state and leverage it to build an exploit. By making the silicon transparent at the functional level, hopefully we can find what these exploits are. What open silicon doesn't get you, though, importantly, is protection against the time-of-check/time-of-use exploit that exists in the supply chain. You can say we're going to build an open fab and check the masks and all sorts of stuff, but between you and that fab there are customs inspectors, a courier, a distributor, whoever, and those parties can swap out your silicon. So a lot of people say we should build open fabs; I don't think that actually solves the trustability problem we're looking to solve. So really the concept is to go to verifiable silicon: to build the silicon with features that help users verify it without needing a scanning electron microscope. The basic idea is to put features in there such that you can fire a laser, an ordinary optical laser through a regular optical microscope, into the chip, which disturbs some of the transistors. It's going to disturb a large number of them: these are 40-nanometer transistors, really tiny compared to a photon's wavelength. But you disturb a group of them and read out the syndrome of what happened: okay, this cluster of logic gates is located here. Then you shift over by half a micron or so and fire again, and you keep scanning across the chip: okay, all the gates are where I expect them to be. This is the chip I fabbed out; this is the chip I expect to have. And it's done in a non-destructive fashion.
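The scan-and-compare procedure described above can be sketched as a simulation. Everything here is invented for illustration: real syndromes would come from the chip's own error-reporting logic under laser stimulation, not from a dictionary lookup, and the grid, step size, and labels are placeholders.

```python
# Toy sketch of verification by optical fault induction: fire at each grid
# position, record which logic responds (the "syndrome"), and compare against
# the map recorded from a known-good chip. A swapped or modified die answers
# differently at the tampered positions.

def scan(chip: dict, positions: list) -> list:
    """Return the syndrome observed at each laser position."""
    return [chip.get(pos, "no-response") for pos in positions]

# A 4x4 grid of laser positions (standing in for half-micron steps).
positions = [(x, y) for x in range(4) for y in range(4)]
golden = {pos: f"gates@{pos}" for pos in positions}  # known-good syndrome map

genuine = dict(golden)
tampered = dict(golden)
tampered[(2, 1)] = "gates@unexpected"  # an implant replaced logic at one spot

print(scan(genuine, positions) == scan(golden, positions))  # True: matches
mismatches = [pos for pos, seen, expected in
              zip(positions, scan(tampered, positions), scan(golden, positions))
              if seen != expected]
print(mismatches)  # [(2, 1)]: the tampered region stands out
```

The design choice the talk emphasizes is that this comparison needs only an optical microscope and the chip's own readout, so an end user can run it, unlike destructive delayering or electron microscopy.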
So you can have confidence that the chip in your hand is actually the chip you intended to be running your code. And because betrusted encrypts all of its communications, you can use it safely with your existing Wi-Fi hotspots. You don't have to build it into a phone; I'm not looking to build a phone. I'm looking to build something that secures all of its communications end to end and can talk over any number of physical layers. Initially it's going to be Wi-Fi. And then, finally, the idea is to build this as a protected place for your private matters. Just to make it a little more concrete, I have a tentative development plan. There's an alpha phase, where we spend a lot of time with an FPGA, looking at what the hardware looks like, looking at the RTL, looking at the security primitives. It's not going to have what's called evil-maid resistance: if someone physically gets hold of the FPGA prototype, they can still extract your keys, but the idea is to make it very strong against remote code exploits. In parallel with that, we'd do verification of the optical fault induction methods for the silicon, which probably takes a couple of years, and maybe that's the sort of thing you can crowdfund. Then when we go to the ASIC-based system, the one where we're actually spinning chips, there's another level of money required. So it's maybe partially crowdfundable, but we'd probably be looking for fundraising and grants. This would let us do something that's truly secure against evil maids and has user-verifiable silicon. And by going to a full ASIC you can make it very thin and very small, so it's not a burden for you to carry around.
This concept here is about three and a half millimeters thick, so you can just stick it in the case of your phone, on the backside, and always have your secrets with you without having to carry around a big, clunky extra device. So thanks, everyone, for listening to this. Part of the reason I'm here today is that I want to hear your feedback. This is just a proposal; everything here is renderings, and none of this exists yet. I have some little pieces of hardware where I'm testing circuits and such, but nothing has really come together. I'm also looking for developers who are interested in potentially working on this. Thanks. Thanks for your time.