How many people actually saw the talk I gave on Thursday morning? Probably about half the people here, something like that. Okay, so I wanted to recap for the people who didn't see it, and I also have some little bits and pieces I can hand around; we can look at them some more if you want.

To give a little background: I've been thinking about the fact that we basically use our mobile phones for everything. It's kind of crazy to me that on one screen you can be looking at cat memes on sketchy websites, then flip over to your Google Authenticator, and we're like, oh yeah, that's totally fine, there's no reason those two should ever talk to each other, because abstractions, right? And it turns out there's a bug called Spectre. How many people have heard of Spectre? A few? Right, okay, a lot of people here, so I won't go into it too much. But basically, software isolation is dead. You can't really count on the abstractions you thought would keep your data safe.

And there's also a huge demand for exploits. This is a price table from Zerodium, and right now it's around $2 million for an iPhone zero-click exploit. That's not necessarily because they're hard to do; what it represents is that there's a lot of demand in the market for exploits. People want to buy them, and the reason they want to buy them is that they can sell them on to multiple smaller nation states. So you might think your threat model is, okay, worry about the FSB or the NSA or the PLA or something like that.
But what's actually happening now is that smaller governments, Mexico and other small countries, are all buying exploits from one exploit vendor. So you have the resources of dozens of small nation states being pooled together to buy these exploits, which is part of the reason the price is going through the roof.

In response to this escalation, people say: okay, let's split our secrets off into a secure enclave, a separate piece of hardware, so that if your main machine is compromised you don't have to worry about it too much. But the problem is, if you just put your secrets in a separate piece of hardware, how do you know what it is that you're signing? People have used those little bank tokens, right, where you just enter in a code. What that protects you from is someone stealing your password and going somewhere else on the internet to pretend to be you. But if you're at your computer with your token and your password and your browser is infected, all the browser has to do is pass your token code on through and change the transaction values on you, and the token does you no good. The token itself doesn't actually authenticate the transaction at all.

Just to illustrate the problem, here's a very schematic view of what a phone is: you have a keyboard, a display, a CPU, and perhaps a secure enclave on the inside. That's your phone, and you want to connect to the internet, and you're worried that the internet is full of hackers. Not all of them wear hoodies, but...
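The token pass-through attack works because a one-time code such as TOTP (RFC 6238) depends only on the shared secret and the clock, not on what you are approving. A minimal sketch, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t: int, digits: int = 6, step: int = 30) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the time-step counter."""
    counter = struct.pack(">Q", t // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret"   # hypothetical enrolled secret
now = int(time.time())

# The code is identical no matter what transaction it "approves", so a
# compromised browser can swap the payment details and reuse the code.
code_for_real_tx = totp(secret, now)   # user thinks: "pay Alice $10"
code_for_evil_tx = totp(secret, now)   # browser sends: "pay Mallory $10,000"
assert code_for_real_tx == code_for_evil_tx
```

This is why a second trusted display that shows the actual transaction being signed matters: without one, the code proves who you are but not what you agreed to.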
What happens is you enter your password or your message in plain text, it goes into the secure enclave, gets encrypted, and is sent out to the internet. Great, the hackers can't get it. Stuff comes back from the internet, gets decrypted, and is sent to your screen. So when a hacker gets into your CPU, the good news is that your keys are safe; they can't get to them even though your CPU is compromised. The problem is that they can still get to the contents of the screen and the keyboard. You know how sometimes when you're typing, you miss the focus on a field and your password shows up on screen in plain text? You're like, oh my god, I can see my password on my screen, I feel so dirty, I have to wash my eyes out. Well, your computer sees this all the time. Just imagine: this is what an attacker inside your machine sees all the time, your password typed in the clear.

So that's the secure I/O problem. Another way to put it is that private keys are not the same as your private matters. What the secure enclave is securing are your private keys. That protects you from remote code exploits where people steal your keys and pretend to be you somewhere else on the internet. But it doesn't protect you from someone inside your machine spying on you directly. And that's becoming a more and more realistic threat model.

So I've been thinking about this, and my proposed solution is a thing I call Betrusted. The idea is to have a separate secure enclave, but attached to it a simple, trustable screen and a trustable keyboard. And when I say a trustable screen: this one here is an LCD screen I found, a black-and-white LCD screen.
I'll spend a little more time on the hardware here. It's actually got 200 PPI resolution, which is pretty cool. How many people have actually taken apart an LCD screen and seen the parts on it? A couple of people here. A lot of times when you take apart an LCD screen, you'll notice there's a little circuit board on the screen itself, or maybe a little silver fleck of silicon on it. And the concern is: what is on that circuit board? What is inside that little fleck of silicon? Could someone hide an exploit in there, so that when the screen shows you an icon saying you're safe, it's in fact hiding something? So the electronics on the back of the screen have to be trusted too. This particular screen that I found is made by Sharp, and all the electronics are built onto the glass directly. There's no silicon fleck, there's no circuit board. They use thin-film transistors (TFTs) fabricated on the glass itself, and those have a very large geometry. You can put the screen under a pretty typical microscope and see the traces: each of these is a logic gate. That's a logic gate, that's a logic gate. You can actually see them, count them, trace out the address decoding and so on. So you can tell whether someone has hacked it or not; it's pretty obvious to see. That's what I mean by a trustable screen.

When I say a trustable keyboard, I mean literally keys: keys that you can see and verify, with nothing in between you and them. Because touchscreens, as wonderful as they are, are basically all driven by a small embedded MCU, and the code inside that MCU is very, very closed. Those MCUs have capacities of hundreds of kilobytes of flash. They can store passwords, they can store screen swipes, they can store a lot of things.
So those can't be trusted, and they're very proprietary. A regular keyboard is where it's at. So the idea is to take a secure enclave and connect it directly to trustable, human-friendly I/O. Which means you take a selection of your applications and move them over into the enclave. But how many applications can you put over there? The first thing people ask is, oh, it's going to have a browser, right? That would be totally cool. Well, actually, the problem is that if you put a browser in there, you can't trust it anymore, because the browser itself is super exploitable. So there's a balance between hyper-minimal but very secure, small-attack-surface hardware devices on one end and full-featured browsers on the other, and you're trying to find the right balance point in the middle. This is part of the reason I want to have this conversation with more and more people: to make sure we're balancing this the right way before we go and build a piece of hardware that doesn't solve the problem people actually have.

So I'm focusing initially on text-based chat, something like Signal. But, and this is where I'm gritting my teeth, I want to support Unicode and IMEs, so that people who speak Chinese can use it, and people who read right-to-left scripts like Arabic and Hebrew can use it. Some simple bitmaps should also be possible. That's actually a very large piece of code to write for a security target. Voice chat would also be good to support; it turns out that's actually a bit easier than text chat, for a number of reasons. And people's banking and cryptocurrency seem like popular things these days.
The crypto guys in particular have a lot of money; maybe we support something with crypto to help fund the thing. But that's not my core interest. I'm not building this for the crypto crowd, though I think it'll work for them. But no browsers, no games, no video, no social media, no app store. Why would you want to take random things you don't trust, put them on your phone, and run them?

One of the things I also want to grit my teeth on is that not only do I want to make it open software and open hardware, I want to try to make it open silicon, to get around the Spectre issue. If you read the Spectre paper, it turns out that part of the reason it's so hard to patch around is that the CPUs are closed, and there's a lot of hidden machine state that can be used as a side channel. Security analysts can't dig into it; they can't find out what's going on at a level that's meaningful. By disclosing all the internal state of the CPU, we can hand it over to the security analysts and say: look, we want to do branch prediction to accelerate things, but here are the bits of the branch table, so you can write an effective software mitigation and actually do a proof and guarantee that this is not going to leak your data.

So that's one thing. But the other thing, and this is an argument I've been having with a lot of people in the open hardware crowd for a long time: they say we have to go open all the way down to the silicon, because that's what makes it trustable. But even if we pooled together a billion dollars and built this open hardware fab, and we could all look at the masks and that sort of stuff, that fab is located somewhere in the world where you are not, right?
Those chips then get encapsulated, put on a plane, and pass through at least two customs checkpoints, so a bunch of officers from different governments you don't know get to look at them. They go through a warehouse and a courier and then come to you. So you've spent a billion dollars, and along the way someone can just slip a courier a thousand bucks and put the chips they want into your hands, and you'd never know. The way you normally avoid this in software is that you pull down your binary, hash it, check it before you run it, and then run the thing you actually hash-checked. You can't do that with hardware; you can't hash your hardware and do a simple signature check on it.

So the idea with the silicon is to make it user-verifiable. When I say user-verifiable, I don't mean mom and dad are going to verify it; I mean you could conceivably build a machine that a hackerspace could afford, on the order of a couple thousand dollars, and the hackerspace can help you verify that your silicon is actually constructed correctly. The reason we can do this is that if we put certain features into the silicon, we can make it easier to verify with a machine that doesn't have nanometer resolution. Transistors today are around 40 nanometers. Now think about how big a photon is: visible light is around 600 nanometers, so roughly ten transistors fit inside one wavelength. That's one of the problems with doing verification optically.
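The software side of that contrast, downloading a binary, hashing it, and checking the hash before you run it, is only a few lines. A minimal sketch (the artifact and its contents here are made up for the example):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Hash a downloaded artifact in chunks, so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a downloaded binary (hypothetical contents).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"firmware-image-v1")

# In practice `expected` comes from a trusted, out-of-band source
# (e.g. a signed release manifest), not from the same download.
expected = hashlib.sha256(b"firmware-image-v1").hexdigest()
assert sha256_of(path) == expected   # only run the binary if this holds
os.remove(path)
```

Hardware has no equivalent of this check: you cannot feed a physical chip through a hash function, which is exactly why the optical verification scheme below is needed.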
But it turns out that if this table here is the size of what I'm looking at, and there are maybe 50 standard-cell gates in it, I can shine a laser on it and cause a fault in this area, then read back the state of the chip and say: okay, in this area there are 50 gates and they all belong to this logic function. Then I step the laser over a little bit, not even a full wavelength, a partial wavelength, and I get this section of the table; step a partial wavelength again and I get that section. Keep doing that over and over, and you can build a map, a good idea of roughly where the gates are, without needing a SEM or some really fancy machine that requires a PhD to run.

Because everything is encrypted anyway, you can just use a Wi-Fi hotspot, so the idea is not to build a phone. I don't want to put LTE or 3G in it; initially it should be slaved off your phone as a hotspot. At the moment I'm thinking of doing an FPGA-based system first, because doing silicon is very expensive and we want to make sure we get it right. The initial one will look almost like a phone, the same size but a bit thicker, because the battery needs to be bigger due to the leakage of the FPGA. The idea of this is to vet the human-computer interface aspects, vet the keyboard, vet the code base. Yes, if someone gets hold of it they can read out the code, because it's an FPGA, but it should at least be very resistant against people attacking from the outside.
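The sub-wavelength stepping idea can be sketched as a toy simulation (all the numbers and positions here are invented): each laser position faults every gate under the spot, and intersecting the overlapping spot intervals localizes each gate finer than the spot itself.

```python
# Toy model of laser fault mapping on a 1-D strip of standard cells.
# The spot covers several gates at once, but stepping it by a fraction
# of the spot size narrows down where each gate sits.
SPOT = 0.6    # "wavelength-ish" spot diameter, arbitrary units
STEP = 0.15   # quarter-spot step (sub-wavelength)
gates = [0.10, 0.33, 0.41, 0.72, 0.95]   # hypothetical gate positions

def faulted(center: float) -> set:
    """Indices of gates whose state flips when the spot sits at `center`."""
    return {i for i, g in enumerate(gates) if abs(g - center) <= SPOT / 2}

# Scan the strip; for each gate, intersect every spot interval that hit it.
bounds = {i: [0.0, 1.0] for i in range(len(gates))}
c = 0.0
while c <= 1.0:
    for i in faulted(c):
        lo, hi = bounds[i]
        bounds[i] = [max(lo, c - SPOT / 2), min(hi, c + SPOT / 2)]
    c = round(c + STEP, 10)

# Every gate ends up localized to an interval narrower than the spot.
for i, g in enumerate(gates):
    lo, hi = bounds[i]
    assert lo <= g <= hi and hi - lo < SPOT
```

The same intersection trick is what lets an optical microscope, which cannot resolve individual 40 nm transistors, still confirm that the gates are where the netlist says they should be.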
In parallel, maybe get some off-the-shelf MCUs at around the same resolution, decap them, and validate that the optical technique works, for example by scanning across the RAM and showing that the RAM patterns come out. If I can validate that, and build a machine that can optically scan RAM using an off-the-shelf microscope, then I have confidence that when we tape out the mechanism in open silicon, we can actually confirm that it works. And then the idea is that the final version would be this cool, thin thing. Again, it's a bank shot off a bank shot. But for people who like pictures of pretty things, I made a picture of pretty things to look at.

So I have a couple minutes to dig around a little more. This is a big project, and the approach is to break it down into smaller pieces. For example, I have this little circuit board here, and it has what I consider the highest-risk circuits: the ones where I don't know what I'm doing. Building an avalanche noise source; building a physical keyboard, which I've never had to build before; a SIM card slot, where, you know, how do I get the mechanical mating to the edge right? So I built a very cheap mule of a board. It doesn't even have a CPU on it; it has multiple little circuit areas, and I populate just those high-risk circuits and run each through validation. So, I just had this up, let me find it. How many people are familiar with avalanche noise generators? Okay, very quickly: this is a very cool thing that happens.
You know that diodes normally conduct in only one direction, right? So what happens when you put a big voltage across a diode in the wrong direction? Eventually it breaks down, and it turns out that the way it breaks down is not consistent. Imagine you have a big pool of water with waves on top, and you keep filling it and filling it. Every now and then a small part of the edge bends down and a bunch of water spills out, and then when the water level drops the pool rights itself; imagine a kiddie pool, one of those plastic inflatable pools. A diode at a very high reverse bias is like that. You have all this current building up, trying to get over the barrier. Eventually one carrier pops over the edge through thermal noise, and because the field is so high, that electron accelerates very quickly, hits another silicon atom, and releases two more electrons. Those are also accelerated by the field and hit more atoms, and you get an exponential cascade: the avalanche effect. It's completely random; there's no predictability to it.

So I built this circuit, took a whole selection of different transistors and diodes, put them in, and characterized the amount of noise each one gives. I found one that worked better than the others, and okay, that's what my noise waveform looks like on the first pass. Then I wanted to optimize a little more, so I tried a different bunch of settings and tweaked the bandwidth of it.
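The cascade described above can be sketched numerically. This is a toy model, not device physics: a rare thermal trigger fires at a random time, then each impact ionization step doubles the carrier count.

```python
import random

def avalanche_pulse(trigger_p: float = 0.001, impacts: int = 20):
    """Toy avalanche pulse: geometrically distributed wait for a thermal
    trigger, then carrier doubling at each impact-ionization step."""
    wait = 0
    while random.random() > trigger_p:   # thermal trigger is rare
        wait += 1
    carriers = 1
    for _ in range(impacts):             # each carrier frees one more:
        carriers *= 2                    # 1 -> 2 -> 4 -> 8 ...
    return wait, carriers

# The pulse grows to the same cascade size every time; the unpredictable,
# entropy-carrying part is *when* each pulse fires.
waits = [avalanche_pulse()[0] for _ in range(200)]
assert len(set(waits)) > 1               # trigger times vary pulse to pulse
```

The exponential growth is why such a tiny thermal event produces a macroscopically measurable noise waveform at the output.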
I'm keeping notes of everything as I go: okay, tune the capacitors to get the bandwidth in the right zone, tune the current going into it, and so on. Then I do things like put cold spray on it: does it boot cold? Put a heat gun on it: does it boot hot? Feed it 2.8 volts: does it boot at 2.8 volts? 4.4 volts: does it boot at 4.4 volts? So you cover all the corners, hot, cold, high voltage, low voltage, and make sure the whole thing works across all of them. Then I measure the current draw; it's about 1.76 milliwatts when running, which is great, totally mobile-friendly, and the circuit is about seven millimeters square. The final waveform looks a bit like this: a nice, full zero-to-one-volt peak-to-peak waveform that I'll feed into an A-to-D converter, sample, and use to seed the random number generator.

So this is the step-by-step method. A lot of people think that hardware is somehow born out of my lab perfect and always works. Hardware is done step by step, one foot in front of the other, solid science, the kind of freshman-physics-lab stuff you have to do to make sure the individual primitives work. Even before I put them on the final circuit board, I'm checking all these things. That's the methodology that makes sure it all comes together.

So, I think I'm already over my time, so we'll go ahead. Any questions, I guess? Or we're going to have an AMA shortly, right? Cool.
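Turning raw ADC samples of a noise waveform into seed material usually involves conditioning, since raw avalanche noise is biased. A minimal sketch, assuming a simulated biased bit source in place of real ADC reads; a production design would use a vetted conditioner such as a hash-based DRBG rather than this alone:

```python
import hashlib
import random

def sample_adc_bits(n: int) -> list:
    """Stand-in for ADC reads of the noise waveform: deliberately
    biased bits (60% ones) to mimic an imperfect analog source."""
    return [1 if random.random() < 0.6 else 0 for _ in range(n)]

def von_neumann(bits: list) -> list:
    """Classic debiaser: map the pair 01 -> 0 and 10 -> 1,
    drop 00/11 pairs. Output is unbiased if pairs are independent."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = sample_adc_bits(4096)
unbiased = von_neumann(raw)
# Condense the conditioned bits into a fixed-size seed by hashing.
seed = hashlib.sha256(bytes(unbiased)).digest()
assert len(seed) == 32
```

Note the cost of debiasing: roughly half the raw bits survive at best, which is why you oversample the analog source relative to the entropy you actually need.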