It is the noon hour on Thursday, folks. Ted Ralston here with Bert Lum in the downtown Honolulu studios of ThinkTech Hawaii, overlooking the harbor. And standing by in San Francisco, California, we have John Mullen, CEO of Promia Incorporated, a high-end network and cyber organization. And we're here talking about something that is related to drones, although we don't have any drones on the table, which we normally would come in with, because drones tend to make you focus on the drone, right? We're talking about the cyber substrate that operates behind drones and just about everything else. And the reason we're talking about this today is the event on Saturday, which was a cyber event to a certain extent, and in my mind it was actually really good that it happened. I think we're going to learn so much from it, and as soon as the witch hunting is all over and people get back to work, stop beating on people, and start getting something really good done out of this, maybe we can move forward in this regard with the help of guys like Bert and John Mullen. And we're doing this in the spirit of Darrell Wong, by the way, who's not here on the show but would certainly be thinking along with us in this domain. So how do we think of what occurred on Saturday? Basically, something shows up here on my cell phone and suddenly, out of the clear blue sky, I've got something that I've got to deal with. I've got to audit it, I've got to assess it, or I can do none of that and just react to it. But in the world we've fallen into, the population depends so heavily on these commonly available systems, with no idea of their credentials or qualifications, and yet we depend on them.
How do we arrive at a sort of trusted system that we can operate within, that we can use with confidence to conduct our lives appropriately, without getting side-swiped by things that are either errors or malicious attempts to get at us? That's where John Mullen thinks a lot from the cyber and network perspective, and Bert Lum thinks a lot about it from many perspectives here in Hawaii. So let me just ask you, Bert, to start that reaction. How do we deal with something like this, or the information that comes with it, in a trusted way? Well, let's frame this scenario up for our watchers, or our listeners and viewers. Viewers, not listeners. Not listeners, but you're normally right. So the situation is that there is a trusted source, and I've already gotten alerts from the folks over at the Hawaii Emergency Management Agency, and this is a pretty broad message that goes out to basically a million people, everybody who has a smartphone. And if that many people get this message, the question is, how do you actually verify that it's authentic? You as an individual may not have any knowledge of what the situation is, right. So number one is that you've already gotten it from a trusted source. I'm already acknowledging it as a trusted message that's come in from a trusted source. But it's pretty major, right? I do want to go back to that in a minute. How does a person know it's a trusted source? We'll get there in a minute. But you've received it, and it's a very high impact message. High impact message, right? So the first thing that I would do is, number one, look at each other and say, did you get that message too? Because we were over at Impact Hub in Kakaako, we were actually doing the egg hackathon, the eggathon. And others got the message. So that validated that others were actually getting this message. So it wasn't like I only got it on my phone.
The other thing is that I would start to look at whether other news sources, other trusted sources, were also carrying this message. So I would go to the TV stations' websites, or I would look at Twitter. I would look at the folks over at the DEM with the city and county. And I didn't see anything. So then you start to think, well, if this is such a major, high-impact message, shouldn't other sources of messages also reflect that? So assessment is the first thing you would do: assessing it by looking at multiple sets of information you know are also from trusted sources. Now, that's a relatively informed perspective, because it means you already have knowledge of where these other trusted sites are. I would suggest that the average person probably doesn't have access to that kind of information. The average person is simply responding to what comes in on this. I came from a little different perspective. I said, the message, the structure, doesn't look right to me. You know, Pacific Command is not where you're going to get a missile alert from. You're going to get it from NORAD. It's going to come from FEMA. The source didn't look right to me, so I thought this wasn't right. OK, but I'll ask you, in terms of Pacific Command, Pacific Command is the folks over at Pearl Harbor, right? Why would you not consider that as being a trusted source? That would be a secondary source. The primary source to us comes from NORAD or through FEMA, and then you get other validation. So I would have a different perspective. But anyway, we had no choice but to respond in kind and shelter in place and such. But anyway, let's turn to John for a minute. John, what we're speaking of here is of course trusted networks that work all the way from the top security level down to the handheld device I've got in my hand.
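The cross-checking Bert describes, treating an alert as confirmed only when several independent trusted sources carry it, could be sketched roughly as follows. This is a minimal illustration only: the feed names, the alert identifier, and the quorum of two are all invented for the example, not part of any real alerting API.

```python
# Hypothetical sketch: cross-check an alert against multiple
# independent trusted feeds before treating it as confirmed.
TRUSTED_SOURCES = ["hi_ema", "city_dem", "local_tv", "noaa_wire"]

def confirmed_elsewhere(alert_id, fetch, quorum=2):
    """Count how many independent trusted feeds carry the same
    alert; require at least `quorum` matches before acting."""
    matches = sum(1 for s in TRUSTED_SOURCES if alert_id in fetch(s))
    return matches >= quorum

# Stubbed feeds standing in for real network lookups: only the
# originating agency carries the alert, so it fails the quorum.
feeds = {
    "hi_ema": {"BMD-2018-01-13"},   # the feed that sent the alert
    "city_dem": set(),              # city feed: silent
    "local_tv": set(),              # TV station: silent
    "noaa_wire": set(),             # weather wire: silent
}
print(confirmed_elsewhere("BMD-2018-01-13", lambda s: feeds[s]))  # False
```

The design choice mirrors the conversation: one source, even a trusted one, is not enough; action waits for corroboration.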
Your view, John: how are we going to get to the point where something coming out of here is, as Bert's saying, totally trustworthy, and we can do an assessment as individuals? What kind of personal accountability or personal preparation is required to get to the point where we can actually make those judgments? Well, I think Bert's right about what the natural human reaction is. And he's right on each step, too. Whether he's an expert or not, that's what people do. They look at other sources. They see that it happened to other people. And then they try to say, how else would I know? Because it's a natural thing. And by the way, when you study security, you study how humans react. This is very, very basic. But this one incident, about a potential nuclear strike, is very high impact, obviously; there's also a lot more going on every day in the news sources that isn't right. And the issue of being vigilant, and also suspicious and skeptical, is a very big issue and very important right now. Because technologically, a lot of people are working on new technologies that will provide completely trusted sources, quantum machines and networks, but they're not going to be here in the near term. So people need to be more vigilant. And I think your point about the analysis is correct. And I'm expanding it a little bit beyond just a potential nuclear attack, to information generally. Because remember, they've already admitted, not just Russia but many other countries, that they constantly try to manipulate other people's elections. And we're the most vulnerable because we're the most wired network. So by getting into Twitter and some of the others and doing certain things, even things that aren't illegal, you can bend millions and millions of minds. And it happens every day, unfortunately. So I'm a little bit more of a proponent of anonymous behavior.
I certainly support lawful intercepts for police and fire and military. There has to be a way for them to operate under proper purview, maybe a warrant, like they do with warranted searches, things like that. But at the same time, we need to be able to be anonymous. And there's a whole set of tools. You can download them. They're all free. It takes a little bit of time to catch up and do it, but you can be anonymous where the systems can't trace you. And then they can't feed you information based upon your particular person. And this is what the big platforms, Facebook and Twitter and Google, all do. They do it for marketing and advertising, but it has an unfortunate unintended result in that it polarizes the population, and it can also really hide certain key pieces of information. So I think you guys already know: for information, whether it's a newspaper or the internet or whatever, if you really want to validate it, you read the far right, the far left, you read all the different dimensions, you read national and international, and you also understand the bias of each source and consider that in your analysis. So it's the same thing Bert was saying, but at a little broader scope, and it's critical. And now they are finally teaching it in schools, seventh and eighth grade. We're seeing it in Wisconsin middle schools, teaching people to be more vigilant. And I'll just leave on one note. The number one way cyber attacks happen around the world is spear phishing. And that happens because somebody sees a message, and a lot of times they know they shouldn't click on it, but their habit is such that they click, and they open up their machine, the entire corporate network, the databases, all to large exposure, because they clicked on something. So it's about being vigilant. For a period of time, we have to do this.
The networks are going to be so strong, I believe, that we won't have to do this, but right now we have to do this, everyone. See what I mean? And that's really interesting. As you're saying that, what goes through my mind is that this one single device is the display portal for everything we're speaking of, all the way from things that are completely unreliable, like the tabloids, I suppose; you can probably get them on here. My favorite one on here is the Onion News Network, which is a parody, but some people take it as if it's real; there have been a few cases like that. And then you've got everything all the way up to nuclear attack alerts. We have things of 15-minute criticality, and things of daily criticality, like your stock market report, or seasonal criticality, like how the Patriots are doing. And yet that same device is the output portal for all of that information, and as John said, you can get bogus emails and things. So I think the average person has an expectation that whatever comes across on here must be right. Well, but you have to differentiate, right, if you're the user of this device. There's a level of sophistication that we have to teach then, right? So that people are... Right, so if you're getting something on your device, you should be able to differentiate between something that's coming in from a fairly high-level trusted source, like an emergency alert, versus something that might come in on the Onion. If you can't differentiate between those two information sources, then you've got bigger problems than using your smartphone. But we probably have to teach that somehow, or generate awareness. People hack the sources. People hack the information sources. So I could put out a message on your emergency network that says anything I want. Well, that is definitely a concern, right?
Because if somebody could hack the system and send out a message that goes to a million people residing in, let's say, the state of Hawaii, then that's a breach of security, right? And that's a bigger problem. Well, that's where the malicious inputs come in. There's a thing called swatting, where on the internet games these guys are playing, you know, a shoot-em-up game, and one of them is very angry with another one. So he literally called in that there was a problem, I guess a disaster, at a certain place, that there was a guy with guns doing all this stuff. So the cops showed up at this innocent house and killed a guy; they thought he was reaching for a gun as he was opening the door. And he was an innocent man. And it was done because somebody had called in a warning because they were mad about losing a video game. And that just happened three days ago. That's amazing. But that's using the system against itself. And it does happen. So, again, you have to be very, very careful about these sorts of things. So what we have is a situation where the user needs to come up to the sophistication level of the device, and the qualification level of the various sources, and be able to sort all this out. I'll bet that with our personal knowledge, like information from our families, we probably have an assessment built in, because we know what our family members are doing, what they're likely to say. So if we get something from our family, we can make that assessment. If it's something from a source we're not really familiar with, it adds this new dimension of how you actually go forward in making some kind of assessment. So maybe some form of a game that people can play, like Simon Says; that's what we did as kids, right? And you didn't do it unless Simon said to do it, right? Some form of that is going to have to apply here.
We're going to have to figure out some way to make the real stuff stand tall, and flag what isn't real or should be checked. And let's get back and talk about how we're going to do that from an educational, outreach perspective after our break. Sounds good. Be right back. Hey, I'm Pete McGinnis-Mark, and every Monday at one o'clock I present ThinkTech Hawaii's Research in Manoa, where we bring together researchers from across the campus to describe a whole series of scientifically interesting topics of interest both to Hawaii and around the world. So hopefully you can join me one o'clock Monday afternoon for ThinkTech Hawaii's Research in Manoa. Aloha, I'm Winston Welch, and every other Monday at 3 p.m. you can join me at Out and About, a show where we explore a variety of topics, organizations, events, and the people who fuel them in our city, state, country, and world. So please join us every other Monday at three, and we'll see you then. Aloha. We are back, folks. Ted Ralston here, Bert Lum in the studio in Honolulu, John Mullen standing by in San Francisco, on our weekly show, Where the Drone Leads. This is the noon hour on Thursday, and the world, as you know, sets its watch by tuning into this show. The show's on; it must be the noon hour on Thursday. So thank you, guys, for coming on. We're having a pretty interesting conversation here, I think, about how the average person in the public can be alerted, made aware, and made skillful in order to sort out information that is rapidly changing or really impactful and wasn't expected. And exactly how do you assess whether that's something you should react to, or whether you should tell your people, I heard it, but here's how I'm discounting it and here's what I'm going to do. So I was just thinking, during the break we were talking a little bit, but in the world of airplanes, which I come from, this problem has been dealt with for a long time, and you have a guarded switch. There's a lot of stuff in the cockpit.
Certain switches are guarded. There's a guard, a mechanical latch, that you have to lift before you can actually operate the switch. And the mental message is: something really significant is going to happen when you operate that switch, because you had to lift the guard to get to it. So I'd better think twice: do I really want to hit that switch? Or there's another story that goes around: if that switch in the cockpit doesn't work very well, it looks kind of corroded and maybe it hasn't moved for a long time, nobody else ever had to use it, I'll bet you don't have to use it either. Exactly, why would you want to? Well, what you're talking about now is user interface, right? And I think that's why there are a lot of good lessons that are going to be learned as a result of this, you know, false alert. If, for the person actually responsible for potentially pushing that button or selecting that hyperlink, the selections are presented in a fashion where the test is right next to, or very close to, the real message, the message announcing that there's a... Oh, the consequence. Yeah, a ballistic missile, then the likelihood of an error is higher. Now, I often tell people about the situation where you're about to send out an email. Let's say you have a mailing list, and your mailing list could be on MailChimp or whatever, right? A lot of the mailing list applications will tell you you're ready to send this out to the community of whoever you're sending to. So it could be 50 people, or it could be 5,000 people, right? Or it could be a million people. So when you're ready to send it out: are you ready to send it out to this million people? It tells you what the consequence is. Now, if you had a drill message, it probably wouldn't go out to a million people, right? So there should be a way to tell people how many people you're going to reach. And again, it's all about the user interface.
And you've got to look at the guy actually sitting there having to decide: what are some of the safeguards to help him make the right decision in the moment, where maybe he has to make a quick decision? Enabling him to make a better decision through intelligent design of the interface, so that ordering pizza and alerting a missile attack don't look the same, is an important aspect, and human factors, big time. Right. I mean, a simple example that I just brought up: if you're going to send out a test, probably the test is going to go to, I don't know, five people, half a dozen people, however many people that test goes out to, right? The test goes to people who are familiar with it and know what's going on, so they're going to react properly. And then you've got this mass alert job. Yeah, a million people. Are you ready to tell a million people whatever you're going to tell them? I'd probably say, no, I don't think so. So some kind of scaled alert awareness, a risk factor. Risk management goes on. So again, it's all in that user interface. How was it designed? So we'll presume that that will be discovered, and that will be a piece of the redesign that's going to go on here to fix this. But we still have the issue of this user interface on the receiving end, because that alert looks the same as the pizza order or the Patriots football score. They all have the same level of apparent credibility. They all have colors on them, and they're all something you can read quickly. So we still have the issue of alerting the public, of helping them make that assessment, that multi-sensor, multi-direction assessment. Well, I think that's where, again, the validation of the message that you got comes in. Whether it comes in as a little message window that pops up and says emergency message, that's one way, right? I mean, you've got that.
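The "guarded switch" idea on the sending side, forcing the operator to see the audience size and the mode before anything goes out, could be sketched like this. Everything here is invented for illustration (the function name, the mode words, the confirmation rule); it is not the actual alerting software being discussed.

```python
# Hypothetical sketch: a send path that shows the operator the
# consequence (mode and audience size) and requires the mode word
# to be typed back before releasing the message.
def confirm_send(mode, recipients, answer):
    """Return True only if the operator explicitly confirmed a
    message of this mode going to this many recipients."""
    print(f"{mode}: about to send to {len(recipients):,} recipients.")
    # In a real console tool, `answer` would come from input(...).
    return answer.strip().upper() == mode

test_list = ["duty-officer@example.org"] * 6   # a drill reaches a handful
live_list = ["subscriber"] * 1_000_000         # a live alert reaches ~1M

print(confirm_send("TEST", test_list, "TEST"))   # True
print(confirm_send("LIVE", live_list, "test"))   # False: wrong word typed
```

Typing back the mode word is the software analogue of lifting the guard: it makes "tell a million people" a deliberate act rather than a click next to the drill button.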
Now, if it gets substantiated by, let's say, sirens going off, or perhaps you start to see on your other feeds, whether it's Twitter or other messages that might come in from other agencies, things that substantiate the fact that you got an authentic message, I think that would help your decision. Then what we have to do, and we need to get John's ideas here as well, but I think that that is a level of sophistication and understanding that we have to generate, and somehow let the public define what those sources are that they're going to look at, and program their phones and such to bring that in. We had that experience in Waimanalo a couple of years ago when we were working with the CERT, the Community Emergency Response Team. And one of the questions raised was, what sources of information should we look at in Waimanalo? And as the conversation went on, it turned out there were about 15 different sources one could program and look at that would provide different forms of information that Waimanalo ought to know about. But there was no Waimanalo package. There was no one thing that you would look at. You had to be an alert observer and consumer of information and make that decision yourself. Weather, for example, or traffic: these things all have multiple means of delivery to us. So we're talking about a level of awareness and alertness that goes above what we've generated before with multimedia systems and social media, which we probably ought to just call media, right? Whether it's social or... Well, the key thing that you have to remember now is that social media is immediate, right? Everybody finds this immediacy of getting your messages very useful. So whether it's coming in from the emergency alert system or coming in on Twitter, you've got to, you know... that's why people were probably overreacting, because all this stuff was coming in immediately. Okay.
And so, you know, once again, John, your ideas here in terms of trusted software and such: the aerospace industry wouldn't exist if it weren't for trusted software that does exactly what you say when you throw a switch. Actually, there are some problems that sneak in when very complicated systems have to be reduced to code and the physics people and the code writers weren't at the same table, and so we have some problems of that type that get sorted out. But EAL, for example, Evaluation Assurance Level, is one means of determining credibility in software. Could that concept somehow flow over into this use of social media in some way, John? Sure. With the EAL levels, the higher you go, the more testing; you try to test out all combinations, all possible events. But I think we had three situations here. One of them is where the emergency people are trying to do a test. And I think Bert called that one correctly, right on the button, as far as how to best manage it. The second one is a real event that really happened, and that, thank God, is not where we are. The third was an accident, and the accident came out of the system. And I'm not trying to tell people how to do their job, and I'm sure the organization is very well organized and run. But in our systems, we would probably not allow a message like that out unless two separate people validated it. And it's not hard to do that. And that's not on your end, reading it; that's on the end of the guy sending the message out. What that would do is minimize accidents. It wouldn't do anything about the testing. It wouldn't do anything about the real events. But it would minimize accidents. And it doesn't take that much to do it. So that would be one of my recommendations on the other side, okay?
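The two-person validation John recommends can be sketched very simply: a live message is only releasable once two different authorized operators have approved it. This is a minimal illustration with invented operator names, not a description of any real agency's software.

```python
# Hypothetical sketch of a two-person release rule: one operator
# alone cannot release the message, accidentally or otherwise.
class TwoPersonRelease:
    def __init__(self, authorized):
        self.authorized = set(authorized)
        self.approvals = set()

    def approve(self, operator):
        # Only known operators count; approvals are a set, so the
        # same person approving twice still counts once.
        if operator in self.authorized:
            self.approvals.add(operator)

    def may_send(self):
        # Two distinct approvers required before release.
        return len(self.approvals) >= 2

gate = TwoPersonRelease({"op_a", "op_b", "op_c"})
gate.approve("op_a")
print(gate.may_send())   # False: only one approval so far
gate.approve("op_a")     # the same person again does not count twice
print(gate.may_send())   # False
gate.approve("op_b")
print(gate.may_send())   # True: two distinct operators approved
```

As John notes, this does nothing about tests or real events; it specifically targets the accident case, where one person's slip would otherwise go straight out to a million phones.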
But as far as trying to raise the level of vigilance of everyone, we've been trying to do this in corporations and the military for years and years and years, with some success, not as much as we'd like to see. We've also studied, with teams of people, human cognizance, how you come to understand certain things a certain way. We're trying to solve that problem of spear phishing. We have some hardware solutions and software solutions, but it comes down to making sure everyone's vigilant. Because you're sitting at your desk all day long, you're working very, very fast with all this mail coming in, and you just click on something out of habit. And now you've opened up the whole corporation without even knowing it, in a tenth of a second. That's how those things happen. So trying to raise the level of cognitive ability, or just basically vigilance, is very, very difficult. Educating any large population is very, very difficult. And we have to keep trying, but there also are some things that can be done in software. There's a group called Iconics in Silicon Valley, and they do a very good job with parts of this. The whole trade-off is, if you really start to enforce this in email, there will be some times when you block access for a valid person. And that might make some people mad. There will be times when it's a false positive and you block it anyway, because you think it's a malicious actor, right? But that's far better than every now and then letting a bad guy through. You have to stand up and go solve the problem, but that's better than having the whole system vulnerable. So I think there are things that can happen in software on that device. One problem is that every day you've got a different piece of software from a different group, and making those all work together is not the easiest thing in the world, right?
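The trade-off John describes, preferring occasional false positives over letting an attack through, is really just a threshold choice. A toy sketch, with made-up risk scores and a made-up threshold, might look like this; no real filtering product works this simply.

```python
# Hypothetical sketch: a mail filter tuned to err toward blocking.
# A low threshold blocks some legitimate mail (false positives),
# but it rarely lets a spear-phishing message through.
def filter_mail(risk_score, block_threshold=0.3):
    """Block anything whose risk score exceeds the threshold."""
    return "blocked" if risk_score > block_threshold else "delivered"

print(filter_mail(0.9))   # blocked: clearly malicious
print(filter_mail(0.4))   # blocked: suspicious, possibly a false positive
print(filter_mail(0.1))   # delivered
```

Raising the threshold would make fewer legitimate senders mad, at the cost of more malicious mail getting through, which is exactly the balance the conversation argues should tilt toward blocking.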
But if the government wants to do it, like the Ad Council manages getting messages out through different media, if they really want to do it, it's something I think is extremely valuable: training people to be more vigilant. But it's not easy. And if you can break that down to elementary and scholastic levels and start injecting it into the classroom environment, that's something we're all going to have to face sooner or later, so that the kids can learn, like what we're doing on drones, where the kids learn about drones and advise their parents. The same thing here: Mom, don't click on that. Don't respond that way. Don't do it when you're driving. Yeah, exactly. Well, to John's point too, there are very sophisticated filtering services that will actually help reduce the amount of spam that you're getting. I know there was a day when you were getting all kinds of spam in your inbox, and services like Google will filter that spam. Now, the level of sophistication to determine whether something is a spear-phishing attempt or whatever, that's another level of sophistication. But you know that people are working on filtering and on keeping your inbox as clean as possible, right? And there are probably also people looking to defeat that. It's just going to be a never-ending battle between the guys who want to make the internet useful and those who want to abuse it somehow. Well, this has certainly been an exciting week for us. And John, how do we get hold of that list you've got of anonymous operations to keep your identity clean on the web? Well, there are a couple of places you can go to get this information. One of them is the Electronic Frontier Foundation, the EFF. Of course, you know all about that, right? I don't agree with everything they have on there, but they do tell you how to be anonymous. And then another one is BitTorrent World.
You go through there and they'll tell you all the tools. Another one, the one I like the most, is probably DuckDuckGo, which is a search engine competing with Google. It never traces you: no tracks, nothing, anywhere. There's also Privacy Badger, which works across all the websites, because a lot of times a website will come and try to get your credentials out of your own machine, and it won't act unless it thinks it got them. So this thing feeds them fake credentials all the time, and they're all happy, going down the road thinking they got all your stuff, and they don't have anything. So there are all these little tools, and if you put them all together, they work even if you're going internationally to certain countries that do very invasive investigation; China and Turkey are the two big ones right now. If you have the right tools, you can go through there and be completely anonymous. But it takes time to put them together and investigate. There are full tutorials, full primers on how to do it, at each place I just mentioned. One thing we try to do on the show is walk away with something we're going to go do as a result of it. Let me appoint Bert as the receiver of the information John just provided, since you were nodding so vigorously during the conversation. And let's figure out a way to make that available through DBEDT. We both have an association with DBEDT in some way. Let's stand it up; the legislature just started up yesterday. Let's take that on. Figure out some way to put out a couple of pages to read that start you down the path of becoming safe, anonymous, and able to assess from different perspectives. Let's do that. Sounds good. Well, we could also just do it through our own personal channels that we have access to. But I mean, to help other people, right.
Okay, well, John Mullen, thanks very much for joining us from across the shining seas in California. We'll get you on again sometime, and get you on when Darrell Wong gets back. And Bert Lum, thanks for coming down from around the corner. Yeah, nice meeting you, John. We'll see you guys again. Thank you. Thank you all, and we'll see you all next week.