Coming up on DTNS: is the Poly hacker really a white hat? Shannon Morse has a theory. Plus, Samsung's using AI to design chips, and the secret WhatsApp mango supply chain. DTNS starts now. This is the Daily Tech News Show for Friday, August the 13th, 2021. In Los Angeles, I'm Tom Merritt. In Colorado Studio 2.0, I'm Shannon Morse. Welcome to Top Tech Stories. In Cleveland, I'm Len Peralta. And I'm Roger Chang, the show's producer. We were just talking about baseball mascots on Good Day Internet and admiring Shannon's new studio. If you would like to hear that conversation, become a member and get Good Day Internet at patreon.com/dtns, where you can join our top patrons like Norm Physikus, Chris Allen and Mark Gibson. Let's start with a few tech things you should know. Reuters sources say the European Commission will present legislation next month to establish a common charger for mobile phones. Apple, Samsung, Huawei and Nokia previously signed a voluntary memorandum of understanding in 2011 to harmonize chargers on new phones coming to market. An EU study found that in 2018, about half of the chargers sold with phones used USB micro-B connectors, 29% had USB-C, and 21% had a Lightning connector, AKA they were from Apple. The Microsoft Windows print spooler is going to end up being the most secure element in all of Windows when all of this is said and done. Microsoft has released an out-of-band security advisory acknowledging yet another print spooler vulnerability. This is CVE-2021-36958, which appears to be a local privilege escalation vulnerability that needs user interaction to be exploited. Microsoft has not released a fix, but it appears that if you restrict the device to only install printers from authorized servers in a group policy, then as long as the server isn't compromised, you'll still be able to use the print spooler. How many more print spooler vulnerabilities could there be, kids?
In an interview, Xbox head Phil Spencer said that Microsoft was open to discussions to bring Xbox Game Pass to other console platforms, although currently there's no interest from other parties to do so. Spencer said Microsoft will continue to focus on open platforms like the web, PC and mobile, so no Nintendo for now. Facebook is adding voice and video calls to its, quote, "secret conversation" option, which lets you use end-to-end encryption for voice and video calls in Messenger. Messenger text got end-to-end encryption in 2016 with the secret conversation option. Facebook is also adding some more time options for setting text messages to disappear, so you can now choose from between 5 seconds and 24 hours. And Facebook is conducting some beta tests as well. Some users will get an option for end-to-end encrypted group chats, and to convert an existing chat thread to be end-to-end encrypted as well. And some Instagram users will also get an option for end-to-end encryption of Instagram direct messages. I am looking forward to that part. And Disney Plus subscribers increased more than 100% year over year in Q3 to 116 million paid subscribers, beating analyst estimates. Disney's on a roll. ESPN Plus subscribers were up 75% to 14.9 million, and total Hulu subscribers grew 21% to 42.8 million. All right, let's talk about Samsung again, but not foldable phones this time. Samsung confirmed it is using software called DSO.ai from the company Synopsys to design its Exynos chips. They told Wired, yep, we're doing that, although it's not clear if these have gone into production, or if they are going to go into production, or what products they might show up in. Synopsys' software is trained by reinforcement learning on years of chip designs. So basically, reinforcement learning means that when it creates a design that works or improves on a chip, it learns from that and makes more like that.
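That learn-from-what-works loop can be sketched in miniature. To be clear, this is a toy illustration, not Synopsys' actual DSO.ai, which is proprietary and far more sophisticated. The component names and connections here are made up; the sketch just places hypothetical components on a grid, tries random moves, and keeps the ones that shorten the total wiring, "learning" from layouts that improve.

```python
import random

def wire_length(placement, nets):
    """Total Manhattan distance between each pair of connected components."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in nets)

def optimize_placement(components, nets, grid=8, steps=2000, seed=0):
    rng = random.Random(seed)
    all_cells = [(x, y) for x in range(grid) for y in range(grid)]
    # Start from a random layout: each component gets its own grid cell.
    placement = dict(zip(components, rng.sample(all_cells, len(components))))
    best = wire_length(placement, nets)
    for _ in range(steps):
        comp = rng.choice(components)
        old, new = placement[comp], rng.choice(all_cells)
        if new in placement.values():
            continue  # target cell is occupied; skip this move
        placement[comp] = new
        score = wire_length(placement, nets)
        if score <= best:
            best = score              # keep moves that improve the layout
        else:
            placement[comp] = old     # revert moves that make it worse
    return placement, best

# Hypothetical components and connections, purely for illustration.
components = ["cpu", "cache", "dram_ctl", "io", "gpu"]
nets = [("cpu", "cache"), ("cpu", "dram_ctl"), ("cpu", "gpu"), ("gpu", "io")]
layout, cost = optimize_placement(components, nets)
print(cost)  # total wire length after optimization
```

Real tools search a vastly larger design space with learned policies rather than random moves, but the core idea is the same: propose layouts, score them, and bias future proposals toward what scored well.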
It can automatically draw up the basics of a design, including the placement of components and how to wire them together. Synopsys works with dozens of companies, not just Samsung. And Synopsys says its clients generally see performance gains of up to 15% when they use the AI designs instead of doing it by hand. Samsung and Synopsys aren't the only ones doing this. Google has published a paper about using algorithms to arrange components in its Tensor chips. NVIDIA and IBM are exploring software-aided design. And Synopsys has a competitor called Cadence that is developing tools to help with mapping out chip blueprints. Software like this only helps set out the designs; you still need humans. Human experts are needed for some of the more complex tasks to finish the design. But still, even just those basic designs can take weeks to months for humans to make, and the software can get the basics down in a few days, letting designers experiment with more ideas than they would have time for otherwise. Wired quotes MIT professor Song Han, who notes that these advancements raise the possibility of having algorithms develop not only the hardware, but software optimized to run on it at the same time. Professor Han said AI-powered co-design of software and hardware is a rapidly growing direction. I am so interested in this because I've been following the Pixel 6 line, and we've heard about those Tensor chips. However, Google has not publicly stated whether those chips are going to be designed with the same kind of AI technology that it has described using to develop chips in its research. So I'm really curious to see in the next couple of years if we're going to see a lot more of these AI platforms built into consumer products like cell phones. Yeah, the first thing that this made me think was, if you've got reinforcement learning, in this case, developing chips.
That means they're developing chips faster, which means that you can, like we said, try out more ideas, which means all of this talk about, well, Moore's Law, we're probably at the end of Moore's Law. Well, maybe this is the thing that extends it, if suddenly you can find more optimized designs within the space. Because the thought on Moore's Law recently has been, we're reaching the nanometer threshold; you just can't get much smaller. Yeah. But what if that's not the end, if we can optimize better because of this? I also wonder if, thanks to this kind of technology, we're going to see faster implementations of new technology in chips that could help with things like these chip shortages that we have been experiencing. I know a lot of that has to do with materials, but maybe with AI developing these processes faster to get these components out, I wonder if this would help in any way, so that we wouldn't experience those kinds of shortages in the future. That's really interesting, because I know one of the problems is making older components that people don't even bother developing anymore. If suddenly you could say, well, let's spend a few days seeing if the reinforcement learning algorithm can come up with a faster way to build that display chip for the car. Exactly. Whereas before, they're like, well, we're not going to spend months sinking our time into that. That's an interesting thought, because if you could use the same capacity to build twice the chips, suddenly that helps ease the shortage. Absolutely. Well, if they haven't done it already, then they can take my idea and go with it and make millions. Are you open sourcing that? I'll open source that idea. Generous of you. No, there's a lot of interesting things here. And it is a trend to be aware of that there is some of what they call AI generally, which is a huge basket.
But in this case, reinforcement learning and machine learning systems for designing chips, which could speed up chip design. That's important. Well, moving on, we have some information from Apple. They have definitely been in the news this week. Apple's SVP of software, Craig Federighi, talked to the Wall Street Journal about the company's recent announcement of its plans to scan photos uploaded to iCloud for CSAM, and a separate program to scan messages on child accounts for sexually suggestive images. Now Federighi said, quote, we who consider ourselves absolutely leading on privacy see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world. Interesting. Federighi also provided a few more details on the plans. He said the threshold of matches needed on uploaded photos before photos are unlocked for that manual review is something on the order of 30. Interesting. He also emphasized that the database of hashes for known CSAM will be constructed by multiple organizations and verified by an independent auditor. Reuters sources say an Apple Slack channel had more than 800 messages about the new policies, with some employees expressing concern that the feature could be exploited for censorship. Now, those employees were mainly outside of lead security and privacy roles, and some defended the company's policy and said that it was a reasonable response. And Federighi admitted Apple maybe should not have announced both the photos and the messages plans at the same time. He said, and I quote: by releasing them at the same time, people technically connected them and got very scared. What's happening with my messages? The answer is nothing is happening with your messages. And that's true for almost everyone, right? Unless your account identifies you as 13 years old or younger, nothing is happening with your messages. Shannon, what has been your take?
I've talked a lot about my evolving take on this, but how have you reacted to these plans? I'm glad you asked, Tom. I'm glad you asked. I did my own story about this on Threatwire, and I'm very much a privacy advocate. So this is concerning to me, the idea that these systems could be used in the future for some other kind of response, not just for CSAM. Now, I also mentioned that I have nieces myself, and I want to protect them in any way that I possibly can. That's incredibly important to me, but my security when I'm using an iPhone is very important to me as well, and I don't want Apple to make some kind of mistake if I took a picture of my niece at the pool, for example. I gave the example, with the messages, of a whole bunch of parents having a pool party for their kid during a birthday or something like that. How do we know that their hash algorithm is going to correctly verify that the photos that parents are sharing amongst themselves, or the kids are sharing even, are totally normal and, you know, not something to be worried about? How are they going to make sure that those are not matched up to some kind of hash? Now, they did say that over the course of a year, there was a very, very teeny tiny chance that they would get one incorrect. And if that was the case, you could reinstate your account, because they would lock it down. But even with that said, there's still that potential, and there is the potential of something happening in the future, even though Apple said that they would not extend this technology to anything that governments asked for. I also disagree with what Federighi said when he said that people were connecting the messages feature to the photo scanning. I don't think that that's entirely true. When I read it, and when I read the PDFs and everything about this new technology and the plans that they have, I did not read it as such in any way possible.
I read these as two completely different pieces of information that they're implementing to fight the same problem; they're still two completely separate technologies. Yeah, I think he's right that they shouldn't have announced both of these at the same time, because I do see other people confusing them, and it's forcing us to have two separate conversations at the same time. So, you know, I don't know what good it does for him to say we shouldn't have done it that way. But yeah, sure, maybe it would have been better if you did these one at a time so we could focus on each. Because at first I was more concerned about the messages, because of the leaking of information from what is otherwise end-to-end encrypted. Over the course of reading a lot of people's thinking about this, including security experts, it seems like security experts in general aren't as concerned about the messages thing, because it only notifies, and the images are only saved when the end user opens them, which means that they would have been available for parental review anyway, no matter what they were. It's just that there's a notification saying, hey, make sure to review that one. So the more I think about it, the more I'm like, I don't love it, but maybe the other one is the bigger problem. Which is, I'm not so concerned about accidental hashes. I think they've done a very good job of saying these hashes come from known images. The only way they could be accidentally matched is because they're trying to do partial images; they're trying to be able to say, even if it's been cropped, it still matches. That hasn't been independently tested. That's where it could go wrong. But we now know on the order of 30 of these have to match, and it does seem unlikely, if you have that many matches, that you aren't up to something. And even then, they have human manual review.
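The threshold mechanic being discussed here can be sketched with a toy example. To be very clear, this is NOT Apple's actual system: Apple describes a perceptual NeuralHash plus cryptographic private set intersection and threshold secret sharing, so that nothing is even readable below the threshold. This sketch only illustrates the basic idea that no single match triggers anything; only an accumulation of roughly 30 flags an account for human review. The hash database contents here are stand-ins.

```python
import hashlib

# Stand-in for the database of hashes of known images (NOT real data).
KNOWN_HASHES = {hashlib.sha256(f"known-image-{i}".encode()).hexdigest()
                for i in range(100)}
THRESHOLD = 30  # Federighi: "something on the order of 30"

def count_matches(uploaded_images):
    """Return how many uploaded images hash-match the known database."""
    return sum(hashlib.sha256(img).hexdigest() in KNOWN_HASHES
               for img in uploaded_images)

def needs_human_review(uploaded_images, threshold=THRESHOLD):
    # No single match triggers anything; only crossing the threshold does.
    return count_matches(uploaded_images) >= threshold

# One or two matches stay below the threshold...
few = [f"known-image-{i}".encode() for i in range(2)]
print(needs_human_review(few))   # prints False
# ...but 30 matches would cross it and unlock manual review.
many = [f"known-image-{i}".encode() for i in range(30)]
print(needs_human_review(many))  # prints True
```

Note that this toy uses exact SHA-256 matching, which a one-pixel crop defeats; the whole point of a perceptual hash like NeuralHash is to survive crops and recompression, and that is exactly the part that, as mentioned, hasn't been independently tested.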
So I'm fairly comfortable with the idea that it's going to be really hard for an accidental match on the photos side to happen. Just one last thing. I think the problem, as security people like Alex Stamos have pointed out, is that Apple just ran in and ended the balancing argument. Instead of saying, hey, we'd like to do this, let's nail it down, let's all agree where the balance should be, they said: the balance is here. And I think a lot of security researchers are saying, well, I'm okay with there being a balance between combating CSAM and privacy, but let's talk about it first. You don't get to just wade in and decide. Yes, 100%, I do agree. I'm glad that he did tell us how many photos have to match before they're going to mention something, or shut down your account and alert authorities. Something on the order of 30 makes a lot more sense to me. I think that as consumers, we do have to value our privacy and our security when it comes to our own devices. And I think there's more privacy held there than when you're putting things up in a cloud, and many clouds are already doing this kind of scanning, as we already know. So there is that point to make. But yeah, I am glad that they did tell us how many before they're going to, like, shut down my account because they think my niece is something bad. Yeah, yeah. And I feel pretty comfortable about that. I really agree with Stamos. Apple should hold off implementing this, engage with the security community, and come to an understanding of where that balance should be, so that everybody can get behind it. Because I do think what Apple's trying to do is the right thing. They just may not be going about it the right way. They should do an open forum at a hacker convention, which is so not Apple, but they should do that.
We would love to have those talks with them and have those transparent conversations, because I really think that they could grow this. And I think we all agree that it needs to be done, but it needs to be implemented properly. And it could become an industry standard, where you could pressure others that are doing cloud scanning to not do it that way necessarily anymore. Yeah. All right. Car and Driver did some tests on the level two driver assistance systems from multiple carmakers. Level two means the car can automate some driving tasks, like steering, acceleration and braking, but human attention is still always required. This is often called advanced driver assistance systems; Cadillac Super Cruise and Tesla Autopilot are a couple of examples of level two. Tesla gets all the breathless attention, with dramatic headlines blaring that the Tesla self-driving system can be fooled into being used even when the driver isn't in the seat. But Car and Driver's tests found not only is that true, it was true of all 17 level two systems that it tested, not just Tesla. None of the cars use occupant detection in the driver's seat, even though it's in the passenger seat for airbag purposes. You put a heavy bag of groceries in your passenger seat, it'll know something's sitting there, but not in the driver's seat. So you just need to trick the detection system if you want to use it without being in the driver's seat. Tesla's system relies on checking for torque on the steering wheel. That's just a little bit of resistance that comes when your hand is on the wheel; it doesn't have to be a lot. That's defeated easily by putting a small weight on the wheel, like an ankle weight. That fooled all but three of the cars in the Car and Driver test. The BMW and Mercedes systems use touch sensors, not torque. So to fool those, a person would have to sit in another seat and then reach over and touch the wheel. But it could be done.
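The difference between those two hands-on-wheel checks, torque sensing versus touch sensing, can be contrasted in a toy sketch. The thresholds and readings here are invented for illustration; no carmaker's real implementation is this simple.

```python
TORQUE_THRESHOLD_NM = 0.3   # hypothetical minimum wheel resistance (N*m)
TOUCH_THRESHOLD = 1.5       # hypothetical capacitive reading for skin contact

def torque_check(torque_nm: float) -> bool:
    """Passes whenever anything resists the wheel: a hand or an ankle weight."""
    return abs(torque_nm) >= TORQUE_THRESHOLD_NM

def touch_check(capacitance: float) -> bool:
    """Passes only when the sensor sees the capacitance shift of skin contact."""
    return capacitance >= TOUCH_THRESHOLD

# A hand on the wheel produces light torque plus a capacitance shift.
print(torque_check(0.5), touch_check(2.0))   # prints True True
# An ankle weight produces torque but no skin-contact reading.
print(torque_check(0.5), touch_check(1.0))   # prints True False
```

That asymmetry is why the weight trick fooled the torque-based cars but beating the touch-based ones required actually reaching over and touching the wheel.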
And then GM Super Cruise actually allows hands-free driving, and works only on approved stretches of highway. So it won't work if you're on a test track, for instance; they had to shut down a highway to go test it. To make sure you're paying attention, GM Super Cruise uses a camera to see if you're paying attention, if your face is there and looking forward. Except Car and Driver was able to fool that system with a pair of novelty glasses with eyeballs printed on them. So terrible. That said, Shannon, in all of these cases, Car and Driver had to do something intentional. They either had to attach a weight, or they had to not sit in the seat but still reach out, or they had to wear novelty glasses. None of these things are things you'd accidentally do, right? I would hope not. I would hope that you're responsibly driving your vehicle on a public road. If you are not, then there's some natural selection going on here, and I would feel very, very bad for you. I've even seen similar things in my Hyundai. We have a Tucson, and if you don't keep your hands on the wheel, even though it has the lane detection and it'll keep you in the lane, it'll yell at you. But it's only looking for a touch. You don't have to move it; there's no torque involved. You just have to touch the wheel. So who's to say that I couldn't sit in the passenger seat and just touch the wheel and use lane detection and stick it in cruise control? I would never do that, because that would be really scary, but there's a thing there. I mean, even with old-fashioned cruise control, who's to say you couldn't, you know, just put a brick on the accelerator and set cruise and then let it take off? Part of me says it's good to know that these are the weaknesses in the systems, but Car and Driver and Jalopnik both were very adamant that these are unsafe systems.
I think they're not foolproof systems, but I don't know if they shouldn't be allowed. Although the argument goes, when they can drive 80% of the time, it lures you into a false sense of security. So I think that's a fair thing to make sure people are aware of. I think the advertising that companies have been using, and maybe not necessarily the companies, but fans of those companies, saying things like automated driving, maybe has given people the wrong perception of how these vehicles should be used. And I don't think that really helps people be as responsible on the road as they should be. Yeah. Well, folks, let us know what you think: feedback at dailytechnewsshow.com. If you need a little deeper explanation on big tech topics like this, we've got a show for you. We just had an episode about DNS in the feed. We've got an episode about HDMI 2.1 in the feed. It's Know A Little More: 10 to 15 minutes to explain a topic in depth. You can get it either in the Patreon feed or by becoming a subscriber at knowalittlemore.com. The Poly Network confirmed it offered a $500,000 bug bounty to Mr. White Hat, the party that recently used a security exploit in its smart contracts to take $611 million in crypto assets from its network before subsequently returning the funds. In statements exchanged on the blockchain, Poly said, we believe your action is white hat behavior; we plan to offer you a $500,000 bug bounty after you complete the refund fully. Also, we assure you that you will not be held accountable for this incident. In a statement, Poly said, we have also come to a more complete understanding regarding how the situation unfolded, as well as Mr. White Hat's original intention. Okay, I love that they call him Mr. White Hat. I know. So funny. That's too good. We talked about this in depth earlier this week, about how it was done. And we talked a little bit at that point; they had given about half the money back. Looks like they've given almost all of it back.
It sounds like Poly Network has been convinced that this was not malicious, that it was helpful, that they've been able to plug the vulnerability because of it, and no harm done. Which seems weird when you had $611 million go missing for a moment. But hey, if it's coming back, I guess no harm, no foul in the end. The stance of Mr. White Hat is what is so fascinating to me, because it's unconfirmable. You could write anything on the blockchain that you want as a comment. But it seems to be that whoever did this is trying to convince us that, no, no, no, I found the vulnerability and I wanted to make sure that these coins didn't go missing, so I protected them and now I'm giving them back. Shannon, do you have an inkling what might be really going on here? So there has been a rumor mill, I will call it, on Twitter. When this first started, a lot of folks in the information security and hacker community made the assumption that maybe the Poly Network is behind this hack, and they are playing it off as being hacked by a third party so that they could write off this vulnerability and be able to keep the money, even though it went, quote unquote, missing for a short period of time. It's definitely a conspiracy theory, but there's a kind of back-channel logic that makes sense here. When you look at what's going on and how strange this entire thing is, the name Mr. White Hat does not make sense with how this is playing out. Because this is much more of a gray hat scenario: somebody who finds a vulnerability, attacks that vulnerability without a contract signed with the company, without any kind of responsible disclosure, without a 60-day withholding of public statements or anything like that in order to give the company time to patch that vulnerability. And then they just steal the money and make everybody panic. I mean, there were a ton of people who were victimized by this.
And that's not something that a white hat is supposed to do when it comes to a hack. So this is definitely not something that you would normally see from somebody who is an ethical hacker. The most recent message since we started recording this show was earlier today, and said: I am considering taking the limited bounty as one source of the compensation fund for unexpected victims. So that even addresses that, but then says: but it's hard to prove that your loss is my fault, especially when you are already gambling beyond your capability. So it's like, you know, give with one hand, take with the other sort of thing. Oh, it's so weird. Yeah. There's an Avengers reference, a Batman reference. So, you know, it rides the DC-Marvel line well in these comments here. I don't know, I'm absolutely fascinated with the communication around this. Obviously, you can't say that Poly did this; that's of course a rumor. But if so, the hacker they got to play Mr. White Hat is an excellent writer. This does feel like things people write in this kind of situation, but it's not obviously faked. I don't know. I guess in the end, Poly gets their money back, and probably, like you say, they get to write off any damage that they can attribute to this, including paying $500,000 as a bug bounty. The $500,000 bug bounty just makes it even more puzzling to me. It's so weird. Yeah, it's so strange. I'm going to keep following this, because I bet there's more to come. Yeah, let us know. Cryptocurrency company Circle operates a stablecoin called USDC. If you're not aware, stablecoin means the coin's value is pegged to a real currency, so it doesn't rise and fall like Bitcoin. USDC is pegged to the dollar. Get it? USDC, US Dollar Coin. One USDC is always worth $1. Circle has announced it intends to become certified as a fully regulated bank by the US Federal Reserve, the Office of the Comptroller of the Currency and the US FDIC.
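That 1:1 peg can be sketched as a simple invariant. This toy model is not Circle's actual implementation (USDC is an ERC-20 token with externally attested reserves); it just shows the mechanism that makes a fully reserved stablecoin hold its value: mint one coin per dollar deposited, burn one coin per dollar redeemed, so reserves always equal the coins in circulation.

```python
class ToyStablecoin:
    """Toy fully-reserved stablecoin issuer, for illustration only."""

    def __init__(self):
        self.reserves_usd = 0   # dollars on deposit backing the coin
        self.supply = 0         # coins in circulation

    def mint(self, dollars: int) -> int:
        """Customer deposits dollars and receives the same number of coins."""
        self.reserves_usd += dollars
        self.supply += dollars
        return dollars

    def redeem(self, coins: int) -> int:
        """Customer returns coins and receives the same number of dollars."""
        if coins > self.supply:
            raise ValueError("cannot redeem more coins than exist")
        self.supply -= coins
        self.reserves_usd -= coins
        return coins

    def fully_backed(self) -> bool:
        # The invariant that makes the peg credible: $1 held per coin issued.
        return self.reserves_usd == self.supply

usdc_like = ToyStablecoin()
usdc_like.mint(1_000_000)
usdc_like.redeem(250_000)
print(usdc_like.supply, usdc_like.fully_backed())  # prints 750000 True
```

Circle's proposal amounts to saying: make `reserves_usd` an actual deposit at the Federal Reserve rather than assets in commercial banks and money market instruments.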
If it does, Circle intends to place all its deposits, the backing for USDC, $1 for every stablecoin it's issued, directly with the US central bank. Each holder of USDC would then be backed by actual money on deposit at the Fed. Interest that the Fed pays on those reserves would go to Circle for its operations, but you'd actually have a deposit at the Fed. Now that's significant, because it would effectively make USDC a central bank digital currency, or CBDC. That's the thing we've been talking about on this show, that the Bahamas has, that China is testing out, that basically every government is developing. Buying a USDC stablecoin would, if this all panned out, effectively be buying a deposit at the US Fed. But the Fed may not approve it. A bank called TNB tried to do a similar thing and was denied. It was trying to operate as a bank, though, with the Fed holding all of its deposits, and it was going to split the interest with the holders. Circle wants to do a slightly different thing by having a cryptocurrency backed by the deposits. In TNB's case, the Fed believed it would negatively impact monetary policy, interest rates, and the success of other banks, so it denied it. But it's unclear if the difference between offering deposit accounts and operating a cryptocurrency would significantly change the Fed's assessment. Anyway, Circle says that it's willing to do whatever the agencies say it needs to do to make this happen, which could be a very free-market way to do a CBDC, a central bank digital currency. It's very fascinating. I'm really interested to see how this goes, because it could be totally on par with what other countries are doing right now. Well, real quickly, because we did say we would talk about it, Eater.com has a story up called Inside the Secretive, Semi-Illicit, High Stakes World of WhatsApp Mango Importing.
The subtitle says: customs restrictions, high transport costs and a short shelf life have made the world's greatest mangoes, grown in Pakistan, difficult to come by in the U.S., but WhatsApp is getting around it. The short version here is that the U.S. put a process on mangoes coming from Pakistan, implemented in 2010, that says, OK, you can import them, but you have to irradiate them in the U.S. to prevent them from becoming spoiled, and they have to be quarantined. So by the time they get harvested in Pakistan, somebody orders them to get to the U.S., they go to the facility, then they get on a truck, then they get ordered, and then they go to a market, they won't last very long, because mangoes have a short growing period and a short shelf life. So folks are using WhatsApp in order to short-circuit this. They have WhatsApp groups in Pakistan where they make sure that they can get the mangoes as quickly as possible after the harvest, and then get them over to the U.S., where other WhatsApp groups coordinate the shipping information so that you can buy them through WhatsApp, through the WhatsApp shopping carts and everything, show up to the cargo area at the airport, show them the WhatsApp code, and they're like, oh yeah, these eight boxes of mangoes are yours, here, take them. And it's a fascinating read, just about Pakistani mangoes, why they're better than the mangoes you get on the shelf in the U.S., because they're tastier. The whole thing is really, really interesting to me, and delicious. It's the Mango Dark Web, Tom. It's the Mango Dark Web. It really is. The Dark Mango Web. That's amazing. All right, let's check out the mailbag, Shannon. What do we got in there? We got an email from Tim Deputy, who writes: hi gang, given the recent discussion about how traceable Bitcoin transactions are, I thought you might find this article interesting. It talks about a service that's available on the dark web.
Oh, ironic. It will check how much your crypto wallet is connected to illegal activity. From the BBC article he sent us: "We're seeing criminals start to fight back against blockchain analytics, and this service is a first," explained Dr. Tom Robinson, chief scientist and founder at analysis provider Elliptic, who discovered the website. Oh, that's really interesting. So the traceability has been a thing where it's like, maybe we don't know who you are, but we know enough about you from where you move your coins to figure out who you are sometimes. And obviously, that's not going to last forever before somebody figures out how to obfuscate that to make it harder. There you go. Thank you, Tim Deputy. Keep those emails coming, folks: feedback at dailytechnewsshow.com. A special thanks to Rodrigo Smith Zapato, one of our top lifetime supporters for DTNS. Thank you, Rodrigo, for all the years of support. And thank you, Len Peralta, for drawing today's show. What have you drawn for us, Len? Well, you know, when I read the story about Mr. White Hat, the very first thing I thought of was the Mr. Men series by Roger Hargreaves, the little white books with the blob people. Yes, there's also a Little Miss series. There's also that. Well, this is my take. This is Mr. White Hat. And it's almost like a little funny story, you know, like a bedtime story: Mr. White Hat gives back the bitcoins. I know it wasn't bitcoins, it was more just crypto, but it's kind of fun. And it's also fun. Yeah, exactly. It's kind of fun to draw the little Bs in his eyes there. So if you're familiar with the Mr. Men series, this may be a little bit close to your heart. This is up right now on my Patreon, if you're a Patreon backer, at patreon.com/len, and also at my online store at lenperaltastore.com. So check it out. Mr. White Hat, we say oopsie oopsie, I hacked your Poly. Sorry.
All right, thank you so much for being with us today. If folks want to find out more of what you've got going on, where should they go? YouTube.com/ShannonMorse, where you can follow along with this entire series of me building out this new basement studio, and you can see the entire process, including wiring up a whole bunch of Cat 6 Ethernet. That was super fun. Good times. It's a good time, especially when it's done. Hey, we're live Monday through Friday, 4:30 PM Eastern, 2030 UTC. Find out more about that at dailytechnewsshow.com/live. If you're interested, we will be back Monday with Blair Bazderich as our guest. Talk to you then. This week's episodes of Daily Tech News Show were created by the following people: host, producer and writer Tom Merritt; host, producer and writer Sarah Lane; executive producer and booker Roger Chang; producer, writer and host Rich Stroffolino; video producer and Twitch producer Joe Coons; associate producer Anthony Lemos; Spanish-language host, writer and producer Dan Campos; news host, writer and producer Jen Cutter; science correspondent Dr. Nikki Ackermans; social media producer and moderator Zoe Detterty; our mods Beatmaster, W. Scottus One, BioCow, Captain Kipper, Jack Shit, Steve Guadirama, Paul Reese, Matthew J. Stevenson and J.D. Galloway; mod and video hosting by Dan Christensen; video feed by Sean Wei; music and art provided by Martin Bell, Dan Lueders, Mustafa A., Acast Creative Arts and Len Peralta; live art also performed by Len Peralta; Acast ad support from Trace Gaynor; Patreon support from Stefan Brown. Contributors for this week's show include Allison Sheridan, Scott Johnson and Shannon Morse. Guests on this week's show include Chris Mancini and Nate Lanxon. And thanks to all our patrons who make the show possible.