Daily Tech News Show is made possible by its listeners. Thanks to all of you, including Peter Bohack, Philip Less, Howard Yermish, and everybody. Welcome in our new patron, Cashmere. Welcome, Cashmere. Thank you for joining us. Make him feel welcome on this episode of DTNS. The Humane AI Pin arrives and is not a total failure, plus Instagram's fight against nude DMs and Congress's fight against nude deepfakes in general. This is the Daily Tech News Show for Thursday, April 11th, 2024. In Los Angeles, I'm Tom Merritt. From Columbus, Ohio, I'm Rob Dunwood. From deep in the heart of Texas, I'm Justin Robert Young. And I'm the show's producer, Roger Chang. I don't know what happened today, but we have a lot of tech stories about nudity. So we will keep it family friendly, but just, you know, be aware. Shall we start with the totally clothed quick hits? Indeed. Apple sent notifications to an unknown number of users across 92 countries, warning them that they might be the target of spyware attacks. Notably, Apple used the term mercenary to describe the perpetrators of such attacks. Usually they say state sponsored, but apparently they switched to mercenary after the Indian government said, hey, can you not call them state sponsored? Apple sends these kinds of notices to high value targets several times a year, and those are usually politicians and journalists. Apple announced Thursday (that's today) that its repair program will validate used Apple parts in repairs. Customers and service providers will also no longer need to provide a device's serial number when ordering parts from its self-service repair store for repairs not involving replacement of the logic board. Apple requires parts pairing as an anti-theft and security measure, meaning all parts must match Apple's database or you'll get a persistent notification that the part is unverified and may not work at all. Used parts reported stolen will also not validate.
Although it's not today if you're listening to this on Friday, so I'm glad Rob made sure that you knew it was Thursday too. DuckDuckGo has launched its first paid privacy subscription product, called Privacy Pro. It gives you a WireGuard VPN, a personal information removal service for data brokers, and identity theft restoration, all for $10 a month. Not the cheapest VPN you can find, but it's a nice little package of three different things. All three services will be provided in DuckDuckGo's browser, but the VPN will protect your entire device, not just the browsing. The identity theft restoration service is provided by Iris, which helps with canceling and replacing documents, freezing credit reports, and challenging fraudulent claims. Starting May 15th, Google will add several of its photo editing tools for all users of Google Photos, including iOS users, at no charge. These include generative tools previously reserved for Pixel users and Google One subscribers, like Magic Editor, Magic Eraser, Photo Unblur, and Portrait Light. And Adobe continues to future proof its generative models against possible copyright lawsuits. Bloomberg reports Adobe is now offering photographers and artists in its network 120 bucks if they'll submit a series of videos that Adobe asks for, shooting people engaged in everyday activities, and then Adobe is gonna use those videos to train its models. The requests ask for more than 100 short clips of people walking, interacting with objects, showing emotions, like show somebody happy, show somebody sad, all of that. Adobe has trained its models on stock images it owns the rights to and images obtained with permission from creators, and is now commissioning videos to be made to train its model. Show your hands. I have seven fingers like a normal human, Justin. All right, let's talk about Humane AI. That's the little voice-activated pin that uses some generative models to answer questions.
It projects images on surfaces, like your hand. In their TED Talk, that's how they did it. It also speaks its answers to you in a lot of situations. The device was announced in November for 699 bucks, and then you gotta pay $24 a month to get unlimited text, talk, and data service from T-Mobile, because that's how it does a lot of its AI stuff. But it can also make phone calls and send texts for you, so you want that service for that too. The device begins shipping Thursday, and the reviews are out. They're not good. Overall, reviewers say the device has limited features which work slowly and sometimes not at all. Are you two familiar with how this thing works? Did you see all the hype about it when it was announced last year? Indeed. Yeah, I took a look at it. I was hoping it was gonna be like a Star Trek badge. You could just tap your chest and it would work. Yeah. That's the way it's supposed to work. You wear it on your shirt, you tap the button, and then ask it something, and it does what it can on the device and often goes to the cloud for the rest. It has a vision sensor, so it can not only take pictures but also sense what's there. You're supposed to be able to show it something and say, like, is this bag of potato chips good for me? And it can tell you that. That didn't work for David Pierce at The Verge. Humane says it can also send text messages, take photos, and play music from Tidal. So you have to be a Tidal subscriber, or at least have a Tidal account, but it can do some stuff. It's not just an AI pin. The Verge's David Pierce describes several of its buggy responses and the fact that it took about 10 seconds to answer a question about the weather and send a text to a friend. Its ability to recognize items with vision is first on the roadmap, which is, I guess, why it doesn't do it very well yet. They haven't quite got to it yet.
Wired's Julian Chokkattu said it identified a temple in Thailand as being in Cambodia and also got incorrect information about California's high fructose corn syrup laws. So far, so bad, right? It's not sounding great, is it, Justin? No, it's a gigantically hyped product that got a lot of attention, is extremely expensive, and, it appears, is not only not living up to that hype, but is also a subpar wearable in a very, very crowded category that doesn't seem to be making the most amount of money. Yeah, it's a solution to a business critical issue that didn't exist, and it doesn't work. Oh, yeah, it's like they're missing on a bunch of areas. Well, let me tell you the good things, because it did do some things right. The user interface, like the tapping, felt natural. Most of the reviewers praised that. They're like, yeah, having it on my chest, being able to tap it and make it work, that all worked great. People loved that. They said the projector actually worked in a lot of cases better than they expected, though not in bright sunlight, which, you know, it's a projector, you probably wouldn't expect it to, but that does reduce its usefulness when you're out and about. CNET's Scott Stein said the on-device translation worked fast, although one time it got stuck on a language and he couldn't get it to stop speaking that language. The battery extender worked well, though it tended to get a little hot, which is a concern when you've got this thing hanging on your chest. Humane told The Verge that it will add timers and calendars in a software update this summer. So, no, it can't set a timer yet. But hey, if I'm gonna be positive about the Humane AI Pin, because these are pretty bad reviews, the fact that the form factor got praise tells me there's still a chance this thing could work, because it's much easier to work on and improve software on something where people are like, yeah, no, the hardware works fine. I don't need a different form factor.
I just need a software update. You can fix that this summer, potentially. Tom, can I ask you a question? Sure. How often do you wear a brooch? You know, I don't wear a brooch very often, Justin. It's interesting that you asked me that. Rob, do you wear a brooch very often? Yeah, are you a brooch man? Not regularly. It's not a thing I normally do. I get where you're going, Justin. We all should wear brooches more often, and Humane AI is providing us that opportunity as we move into the brilliant, beautiful future. Wearables are an extraordinarily interesting and exciting category, specifically with AI. I do believe that there are a lot of really interesting things that this could do. Unfortunately, I believe that it was just way too early to market. If you look at where AI is going and how much you're going to be able to do, not only on device but also with some kind of mobile data, in the next year, I think that devices we already wear, up to and including the AirPods in my ears and the Apple Watch on my wrist, will be able to outpace their current functionality by leaps and bounds because of their access to AI. Unfortunately, sometimes you're the first one through the door and you're the one that catches the bullets, and RIP Humane. Yeah, I'm right with you, Justin, on this. When I first saw this thing come out, I was excited because of the Star Trek factor, but they didn't really partner with Paramount to do any of that cool stuff. So it's like, OK, you have this thing that I wear on my jacket or on my shirt, and I tap it, and it does all these things half as well as my phone does. And I think you're right. AirPods or earbuds, ultimately, that's what these things are going to do. They're going to do everything that this does, except for the projection stuff. And you don't really need the projection, because right in your pocket you have a phone that has a nice pretty screen on it.
So I just applaud them for trying the next thing, because I think that's what they were trying to do. What can we come up with that is going to be the next thing? I just don't know that this form factor, even though people are saying the hardware is OK, I just don't know that this type of device is the thing we're looking for next year, two years, three years from now. That's why I was surprised that the reviewers gave the form factor the props they did, because they weren't holding back on criticizing other elements of it. So I feel like that is something where maybe we're wrong, because I'm with you. It felt like this is a pretty clunky form factor, and there are other ways of doing this, even if you don't pull your phone out of your pocket, with a watch or glasses or something. But it seems like they were like, oh, no, this works surprisingly well. It just didn't work because it couldn't answer questions, or it's slow. I guess it's a race between fixing that, making the software work better and be more capable, and doing it before people say, well, I've already got it in my AirPods, I've got it in my Galaxy Buds, I don't need to buy another device. As Justin has well pointed out, no one's pinning this to a T-shirt. This is a product designed for people that wear sports coats, and I wish them and their private school education well. I will make this promise right now: as soon as I get mine, because we did order one to try it out and it has not arrived, I will pin it to a T-shirt and take a video to show you exactly what that looks like. I look forward, I look forward to that photo. Interesting, you didn't say, come on the show, you said I'll post a video. Oh yeah, no, I'm not doing the whole show that way. He's only committing to texting me a photo, that's as far as he will go. I'll put it on the DTNS TikTok, YouTube too. But yeah, yeah.
So guys, we've got some pretty interesting news coming out of Meta, where Instagram announced a new safety feature that will automatically blur out nude images sent via direct message on the platform. The feature uses on-device machine learning to analyze whether an image sent via Instagram's direct messaging service contains nudity, and the company won't have access to these images unless they've been reported. The feature will automatically be turned on for Instagram users under the age of 18, and a notification will also encourage adults to turn it on. Testing starts in the next few weeks, and a global rollout is expected over the next few months. Now, I haven't been paying a lot of attention, but I haven't noticed anyone just outraged about this new feature like they were about, if you guys remember, back in I think the fall of 2022, when iOS was basically going to report images on your device that were flagged as CSAM, if you had those images in your iCloud account. I'm not hearing anything like that, and I just wonder, is it because we've kind of gotten past that, or is it because they're not flagging it as CSAM, or is it just because this is all happening on the phone, and only if you send the message? What do you guys see as the differences as to why there was such an outrage when Apple was doing something similar to what Meta is doing here with Instagram today? Yeah, I think the difference is CSAM. When you are effectively accusing someone of violating the law, that carries a different flavor than this, which says, on one end, hey, we're not telling anyone, this is on device, looks like that's a nude. Are you sure you wanna send that? You wanna send it? Fine, go ahead, that's your deal, right? Maybe you two are consenting adults, or maybe that is like a medical image. That's between you. We just wanted to warn you. We're not accusing you of doing anything illegal.
And then on the receiving end, it's like, hey, you're about to get an image that might have nude people in it. Are you sure you want that? You don't have to, no pressure, especially you teens out there, because we know how that goes. You can just dismiss this right now. But if you were expecting it, and again, it's consenting or it's medical or some other thing, then we're not gonna stand in your way. And that feels very different than, we are scanning for illegal images, and we will block them, and those will be legally subject to police action. There's a few key differences. Number one, Apple is a privacy focused company. That is something they have made their branding. They have gone to war with governments on not cracking phones. For them to do any kind of surveillance not only creates a privacy outcry, but also erodes their brand. And I do think that was part of what was baked into the reason why they wound up backing down. Conversely, we're burnt out on expecting anything from Meta. With Meta, we have long ago sent down the river the concept that they care about our privacy. But I do also believe that part of this, especially with AI and machine learning on the backend, is something that we're just going to understand happens. It's just going to be like indexing and storage, stuff that matters a lot to folks who work with these kinds of clouds and serve data, but is better for the consumer. The end result is not anything that we jump up and down and scream about, like we did with Google, with Gmail, when that first launched and they were scanning emails so they could serve you relevant ads. I do think this is just going to be more a thing that we see going forward. And if a few photos are blurry, and you tap a button and then they're not blurry, I think that users will look at that more as a service than as a bug. Yeah, when I initially read this I didn't have a visceral reaction at all.
I actually thought, oh, that's kind of cool, because Meta has been getting run through the news lately for not protecting children, and this seems like one of the things they're absolutely trying to do. And also, one of the things they said is that they want to prevent you from accidentally sending stuff to people who are trying to coerce you into sending them stuff. So they're trying to protect people from these nefarious things. And I never got the big brother, we're-watching-you type of feeling. They're not gonna scan what's on your device. It's only if you decide to send this image, and your phone thinks it is a nude, that they flag it and let you do something. And the only time they become aware is if the receiver actually notifies them to say, hey, this person is sending this type of imagery. And then they look at it, which is the exact same thing that would have happened even without this, since anyone could report you for sending something. So I think this is actually a good thing that Meta is doing, if they could at all get back into our good graces. I don't know if this is enough, but this to me seems like it is a good feature. It does seem to be universally praised. I'm even a little surprised that there aren't criticisms from the other end saying this doesn't go far enough, right? That it should block things or do more. But it seems fairly well balanced in this particular case. We'll put a speed bump on sending, so people know, are you sure you're meaning to do this? We'll put a speed bump on receiving, and make it easy to report and block, and encourage people like, don't feel pressure to open this, you don't have to, which is a nice bit of counter messaging, but still leaving the control in the user's hands. Which I think is good, but I'm surprised there aren't more people saying, oh, well, if you're 18 or younger, it should just be blocked altogether, which could have been another direction they went.
Folks, if you wanna stay up to date on the fast moving world of artificial intelligence, there is a show called AI Named This Show. You've seen Tasia and Tristan on this show. Tristan Jutras and Tasia Custode get together on AI Named This Show, look at all the hype, look at all the doomsaying, and then say, well, here's what's actually going on in the world of AI. Catch it at ainamedthisshow.com. How many things do you think could get progressive darling U.S. Representative Alexandria Ocasio-Cortez, AKA AOC, to agree with stalwart Southern Republican Senator Lindsey Graham and right-wing orator Senator Josh Hawley? They all agree on stopping deepfake porn. See, there are bipartisan issues. AOC introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024. They really needed to make this thing spell DEFIANCE. It's the DEFIANCE Act. So AOC introduced that in the House this month. A bipartisan group of senators backed a similar bill in the Senate in January. Justin, you've been following all this. So obviously the goal is, let's stop people from spreading deepfaked nudes and other pornography made without consent. What's the politics behind this? Well, to answer your first question, does it have a chance? Probably not. It is probably going nowhere. But what was interesting is that AOC is putting a whole lot of weight behind it. There was a bit of a rollout for this law in a Rolling Stone article, in which AOC said that part of her inspiration was seeing herself in a deepfaked nude image that very much scarred her. It brought up old feelings from a previous sexual assault that she had experienced. And she very much felt for what is undeniably a growing cohort of people, mostly women, who find themselves in AI-generated images that are nude, pornographic, and maliciously sent on one level or another.
The questions that you have with the law itself are really thorny, because these get into very, very specific elements of free speech, parody, and, specifically when it comes to the origin of this, AOC getting this deepfaked image, exactly how you can interact with very public, very famous figures. The bar for that in America is very, very, very high, and almost anything that you do with a politician almost immediately falls under parody. And it doesn't make it a crime. This is not a criminal act. This is providing a clear path to sue someone, to do a civil action, right? It is a civil penalty, but it's stiff. It's $150,000, and it is based on transmission. Part of what you've seen with the groundswell of these arguments, and the reason why you do see a bipartisan bent to it, is that there was a case in New Jersey where a bunch of teenage boys were sending around a picture of a female classmate that had been run through one of these publicly available filters, where you put in a photo from Instagram and it spits out one that purports to show them naked. These right now are very crude, but they caused a tremendous stir at this high school and led to questions about whether or not this is going to have any kind of penalty going forward. AOC's bill and the Senate bill are the first two federal pushes into that legislation. I'm in one of those situations where I don't know how I feel about this. On one hand, I'm not a huge fan of the government making new laws when there are laws already on the books that could be enforced, but this is a new age, so maybe there is room for a new law. But I also wonder, where does this move out of parody? Where does it move out of First Amendment rights and into the rights of the person it's actually being done to? Now, if these are pornographic images and stuff like that, I understand where the people who are the victims of this are absolutely coming from.
So you would absolutely wanna say, well, in that case, let's make a law, let's go find all these companies. But you have to be really careful on this, just because, is it parody? Was it obscene? Was it just an image that you didn't like? Who makes those determinations? Who makes those rules? All of that, to me, has to be figured out. Certainly so. And I think it's uniquely interesting that AOC wanted to make herself the center of this particular law, and they didn't find another specific non-famous person who had gone through it to talk to the journalist that wrote this article in Rolling Stone, because I do think she's a challenging first example if you are coming into it, because it drives right to the problem of whether this will even stand up in court based on First Amendment issues. Yeah, I mean, it's saying if you have made, without someone's consent, an intimate digital forgery, which I suppose could be left open to interpretation, maybe it needs to be refined more, but I think we all know where it's headed, then they can sue you for that. Shouldn't that be against the law already? I mean, there are obviously issues when it comes to public figures and parody, and there's lots of established precedent on that, but is this not covered somehow by defamation or some other law? I'm asking without knowing the answer, and I don't know if anyone's done it. Really the problem here is scale. Scale, speed, and accuracy. Because people have been drawing crude images of people, famous and non-famous, on bathroom walls forever. That is not necessarily something that you would get sued in civil court for. We have been making disgusting Photoshops of people that we do and don't like for decades. The issue that she specifically lays out as the inspiration for this is that the barrier to do something extraordinarily accurate is now very low, and that needs to adjust how we look at our penalties, otherwise it will just be more rampant than ever.
Yeah, and when I asked the question of whether it's not already illegal, it's not because I'm saying this act shouldn't happen. Sometimes you need to take something that's already illegal and provide an easier way to define it, to meet definitions, to speed up court cases. It certainly sounds like something I don't want to happen to me or anyone else I love. So, providing a civil penalty instead of a criminal penalty avoids the accusation of, well, what if somebody makes a dumb mistake, right? Now you're taking it to a civil case instead of a criminal case. You're not going to throw a high schooler in jail because they used that Instagram filter. It doesn't do anything to go after the tools, though. So I think a lot of people would like to see it go after the tools and say, hey, tools shouldn't be allowed to do this. But that also brings a lot of other questions, like, well, do you ban the tool just because of its use? Right? Yeah. That is a very slippery slope, when you start banning tools and not the use of the tools. So, in some ways, this is actually fairly narrowly tailored. It's just a question of whether you think this is the kind of speech that should be suppressed, and then whether the courts think this is along the lines of, you know, libel or slander or anything like that. And also, what is the threshold to trigger it? Right now it's by transmission. So if a high schooler sends it to his friend, that would trigger a law like this. Right. Because it's transmitted. If they just make it and show it on their phone to people, is that transmission? We get into interesting territory. All right, well, that's territory that we don't really have any answers for. So let's get to actual answers sent to us by the folks in the audience in the mailbag, Rob. Yeah, so H actually sent an email to you, Tom, and Sarah and Roger. And he says, I think that your comment about this is spot on, talking about EVs and charging and range anxiety and stuff like that.
But, you know, he says, I think that your comment about this is spot on. I've had my Hyundai Kona EV for almost five years and have a charger at my home. I've taken plenty of long trips to take photos where I'm somewhere in the middle of nowhere without many charging options. I found that if I use apps like A Better Routeplanner when planning the trip, when driving longer sections, I've been able to contend with situations where a charger might be broken. Also, I've noticed that some hotels are starting to install parking spots with overnight chargers. The last time I went from the Philadelphia area to Boston, my hotel had eight spots of this kind just for charging, and it was free as long as you were staying at the hotel. You could charge a car up to 100%. Final note, in almost five years of driving, I've yet to run into a situation that added significant travel time to my trip. Perhaps I am a planner. So once again, he says, stay safe and love the show. So we need to thank H for sending that in. I think it's a really good comment based off you guys' conversation the other day. Yeah, thanks, H. If anybody didn't hear it, I suggested that people who are more skeptical about EVs because of situations like this are pantsers, the taking-it-by-the-seat-of-your-pants sort. I just want to jump in my car and go, and I don't want to have to plan. I don't want to have to use an app to see where the chargers are. With gasoline powered cars, I know that a gas station isn't ever too far away. Planners don't mind it. They plan anyway. So they're like, yeah, there's plenty of planning tools. You can always find chargers. You just need to look at the app. And so those folks aren't bothered by this. So thanks again to H. And also we need to thank Justin Robert Young. So Justin, where can folks who are listening find you in front of a camera and microphone these days?
If you want more conversation about not only the AOC deepfake law, which we talked about on the Patreon of We're Not Wrong, with myself, Jen Briney, and Andrew Heaton, then you can get that there. And what we talked about on GDI, the FISA approval, we have a conversation about on our free episode. So if you haven't tried out We're Not Wrong, then go ahead, give it a spin. Three fine folks having a good conversation. Indeed. In fact, patrons stick around for the extended show, Good Day Internet. The reauthorization of FISA in the US that Justin just mentioned is in danger, and hardly anyone outside DC is paying attention to it. Is the Snowden era spy bill just not a concern anymore? I have a conspiracy theory. Stay tuned. You can also catch the show live Monday through Friday, 4 p.m. Eastern, 2100 UTC. Find out more at dailytechnewshow.com/live. We'll be back discussing what life is like after a tech layoff with Nicole Lee and Veronica Belmont. Len Peralta will be here too. Talk to you then. The DTNS family of podcasts, helping each other understand. Diamond Club hopes you have enjoyed this program. Hehehehe.