This 10th year of Daily Tech News Show is made possible by its listeners, thanks to all of you, including Andrew Bradley, Dale Mulcahy and Matt Zaglin. Coming up on DTNS: using brain scans to settle trademark disputes, a US Supreme Court case that could stop you from liking Reddit posts, and Andrew Heaton lets AI models do an episode of his podcast and talks about how it went. This is the Daily Tech News for Monday, February 13th, Galentine's Day, 2023 in Los Angeles. I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. And I'm the show's producer, Roger Chang. And we are very pleased to have the host of The Political Orphanage with us today. Andrew Heaton, welcome to the show. Hello, delighted to be here. I'm coming from Edinburgh, Scotland. How is Edinburgh right now? It's really nice. We actually had the sun today. It was cool but sunny, which is my favorite weather, as opposed to foreboding and brooding, which is what they prefer during winter. So it was very nice. Yeah, so very un-Scotland-like in many ways. It was Scotland-like today. It was very nearly Ireland. It was like Scotland plus hope.

All right, let's start with the quick hits. Germany's Deutsche Telekom, France's Orange, Spain's Telefonica, and the UK's Vodafone previously proposed creating a joint venture to operate a cross-operator ad-targeting infrastructure focused on first-party data. The European Commission antitrust division ruled that this venture did not raise competition concerns, giving it the green light to proceed. The carriers say that this ad tech infrastructure will require explicit consent from subscribers to use personal data. The European Commission also said, even as it cleared the venture to proceed, that data protection rules are fully applicable. They didn't suspend them somehow.

Opera posted a demo video showing a ChatGPT-powered sidebar tool integrated into the browser. So yes, they are going to put it in Opera. The demo showed it generating a bulleted summary of a web page.
This Shorten feature is going to launch in browsers very soon, they say. Opera also said it's working on AI-powered features to augment browsing, with plans to add popular AI-generated content services to the sidebar. So "things are coming" is how I interpret all of that.

Microsoft began rolling out access to those on the waitlist for the new Bing with ChatGPT integration. Currently, Microsoft limited the rollout to desktop Bing, saying they don't have their mobile experience quite ready just yet.

The security camera company Arlo had previously announced that its end-of-life policy would go into effect on January 1st, 2023, which would result in out-of-support cameras losing access to free 7-day cloud storage of camera recordings. The first model would go end of life on April 1st. CEO Matthew McRae announced on Twitter that Arlo will not remove free storage for any existing customers and will extend the end-of-life dates a further year. McRae also committed to providing security updates until 2026. Arlo previously announced it would end other features for older cameras, including email notifications and E911 emergency calling, so it's still unclear if those were extended as well.

The VR headset maker Bigscreen announced the Beyond. It is a PC-only headset with two 5K 90Hz OLED displays, and it weighs just less than six ounces. How did the company get the weight down? Well, dieting, exercise, and it doesn't contain knobs for fit adjustments or diopters, instead offering custom fits through a face scan. So you do that on your iPhone, and then you use custom prescription lenses in the headset. It also uses SteamVR base stations for tracking, so that puts a lot of stuff outside the headset as well. Built-in headphones require an optional audio strap. It does not come with controllers but works with anything supported on SteamVR.
And if you're still interested after knowing all of those things, it's available for pre-order now for $999, shipping to the United States in Q3 and Canada and Europe in Q4.

According to a study by Marketplace Pulse, Amazon's average cut of each merchant sale surpassed 50% for the first time in 2022. This included Amazon's sale commission as well as optional logistics services and advertising on Amazon itself. Marketplace Pulse began sampling seller transactions back in 2016, so it's been doing it for a while. It previously found that while sellers paid more to Amazon per transaction each year, this was largely offset by new customers and strong annual sales increases. However, 2022 saw Amazon's slowest ever sales growth. They're just taking a lot more money. What's unusual about that?

Alright, let's talk about trademark cases. Tech may be able to help determine them in a way that involves scanning your brain. Trademark infringement cases are a little tricky. You have to prove in court that two brands would confuse a reasonable person, but there's no standard metric for a reasonable person. Some cases are pretty obvious. I can call my laundry detergent Linux, because no one is likely to confuse laundry detergent with the operating system Linux. But I can't call my computer company Microsoft, since people might think I mean the Microsoft started by Bill Gates. I have to pick a different name if I have a computer company. There are, however, edge cases. One of the most famous: Apple put out a music product called Apple Music, and even before that iTunes, and there was a record company started by the Beatles called Apple. That one went back and forth for decades. Apple ended up finally winning in the end. A judge has to decide in these cases based on the evidence. But the evidence is usually really biased. Surveys are the most common way, and they always lean one way or another.
Plaintiffs' surveys always find there's confusion. Defendants' surveys never find confusion. A 2019 paper from the Georgia Law Review found that just subtle changes to wording in these surveys can significantly impact responses. So there's no standard for the survey. But perhaps there is a better way, involving brains. Yeah, Zhihao Zhang, a marketing professor at the University of Virginia, wrote a piece in The Conversation describing why he decided to try scanning the brain to look for evidence of trademark confusion. Zhang published his team's findings in a paper in the journal Science Advances. They relied on an effect called repetition suppression, where the brain's response becomes weaker the more times it responds to the same thing. Basically, tuning things out. They showed subjects pairs of images from supposedly infringing brands, and if the brands were perceived as similar, the response to the second image should be weaker. So Tom, what did they find? Yeah, they compared the MRI results to three surveys. They had one that favored the plaintiff, one that favored the defendant, and then one that was fairly neutral. And Zhang found brain-based measures reliably matched the most neutral survey results. They were able to tell pretty well when there was confusion and when there wasn't. As stated in the paper, this could, quote, "inform the reasonable person test of trademark infringement." Now, the researchers caution this is not the final word. There's lots of factors that go into trademark infringement. A judge would use this as one factor to determine if there was brand confusion. At least they could. Heaton, I know you're the son of a judge. How does this idea of just doing a quick MRI on a few people to solve a trademark case strike you? This strikes me as a kind of boring sequel to Minority Report, where they're using this technology in order to assuage trademark disputes. I'm all in favor of it.
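To make the repetition-suppression measure described above a bit more concrete, here's a minimal sketch in Python. It is not the team's actual analysis pipeline; the function name and the toy response values are hypothetical, and real studies work with noisy fMRI time series rather than two clean numbers.

```python
def suppression_index(first_response: float, second_response: float) -> float:
    """Fractional drop in neural response from the first image of a pair
    to the second. Near 0: the brain treated the images as distinct.
    Larger values: the brain "tuned out" the second image, suggesting
    it perceived the two brands as the same thing, i.e. confusion."""
    if first_response <= 0:
        raise ValueError("first response must be positive")
    return (first_response - second_response) / first_response

# Hypothetical averaged responses to two image pairs.
distinct = suppression_index(1.00, 0.95)    # e.g. laundry detergent vs. Linux OS
confusable = suppression_index(1.00, 0.60)  # e.g. knockoff logo vs. original

print(round(distinct, 2), round(confusable, 2))  # 0.05 0.4
```

The higher suppression index for the confusable pair is the brain-based analogue of a survey respondent saying the two brands are easy to mix up.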
I think it'd be lots of fun, but I'm curious as to how they're going to wheel it into the courtroom. It seems like pretty big equipment. I guess they'd have to do an order to say, like, okay, there must be an association that will go and test a random sample of people, measure their brains, and see what the confusion rating is, but that sounds expensive. I would think that it would wind up being kind of similar to the way the surveys you mentioned are right now, where the defense and the prosecutors would hire teams to go do it. Hopefully the technology would allow them to do it in such a way that it's less biased and less easy to tinker with. I don't know, because it probably depends on how they set up that test, but I would assume it's something that would wind up coming out of the pockets of the legal teams themselves. Yeah, who pays for it is going to be the legal teams, and then it becomes questionable whether you're getting unbiased results, unless we develop an industry where there are independent MRI-scanning people. I guess if we started using this for other tests of a reasonable person, and there are lots of examples of a reasonable person, if you could scan the brain and be able to independently verify, like, yeah, no, they think that's the same thing. If you could use that in other situations, then maybe that causes an industry to spring up. It could be. Let us hope that we don't suddenly combine the warmth and cuddliness of lawyers with the pricing mechanisms of the health insurance industry. That would be, I think, a really unfortunate confluence of factors to go into effect, but it could be. Yeah. One of the ideas they brought up in this paper was negligence cases or obscenity cases, "is this offensive to the community" kind of cases. You could, again, show people a scan and know what the brain looks like when it's offended, which sounds silly, but you could do that. And then see if these things are considered obscene or not with a more objective view.
So it would kind of be like scientifically verifying the "I can't define porn, but I know it when I see it." Like, we'd be able to have an actual MRI litmus test of whether he saw porn. Like, that's for sure, he's not lying. Or he saw offensive porn. Right, yeah. More particularly, yeah, yeah. Okay. As opposed to innocuous porn. Yeah, and we could define a new category with that. Not going to touch that one.

The U.S. Supreme Court is hearing two cases soon that could impact Section 230 of the U.S. Communications Decency Act. We talk about Section 230 on the show a lot, but just a reminder: it protects Internet platforms from liability for what you post on the platform. Arguments in one of these cases, Gonzalez v. Google, begin February 21st. TechnologyReview.com has been highlighting this case. Go read the full articles to get the full picture, but let's do a little TLDR. Yeah, Section 230 says that Internet services will not be treated as the publisher of information created by others. Before Section 230 existed, the law was interpreted to mean you could either not moderate at all and then not be liable for anything posted, or, if you moderated, you became liable for everything posted. If you're liable for everything posted, you're going to be really strict about what you allow to be posted. Section 230 lets a platform engage in moderation without being on the hook for everything you post, so they don't have to screen every post before it goes up. However, there are exemptions to the protection. One exemption is that if the platform itself creates a post, it's responsible for it. If Facebook wrote something, Facebook is responsible for it, just like anybody would be responsible for what they write. Another exemption is if the post is criminal; then the platform has a responsibility to remove it. Gonzalez v. Google argues that YouTube violated the Anti-Terrorism Act when its algorithm promoted content created by Daesh, a.k.a. ISIS.
The court will address whether recommendations of content are the same as the display of content. If recommendations are no different than displaying content, then Section 230 applies, and YouTube would then be off the hook. However, if recommendations are not the same as simply displaying it, then what are they? How the court answers that could have widespread implications for how the Internet works. And remember, not all algorithms are the same. An algorithm can sort posts chronologically, geographically, alphabetically. An algorithm can also show you posts it thinks you'll like. They're all algorithms. They just do different things. Reddit filed an amicus brief explaining that its moderation approach relies on upvotes and downvotes, which are used to help determine what rises to the top, and that's determined by an algorithm. Yeah, it may not be a recommendation algorithm per se, but it's counting the likes and the not-likes and then deciding where on the page to display that. So it is an algorithm that's doing this. This is coming. We've talked about it on the show a couple of times. It's a chance for us to remind you that it's coming in a couple of weeks. We won't get a decision. It'll just be the hearing. Heaton, I don't know if you've been following this much. What are your impressions of this? Well, I'm really interested in how they're interpreting 230 in this instance. It's not that they're claiming that the algorithm prioritized something and therefore was violating neutrality, but rather that the recommendation itself constituted posting something in violation of 230 because it was criminal, i.e. Daesh. There's a couple of ways they're trying to get the court to rule. One would be to say, like, hey, that's criminal content, and so when they recommended it, they were recommending criminal content. The court would have to decide that recommendation was closer to display in order to count it as criminal.
Otherwise, it's just like, oh, well, they got around to taking it down. They just didn't get around fast enough. There's actually a similar case involving Twitter that's going to test that part of it as well. But this is really about: do you cross the line out of your protections if you recommend something? Is that different than just displaying it? First of all, the thing I'm most curious about is if there is some sort of saving-grace, time-delay type thing, where you've got three hours to notice that you've accidentally posted something, or allowed something to be posted on your website, that's criminal. I assume those statutes are in there. But otherwise, it's kind of an interesting ideological breakdown. I don't think I would see this as terribly different. I wouldn't, on my end, unless somebody gave me a really persuasive position, put selecting something algorithmically in the same category as posting it outside of 230. Yeah, I mean, the Section 230 part that's applicable is pretty short. It basically says no internet service provider shall be treated as the publisher of things posted by somebody else. If you're being very literal, then it doesn't matter if you show it in a different way. That's still not you being a publisher. It's just how you're displaying it. But it's hard to tell how this court is going to rule. You think about how hard they're leaning into that algorithm. So with YouTube, presumably there's a little thing at the bottom that's like, you liked this video, so you might like this beheading video from Daesh. So it's very direct, right? But as you all pointed out a moment ago, some algorithms might just be in alphabetical order. It could be in chronological order. There's still some sort of preference mechanism there. So if you have a weaker algorithm that is nonetheless collating things outside of chronology or alphabetizing, then would that also qualify? Yeah, and is the court going to get into the business of that, right?
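The distinction drawn above, that a chronological feed and a vote-ranked feed are both "algorithms" that merely sort the same posts by different keys, can be sketched in a few lines of Python. The Post fields and the scoring rule here are illustrative, not Reddit's or YouTube's actual ranking code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    posted_at: int  # e.g. seconds since epoch
    upvotes: int
    downvotes: int

posts = [
    Post("older but popular", posted_at=100, upvotes=50, downvotes=5),
    Post("newer but ignored", posted_at=200, upvotes=3, downvotes=1),
]

# "Algorithm" 1: reverse-chronological feed, newest first.
chronological = sorted(posts, key=lambda p: p.posted_at, reverse=True)

# "Algorithm" 2: vote-ranked feed, highest net score first.
vote_ranked = sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

print(chronological[0].title)  # newer but ignored
print(vote_ranked[0].title)    # older but popular
```

Same data, two orderings; the legal question is whether either kind of sorting ever crosses from "displaying" into "recommending."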
If they're saying, like, okay, you can't be treated as the publisher of things as an internet service platform, but we're going to decide that once you start telling someone "if you like this, you'll like that," that's content. Now you are the publisher, the way you would be if you wrote a post yourself, and the recommendations are the content of your post, possibly. I could see a niche, a little opening, where someone might write an opinion that points to responsibility that way. I don't know. I'd be a lot more swayed by that kind of thing when it comes to platforms being neutral than I would for it actually constituting content. That to me is just a shortcut for finding some way to prioritize things. I'm not too bothered by that, but you can make an argument in terms of neutrality. And Section 230 has nothing to do with neutrality. All Section 230 is saying is, hey, if you're moderating stuff, we're not going to punish you for that. Even if some of your stuff is wrong, it's the person who posted it in the first place that is responsible for it. It's really not contemplating neutrality or perspective. In fact, all it's saying is, you can tilt this whatever direction you want. That's beyond our purview. It's just saying, if you go in and start tilting, you're not suddenly responsible for the things you do leave up.

Well, folks, if you have a thought about something on the show, but you don't know our email address, here it is. Email us: feedback at dailytechnewsshow.com.

ChatGPT has been used to write everything from technical manuals to code, cover letters and resumes. Justin Robert Young even uses it to write his podcast show notes. But could it write a whole show? Andrew Heaton decided to find out and had it write an episode of The Political Orphanage. Heaton, what inspired you to try that? So I did an episode with you and our friend Darren Kitchen on ChatGPT.
I was really interested in ChatGPT specifically from the angle of automation and rendering jobs obsolete. That tends to be the thing people freak out about, at least with previous technologies. So I wanted to see: how long do I have before I become obsolete? Can I retire? Do I need to move back in with my parents preemptively? Were podcast hosts specifically going to be made obsolete? So I ended up doing a full bonus episode on my program of this particular content, where I fed in prompts to ChatGPT, then stitched together an entire script, and then I used a service called Descript to overdub my voice. So I didn't record any of it, but it sounds like me. It's a robot doing a very good Heaton impression, and I had it run through there. And that's basically what I wanted to see: how good is it compared to me? Now, how detailed did you get? Did you break down elements of the show? Did you give it segments? Did you do it all at once? Yes. At least when I was playing with ChatGPT, you are limited in terms of the total word count of the answers that it gives. I think it's 500 words or something like that. So I had to do it in bits and pieces. And I would give it very specific directions. So I'd say, you know, do an introduction to the show, but be sure to include "welcome to The Political Orphanage, a home for plucky misfits and problem solvers." That way there was some regularity there. So I did slightly massage it that way, same with the ending. But I gave it segments. I even had it do jokes. I do comedic advertisements on my program regularly, and I was real curious to see, can it replace comedians? And so I told it, write jokes for a whimsical product designed for horses, and had it write that out for me. How much of your actual voice do you think it got right? It's pretty good. I'm very much impressed with Overdub with Descript. The bits that are noticeable at this point are any words that aren't super common. The syllable will be the wrong intonation.
Or if it's any word that's a proper noun that's not really in the dictionary. So I don't know if we're talking about, like, Star Trek characters or something. But even with more kind of normal words that are secondary or tertiary, it'll kind of mess up on that. But overall, I'd say it does a fairly good job, because with Descript you're just putting in a script and you can have it say whatever you want. You can kind of get around that, where theoretically if I had a really bad cold, or I was incapacitated vocally for a while, I could put in a script, listen to it, and go, oh, it's really not able to get that name down. It just can't get Tom. I'll have it say, you know, Tim, or whatever the thing is. You kind of work around it. But the pronunciation is pretty good. I think it'll get much better. So we're all out of work soon? Well, here's the thing. It wasn't very good. The ChatGPT script was, like, if you didn't know what I was up to, you would have just thought, man, Heaton really phoned it in today. This is, Heaton's either quit taking his testosterone supplements or he's hungover or something. He is not bringing the heat today. It was all right, but it was very boilerplate. It lacked the pith and it lacked the zing. I pride myself on being substantive but funny, and part of my job is being an engaging entertainer in addition to bringing ideas to people. ChatGPT was serviceable if it was writing an essay for English class or something like that. It's fine for that, if you're just spitting out ad copy, or just copy. The humor was a little off. The things that I would have prioritized were a little off. And then it did have bizarre clunks in it, where it would go: let's face it, something, something, something, horse joke. Let's face it, something, something, something, next horse joke. Let's face it. And it just repeated that over and over. Yeah. It had a kind of uncanny valley effect.
So I'm not sweating it. I think something else will do me in, career-wise, much before the robots get me. I'm much more likely to have something bad happen in another sector than to have the robots happen. It's interesting. I think that what's going to happen with podcasting is not that it's going to put us out of a job. I think what's going to happen is basically it's just going to expand the scope quite a lot. So friends that I have that are journalists, that write articles and write blogs, that aren't podcasters: I think the technology is going to get to the point very soon where they also have overdubbed voices. The second they post something on the website, it puts it out as a podcast with their voice, them never having read it out loud. And so you'll find that a lot of the writers you like will suddenly have a quote-unquote podcast. They'll sound a tad bit robotic. That's where I see it going in the immediate future. But I'm not really worried about it right now. It's not quite able to get that human je ne sais quoi that you need to remain engaging with your audience. As we engage with these more and more, I feel like those kinds of uncanny valleys are coming more often than we expected. I mean, we always talk about the uncanny valley with robots, but you're pointing out one with creativity, with emotion, with expression. I think we're going to run into those walls and be like, ah, it's just really hard for it to get past that and sound perfect. It's serviceable for stuff, but it can't really replicate us. Yes, I think you're absolutely right about that. I mean, one of the great things about ChatGPT, and I think why it is a game changer for search engines and Google, is that it's very good at analyzing lots of data and concisely giving you what you want, which is great.
If you need research or a recipe or directions or something like that, that's absolutely fantastic, because what it's doing is saving you the irritation of wading through multiple options, or watching a YouTube video, or having to swat away all the ads. But when you're doing creative stuff specifically, I am not relaying information as concisely and quickly as I can. Part of what we're doing is packaging things in a way that's engaging, that's fun. It might be lyrical. It might be comedic. That's going to take a while. Even then, I do think that you're going to see AI-augmented comedy very soon, like very, very soon. If I were writing for a late night show where I had to produce a ton of jokes all the time, a lot of the time there are events that we're all going to have to talk about. So, I don't know, the British Prime Minister trips and falls down the stairs and it's a really funny video. Well, you know every single late night show is going to talk about that. So you might have a situation where you go to ChatGPT and say, write 50 jokes about the Prime Minister falling down the stairs. Most of them aren't going to be very funny. Maybe one of them's funny in and of itself. Maybe one of them might get there. But it wouldn't be like you'd fire all the writers. It would be more like giving them ammunition to use, or being a kind of crutch that they can use, and hopefully it'll let them do something else. An aid or a tool or something. Yeah, an aid, yeah.

All right, let's check out the mailbag before we get out of here. Let's do it. Joao in Lisbon wanted to share his experience with Prometheus. That's Bing AI's beta. Joao says: I decided to ask Prometheus to write a good morning greeting for my team Slack channel. Prometheus gave me three greetings that included some info-of-the-day type information.
One of the options said it was Valentine's Day, but that was wrong, because this was asked around 6am GMT on February 13th, not the 14th. I asked Prometheus why it told me it was Valentine's Day, and it responded by saying it was February 14th in some places on Earth. Okay. When I asked in what places it was already February 14th, it gave me a list of places which were all incorrect. For example, Tokyo, Japan. When I asked for the date and time for Tokyo, Japan, it gave me the correct date and time, contradicting the information it had given me one prompt earlier. I was disappointed by this failure, but used Bing's feedback options to explain all of this to Microsoft. Hope they can fix the bug and improve Prometheus' accuracy and consistency. By the way, Joao let us know that this entire email was composed with Prometheus using similar prompts, which I would not have known. Yeah. Because I don't know Joao. You know, maybe he just talks like a bot. The only tip-off was that the font, the typeface, was unusual, which, once we knew, was because he had copied it from ChatGPT. I was like, oh, that's the ChatGPT typeface. So I'm going to go a whole other way about this. I need to try to automate listeners. That's what I should be doing. Automate listeners so that I'm not beholden to them. This is very, very clever. Right, right. It's an end-around to the system. Exactly. Yeah. Thank you, Joao, for not only sharing your experience with us, but using the tool to save you the time it took to share the experience with us. I'm curious how much time it saved him if he had to put in the prompts to say make sure to mention this, make sure to mention that. But yeah, it's good stuff. It's good stuff. If you have good stuff for us, send it to feedback at dailytechnewsshow.com. Thanks to you, Andrew Heaton, for being with us today and sharing your ChatGPT experiences. Let folks know where they can keep up with the rest of your work. Thank you very much. Always a pleasure to be here.
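Joao's date mix-up from the mailbag is easy to verify: at 6am GMT on February 13th, even the earliest timezone on Earth (UTC+14, e.g. Pacific/Kiritimati) was only at 8pm on the 13th, so Bing was wrong that it was already Valentine's Day anywhere. A quick check, assuming Python 3.9+ with zoneinfo and standard tz data available:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The moment Joao asked: 06:00 UTC on February 13th, 2023.
asked = datetime(2023, 2, 13, 6, 0, tzinfo=timezone.utc)

# Tokyo (UTC+9), the example Bing gave, and the earliest zone on Earth (UTC+14).
tokyo = asked.astimezone(ZoneInfo("Asia/Tokyo"))
kiritimati = asked.astimezone(ZoneInfo("Pacific/Kiritimati"))

print(tokyo.isoformat())       # 2023-02-13T15:00:00+09:00
print(kiritimati.isoformat())  # 2023-02-13T20:00:00+14:00
```

February 14th would not have started anywhere until 10:00 UTC, four hours after Joao asked.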
I host a program called The Political Orphanage, so called because I am not catering to red team or blue team, and I encourage anybody that wants fun, nonpartisan political content to give it a try: The Political Orphanage. Thanks also to our brand new boss, and that boss's name is Wade. Wade just started backing us on Patreon. Thank you, Wade. You know, we couldn't do this show today without Wade and everyone like him. So be like Wade if you're not already: Patreon.com/DTNS. Wade on in. Speaking of patrons, stick around for our extended show, Good Day Internet. We roll right into it when DTNS concludes. But you can catch our show live Monday through Friday at 4 p.m. Eastern. That's 2100 UTC. Find out more at DailyTechNewsShow.com. We're back doing it all again tomorrow with Brian Brushwood joining us. Talk to you then.