This 10th year of the Daily Tech News Show is made possible by you, the listener. Thanks to all of you, including Chris Allen, Chris Smith and Mark Gibson. On this episode of DTNS, Dolby wants to make your TV speakers sound good without you having to change anything or get a soundbar or anything. The Verge reports on Google's note-taking large language model, and why the problem with AI is scale. This is the Daily Tech News for Monday, August 28th, 2023 in Los Angeles. I'm Tom Merritt. From San Francisco, I'm Nicole Lee. And I'm the show's producer, Roger Chang. Hey, did you hear OpenAI just launched its enterprise version of ChatGPT? That adds more data analysis, privacy features, things like that for your big enterprise-level business. Let's see what else is in the quick hits. More than 500 flights were canceled and even more delayed because of a network-wide computer failure in the UK's air traffic control system. The problem affected the system's ability to automatically process flight plans, meaning all flight plans had to be processed manually, which, as you can imagine, is a lot slower. That severely limited the number of flights that could be authorized for takeoff. The UK's national airspace controller NATS said it identified and remedied the fault after about four hours, but still, pretty widespread effects there. Lots of credible Apple leaks out there today, none of them terribly surprising. We're going to get new iPhones next month, likely on September 12th. That all tracks. The iPhone might get USB-C, which would be new, but also not that surprising given the EU laws coming. The least expected leak comes from Bloomberg's Mark Gurman, who says an overhaul to the iPad Pro is in the works for 2024, which would be the biggest update to the iPad since 2018. He's hearing it would be the first iPad with an OLED display, at 11 and 13 inches. There might be a new Magic Keyboard that would add a larger trackpad if you want the iPad Pro to act more like a laptop.
And of course, an M3 chip. Expect iPads to be announced as they usually are, in a separate event from new iPhones. So we'll probably get new iPhones in September, and if we get new iPads, those will come in October. DoorDash will offer a product with some language and voice processing so that restaurants can have a machine take phone orders without making employees answer the phones. They found something like 50% of phone orders get lost because people just don't answer the phone at the restaurant. They're too busy. The service will include humans who can jump in if the automated system runs into trouble with an order, though not the humans in the restaurant. Those humans are part of the service. And of course, it can plug into DoorDash Drive, which is the company's white-label direct delivery product that lets a restaurant manage its own delivery process. TechNews Taiwan reports that ASUS may have shut down its Zenfone division as part of a reorganization. Now, that reorg did not appear to affect the gaming phone team, so we'll still have our ROG phones. You just won't get the Zenfone. Zenfones are loved for having flagship specs in a more compact form factor. And finally, WABetaInfo reports that the latest Android beta of WhatsApp lets you send original-quality photos or videos, no compression. WABetaInfo's screenshot shows "choose from gallery" as an option in the document picker and specifically mentions the ability to send original-quality photos and videos. This is different than the option to send HD photos through WhatsApp, which launched last month but still compresses the image. And that's a look at the quick hits. All right, let's talk about this new Dolby feature. It's for Dolby Atmos. It's called FlexConnect, and it lets you use accessory wireless speakers along with your TV's built-in speakers to make your home theater sound better.
That way you can put speakers anywhere in the room you want. You don't have to position them just right, and they will work well with your TV's built-in speakers, so you don't need to buy a separate soundbar. Dolby uses the TV's built-in microphone to locate and calibrate each wireless speaker, and they think they've got their algorithm tuned so that it doesn't matter what speakers you have and where. They can make it sound the best those speakers are capable of. It will know the relative capabilities of each speaker, for example. So if your wireless speakers are better at bass than your TV, it'll push more of the bass out to the wireless speakers. This is similar to Samsung's Q-Symphony, which does a similar thing. Sony's Acoustic Center Sync also does something like this. Dolby is demonstrating this at IFA in Berlin this week and says it will ship in TCL TVs and wireless speakers next year. Nicole, I know you're not a big expert audiophile or anything, but how does this strike your fancy for a home theater setup? The thing that pops up to me first and foremost is that this does require getting a new TV. It sounds like the Dolby FlexConnect technology only applies to newer TVs. TCL, as you mentioned, will be one of the first TV manufacturers to adopt the technology. But I don't know about you, I don't get a new TV every few years. I still have the same old TV that I've had for the last few years. Same here. So me personally, I would prefer it if all TVs had this technology versus, you know, just some of the new ones. But for me, I'll just get a soundbar, because I'm not changing out my TV, which, you know, still works fine the way it is. So I don't know. I think this definitely appeals more to the people who are shopping for a new TV.
But even then, I wonder about the flexibility of such a technology. Sort of the benefit of a separate AV system is that you can swap out your soundbar or your speakers or whatever as technology improves. And I mean, maybe they'll be able to do a little firmware update. Maybe FlexConnect will get better in the future. But I don't know. Well, that's the idea with Dolby, is that hopefully more than just TCL puts this in, more TVs come with it, and as you buy a new TV, it comes with it. And maybe it's cheaper than buying a soundbar to just get a couple of wireless speakers. I think, Roger, where I start to wonder is, how many devices is this going to be in? Even if it gets in a lot of TVs, you also have to have it in the speakers. And I think those are two really good questions to raise, because, as we all know, people can't adopt a product if there aren't enough manufacturers, and manufacturers might not introduce a product if they don't see enough people using it. So it's a chicken-and-egg thing. But what this does, at least in my eyes, is sort of standardize this kind of technology, which, as was mentioned in the read, Samsung and Sony have both introduced on their own respective platforms. It does free people up to say, hey, maybe I don't want a Samsung speaker, or I don't want to be tied to Sony's Acoustic Center Sync system. This allows a little more flexibility. The truth of the matter is, you know, 80% of people are fine with the TV they have, and they're fine with the sound that their TV produces. And as you get into the niche, the smaller group of people who want to select and mix and match products, they tend to do what Nicole says: I'll just get an AV receiver that supports all the formats, plug my TV into it, and then I can move my speakers and get whatever I want.
That requires a lot of separate, you know, picking and choosing and buying and cobbling together, which I've done, and then I sold it all just last year because it got a little too much. This allows kind of a halfway point between the simplicity of just buying a soundbar and having the audio, and having something a little more expansive, without being so complicated that you have to worry about, well, I can buy from 100 different loudspeaker manufacturers. Instead, I can go to Best Buy and select from these five and say, hey, this fits my room. Perfect. I don't have to figure out how the sound's going to get adjusted. Like, I have an old Denon, you know, from 12 years ago, that had the microphone you stick in the center of the room, and it would auto-adjust everything for you. But it was so kludgy, and this, this promises to be... Well, 10 years ago, right? That's old. Well, exactly. But this promises not only to take that and make it more simple, but it also offers a standard, an industry standard, that everyone could adhere to and not have to worry, well, this is incompatible. Well, it's not an industry standard. It's Dolby's proprietary thing. But to your point, Dolby's not trying to keep it in their Dolby-branded speakers. They're willing to license it out. And they're good at that. Yeah, that's what they do. Yeah, I think that's a good point, that this could show up in more products. And I think that percentage of people who are not home theater aficionados who want to max out quality, but do want something, is probably just bigger than we think. Because I think a lot of those people show up to the store and say, that looks too complicated. Even a soundbar and two speakers, you know, like a 2.1 system, they're like, wait, I have to mount that on the wall? I don't want to do that. I don't want to mess with that.
So being able to say, hey, if you buy this TV and you want a little more surround sound, you could just buy those two speakers over there, and they're less expensive than the big soundbar system. Maybe you end up expanding this. It's just a matter of getting it into more products, which is what Dolby wants to do. And Dolby's good at that. So maybe it will. The Verge's David Pierce has an article up called "Google's AI-powered note-taking app is the messy beginning of something great." It's a look at Google's NotebookLM. That's the one they announced at Google I/O in May. You might remember they announced it as Project Tailwind, but they very quickly changed it to be called NotebookLM. It trains off your own notes. So if you remember us talking about it, we liked it because it was a limited data set. This isn't trained off the entire web. This is an algorithm that has been trained to be a large language model that then trains specifically on your things and gives you specific answers related to your own research. Pierce does an excellent job, of course, of explaining his experience. I encourage you to read the full article, but a few items from it stood out to us. It only accepts imports from Google Docs right now. There are more sources to come, but it is a little limited right now, and it is in beta. For now, each source can only be up to 10,000 words, and a project can only use five sources. So again, maybe it'll expand, but right now, in the beta, it's a little limited. It generates things like outlines and lists of topics, and it even comes up with questions you can ask it. Answers to any questions, whether they're the ones it suggests or not, come with citations from your source material. And Pierce said it "does a really good job of identifying the bits of information that are relevant to my question." In fact, he, and apparently a lot of people who use this, really valued that more than the actual answers it gave.
He was like, the answers were all right, not anything I couldn't maybe come up with on my own, but the citations really revealed some things about my source material that I might not have seen otherwise. It also added info one time that was not in Pierce's sources. So he asked Google about that, and they said, yeah, the model has the capability to bring in other information, and they're trying to work out where the line is there, because they know that in some cases it might be helpful. In fact, in Pierce's case, apparently it was helpful. But they want to make sure people know when it's doing that, and have some control over when it's doing that, maybe turn it off, turn it on, et cetera. And they're still trying to work out where that line is. Nicole, what do you think of NotebookLM based on this? Honestly, as I was reading this, just from a purely selfish perspective, I can see this being hugely helpful for a lot of journalists out there. And the reason is, I remember when I was doing a lot of reporting back in the day, I would interview, let's say, three to four different sources for a piece, and I would have to transcribe all of those interviews, right? Those transcripts can be literally thousands of words long, per transcript. I just imagine feeding all of that into this software, NotebookLM, and it would do a little summary, maybe pull out relevant quotes from the transcripts, and it might be really useful in sort of putting my story together. And not just for journalists. I'm sure the same goes for research papers in college, or any kind of work or profession where you use research. It takes things that you would normally do already and makes them so much quicker and so much more efficient than doing it all yourself, you know. Yeah. Roger, you were saying you thought it might be really helpful in educational settings. Actually, now that I think about it, it helps in a couple of ways.
One, I was saying, like, studying. I knew someone who was going to law school, and they had not just binders, they had several binders full of notes. And, you know, as you study for, whether it's the LSAT or whatever test you're taking, you often have copious amounts of notes. And I can honestly say sometimes it's really hard to keep track of what's what and what's where. Having everything not only condensed but also parsed and searchable, and having AI to kind of help you sort all that... And interpretable, right? It's not just a dumb search, but a search that actually knows, like, oh, these are the kinds of things you're talking about. I find that compelling. And yeah, there was this... sorry. Oh, OK. There was this part in the story where the notebook software was able to pull out that, for example, speed was a crucial advantage of spreadsheets, the topic he was researching. Not because the author of one of the papers wrote that outright, but because the author quoted a bunch of executives saying that speed was a key factor of spreadsheets. So even though the author didn't come up with it, the fact that he quoted a bunch of people who mentioned it made it a key analytic takeaway from the story. I mean, also, if you're working on a PhD and you're doing your dissertation and you have volumes and volumes of notes from your research, from other pieces of study, this really does help. Or if you're an author, if you're writing a book. I know, you know, from the show, Annalee Newitz, whenever she comes over, you know, she's writing a new book, she has a lot of notes she pulls information from. How awesome would it be to just have that, instead of, oh, where did I write it? What did I write, and when did I write it? And how does it relate to this other aspect of, say, something like archaeology or history?
This is like the assistant you would hire, in many ways, to help you sort through all the stuff you've collected. Didn't Pierce call it something like... like people at Google are calling it a team of infinite interns, or something like that? I mean, it's pretty cool. I would totally use this just to help sort out my life. I have so many things that it could... It's weird, because when I see this, I immediately thought of when you had to go through probate on a house, right? And this kind of thing, it's like, you need to have a lot of information collected in one place and be able to make it easily interpretable. This sort of thing could be really helpful. And it was interesting that Pierce noted that the Google folks just call it Notebook, not NotebookLM. And I have a feeling, whether that's subconscious or intentional, that they look at this as a notebook product. Like, instead of using Notepad, use Notebook, so that it can take your notes and expand them. And obviously, at some point it's going to have to go beyond five sources. It's going to have to go beyond Google Docs. But it would be able to take your notes and make sense of them. I think it's pretty interesting that this could be the first big AI product for Google, not Search. Search has got its own path that it's on, and I'm sure it'll be interesting, but this feels more useful to me. And you know, if they can offer assurances to potential customers about privacy, about, like, whatever you stick in there staying within the confines of that account, I think they have a winner. Yeah. Well, folks, I taught a course recently. It was just a 90-minute seminar, really, on how to make a great podcast. And if you weren't able to be part of that, we did have to limit the number of seats, we have it available at the Patreon store, at patreon.com/dtns/shop.
So if you want to get a streamable or downloadable version: I explain the foundational elements of podcast producing, and I share ideas and my experiences on making a podcast great, how we do our rundowns, things like that. So you can get that class, again, as a downloadable audio file or a streaming video, depending on your preference. You can get it either way, same price for either one. And you can find that at patreon.com/dtns. Longtime tech reporter and analyst Benedict Evans has a post up that got kind of buzzy today. It's up there, you know, towards the top half of Techmeme. Techmeme described it as a look at the ethical and legal issues around generative AI, which makes things that were previously only possible on a small scale practical at a massive scale. And the scale really is the essential part of this. It's a great article. Of course, I encourage you to read the whole thing. We'll have it linked in the show notes. But for now, I want to focus on this paragraph. Evans wrote: a person can listen to a thousand hours of music and make something in that style. If a person did that, they wouldn't have to pay a fee to all those artists. So if we use a computer for that, do we need to pay them? I don't think we know how we think about that. We might know what the law might say, but we might want to change that. And then this question Evans puts forward later on, I think, really brings the issue into focus: AI makes practical at a massive scale things that were previously only possible on a small scale. A difference in scale can be a difference in principle. What outcomes do we want? What do we want the law to be? What can it be? I think this is so well put. This is the difference between saying, hey, five or six friends, let's meet up at a bar, and putting a notice on Facebook, if you're like a hugely influential Facebook poster, and having thousands of people show up at the bar. One's totally fine. The bar loves it. Nobody cares.
The other is a problem that you need to plan for and treat differently. Scale makes all the difference. We obviously see that with email and spam. And I think Evans very clearly puts forward that the problem with AI is that you can't just make an analogy to what we humans do. You have to look at the scale as well. Nicole, what do you think of this? I think there's an interesting point he made in the article about Taylor Swift. If you tell AI, hey, make this song in the style of Taylor Swift, that's one thing, right? But if you tell the AI, hey, make a song based on the past 10 years of pop music, it might pull in some Taylor Swift, because, you know, she's part of the past 10 years of pop music, but it's not specifically about her. It's just a generalized sample of the past 10 years of pop music. So if that's the case, do they have to pay royalties to all the artists of the past 10 years? Like, how far do you want to extend that kind of thing? And I do agree that a person might be able to do it, a person might be able to sample some music from the past 10 years. But if a person does it, they're just sampling music, you could argue that, right? They're not pulling from a specific person, per se. But yeah, I do think the laws have to be different, because it is different. AI is not the same thing as a person. Yeah, I do think that's true. Yeah, and I think he does a great job of explaining in this article that the data the model trained on is not in the model. So when you ask it to do something in the style of Taylor Swift, it doesn't look in its database at Taylor Swift songs. It's just in the model to know what that means and output things. So it's not the same as copying. It used copies to train on, but it doesn't keep the copies.
So it's a whole different question than copyright at that point. And I think the scale is the place to get at it. Because, like you said, you could say, make a song in the style of the last 10 years of pop music. Who do you owe that to? If you say Taylor Swift, it's easy. Oh, okay, if I'm going to owe anything, if we decide that you should owe anything, we know who to pay it to. Although, with Taylor Swift, which one are you copying? Taylor's Version, or the one owned by her original label? Those are two different royalty holders. So it does start to get complicated there. But what if it is multiple things? What if you have changed the parameters? At what point do you owe any compensation to people, and at what point do you not? Is it just being trained? I'll throw my opinion out, and Roger and Nicole, you guys can bat this around. I think we need to create a very small royalty mechanism similar to what we do for songwriters. There's something called a mechanical royalty that says you don't have to get the songwriter's permission, but if you cover their song and you release it, you owe a fixed royalty to the songwriter. I think we should do something like that for training data. I think it needs to be smaller than that royalty, because any one piece of training data is infinitesimally small in its contribution to the model. If you remove one song from the training data, it doesn't materially change the model. So it needs to be a smaller rate, but I think there should be a mechanical rate set, so that if you're in the training data, you're not really being copied, but let's give you something for the privilege of them using your data to create their model. I think for me, as well, the question comes down to, what is the person using the AI using the final result for? Are they using the sample to release actual music that they're selling? Are they taking the art they create through the AI and selling it through commercial means?
Or is it kind of an educational thing? So that's definitely part of it. So I do agree that if it's a commercial use of some kind, some kind of royalty needs to be put in place somewhere. Otherwise, the whole thing's... You think not just... Because I was saying the creator of the model should have to pay a bit of a royalty. You're thinking the user of the model also might need to pay something if they're selling something. I think that starts to get tricky, though. I mean, again, if you're very clearly copying, say, Suga of BTS, then yeah, you're going to owe something to them. But if you're doing something that's a little less identifiable, I don't know. Roger, what do you think? I mean, it's one of those things where there are so many instances where you can think of, well, they should be compensated for this, they should be compensated for that. I think there's merit to both yours and Nicole's suggestions, in that a mechanical license... but I'm wondering if that just applies to anyone who picks up the data, if you have to use this particular set of data. For example, there could be data that's in the public domain, just like there's... Yeah, well, that's what I'm thinking. So is there a limit? Is there a lifetime to this? That's a good question. My question is, if you sample not the past 20 years, but everything pre-1900s, and you use that as your training model and it generates works from that, would we have the same... No, because those are public domain. Yeah, no, this is a good clarifying question. I do think that royalty would have to apply to things that are protected already, not things that are unprotected. Because, you know, it's so complicated, and I do think that may be one way to deal with it, but at the same time, I agree with Nicole.
It's like the user, you know, whoever uses it. Like when you use stock images or stock samples or whatever, you pay a small fee that covers the license and the royalty for all the stuff that you use, so you don't have to worry about it from that point forward. You pay to play. So if you use it in a work, whether it's a game, a novel, a movie... Yeah, but that, again, goes back to our earlier conversation. Those cases are more clear. But what if you're just using the training data to create something that's in the style of the last 10 years of images? Do they get nothing for that? I mean, that's where it becomes trickier. I think current law can deal with "I created something that's exactly the same." I think when it starts to become, well, it's in the style of, or it's kind of like, that's where it gets trickier. I guess, based on the stock image example, it would be like, not a real person, but an AI-generated one. Yeah, like... And not a copy, but something that kind of looks like that. You know, make this photo look like a Van Gogh, or in the style of Van Gogh. And so... Right now, that's fine. If it's in the style of Van Gogh, you don't owe any... Well, Van Gogh is a bad example because he's public domain, but if it's in the style of Scott Johnson, you don't owe Scott Johnson anything anyway. But when you can do Scott Johnson at the kind of scale that machine learning does, maybe we do need a different mechanism. Scott would agree, I think. Yeah, I think so. Well, folks, I'd love to hear, what would you do if you were the person in charge of setting policy? What would you recommend? Feedback at DailyTechNewsShow.com. All right, one last story here that I think is really interesting. You might know that magnetic tape storage is not just a cool retro thing Gen Z uses to play old music.
Oh, no, it has consistently provided a compact way for large organizations to make long-term backups, and it has for a long time. Something you don't need to access fast, you just want it reliably stored. They call it cold storage. And it's still being developed. IBM just announced its new TS1170 drive that can store 50 terabytes uncompressed on one cartridge, and up to 150 terabytes if you compress it. And that's a big jump. The previous model, the TS1160, could only do 20 terabytes uncompressed. So I know for some of you, you might not have realized that magnetic tape storage was still a thing, which it absolutely is. And for those of you who do know and work with this, hey, you've got a brand-new IBM product that's going to increase your ability to store stuff. They're still developing it. Tape is always one of the recommended long-term storage formats for large databases. I mean, I'm not talking about your home, you know, two-terabyte NAS. No, this is for companies. I mean, you could try to get one. It's a little expensive. Even medium-sized companies. Say you're an internet-facing company that deals with a lot of online clientele. You've got to store that data. And sure, maybe only a few years' worth doesn't add up to much, but once you start going into 5, 10, 15 years, that's a lot of stuff you've got to keep track of. Yeah, and I say it's expensive for the individual, but for an enterprise, it's actually quite affordable. All right, let's check out the mailbag. TJ Asher liked our discussion on the extended show on Friday about whether it's better to see the movie first or read the book first. TJ wrote: I have always been, and still am, in the camp of movie or TV show first. The book is always better because it can explore more and go into more detail than a movie. Take Jurassic Park, a movie which holds a special place in my heart.
The movie was amazing, but the entire subplot of the industrial espionage and the morality of genetic engineering was represented by one silly scene and a single line of dialogue from Jeff Goldblum. The exception that proves the rule is A Song of Ice and Fire. The books are so ridiculously drawn out and overly detailed that the synopsis of the TV show is superior for a good portion of the story, season eight excluded. Looking forward to the next Friday quiz or debate. Thank you, TJ. You know, threw George R. R. Martin under the bus there at the end. Oh, gosh. You don't like the several pages on lemon tarts? I find that refreshing and delectable. I really do, actually. I really like George R. R. Martin's writing. I would say you still want to see the TV show first and then read the books. Some of the books still need to be written. I understand that. But for the ones that are, I think that's infinitely preferable. Well, Nicole, first of all, where do you stand? Book or movie first? I would say book first, if you can swing it. But I'm not a hardliner about it. If you watch the movie first, that's fine. I'm not against it. Well, don't hold that against Nicole either, and go enjoy her other fine works. Where can they go, Nicole? The easiest way to do it is to just go to my Linktree, username Nicole nerd. Excellent. Patrons, stick around for the extended show, Good Day Internet. Dear Abby, the venerable newspaper advice column, is still around, and it handled the question this week of whether you have to get a smartphone in this day and age, or whether companies should just remember there are folks out there with their $30 feature phones. The answer may surprise you. You can also catch the show live Monday through Friday, 4 p.m. Eastern, 2000 UTC. Find out more at dailytechnewsshow.com/live. Sarah Lane and I are back tomorrow with more tech news. Please join us then. This show is part of the Frogpants network.
Get more at frogpants.com. Diamond Club hopes you have enjoyed this program.