Daily Tech News Show is made possible by its listeners, thanks to all of you, including James C. Smith, Miranda Janell, and Justin Zellers. Coming up on DTNS, should Apple make Siri easier to activate? Why Airbnb price transparency isn't great for everybody, and how cosmic rays cause software errors. This is the Daily Tech News for Monday, November 7th, 2022. In Los Angeles, I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. From the far-too-warm New York City, I'm Ayaz. And the show's producer, Roger Chang. Oh, my friends, we are going to solve the mysteries of the universe and why your software just suddenly makes weird things happen. But first, the quick hits. Apple announced it anticipates shipping fewer iPhone 14 Pro and 14 Pro Max units due to COVID-19 restrictions at the primary Foxconn factory in Zhengzhou, China. China ordered a one-week lockdown in the area starting November 2nd. Apple says the factory where many iPhones are assembled, quote, is currently operating at significantly reduced capacity, end quote. Apple advises that customers will experience longer wait times to receive their new products. Shipping is now estimated to take three to four weeks from order. Over the weekend, several people noticed Twitter app update notes indicating that the revised version of Twitter's Blue subscription service had launched for $7.99 a month and included verification. However, Twitter product manager Esther Crawford noted on Twitter that the new Blue isn't live yet. The sprint to our launch continues, but some folks may see us making updates because we are testing and pushing changes in real time. So you could never actually subscribe to it. She continued, the Twitter team is legendary, salute emoji, new Blue coming soon. So no new Twitter Blue just yet, but it sounds like you might get it by the end of the week.
In other Twitter news, Elon Musk said that any Twitter handles engaging in impersonation without clearly specifying parody will be permanently suspended. Previously, Twitter issued a warning before such suspensions. China's second biggest chip manufacturer, Hua Hong Semiconductor, received regulatory approval for an initial public offering on Shanghai's STAR Market, hoping to raise 18 billion yuan in the offering, which is about 2.5 billion US dollars, which will be used to invest in a new chip fab and upgrade an existing one, both in the city of Wuxi. TrendForce estimates the company holds a 3.2% market share in the global foundry market, making most of its revenue on mature technology, mostly based on a 55-nanometer process. Now, that STAR Market is a stock market in Shanghai, very different than the Star Market you find at the Hub on Causeway in Boston. Totally different Star Markets. LiDAR is a key technology used in autonomous driving. Over the years, many LiDAR startups have come onto the market to try to get a piece of the driver assistance pie. Now we're seeing the industry consolidate. Two LiDAR companies, Ouster and Velodyne, have agreed to merge according to a November 4th agreement. Neither company has been able to turn a profit, but they hope that by combining their not-profits, they can create scale to drive profitable and sustainable revenue growth. Over the last year, both companies acquired other LiDAR startups. Ouster acquired Sense Photonics in 2021, and earlier this year, Velodyne acquired Bluecity AI. Messaging app Telegram added video message transcription, now available to its Telegram Premium users. Free users also get new features. Topics in Groups will let groups with over 200 members create separate spaces that work like individual chats with discrete notifications. It also added collectible usernames, which can be purchased and sold, secured on the TON blockchain.
They can be fewer than five characters long and function like regular usernames in search and links. In other encrypted messaging news, Signal also added a feature that many are delighted by, but some annoyed by. It added the ability to share ephemeral messages, videos, and text that expire in 24 hours, kind of like how ephemeral messaging works on other platforms. Something that Signal is very creatively calling Signal Stories. It's available on iOS and Android now and coming to the desktop soon. Now, don't worry if you're not into it. You can turn off viewing stories in your settings. Yeah, but you know, give it a shot. Let's see what people do encrypted. And that's the quick hits. All right, Apple's doing something that may or may not be a good idea. Yeah, so Apple historically arrives a little late to a certain type of technology, making up for being late to market with its own polish within an emerging category. You could think of the iPod, not the first MP3 player. The iPhone wasn't the first smartphone. But Apple was uncharacteristically early when it came to virtual assistants on smart devices. Siri came out in 2011 with the iPhone 4S, three years ahead of Amazon's voice assistant. And certainly there have been others since then. Being so early, however, its wake word set the tone for the industry, with the hey-plus-name approach, in this case Hey Siri, being used by Microsoft's and Google's assistants as well. Samsung broke the mold with their daring Hi Bixby implementation. That was wild. But now Bloomberg's Mark Gurman reports that Apple began an initiative to change the wake word for its smart assistant to just calling out Siri, dropping the need to say hey before the name. Gurman describes this as a technical challenge that required a lot of AI training and underlying engineering work. The new wake word approach is in testing with employees to gather training data.
The company reportedly plans to roll out the switch early next year, but Gurman says this could also slip into 2024. Yeah, this also comes as Apple plans to integrate Siri more deeply into third-party apps, the idea being to improve the ways that Siri can understand a variety of requests and perform the correct course of action if it can connect deeper into Apple's app ecosystem. Now, we will get to the technical challenge, because I think that's interesting here. But the first reaction I had when I heard this story was, my Siri has been going off accidentally more often lately, and I feel like dropping the hey part would make it go off even more, right? Like, hey Siri is a lot harder to accidentally say than just Siri, as witnessed by the fact that we never say Alexa all in one go, because we don't want to set off everyone's Amazon Echoes who's listening to us. I've gotten a lot of false reactions from my iPad when I was just saying the word serious. So I'm not even saying hey anything. I would try to look at my script and say, what am I saying that's close? Serious. So if Apple's going to do this and they figure out a way to have their machines understand what word you're actually saying, that'd be a great improvement. But if this gets any worse, I mean, that would be, I think, a real problem, because Siri is still collectively thought of as, I wouldn't say it's as big of a joke as Bixby, but it's way behind considering how much Google and Amazon have leapfrogged them. I mean, I don't know. I think it's gotten a lot better since its early days, back before there were really voice assistants in general. I definitely set off my Amazon assistant for all sorts of reasons. In fact, sometimes it happens during the show, where I'm like, I didn't say anything that even started with the letter A. Like, what are you doing?
My mom, when I was visiting her recently, she's got an Apple Watch, and we were talking amongst ourselves, and her watch kept going off because we were triggering it somehow. And we just kept laughing and not really understanding what was going on. I think that is sometimes a ha-ha situation, sometimes an annoyance, and sometimes worse, when you actually trigger something where you're like, oh man, this has got to stop. Where can I give more input on the fact that whatever I said, the next time I say that, do not do this again? And I think that's where I would like to see a little bit more programming effort, because right now, you just get kind of annoyed and stay quiet for a second, and then everything goes off and your life continues. Yeah, and I'm willing to entertain the possibility that reducing it to Siri, and all the work they're doing to make that work, will actually be a net improvement. That they're going to home in on that and make it only active when you really do say Siri. How many times have I said hey, Sarah, and then complained that Siri not only launched but transcribed me saying hey, Sarah? Like it knew I said Sarah somehow, but then also thought I said Siri? I'm on board. Because there are two different systems going, right? And one was apparently better than the other. This means there are a lot more advancements in their chips and their actual machine learning technologies, because if the HomePod gets smarter, the watch is smarter, everything's getting smarter, that means these chips are getting more and more, like, just great. So that's the thing about the technical side of it. That also kind of speaks to the fact that when I first saw maybe this wouldn't roll out until 2024, I was like, really? I mean, how hard is it? Clearly there's more going on besides just saying, oh, let's delete the word hey. It's easy to make it work badly. It's hard to make it work well. Well, here's an error where we kind of know why it happens.
Have you ever had a weird computer error and thought that the only explanation could be ghosts, or maybe cosmic rays? Well, there may be something to that second one. Chris Baraniuk wrote an article for BBC Future last month called The Computer Errors from Outer Space. His article started with a story about a security researcher named Marie Moe, whose pacemaker suffered a glitch because of some oddly corrupted data that she believed was probably corrupted by cosmic rays when she was on an airplane. Now, that may sound far out, but particles from the sun and elsewhere in the universe constantly rain down on Earth. Some of them, like neutrinos, pass right through everything without incident. Some, like photons, provide life-giving energy to plants and animals. And some, like stray protons or neutrons, may in rare cases cause what's called a single event upset. These are radiation-induced soft errors caused by the impact of energetic particles on circuits. One example would be an errant neutron passing through a chip and disrupting the electrical charge, flipping a 1 to a 0. Flipped bits were diagnosed as the cause of Moe's pacemaker issue, though whether they were flipped by cosmic rays or not is impossible to tell. Yeah, it's hard to prove. Single event upsets are exceedingly rare, and they don't leave a trace. An error caused by an errant cosmic ray is indistinguishable from just a software glitch, or a memory bug, or normal wear and tear on an old piece of equipment. And even if you can distinguish particle-induced effects, they are so rare it would be hard to get a representative sample size to do any kind of significant study. One of the largest samples collected is from Mozilla, whose engineer Travis Long noted back in April this year that it routinely sees unexplained errors in telemetry data that correspond to flipped bits. They can tell that it's flipped bits, and Long noted that a recent bug associated with such errors correlated with a geomagnetic storm.
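To make the flipped-bit idea a little more concrete, here's a minimal Python sketch. This is our illustration, not anything from the BBC article: it simulates a single event upset by XOR-ing one bit of a stored byte, then catches the corruption with a simple parity check, which is roughly how the most basic memory error detection works.

```python
def parity(value: int) -> int:
    """Return 1 if `value` has an odd number of set bits, else 0."""
    return bin(value).count("1") % 2

def flip_bit(value: int, bit: int) -> int:
    """Flip one bit, like an errant neutron disturbing a memory cell."""
    return value ^ (1 << bit)

stored = 0b1010_0110            # the byte we meant to store
stored_parity = parity(stored)  # parity recorded alongside the data

corrupted = flip_bit(stored, 5) # a stray particle flips bit 5

# A single-bit flip always changes the count of set bits by one,
# so a parity check catches it (though it can't say which bit flipped).
assert parity(corrupted) != stored_parity
print(f"{stored:08b} -> {corrupted:08b}: flip detected")
```

A lone parity bit can only detect a single flip, not correct it; schemes like ECC memory extend the same idea with enough redundancy to also repair the flipped bit.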
There was also a study of satellites published in 2020 that showed data errors in orbiting satellites happen in much larger numbers when satellites pass through something called the South Atlantic Anomaly, where there is increased cosmic radiation. These are correlative, though, not causative. While it's hard to prove a given error happened that way, you can prove that it's possible. Paolo Rech at the University of Trento in Italy has conducted lab experiments where they fired neutrons at electronics and induced errors. They are using the data to develop improved autonomous car algorithms that can detect and adapt to such errors. Yeah, you can use the particles to create an error. You just can't prove if a particular error was because of a particle, I guess, unless you were the one shooting the particle at it. So what can you do with this information? On the one hand, be aware, because the occurrence may be increasing. There are so many more chips in use at any given moment that there's just more of a chance for it to happen. More chips, therefore more chances. And as chips are getting smaller, we're talking about one and two nanometer processes, it's easier for subatomic particles to affect them, because the features are approaching the scale of the particles themselves. You also might check space weather. Periods of increased solar activity raise the number of particles hitting the Earth. For instance, spaceweather.com reported a solar flare on November 7th that temporarily interrupted shortwave radio over Australia and New Zealand. But the number of incidents is still exceedingly rare, and almost always is compensated for by error checking in software. Data centers can protect themselves by being geographically diverse, and critical equipment like computers in air- and spacecraft are hardened against interference.
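As an aside, one classic technique behind that kind of hardening is triple modular redundancy: keep three copies of a value and take a bitwise majority vote, so a single upset in one copy gets outvoted by the other two. Here's a hedged Python sketch of the idea, purely an illustration, not any vendor's actual implementation:

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority: each output bit is whatever at least
    two of the three copies agree on."""
    return (a & b) | (a & c) | (b & c)

value = 0b1100_1010                # the value we want to keep safe
copy1, copy2, copy3 = value, value, value

copy2 ^= 1 << 3                    # a single event upset hits one copy

recovered = majority_vote(copy1, copy2, copy3)
assert recovered == value          # the two good copies outvote the bad one
print(f"recovered {recovered:08b} intact")
```

The obvious cost is storing or computing everything three times, which is why this sort of redundancy shows up in spacecraft and safety-critical gear rather than in your laptop.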
Yeah, one of the things I found interesting looking into this story was that that hardened equipment is protected by a bunch of proprietary measures. So you don't know how they're protecting it. Those are kind of trade secrets. So it'd be really hard and probably expensive to do this on your own, which is why they say, for a data center, just make sure you've got data centers in multiple places if you're a big company like that. Reading articles about this and seeing that they're using this for cars and their ability to essentially identify whether they're seeing an object or a person or not, if a flipped bit could cause that kind of scary confusion for a computer, that would be a terrifying real-life example, since this self-driving car thing is supposed to be happening, not happening. You guys have been covering that for a long time. So the real-world stuff, hopefully those applications are hardened very, very much to avoid any kind of catastrophes. If it's not hardened, it's at least error-corrected, right? The car example is one of those stories where it's like, you just need one story of people saying, ever heard of flipped bits, cosmic events? Well, you know, say goodbye to smart cars that can drive themselves, because, you know, they're not safe after all, and people will freak out. And maybe sometimes this is warranted, but I think from what both of you just laid out, it's exceedingly rare. And the fact that it is a known phenomenon that is sometimes real and can at least be researched, if not mitigated right away, is the first step. Yeah, and usually a flipped bit doesn't cause a catastrophic error. Certainly in a complex system like a car, it would be exceedingly rare for that to happen. Even in Moe's case, the pacemaker the BBC story uses as an example just reset to default when that happened.
It wasn't like it stopped altogether, but she noticed. She's like, this isn't the way it usually feels, and so got it looked at. So yeah, usually a flipped bit doesn't cause catastrophic errors, and they're rare. But interesting to know that it's real, because I've often thought about that. Larry and Elana were joking, it's like, oh, now when I make a coding error, I can just blame it on cosmic rays. It is a real thing. Well, folks, it's time to get our holiday gift card list in order. It is that time of year. Each year, we send every patron who wants one a holiday gift card with exclusive art from Len Peralta. We've seen what he's done this year, and it's very nice. I think it will inspire joy in your home as well. If you'd like the card, make sure you're a patron, first of all, and then check patreon.com slash pledges to make sure that we have your proper mailing address if you want something sent to you. If not, no worries. But if you do, do that by November 15th, please, and get the exclusive DNS holiday card, DTNS holiday card, mailed to you from us. DNS, that's something. Yeah, we need your actual mailing address, not your DNS. That's a good point. Cosmic rays need not apply. When browsing listings on Airbnb, potential guests can sometimes be hit with a little sticker shock. That's because the service shows the nightly rate for a booking, but does not show required fees that might go along with it. Things like security deposits, cleaning fees, even extra fees for pets or heated pools, for example. So that cute little vacation rental might not be quite as much of a steal as it appeared to be at first blush. CEO Brian Chesky has announced that Airbnb is making changes to make pricing more transparent. Yeah, specifically, starting in December, Airbnb will let customers set a toggle switch to show the total cost of stays when they're just browsing around.
That includes fees, rather than just the nightly rate, which is how it's always worked in the past, though it still won't include taxes. When you check out, you're still going to be paying a little bit more, but many people are used to that buying things online in general. So the pricing will show across the app, including in search, in maps, and also wish lists, as long as the user stays logged in. If they log out, they would have to choose the selection again when they log back in. The service will also factor the total cost of stays more heavily into search results, regardless of whether or not users toggle on total cost. Now, additionally, in the coming months, the platform is going to roll out more discount tools for hosts, letting them set seasonal or weekend discount pricing, and any host checkout requests will be shown to guests before they book. You, probably, as a person who rents places, like all of these changes, right? You want to see the actual cost of what you're booking. You want to see the checkout requests. Like, am I going to have to change the sheets and throw them in the dryer or something before I leave? It's good to know that before you book. But Sarah, you actually manage an Airbnb, and some of these are good and some of these are not from your end of what happens, right? 100%. So as somebody who is potentially going to put down some coin, you know, maybe with my family, to stay at a vacation rental... you know, the vacation rental that I manage is quite large. There's a big pool and a hot tub and two separate kitchens and quite a few bedrooms. It is a place that a large family typically books for around a week. That adds up, especially in high season, when our rates go up a little bit, which is a separate conversation. But, you know, when you look at a nightly price, let's say something is $800 a night, and you think, well, you know, we're all going to be able to stay together, be able to cook food in the house.
It's going to be more fun. If you break that down, a hotel would be so much more expensive, and we all wouldn't be in this nice homey atmosphere. Makes a lot of sense. Sometimes, when you look at that week's total with the cleaning fee that always, you know, gets added on, plus various taxes, and yes, sometimes an extra service fee, if they want to heat the pool in the winter, for example, that costs us money, so it costs them money. And that can all lead to people saying, well, hold on a second, this is insane. Does it really cost this much? And the thing is, like, yeah, it's going to cost that much either way. But you might as well know that ahead of time, so that I don't have to have that uncomfortable conversation with you afterwards. And I think that property managers slash owners of any vacation rental know how this works, where you have people kind of wanting to go through and itemize stuff later on, where you think, that's the whole reason we were on this platform, so that we didn't have to have this conversation. It was all just sort of taken care of for us. And that's why Airbnb is taking a fee. You know, they take a cut of all of this from the booker and from the person who's collecting the money, who is offering a house. So I think these changes are really good. But I have already seen some property management pushback on this. Well, now we have to lower our prices, because the sticker shock is going to put more people off ahead of time. I don't really, you know, I'm not sure. I think maybe we all just sort of get used to it. Well, yeah, if everybody on the platform is showing their fees, then the only ones that would cause sticker shock are the ones whose cleaning fees are significantly greater, I think. Right. It is a song and dance, though. It really is.
One of the other things that you mentioned, Tom, of the platform rolling out more discount tools for hosts and being able to set seasonal or, you know, higher pricing depending on demand. For example, we don't have anybody booked here for Christmas. And Christmas is historically, like, you know, a pretty big week. High demand. Yeah. High demand. And, you know, a lot of people just aren't traveling. Maybe they're saving some cash. Maybe they've already done another trip earlier in the year. You know, it happens in a looming-recession type of situation. But that kind of thing would be pretty great, because right now, if I want to set 2023 pricing, it is a manual process, right? Go through and methodically. Each day. Every day. You can tell Airbnb, just kind of like, give me what you think the pricing for stuff in my general area for the same kind of place is going for, and just be hands off. But they're not going to... They're not going to adjust it for the weekends or for the seasons, right? Yeah. Or if you know, like, 4th of July is a slam dunk for us. You're going to have to go and do that manually. Exactly. And so that would be nice. But I, you know, again, as an Airbnb user on both sides, I've stayed at lots of Airbnbs as a guest over the years with mixed results. But that's sort of the beauty of it. How much are these cleaning fees, anyway, though? Getting back to the fee thing. Like, I know the weekend tools make it easier on your end. But I mean, is it really raising the end price that much? Not really. In my opinion, if you're paying five grand for a week's stay and the cleaning fee's $300, you know, you figure, I mean, what are they paying the cleaner? Probably around $300. You know, it's like, this is not some sort of money-making grift. Yeah. As a percentage of what is being paid already. Yeah. But it does add to the feeling that you've been nickel-and-dimed. And I get that. I definitely get that.
Well, and if you're looking at a hotel rate, the hotel's going to hide any of its resort fees and stuff too. So then it's not apples to apples anymore. I get that too. I mean, the nice thing about a hotel, though, is that you go, we've splurged, right? And everything is, I mean, sure, if you get room service or something, that's going to be an add-on. But that is all kind of hidden in the costs. So it might hurt up front, but it hurts less later. And I think that's what Airbnb is going for. Oh, no, I'm saying hotels do the fee thing too, where you're like, wait, it was only $400 a night, but it ended up being $500 a night because they had a resort fee and a parking fee. And, you know, they pile on the fees after the fact too. Yeah, well, you know, it's a fee-happy world. Yeah, it is. Well, fees aside, if you've ever listened to a deep philosophical conversation and been disappointed that it was too coherent, and also that it just didn't go on indefinitely, well, then you might like to check out a new website from the Italian artist and programmer Giacomo Miceli called The Infinite Conversation, featuring an AI-powered chat between virtual avatars of the very real German director Werner Herzog and the Slovenian philosopher Slavoj Zizek. Miceli used a popular language model tuned on interviews and content from both of the speakers, and then used AI voice generation to imitate how they would speak the dialogue. A new segment is added daily, so it kind of does go on forever. Miceli notes, new segments can be generated at a faster speed than what it takes to listen to them. In theory, this conversation could continue until the end of time. The site is meant to be commentary on the social impact of audio deepfakes. This is the best effort. Oh, I mean, just proof of concept, right? Like, okay. I've got my media conversation that will never end.
Immediate cynicism of, like, how long before it devolves into some terrible conversation about, like, I don't know... you know how those AI bots that Microsoft tried to come up with turned, like, racist really fast, or sexist really fast? When does this break down? I don't want an infinite conversation where that happens. I don't know what kind of moderation is happening with these AI folks, but it's something. We are too early. That was them. You are asking about the future. I think that maybe my version of materialism will be our century's version of Kantian idealism. I think your fears can be allayed, Ayaz, because it's trained specifically on Herzog and Zizek. It probably is more repetitive than it is off the rails. Okay, that's good then. Rails are good for some of the stuff. It's not trained on the wide open internet, right? So there are some rails around it. I feel like Nazism or Stalinism, a palpable experience of how not only God dies, but also humanity itself dies. I feel like this would be soothing to me at night. I could kind of be like, I'm kind of tired, but I want to listen to something for a little bit. How about the infinite conversation? And it just keeps going. And it just keeps going, yeah. All right, something else that keeps going, thanks to you, is email feedback at dailytechnewshow.com. What do we have in the mailbag? This one comes from Karleen. Karleen is talking in reference to our conversation last week about Amazon's changes to Prime Music. Amazon said, hey, we're adding a bunch of new stuff for you. Karleen says, to me, they've taken away a cool small feature and replaced it with an upsell I'll never use. Like Rich, I have kids that loved asking Amazon's assistant, we'll call her A, to play a song. When we experienced the change the other night, my three-year-old couldn't understand why A wouldn't play Baa Baa Black Sheep over and over.
My older kids had a go-to-sleep playlist that's now shuffled with similar songs. When you're particular about something as a kid, that does not work well. Karleen says, we loved having a specific song or playlist on demand played through A, but now it's just frustrating. I've been deep into the Echo ecosystem, but its lack of a straightforward way to connect with YouTube Music, and now this, really makes me consider switching to Google Home. Well, if you're in the YouTube Music system, I guess that makes sense. I have always had the Echo connected to Spotify and Apple Music, so we've never really relied on Amazon Music. But that said, I totally get it, Karleen, where your kids are like, no, but it always works this way. And you're like, well, I don't want to have to pay extra to make it work for my kids that way. That line about shuffled with similar songs would be maddening for adults, let alone children. I just think that's kind of a horrible little switch. I've also used Echoes for a long time. I don't use their music service. I use YouTube Music. It works all right as long as you use your own playlists, and it at least allows you to upload your stuff. I've had substitutions happen where I had a track of a song that I liked that was on their service, and then it got switched for a live version because they had the rights to that. That is horrible. It sounds similar to this method. And there is a solution, which is pay $8.99 a month. That's what Amazon's saying. Like, you want these features that you came to rely on? Well, you have to pay for them now. What is this? A verified check mark? Geez. It's a dollar more than a verified check mark. I have the solution when the kids get upset that they can't play Baa Baa Black Sheep: just put on the infinite conversation. I could never direct a film liking Matt Burtman's The Suppense. What's not to like? Yeah, the kids will go right to sleep. That's right.
Well, you know who doesn't put us to sleep? Ayaz, because he always makes the show better. Ayaz, what have you been up to lately? I've been working on This Old Nerd. That's a show where we show you how to have the most tech-forward home and life possible in as little time as you need, because the thing is, we've got a lot of responsibilities as adults or older people, and you might want to take care of them. So the projects are quick and they're simple. And I'm just thinking right now, like, how I would come up with a roll-my-own solution to not have to pay Amazon to do this: create my own playlists, make sure they're available on all the Echoes. I'm pretty sure there's a way to roll your own using, like, Plex. I've got to think about that. And that's the kind of stuff we do. Fantastic. I also want to thank our brand new bosses, Anastasia and Aaron, who just started backing us on Patreon. Thank you, Anastasia. Thank you, Aaron, for finding the value in independent tech journalism. If you're out there listening and you're getting some value out of us, and you can afford it, we know not everybody can right now, but if you can, pick everybody else up and give a little value back. Please do: patreon.com slash DTNS. Indeed. Speaking of patrons, stick around for the extended show, Good Day Internet. We roll right into it when DTNS wraps up. But just a reminder, you can catch the show Monday through Friday at 4 p.m. Eastern. That's 2100 UTC. You can find out more at dailytechnewshow.com slash live. We'll be back talking about YouTube Shorts tomorrow with Lamar Wilson. He has thoughts. Talk to you then.