Coming up on DTNS: robots that can take orders, the final report on Uber's autonomous car death, and how Facebook pretended to care about your privacy in order to squash competitors. This is the Daily Tech News Show for Wednesday, November 6th, 2019. In Los Angeles, I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. From the Frog Pants Studios in Salt Lake City, I'm Scott Johnson. And I'm Roger Chang, the show's producer. We were just having a lovely conversation about our dogs and their limitations. For once. And benefits. That's all on Good Day Internet. If you want to find out what's going on with that and more, yesterday we had an extended conversation with Patrick Norton after the show. You've got to become a member: patreon.com slash DTNS. All right, let's start with a few tech things you should know. Apple has updated its privacy policy pages, adding several technical white papers on how Safari, Photos, location services, and Sign in with Apple actually work. Apple also detailed new privacy and security features added to iOS 13 and macOS Catalina. WhatsApp announced it will roll out a global update for iOS and Android so that individuals can block other individuals, or everyone, from adding them to groups. Unless a user specifically says anyone can add them to a group within their privacy settings, WhatsApp will now send a notification asking if it's OK before they get added. Previously, users in India could select nobody, everybody, or contacts (but not specific contacts) to add them to groups, but the feature was never rolled out globally. The November Android security patches are out, and this month marks the end of support for the first Google Pixel phone, which launched in 2016. Google says the Pixel 1 will get one more update in December. Google promised three years of security updates for the Pixel 1 back when it launched.
JPMorgan analyzed Apple's 10-K filing and calculates that 31% of Apple's 2019 revenue came from retail, both in store and on its website. That's up from 29% last year and 28% the year before. JPMorgan also points out that greater direct sales generally drive AppleCare sales, which then increases Apple's services revenue. All right, let's talk a little bit more about what's going on with China regarding video games, Scott. China and video games, the unending story. China issued government guidelines on Tuesday restricting when children can play video games. People younger than 18 years old may not play online games between 10 p.m. and 8 a.m. It's a big swath of time. Think of it as a digital curfew. And they may only play games for up to 90 minutes a day on weekdays and three hours on weekend days and holidays. There are also spending limits of 200 yuan a month, about 28 bucks for us in America, for those eight to 16 years old, and 400 yuan a month for those who are 16 to 18 years old. A unified identification system will be created to verify players' identity and age. China was the world's largest gaming market until this year, when the U.S. passed it. China halted approval for new games for nine months in 2018, which probably made a big difference there. But yeah, yeah, big restrictions coming. Well, and well, here, actually. And this is obviously China doesn't mess around with voluntary compliance and getting feedback from the industry. They just put rules in place. China would say that's one of the advantages of their system: they don't have to mess around with that. My question, though, is how much good is this going to do? First of all, even in China, people know how to use a VPN and get around things. But beyond that, even for the people abiding by these rules, what effect is it really going to have? I mean, the idea is they want to cut down on kids who are playing video games too much.
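The rules described above amount to a curfew plus per-age time and spending caps, which can be sketched as simple eligibility checks. This is a minimal illustration, not anything from the actual guidelines: the function names, thresholds structure, and the assumption that under-eights get no spending allowance are my own.

```python
from datetime import datetime, time

# Hypothetical sketch of the stated rules: a 10 p.m. - 8 a.m. curfew,
# plus daily play-time caps and monthly spending caps keyed to age.
CURFEW_START, CURFEW_END = time(22, 0), time(8, 0)
WEEKDAY_LIMIT_MIN, HOLIDAY_LIMIT_MIN = 90, 180  # 1.5 h weekdays, 3 h weekends/holidays

def may_play(age: int, now: datetime, minutes_played_today: int,
             is_holiday: bool = False) -> bool:
    """Return True if an online gaming session is allowed under the stated rules."""
    if age >= 18:
        return True  # adults are not restricted
    t = now.time()
    if t >= CURFEW_START or t < CURFEW_END:
        return False  # inside the nightly curfew window
    weekend = now.weekday() >= 5  # Saturday or Sunday
    limit = HOLIDAY_LIMIT_MIN if (is_holiday or weekend) else WEEKDAY_LIMIT_MIN
    return minutes_played_today < limit

def monthly_spend_cap_yuan(age: int) -> int:
    """Monthly in-game spending cap in yuan, per the age brackets described."""
    if age < 8:
        return 0    # assumption: youngest players get no spending allowance
    if age < 16:
        return 200
    if age < 18:
        return 400
    return -1       # sentinel: no cap for adults
```

The identity-verification system the story mentions is what would supply the `age` input here; without it, caps like these are trivially dodged by logging in as an adult.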
But those are the kids who are going to be most motivated to find a way around this. And I don't know that it does really anything else. I'm not sure that the problem is big enough that this kind of solution is what China needs to solve it. Well, let's even say it's big enough. Here's the problem. China is referred to as the biggest upcoming growth market for video games for a reason, and that's because it is the world's largest growth market. There's so much growth happening there, and so much uptake happening in the world of digital entertainment, specifically video games, that you can't push that genie back in the bottle. That interest and that growth is just going to keep coming. And some of it's going to come with some of the things China loves, like national pride in your eSports teams and things like that. So I don't think this is going to do much of anything. And it's a little bit like A Bug's Life: the grasshoppers can only do so much until the ants figure out that there's more of them. And I don't mean they're going to all have a giant revolt over video games, but they're just going to figure it out and get around it. Yeah, it's sort of like having the adult buy you a beer. I mean, they're not supposed to, but it's, you know, you get the right person to help you log on and you're good to go. It also seems, between 10 p.m. and 8 a.m. every weekday, I suppose, that the country is like, well, kids are in school and they should not be up all night anyway. But that's highly restrictive. Yeah, it's pretty restrictive. The thing is, I want to make this super clear. It's easy for people to look at China and go, bah, it's all draconian and awful and this and that, and this is far too harsh. But a lot of it is just kind of cultural. I mean, we have restrictions on certain things to do with age here in the States, and certainly in Europe and other places, where, you know, you lock things down at a certain age.
They're minors until a certain age, when they get old enough, or they're only old enough to drink beer or do this other thing, whatever it is. You can't vape till you're 19, that sort of stuff. We have kinds of restrictions like that all the time. So it's not that unusual. The unusual part is the idea that, kind of without a whole lot of data, they're sort of assuming that all of these problems, including an uptick in near-sightedness in the country, are something video games are responsible for. And that seems a little nutty to me. And I think people will just kind of get around it. And you're not wrong about the beer analogy. People would go buy you a beer; in this case, people are gonna teach you how to use a VPN. I don't think this sort of stuff can be just capped for, you know, for too long. And what you worry about, obviously, is enforcement could get real nasty, and then it's all fear tactics and that's how they enforce it. But I don't know, it feels like a flood you can't stop, especially at the rate things are growing in China for video games. Airbnb followed up on a tweet from last weekend to announce four new features to discourage house parties after the death of five people at a party at an Airbnb-rented house in Orinda, California. Airbnb will now verify 100% of its listings. A guest guarantee will make sure a listing matches its online description; you're getting what you think you're paying for. Guests will be provided a rental of greater or equal value if their reserved listing does not match the description, or they'll receive a full refund. A 24/7 neighbor hotline will also be created, staffed by humans. So, you know, if the rental next door doesn't seem right, you can call and somebody supposedly will be there 24/7 to answer your call and help you out. And Airbnb will do manual reviews of high-risk listings as well.
Which I suppose means if you get enough kind of iffy two-star reviews and people say, I don't know what's going on, Airbnb has pledged to look into the matter. Well, I think it has to do, the way it was explained in their posting, with whether a listing seems like it's at high risk of people wanting to rent it for parties; they will do manual reviews of those to try to fend that off. For instance, the person who rented the house in Orinda claimed they were escaping smoke fumes and rented it for only one night, on Halloween night. And it's quite possible that a review of that reservation would have caused Airbnb to say, well, hold on, wait a minute, are you doing this for a party, and maybe alerted someone to go and check on that, because it fit the profile, right? So that's the kind of thing they're talking about there. So it's a lot more human interaction, it's a lot more human cost, it's going to use up more person hours at Airbnb. And a couple of these, like the guest guarantee, aren't particularly a response to what happened in Orinda; they're just good policies that I suspect maybe Airbnb was working on anyway. Yeah, it's almost unfathomable that all guests would not be guaranteed by the company, that's the whole point of Airbnb, but yeah, but it's the sort of thing where the company has grown so much that its success creates its own set of problems. We see that with any company that is large. This is a particularly interesting conundrum because you're talking about physical space where people are; it's not a social network, it's somebody's house or somebody's rental or whatever. And when we talked about the story on Monday, I had completely forgotten that a couple of years ago, a friend of mine had a birthday and she rented, she Airbnb'd, a kind of fancy house in LA in the hills. There was a pool; it was nothing that any of us could actually afford to live in.
And it was, the person who was the host somehow got wind of the fact that there were kind of a lot of people there. I mean, we were all very well behaved, nothing bad happened, but it was a party. It was a birthday party, and it was an issue, and the police came, and it was a whole thing. I wasn't actually staying there, so I think I snuck off and was just like, this is not my problem. But I had kind of forgotten all about that, and it was unfortunate, because we were breaking some sort of a rule that I'm sure the host had in their profile. Not doing anything wrong, but it's their prerogative. The US National Transportation Safety Board released more than 400 pages of reports and supporting documents on the March 2018 crash of an autonomous Uber car that killed 49-year-old Elaine Herzberg in Tempe, Arizona. Herzberg was walking her bicycle across the road in a place not designated for pedestrians. So this was jaywalking. The report says the car detected Herzberg and her bicycle before she entered the lane of traffic, 5.6 seconds before the car hit her. The system initially classified her as a vehicle because she was to the side. It's like, oh, there's another vehicle on the side of the road. Then it changed the classification several times as it realized, well, wait a minute, that doesn't look exactly like a vehicle. It did not, however, predict that she would change her direction of travel and cross in front of the car. Uber says it has since made changes to its system that would correctly identify her as a pedestrian, anticipate that she might be about to walk in front of the car, and apply the brakes at least four seconds before impact. That is not what happened then, because the system wasn't capable of doing that. As was previously reported, there was a safety driver. There was a human in this car, but that safety driver was watching a video on her phone and did not see Herzberg in time to apply the brakes herself.
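One reason repeated reclassification mattered, according to coverage of the NTSB documents, is that each time the system re-labeled the object it effectively started over on tracking it, so it never built up the motion history needed to predict a path. As a toy illustration of that failure mode (the class and method names here are invented, and this is a deliberate caricature of a real perception stack, not Uber's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """Toy object tracker where a reclassification discards position history."""
    label: str                                   # e.g. "vehicle", "bicycle", "other"
    history: list = field(default_factory=list)  # recent (x, y) positions

    def observe(self, label: str, position: tuple) -> None:
        if label != self.label:
            # Reclassification resets the track. This is the flaw being
            # illustrated: with no history, no direction of travel can
            # be inferred, so no crossing can be anticipated.
            self.label = label
            self.history = []
        self.history.append(position)

    def predicted_heading(self):
        """Return a crude (dx, dy) heading, or None without enough history."""
        if len(self.history) < 2:
            return None
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return (x1 - x0, y1 - y0)
```

A tracker that flips between "vehicle" and "bicycle" every few frames, as described, would keep returning no heading at all, which is consistent with the car never predicting that the object would cross in front of it.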
The report also notes that the Uber Advanced Technologies Group in Tempe didn't have a standalone safety division, a formal safety plan, standard operating procedures, or a manager focused on preventing accidents. All the kinds of things that you might want to see to help prevent this sort of situation from arising. The report is going to be used to assign a cause for the crash. The NTSB will meet on November 19th. The point of this report is to hand it to the committee so the committee can look at it and say, okay, we are going to officially assign the cause of the crash as this, and decide what to do from then on. But in the course of writing up this bit, I changed my mind from blaming the system to saying, no, that is a perfectly legitimate state for the system to be in. Our cars right now aren't good at telling if there's a pedestrian. We have to be those people. That's why there was a human safety driver in this car. And I think that that is the system that failed in this case, in my opinion. Yeah, and also I keep trying to put myself in the place of everybody in this scenario. If this was just someone driving, and the exact bike thing happened to me in my Volkswagen, it sounds like one of those errors that was caused by the person on the bike. And I know that the whole point of autonomous driving is we're gonna get to a place where our cars are so smart that even mistakes that happen around the car can be accommodated, and the car can react to them in the safest possible way. But in this particular case, if somebody was just driving and this happened, I'm not sure the results would have been any different. I guess what I'm saying is this is another case where everybody's gonna look at this and go, ah, see, this is what happens when you try to do these automatic cars. And I look at it and go, I don't know. I mean, these things happen throughout the history of automation. And guess what? We make a better seat belt.
We make a better airbag. We make a better strut so it doesn't pop when you hit the front of the curb. We're always iterating and building on it. But it's gonna... The eventual outcome of this is more safety for everybody. We're just, wait, things keep coming up. And I'm with you, Tom. When I was researching the story, I was like, hell yeah, not accounting for jaywalking. I never really even considered that autonomous vehicles had to do that. But of course they do, because people are gonna do stupid things sometimes. They're gonna jaywalk. They're gonna, you know, they're gonna dart in front of you. I mean, that's like one of my biggest fears as an actual human driver: somebody doing that and me not being able to predict unpredictable behavior. That happens. But... It's difficult for us to predict that that's going to happen, right? Right, yeah. Yeah. And so it's not unreasonable to expect that an autonomous car at a certain point in testing and development wouldn't be good at predicting this unpredictable behavior. It's not that it hit an object it failed to detect; it's that the object was moving in a way that the algorithm couldn't predict. And the answer to that is like, well, then you shouldn't have this thing on the road where it could kill someone. It's like, well, we have to have it on the road to test it. That's why we had a safety driver. Yeah. Because we didn't think that this thing was perfect yet. And that's why I go back to saying that the safety driver system is what failed here. Absolutely. Well, perhaps this won't fail. Twitter is rolling out its new topics feature, including more than 300 topics to follow, like sports, entertainment, and gaming.
If you follow a topic, you will see tweets from individual accounts that you don't necessarily follow on their own that are deemed to have credibility on those subjects. To decide where tweets go in a topic's feed, an algorithm scans for keywords related to a topic, then checks to see if the tweet is from an account that normally tweets about the topic, and then sees how many people are liking, retweeting, or replying to a tweet. So there's some popularity involved too. The feature is set to roll out globally on November 13th. Yeah, I was just on Twitter now thinking about how this would affect things. I mean, basically what we're doing is refining the idea of lists, in my mind. Yeah, exactly. Lists right now, you've got to kind of curate them. You've got to keep track of them. If somebody starts veering out of the lane, for the reason that you followed them to put them in that list in the first place, you might have to go prune it once in a while or whatever. So in theory, this is supposed to keep us on track and say, well, the topic leads, so we're only going to feed you stuff that is from that topic. But what if I'm suddenly exposed to somebody's weird rantings about a thing I don't care about or disagree with or just plain don't want to follow? I guess I'm not squeamish about this, but I'm a little bit skeptical until I see it in action. Yeah, I mean, I'll put it a different way. I want to follow topics like the St. Louis Blues. I don't want to follow all the reporters or all the people who watch the games, but I do occasionally search for the hashtag. And that hashtag search kind of gives me what I want, but some of it's irrelevant. If this is saying, hey, we'll let you follow a topic that'll give you just the quality posts, and that seems to be what they're aiming at, then I think that's great. I think a little human curation of those feed sources might be in order here. They're trying to do it all algorithmically, but yeah. I think this is worth trying.
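The three signals described, keyword match, whether the author normally tweets about the topic, and engagement, suggest a straightforward scoring shape. This is purely an illustrative sketch of that idea; the function, its weights, and the dampening choice are invented, not Twitter's actual ranking:

```python
import math

# Hypothetical topic-feed scorer combining the three signals described:
# keyword relevance, the author's on-topic history, and engagement.
def topic_score(tweet_text: str, topic_keywords: set,
                author_topic_share: float, engagements: int) -> float:
    """Score a tweet for a topic feed; higher means more likely to surface."""
    words = set(tweet_text.lower().split())
    keyword_hits = len(words & topic_keywords)
    if keyword_hits == 0:
        return 0.0  # no topical keywords at all: filtered out entirely
    # author_topic_share: fraction of the author's recent tweets on this topic.
    # log1p dampens raw engagement so virality alone can't dominate relevance.
    return keyword_hits * (0.5 + author_topic_share) * math.log1p(engagements)
```

In a scheme like this, an off-topic tweet from a normally on-topic account scores zero, and an on-topic tweet from an account that rarely covers the subject scores lower than the same tweet from a beat reporter, which matches the "accounts deemed to have credibility" framing.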
I'm not skeptical that this is gonna cause anything bad, especially because they're not gonna do a politics topic. As somebody who, and I know you all feel my pain, as somebody who regularly gets chided for not talking about tech 100% of the time on Twitter, which I really don't. I mean, maybe it's 50%, but you know, my tweets are not curated all that much by me. But it's sort of like, okay, well, if you're following a tech topic and I'm not showing up in there, it's probably because I'm not talking about tech enough, and that's cool. I'd rather you be happy. You don't have to follow me. The US Army Research Lab has developed software that lets robots understand verbal instructions, carry out a task on their own, and report back. That is a difficult thing to do. We were just talking about autonomous cars that can't properly predict what a human's intentions might be ahead of time. This is tough stuff. Such robots could do things like reconnaissance and checking for IEDs, things done by robots now, but usually by remote control, requiring a human operator on the other end. The software can understand spoken commands and some gestures as well, and of course can take orders from a tablet. It can return data in the form of maps, even with labels, as well as images. It uses deep learning to identify objects it sees, and then it pulls from a knowledge base to help carry out orders. So it might go, that's a car, and then the knowledge base would tell it cars have axles and can drive and stuff like that. The combination allows it to handle orders that are a little bit vague and in natural language, like go behind the building. So much so that in one demonstration, when they said go behind the building, the software followed up with, you mean the building on the right? Because there was more than one building. However, it's not ready for deployment. It's still too slow, and it's not fully reliable yet.
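The "you mean the building on the right?" exchange is a grounding problem: matching a vague referent in a command against the objects the robot has actually identified, and asking a question when the match is ambiguous. A minimal sketch of that step, with all names and the scene format invented for illustration (the Army Research Lab system is far more sophisticated than this):

```python
# Hypothetical command-grounding step: resolve a vague natural-language
# order against objects the perception system has already identified.
def ground_command(command: str, visible_objects: list) -> str:
    """Resolve an order like 'go behind the building' against what is seen."""
    matches = [o for o in visible_objects if o["type"] in command]
    if not matches:
        return "I don't see that."
    if len(matches) > 1:
        # Ambiguous referent: ask a clarifying question instead of guessing,
        # as in the demonstration with two buildings.
        return f"You mean the {matches[0]['type']} on the {matches[0]['side']}?"
    return f"Moving behind the {matches[0]['type']} on the {matches[0]['side']}."
```

With a scene containing two buildings and a car, "go behind the building" would trigger the clarifying question, while "go behind the car" resolves immediately, which is the behavior the demo described.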
They did three demos, two of which went great, one of which had to be rebooted halfway through. So that's the kind of reliability you can't put out in the field yet, but they've definitely got some impressive software going on here. I think this stuff's awesome; makes me a little nervous. So if I'm trying to say delegate to Tom, I hope it doesn't detonate the bomb, if you know what I mean, you know? Well, that's why this is just looking at things, right? These robots just go out and look and report back. They don't carry guns, they don't try to defuse the bomb. They don't, yeah, they don't interact with anything. It does seem like a really cool, you know, strengthening of this technology in a place that we don't normally hear about it. I'd love them to be able to say, look, we have a bomb-sniffing robot that's gonna go and see if it can determine what to do, and you being able to send it in there without a bunch of the extra time it takes to, like, get coordinates right and make sure all the waypoints are working the way they're supposed to, which is how they used to have to do it before. It was kind of a big nightmare. To be able to say, we need you to go into that building on the left, we believe it's on the second floor, and have that be enough information for this AI to respond to, that's a huge savings in time for situations that don't have a lot of time. So I think it's cool. Yeah, they compared it to dogs. You use dogs for search and rescue, right? You say, dog, go find the thing. And the dog goes and finds the thing, comes back barking. That's the kind of thing they want these robots to do. Hey, folks, if you wanna get all the tech headlines each day, be sure to check out our sister show at DailyTechHeadlines.com. Facebook platform partnerships head Konstantinos Papamiliadis wrote in a blog post that around 100 developers accidentally had access to Facebook group member data that was thought to have been cut off.
This is a good story, in my opinion, about Facebook. In April 2018, Facebook changed its rules and began the process of restricting API access to group members' names, profile pictures, and other data. This was in the wake of people being upset at Cambridge Analytica getting Facebook user data that they shouldn't have had. So Facebook has been proactive. And in fact, they thought they had locked down all access last year, but kept looking to make sure, and in a security review found that some developers, primarily social media management and video streaming apps, mistakenly retained some access to the data. Facebook says it has no evidence that the data has been abused, and they have contacted the developers to make sure the data gets deleted. So you can look at this as another Facebook error if you want, or you can look at it as Facebook getting its act together, looking for issues before they cause problems, and being transparent about it, telling people about it, partly because of the law, GDPR. But it's a good result. But before you give Facebook too much credit, we should consider a related story that broke today as well. Facebook didn't start restricting user data from developers in 2018. They didn't even start restricting user data from developers after Cambridge Analytica happened. They started restricting user data back in 2012. Reuters recently reviewed sealed court documents from a lawsuit filed in 2015 by a company called Six4Three. Six4Three developed a now-closed bikini photo app, think of it what you will, that lost access to Facebook user data. They sued, saying that Facebook was unfair in the way it kicked them out of having access to data, because at that time not everybody lost access to the data. And in these sealed court documents, some internal emails are found where Facebook executives refer to the push to restrict user data as the switcheroo plan, a way to cut off competitors from Facebook user data while pitching it as improving privacy in public.
The emails describe three buckets: existing competitors, possible future competitors, or developers that have an alignment with Facebook's business model. They called the project PS12N, along with the switcheroo plan. Developers in the last group, the one aligned with Facebook's business, were allowed to continue to access user data if they agreed to make mobile advertising purchases or provided reciprocal access to user data in kind under private extended API agreements. You give me some of your user data, we'll give you access to ours, and no one has to know. The emails describe using an unrelated update to the Facebook login system as a chance to go public and talk about these API deprecations as part of protecting privacy, even though the login change had nothing to do with the API deprecations. Now, Facebook says these quotes are taken out of context by someone with an agenda against Facebook. They say these sealed court documents shouldn't have been released for that very reason, because they are going to be taken out of context. But according to TechCrunch, Facebook executives told journalists in 2015 that the changes were being made to help build confidence in data privacy, and these emails seem to indicate that that was not the whole reason for it. How is, what's the context then? I apologize if I'm missing it, but when I read those statements, I don't see how they're taken out of context. Like, you called it the switcheroo program. You talked about how, well, internally we're doing this, but externally we're gonna make it look like this. It doesn't sound like those are out of context. Those sound like damning comments to me. Well, of course Facebook's gonna say that. It's so easy to be like, why did they put this in writing? People in business should be smarter if they're gonna do a switcheroo plan. You just whisper it into each other's ears, kind of thing.
But when you're a company of Facebook's size, even back in 2012, it is not surprising. A company like Facebook has had data privacy issues for some time now that have not gone away, even as the company's made some strides to clean up its image. This does not surprise me that this happened. Of course not. Because if you want to get rid of competitors, somebody with a bikini app is getting too much attention and you'd like people to stay on your platform a little bit more. Yes, there's some smoke and mirrors. You figure out how to make it seem like we're doing a good thing for the public, because we're on your side. I'm not saying that's right. In fact, it's super, super shady. But it doesn't surprise me that this happens. Yeah, I think the context could be anything. I mean, I'm willing to say, like, it could have been the guy who always joked about this, where in the meetings they said, no, this is really to protect privacy, and the guy kept saying, yeah, it's also the old switcheroo, right? I mean, this is gonna hurt our competitors. Until you get to those three buckets. Those three buckets seem pretty clear to me, but maybe there's some other context around that where they're like, well, it will have this effect, and we were talking about how to deal with the fact that it actually does seem to have this effect. And you don't have those emails. I doubt that. This very clearly points to the idea that Facebook was using a proper thing to hide an improper thing. And that is, as you just said, Sarah, it shouldn't be surprising. I'm also, I'm always very careful not to condemn when we don't have the evidence, right? To say, like, hey, Facebook just having a problem doesn't mean Facebook's awful. But this seems to be a situation where we have some evidence. And now, in my opinion, the burden of proof shifts onto Facebook to give me that context and explain why this isn't anti-competitive behavior. Yeah, if you're gonna say it's out of context, give us the context.
And maybe they don't need to give it to us because there's no reason. Well, it could be a confidential part of the case. And I mean, there are reasons why they would legitimately not want to. Lots of reasons, absolutely. But if they want me, or the Facebook-using world at large, to feel better about Facebook, you ought to give some context. I don't know. You can't just say it's out of context and then not give the context. But I mean, you hear that all the time. If I go, like, Scott sucks, and you go, why'd you say that? And I go, hmm, taken out of context. It's like, I think, like, maybe I can get away with it. Yeah, no, I agree. I was joking. That's exactly what it feels like to me: that they're saying it just to say it, and it's a scapegoat, and there's nothing to it. Because it sounds like there's no context that would explain it. But that does happen, where people say something and it's taken out of context and it sounds horrible, and you're like, what is the context? And the person probably shouldn't call it the switcheroo plan if so. Well, it does sound pretty, yeah. Side note: California made a court filing demanding that Facebook respond to a subpoena relating to an 18-month-old probe into disclosures of Facebook user data to Cambridge Analytica and others. The filing said the social media giant was, quote, failing to comply with lawfully issued subpoenas and interrogatories. So in other words, California is saying Facebook is dragging their feet. They don't want to cooperate. And that seems consistent with this Reuters issue too. So again, I like to have solid evidence before I accuse someone, but it's starting to mount up, given the other things. It's getting harder and harder for Facebook to explain this all as a misunderstanding. Well, thanks, everybody who participates in our subreddit. You always understand us and we appreciate it. You can submit stories and vote on them at dailytechnewshow.reddit.com.
If you hang out on Discord, join our conversation in our Discord, which you can link to at patreon.com slash DTNS. All right, let's check out the mailbag. Let's do it. So Ken wrote in; he had some ideas about our story from yesterday about lasers being able to control devices that have microphones, smart speakers, smartphones, and the like. Ken said, well, you know, custom wake words would be nice. We know that there are some limitations yet with smart devices. He said personalized voices should be able to help, so that it isn't triggered by anyone but the people who should be triggering it. But number three was our favorite. You could do that, yeah. Yeah, number three was our favorite. Ken says, in the words of a Starfleet captain, shields up: place a paper or piece of cardboard or a wood shield, a book, a paperweight, a lamp, anything between the speaker and the window, or even a tiny little lampshade around it, to allow sound in but block out laser light. Another person wrote in and said, literally, tinfoil hat. Mm-hmm, yeah. I was gonna say, that's what that sounds like anyway. So why not? This is great, thank you, Ken. We will start putting lampshades on our smart assistants in your honor. Hey, shout out to patrons at our master and grand master levels, including Dan Dorado-Hankens, John Johnston, and Chris Smith. Also thanks to Scott Johnson. Almost forgot you, Scott, but I mean, hey, save the best for last. Let folks know where they can keep up with the rest of your work. Sure, we just had a milestone over in my world: 10 years of Film Sack, a podcast we started in 2009 where we take a look at old, old movies and have fun at their expense. And we sack movies every week as a result. And we love that show. We love how far it's come, and we're very proud of it. If you're like, man, I didn't know for the last decade that thing existed, now is as good a time as any to jump in, and you can get all the archives. Head on over to filmsack.com and check it out.
For everything else, there's frogpants.com, or me on Twitter at Scott Johnson. Hey, folks, if you are at the $2 level and you signed up last month, you should have access now to a Patreon post at patreon.com slash DTNS. You have to click on posts and then you have to scroll until you get to November 1st, and you have to read the headlines for the one that says cookbook, and then you'll find it. It's that easy. Go do it: patreon.com slash DTNS. Also, we want listeners to co-host the show with us. We'll be recording our listener co-host show, which will run in December, but we'll record it on Tuesday, November 26th at 6 p.m. Eastern, 3 p.m. Pacific. If you'll be available at that time, email us by Sunday, November 17th. So, again: you have to be available Tuesday, November 26th at 6 p.m. Eastern, and email us by Sunday, November 17th. Put listener co-host in the subject line, send it to feedback at dailytechnewshow.com, and then in the email just tell us why you think you'd make a good listener co-host, and we may select you to be on the show with us. Speaking of email, our email address for all feedback is feedback at dailytechnewshow.com. Special shout out to Chance the Hacker for the cat photos; you made my day. We're also live Monday through Friday at 4:30 p.m. Eastern. That's 21:30 UTC. Find out more at dailytechnewshow.com slash live. Back tomorrow with Justin Robert Young. Talk to you then. This show is part of the Frog Pants Network. Get more at frogpants.com. The club hopes you have enjoyed this program. Thank you.