The Daily Tech News Show is made possible by its listeners. Thanks to all of you, including Michelle Serju, Miss Music Teacher, and James C. Smith. Coming up on DTNS: is algorithmically generated content creating too much content? Plus, we have the future of automated trucking, and Andrea Jones-Rooy is back to help us figure out how we can safely use all of those algorithms we're talking about. This is the Daily Tech News Show for Monday, December 5th, 2022 in Los Angeles. I'm Tom Merritt. And from the Atlanta area, I'm Nika Monkward. I'm the show's producer, Roger Chang. Sarah Lane has the day off, but Andrea Jones-Rooy, data scientist, comedian, circus performer, and host of the Majoring in Everything podcast, is back. Welcome back, Andrea. Thank you so much. It's great to be here. It's good to have you. We are going to talk lots of AI and data science today. It is full of those wonderful topics. That's how you pick a week off. That's right. Yeah, going into Monday strong. Let's start with the quick hits. U.S. District Judge Ann Donnelly in Brooklyn dismissed an indictment against Huawei CFO Meng Wanzhou, which accused her of crimes related to misleading banks about Huawei's relationship with a company operating in Iran. Meng reached an agreement with U.S. prosecutors last year for the case to be dismissed four years after her initial arrest in Canada, acknowledging she made false statements about Huawei's Iran business. The case was dismissed with prejudice, so it cannot be brought again. Apple analyst Ming-Chi Kuo reports that Apple's mixed reality headset may be delayed until the second half of 2023 due to, quote, software-related issues. Kuo previously reported that his sources indicated Apple would announce the headset in January, with shipments starting in Q2. 
Journalist Matt Taibbi published a tweet thread detailing how Twitter's trust and safety team decided to temporarily block the 2020 New York Post story that involved descriptions of the contents of Hunter Biden's laptop. That story came out ahead of the U.S. presidential election that year. Twitter provided emails which showed the team debating internally whether to restrict links to the story under its hacked materials policy. So not about the content or veracity of the story, but whether they were doxing Hunter Biden. Then-CEO Jack Dorsey was not involved in the decision, at least according to these emails, and Taibbi said he did not see evidence of, quote, any government involvement in the laptop story. The emails largely agree with recent statements on the decision from Twitter's former head of trust and safety, Yoel Roth. Lots of people are leaving Salesforce. Co-CEO Bret Taylor announced his resignation last week, as did Tableau CEO Mark Nelson. Now Slack founder and CEO Stewart Butterfield says he's leaving in January. Butterfield joined Salesforce when it bought Slack for $27 billion in 2020. Salesforce GM of Digital Experiences Cloud Lidiane Jones will take over as CEO. So Salesforce saw three different CEOs resign in the last seven years. Yeah, or seven days. Seven days. I was gonna say seven days. That might be seven days. Yeah. I mean, even seven years is quick, but yeah. That's a good point. Back in June, Meta began testing age verification tools on Instagram. They included things like taking a video selfie, having an adult friend vouch for you, or providing a state ID. Those tools were offered in partnership with the digital identity startup Yoti. And now Meta will expand the tools to be used on its Facebook Dating service when it suspects a user may have lied about their age. 
Meta says on Instagram, the tools stopped 96% of teens who tried to edit their birthdays to be over 18, and that 81% of users opted to use a video selfie for the verification. All right, let's talk a little bit more about all this algorithmically generated content. As algorithmically generated content becomes commonplace, we've talked a little bit about the first policy experiments on how to treat it. Stock image stalwart Getty Images, for example, does not allow contributions from generative AI or text-to-image generators to be used on its platforms. They say the legal risks from copyrighted works in training sets are undetermined, so they'd rather just, you know, be safe and not allow those to be sold. Adobe, on the other hand, opened its stock image service to images made with generative AI. And to deal with the legal implications that are still fuzzy, creators must affirm that they have the appropriate rights to the work being submitted, and the work will be labeled as made by AI when you're searching for images on Adobe's platform. But algorithms aren't just making images. OpenAI just released its ChatGPT tool, which can create text from a variety of prompts, including code. So now the coding Q&A site Stack Overflow has temporarily banned users from sharing responses generated by OpenAI's ChatGPT. This ban isn't, however, a result of legal concerns like with Getty, but rather quality. Yeah, it's a mod problem. The mods say ChatGPT produces code that has, quote, a high rate of being incorrect, even if the code might look good at first glance. If Stack Overflow believes a user has used ChatGPT, it's going to take measures to prevent that user from continuing to post it, although it's not clear what that might entail. For now, the rule is temporary. Stack Overflow is just trying to stem the tide, because ChatGPT launched, and of course everybody's using it, so everybody's posting code from it. 
They'll make the final ruling on ChatGPT responses after consulting with its community. A lot of this topic touches on things we're going to talk with Andrea about later, but I'm curious, Andrea, what you make of this particular story. I mean, my main selfish takeaway is I'm a professor of data science, and now do I need to look for my students' code being generated by AI? The answer is yes, because they're very crafty. And so I anticipate that this is going to be a big problem. I am not surprised that the code that's coming out is not great. I imagine that the way to think about it is, you know, AI-generated text that's supposed to be on behalf of humans is, in some cases, very, very good, but it probably in this particular case looks more like the kind of chatbots you get if you're trying to, like, text with your airline: it starts off okay, and then you're like, I don't think I'm talking to a human. I'm pretty sure the code looks like that as well, but it does pose a really big problem. And the last thing I'm thinking about on this front, as far as what comes to mind first, is it's kind of like a Turing test squared, or a Turing test of a Turing test, right? Like, we want to see if the computer can imitate a human, but now we're asking the computer to imitate a human talking to a computer. And it kind of poses a big philosophical problem as well. And I think it's a huge breakthrough, but there's a lot to work out. And frankly, I'm calmed that Stack Overflow is like, let's just take a minute and see what's going on before we do anything. Let's try to put a brake on that. Nika, do you code anymore, or do you just let AI do it for you now? No, unfortunately, I have to do a lot of it myself still. But I think Andrea is spot on, especially on the student front. Students are crafty. And I think what Stack Overflow is doing is honestly trying to protect their brand. They don't want a bunch of junk code on their site. 
People going to look up, you know, if they hit an error or they're looking for some help, don't want to get some junk code and start saying, hey, I pulled this from Stack Overflow and it was crap. So I think it's probably a bit of branding, brand awareness, and making sure that, you know, they don't get a lot of junk on their site as well. Yeah. To me, I look at this and I try to back up and I'm like, humans also put bad code in. So Stack Overflow doesn't want bad code from humans any more than it wants it from ChatGPT. So what's the difference? The difference is scale. Stack Overflow has had a long time to get used to just about how much bad code might show up from a human in the course of any given day and assign moderators appropriately. ChatGPT suddenly made it possible for humans to be posting way more code than they could generate themselves, because they can just let ChatGPT go to town. And there's a little bit of like, hey, I've got a new toy, so let me make it do the thing and then post it somewhere. I mean, we've seen the same effect on Twitter, where everybody's posting screenshots, including myself. I'm totally guilty of that, of like, hey, this is the funny thing I made ChatGPT do. So I think two things. One is the amount of this is probably going to recede, which is why it's very wise that Stack Overflow said it's temporarily doing this, but we're going to see what happens once everything settles down to, like, actually create our policy. Yeah. So there you go, Stack Overflow. Free advice from us. You're doing the right thing. Not that you needed to be told that. All right. Last week, Joanna Glasner wrote an article for Crunchbase about autonomous trucking company Embark, which has seen a 98% stock decline since it went public in November 2021. It's currently valued lower than its cash reserves, literally less than nothing. Like, when you're valued lower than your cash, that means you have negative value. 
There's lots of factors combining to bring down Embark's value, including the general downturn in tech. But the main problem seems to be confidence in the product: autonomous trucking. Starsky Robotics, Peloton Technology (which is the trucking Peloton, not the same as the workout bike Peloton), and Ford-backed Argo AI are all involved in autonomous trucking, or were, because all of the ones I just mentioned have shut down. Another company called TuSimple, T-U-S-I-M-P-L-E, is still going, but it also is trading below its cash value, like Embark. The question is, why? After all, autonomous trucking is supposed to be easier to do than personal driving on city streets, because autonomous trucking happens mostly on highways. Trucking was expected to be the easy win for automated vehicle companies. To be clear, Embark hasn't given up. It still says it's on track to begin commercial operations in 2024. It currently has nine transfer points in its coverage map, including Dallas, El Paso, Atlanta, and Jacksonville. Is it all bluster, or are all the investors just being a little bit impatient? The Verge talked to Cornell Associate Professor of Information Science Karen Levy about it in an interview titled Why Automating Trucks Is Harder Than You Think. Yeah, most of that interview is about the surveillance of drivers, which is an interesting topic. You absolutely should read it for that. But for our topic purposes here, Professor Levy weighed in on autonomous driving. She thinks autonomous trucking might be possible in, like, 40 years. But in the meantime, she's convinced, after talking to a lot of truckers for a book she wrote, that we need the humans, that humans have to stop several times a day to do safety checks. They look at things like frayed ties, make sure the refrigeration system is working properly if you've got to keep something cool, and watch for signs that a tire might be about to blow, which maybe a sensor could catch, maybe it wouldn't. 
Humans also provide security, to make sure somebody's not trying to break into that truck. Obviously, some of these procedures can be automated. You can put security systems on a truck. They may or may not be as good as or better than a human, but not all of these things are replaceable. And the momentum, as we just heard, seems to be against autonomous trucking as a business right now. Even Tesla, when they announced the Tesla Semi last week, didn't announce Enhanced Autopilot. When they first announced the Tesla Semi back in 2017, they at least had a slide about it. The equipment is all still there to do it, but they just didn't see fit to make that a big deal. Nika, I know you work in a related industry on driver assistance systems. Where do you fall on expectations regarding autonomous trucking? Well, I know it's hard enough teaching vehicles, consumer vehicles, how to drive on interstates, on streets. I can only imagine the additional level of complexity that comes with these commercial vehicles. You have to take into account not only the safety of the truck and the person in it and the cargo, but also all the other drivers and all the other barriers. With these commercial trucks, you have to stop periodically whenever you see a weigh station to weigh the cargo. Additionally, just the number of cameras and sensors that you need on a consumer vehicle, just a car that you would normally drive: it would have to be exponentially greater. The sensors would have to be even better, and there would have to be more of them, to fit around a vehicle as large as, say, an 18-wheeler. Because if you think about it as a person driving a car, when you look in your mirror and you see a truck coming up behind you or beside you, you have to keep double-checking just to make sure that you're still in a safe space. 
I think that just adds additional complexity into making sure that the truck is going to stay in its right lane, is going to be safe for the driver, and is going to be safe for other people on the road. So it's a lot to take into account, because, like I said, teaching just a regular car to drive in these instances is extremely difficult and complex. Andrea, what do you make of this? Yeah, I mean, what Nika just said made me think of the last time I rented a car. I live in New York City, so I almost never drive, but that means that I experience innovations in car technology in, like, a punctuated way: every time I rent a car, and only on the occasions I happen to get a relatively new one. But it's those, you know, lane-correction ones, where it nudges you back if you start to swerve, but then sometimes it can be not so safe, because I'm trying to change lanes and it won't let me, and that sort of thing. And that's just little old me on the highway, going not that fast. And call me old-fashioned, but if I saw a huge tractor-trailer coming up behind me with no one at the wheel, I would not feel great. But look, I mean, there is a potential future where this could be safe, and I'm certainly open to it. And I know that we've had a lot of changes in air travel that have made it safer with more automation, though with some very, very, very notable issues that made it extremely unsafe, cough, Boeing. But I also find myself thinking about something that I am going to say a little bit later about AI more generally, which is that when you were first introducing this issue, I just learned more about trucking than I have known in my life. I didn't know that they had to think about, you know, the cargo and the contents and all that. Of course you would. It just never occurred to me that there was much more to it than driving, because I don't know the industry and I don't spend a lot of time thinking about it. 
And that's one of the issues with AI generally: we need to partner with people who live and work and have real experience and expertise in the areas that we're trying to automate. So partnering with truckers is encouraging to me, and it sounds like we need to do a ton more of that, because as Nika said, there are tons and tons of micro-decisions you're making all the time if you're an experienced driver, and the complexity becomes overwhelming when we think about training an algorithm to do that. And so partnering with humans, I think, will move everything forward, literally. Yeah, I think one of the things that gets me about this story is that the tech has been demonstrated to work, and to work safely. I think it's fair to address all the concerns that y'all are talking about and to keep human drivers on board as safety drivers in the meantime, but when you're talking about freeway driving, you have a very clear lane, you're staying in it for a long way, and you've got controlled access, way easier than the weird streets of Boston or even San Francisco that these autonomous vehicles are conquering. So I'm not as down on the ability for it to be done safely, which leaves me wondering: what is left out of this story? What else is going on here that causes investors to devalue it? Is it that they're worried about regulation? I don't think they're worried about worker revolt, because trucking companies have a hard time filling all the slots right now. Like, they can't find enough drivers. So if you keep human drivers, you can actually probably keep truckers employed for the most part. We've got a lot of truckers in the audience, though, so I bet you've got a perspective on that: feedback at DailyTechNewsShow.com. I wonder if it's cost. I wonder if these things just aren't any cheaper to operate, especially if you have to have a safety driver on board, than it is to just run a regular freight business. Like, is the transition just too costly? 
Well, Tom, I think you raise a good point. And I think there's also a potential cost, either in terms of lawsuits or at least in terms of public reputation. Suppose you have a truck driver AI system that is 90% great. And if you have truckers in the audience, please tell me what the overall safety record for humans is. Maybe it's 95, I don't know. But 90 is still pretty good. But what are the consequences of errors? We see ChatGPT, all this stuff, where they're great, but not perfect. But if you get AI-made art and it's not that great, there's no consequence. Whereas if AI does something really dangerous on a highway with a truck, it's really, really, really bad. Even if it's just as accurate or more accurate than humans, there's a lot of, I think, understandable skepticism of AI, and I share it. And so if you use an autonomous vehicle, you need it to be way better than humans in order to not get lawsuits or a complete loss of trust from your customers. And I think that cost and that fear is very real as well. The Verge has the number: 5,000 people die in crashes in the U.S. related to trucking. That's 12% of all crash deaths. So if you said, yes, 5,000 people will die in autonomous truck driving accidents, people would freak out, even though that's the exact number that happens now. Go ahead, Nika, sorry. I was going to say, in the work that I do, 90% would not pass. It's so far below what our expectation is. It's nowhere near what it would need to be. And that's just for consumer vehicles. Even though our numbers are much higher, I would expect it would probably have to be at least what we expect in consumer vehicles on these big trucks. And then again, on the cost, I definitely think that is a factor, because as I mentioned before, you have to have all this additional equipment that works along with the algorithms, feeding them data. You have to have cameras all around this thing. 
And you have to have large cameras, because the standard LiDAR and cameras that you may use on a vehicle are not going to work on an 18-wheeler. Yeah. So it's the combination of having to be better than humans (you can't just be equal, you have to be better, for public perception and for liability reasons) with the cost of operation. I don't know. I feel like that might explain why these companies are getting so much resistance from investors who were previously very bullish on them, because public sentiment is getting harder on these sorts of things, and the economy is not doing as well. So their ability to just kind of skate by until they profit, that runway is running short. Well, I do imagine we're up for, if and when they do figure this out, some very fun reboots of, like, Breaking Bad and Mr. Robot, where you hack into these trucks and can move drugs around much more easily. So that's a whole other set of costs. The return of the '70s trucking disaster movie. There you go. Stealing cargo is going to be maybe a little bit easier. Or Speed, but it's an AI bus. Eastbound and Automated Down. Folks, if you have a better title for a show about trucking, email it to us. Our email address is feedback at DailyTechNewsShow.com. And of course we want to hear from the truckers as well: feedback at DailyTechNewsShow.com. So, the last time Andrea was here, we discussed a story in Vice called Scientists Increasingly Can't Explain How AI Works. We had a great conversation about the way algorithms work being a bit of a black box. You know what goes in, you know what the algorithm does, but we explained why you can't really tell how it makes its decisions. This time we wanted to pick up that conversation and talk a little more about whether that's a problem, right? And what we can do about it if it is, if we can't tell how an algorithm makes its decision. Andrea, does that mean we just shouldn't use them? I think it depends. I wish it were that simple. 
What if I was like, yes, goodbye. Yeah. Thank you. That's the end of Daily Tech News Show. Nailed it. Nailed it. I think that it really depends on how serious the consequences are if your AI model makes a mistake, or if the decisions being made by this model otherwise really impact people's lives, or whatever it is that we think is super important that it's impacting. You know, if AI is feeding me a certain set of targeted Instagram posts, we could imagine a world where that could be really dangerous, and certainly we see this with YouTube rabbit holes to all kinds of scary content. But on the whole, I'm less worried that I don't really know how that works, though I am worried. I'm less worried than I am about things that make really important decisions that can really affect someone's life and their ability to survive and get by in the world. And I think those are things like their candidacy for parole, which is one of the big classic stories that's come out in this space, whether or not someone should be hired (we have, you know, Amazon reading resumes that way), or whether or not we're giving someone a loan. These sorts of decisions are the ones that I, with the social science background, tend to think about a lot. And I think if you're making those sorts of decisions, and you can demonstrate that there are outcomes that are biased (and we talked a little bit last time about how there are some ways to check it, not always, but there are some ways to start to get a sense of whether or not your algorithm is biased), then I think we do need to know how it's working. Asterisk: also, if we want to just understand the world better, and not just predict stuff but really explain how things connect to one another, then we also might, from a philosophical, scientific curiosity, want to know how they work. Yeah, it seems to me, and tell me if this makes sense, that the solution isn't not using algorithms. 
The solution is doing what we're doing here: learning what the limitations are, what the imperfections are, how they go wrong, and understanding that you can't just put a question in and rely on the answer. These are advisory tools. Right. And again, even if you put a question into the code automation program of your choice and it generates a whole bunch of code, and 90% of the time that code is right, fun, great, you do well on your homework. But if that's code that's going to then impact people's lives, those are 10 people who are not getting hired, not getting parole, not getting approved for a mortgage, or other sorts of things that could really impact their lives and their health and their ability to make money and all that stuff. So the ship has sailed on algorithms. It would be very, very difficult to go out and say, I don't think we should use them. I think that we should; in many cases algorithms are outperforming humans in some medical diagnostics and other areas, so, like, let's go. But it's exactly what you all are talking about, which is we have to be aware that this is not the same thing as capital-T Truth. I talk about this a lot, right: the data itself is biased, we're training the algorithms on that biased data, and then, because we think that it's somehow true and unbiased and objective, we're implementing it without even giving it any thought. We're not being very critical of the predictions. If the algorithm says hire this person and don't hire that person, we say, well, it must be objective. So it's biased every step of the way. But indeed, I'm excited to be back, because there are some things we can do. 
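The "biased data in, biased decisions out" dynamic described here can be sketched in a few lines of Python. This is a purely hypothetical toy example: the candidates, groups, and hiring records below are invented for illustration, not drawn from any real hiring system, and the "model" is deliberately simplistic.

```python
# Toy illustration: a model "trained" on historically biased hiring
# decisions learns to reproduce that bias, because the rule it learns
# is just a summary of the biased labels it was given.
from collections import defaultdict

# Hypothetical historical records: (years_experience, group, was_hired).
# Group B candidates were hired less often at the same experience level.
history = [
    (5, "A", True), (6, "A", True), (4, "A", True), (3, "A", False),
    (5, "B", False), (6, "B", True), (4, "B", False), (3, "B", False),
]

# "Train": learn the minimum experience that ever got someone hired,
# separately per group. The bias in the labels becomes the rule.
min_hired = defaultdict(lambda: float("inf"))
for years, group, hired in history:
    if hired:
        min_hired[group] = min(min_hired[group], years)

def predict(years, group):
    """Predict a hire exactly the way the historical data decided."""
    return years >= min_hired[group]

# Two identical candidates, different groups: the learned rule treats
# them differently because the training labels did.
print(predict(4, "A"))  # True:  4 years was enough for group A
print(predict(4, "B"))  # False: group B historically needed 6 years
```

Nothing in the code ever questions the labels it was given; it just formalizes them, which is exactly the "it must be objective" trap described above.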
In addition to awareness, though. I'll be honest with you, I think that's the biggest one, because so few people are aware of the limitations, and a lot of it is because it's come out in the last 10 years. So unless you took a data science ethics course in college, which is true for very few of us, you probably don't have access to much information on a daily basis about this, except for if you watch Daily Tech News Show. Of course, or listen to Andrea. Yeah, I'm always ranting on some street corner, so there's that. And I think AI is definitely here to stay, but I think people can't miss the fact that human interaction is necessary. I think when people hear things like artificial intelligence or machine learning, they just hear, let me put something in here, boop boop boop boop, and splat, I get exactly what I want out. But in that beep boop boop, there's a lot that's going on in there to make sure that it is ethical, to make sure that it serves the greatest, you know, portion of the population, that it's safe, that bias is mitigated as much as possible. So I think once we get it into the ethos of society what AI really should be doing, I think that's when we'll probably start to see a shift in the thinking behind AI. I think it really is, you know, education to the masses behind it. One of my best courses in college was an AI ethics class. It was the last class that I took before I graduated, and, you know, the professor now is the Dean of Engineering at Ohio State. But it was one of the most fascinating, you know, classes, because you do get to see how people integrate biases and use data, manipulate data, to get their specific point across. You can change the data to fit the narrative that you want, and the same with algorithms as well. So it's one of those things where we have to make sure we don't take the human element out of it, and that we are consistently training and retraining to ensure that we get the best output possible. 
I think your line is perfect, and I need to have you come in and talk to my students, because that's exactly it, right? We spend so much time on this in my class: whenever you do any programming, you whiteboard the problem and you think about what it is that you want to do. You know how to interpret the results, but what goes into the actual model itself is absolutely touched by humans at every single step of the way as well. And I go through the code, you know, line by line with my students, for even a simple, you know, supervised learning algorithm or something like that, and a human is making choices. Yes, you could run the model with lots and lots of different specifications of those choices, but your training set size, the data you include in the first place, the type of model that you're using, the distance metric that you're using: humans are making those decisions. And you could automate that, but you're still telling the computer how to automate it. So we've forgotten that humans are at the center of AI, whether we like it or not. Yeah, well, it reminds me of astronomy in some ways, which is: this astronomical model is going to tell me an approximation of this, which will help me, the astronomer, then figure out an answer. I don't just say, oh, the model gave me the answer. Same thing here. We need to get used to the idea that, you know, first of all, you want to get your data set in as good a shape as possible, but know there will always be biases, and be aware of what those might be. So then, once the boop boop boop is done, on the other end you're not just saying, and there's the answer. You're like, okay, now we take it and we use it appropriately. We will talk about this more on Good Day Internet, but before we wrap up the DTNS section of this, Andrea, do you have any recommendations of things that people who are interested in learning about this, or in the industry, can go to to kind of work more on that? 
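The list of human choices mentioned here (the data you include, the type of model, the distance metric, and so on) can be made concrete with a tiny k-nearest-neighbors sketch. This is a hypothetical illustration: the points, labels, value of k, and metrics are all invented, and each one is a decision a person made rather than something the algorithm discovered.

```python
import math
from collections import Counter

# Every "choice" below (the training data, k, and the distance metric)
# is made by a human; the algorithm only executes those decisions.

train = [  # (features, label): the data a human decided to include
    ((1.0, 1.0), "red"), ((1.5, 2.0), "red"),
    ((5.0, 5.0), "blue"), ((6.0, 5.5), "blue"),
]

def euclidean(a, b):  # one human-chosen distance metric
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):  # another; swapping metrics can change answers
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def knn_predict(point, k=3, metric=euclidean):  # k is chosen by a human too
    # Take the k training points closest to `point` under `metric`,
    # then let them vote on the label.
    neighbors = sorted(train, key=lambda item: metric(point, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

print(knn_predict((2.0, 2.0)))       # "red": nearest the red cluster
print(knn_predict((5.5, 5.0), k=1))  # "blue": its single nearest neighbor
```

With only four points and two metrics the predictions happen to agree, but the point stands: change the training set, k, or the metric, and you have changed the model, and a person made every one of those calls.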
Yeah, so there are a lot of resources, and more every day, which is a great sign. Nika, your professor sounds awesome. I really like the work of one of my colleagues at NYU. Her name is Julia Stoyanovich, and she has a consortium on responsible data science and responsible AI that comes with a free online course. She has, like, five different modules, and if you search, I think, just Julia Stoyanovich, you'll find her resources. That's one of the best free online resources that I've found; there's a free course that's built in there under courses. And knowing to look for it, right? If you are seeing the results of any kind of AI out there in the world, whether it's art or various chatbots, or if you're interpreting things: do a Google image search and look up words like feminine or success, and you're going to get some really biased-looking images. I mean, even that is how we're encountering these sorts of biases. So just knowing to look for it: you're going to notice it everywhere around you, which again is depressing, but it can also be empowering. The other thing that you can do is, if you are working in a space that is AI-related, or you work near AI in your company, or your company's using AI in some way: make sure that there are humans on those teams, just like we were talking about with truckers. I would invest in the automated trucking company that is partnering 50-50, if not 60-40, with real human truckers, not the one that's a bunch of engineers who are like, in principle, this is how it could work. Nothing against engineers, but make sure you've got humans every step of the way, and make sure you've got humans from a diversity of backgrounds. And that means all the dimensions you can think of. And make sure you've got data that reflects all the people. I mean, lots of times, if you look at things in the social sciences. 
You see, oh, okay, the algorithm isn't recommending any people of color. You go in and look at the data, and there are barely any people of color even in the data set you fed it in the first place. Yeah, so there are lots of places to look outside of the algorithm, as well as, of course, in the algorithm. Yeah, I think people always think race, gender, and those are important, but I'm glad you said, you know, all the different dimensions: rich and poor, rural and urban. Like, you need to have lots of different ways of looking at that. Andrea Jones-Rooy, as always, thank you so much for chatting with us. If folks want to follow what you do, where should they go? You can find me on all the social medias at JonesRooy, J-O-N-E-S-R-O-O-Y, one word, and jonesrooy.com. Excellent. There's two O's in there. Don't forget that. There's two O's. That's on purpose. Sometimes people come up to me and they're like, someone spelled your name all wrong. And you're like, oh no, my grandparents did. I don't know who did it, but someone did. I like to think of it as you having a wealth of O's. I'm going to say that from now on. I like that. That's an episode title. No, that would be totally derailing. It might be over-promising. Nika, what have you got going on? Where should folks go to find out what you're doing these days? I'm at Tech Savvy Diva on all the social media sites. You can also head on over to snubwestcast.com for our weekly tech show, where we talk all things Apple and then some. Thanks to our brand new bosses Chris and Mahathir, who made this show possible. They started backing us on Patreon. Thank you, Chris. Thank you, Mahathir. If you would like to be like them, join the flood at dailytechnewsshow.com/patreon, that's patreon.com/DTNS. Patrons, stick around for that extended show, Good Day Internet. We're going to carry on the conversation there. You can also catch the show live Monday through Friday, 4 p.m. Eastern, 2100 UTC. 
You'll find out more at dailytechnewsshow.com/live. We're back tomorrow talking about the subscription model for car features. Does John C. Dvorak like it or not? Tune in and find out. This show is part of the Frogpants network. Get more at frogpants.com. Diamond Club hopes you have enjoyed this program.