 Daily Tech News Show is made possible by its listeners. Thanks to all of you, including Tony Glass, Philip Less, and Daniel Dorado. Coming up on DTNS: Getty Images bans AI-made art from its platform, Google lets you edit search results about yourself, and should vehicles try to detect drunk drivers and lock them out? This is the Daily Tech News for Wednesday, September 21st. Do you remember the 21st night of September 2022 in Los Angeles? I'm Tom Merritt. I am not Guy Fawkes, but I am Scott Johnson from the city next to the Great Salt Lake. And I'm the show's producer, Roger Chang. Let's start with a few tech things you should know. Amazon refreshed its Fire HD 8 tablets. I guess they weren't good enough for next week's announcement, but they've added a 30% faster six-core processor, a generally thinner, lighter design, and 13-hour battery life. The standard $100 HD 8 gives you two gigabytes of RAM and wired charging, and if you pay an extra 20 bucks for the $120 HD 8 Plus, you get three gigabytes of RAM, wireless charging, and a little bit better camera. Both support Tap to Alexa features so you can control the assistant with touch commands. A ruggedized kids version, which is kind of normal for Amazon to offer, costs you $50 more, so either $150 or $170 depending on whether you want the Plus. You've got to serve those ruggedized kids, I always say. As Tom mentioned yesterday, Windows 11's first big feature update is here with a customizable Start menu, voice access, system-wide captioning, touch gestures and more. Check Windows Update in the settings to see if it is available yet for you. If you are on a machine that can't run Windows 11, Windows 10 version 22H2 will arrive in October. That will be more than just a security update, though Microsoft has not detailed what features it will include, so you'll have to wait for that. Microsoft also announced it will hold a Surface event on October 12th at 10 a.m. Eastern to show off new devices. 
Yeah, just a Surface event. Just going to touch the surface. Never get tired of that one. Framework and Google have announced the Framework Laptop Chromebook Edition. So it is a user-upgradable Framework laptop, like their Windows laptops, but runs ChromeOS instead of Windows. It supports Framework's expansion card system. If you don't know, that gives you four hot-swappable slots so that you can decide at any moment which ports you want and which side of the laptop you want them on, and swap them out as you need them. You can choose between USB-C, USB-A, microSD, HDMI, DisplayPort, Ethernet, and high-speed storage, and mix and match. You can also upgrade the RAM inside of a Framework laptop, and the storage that's inside, and swap out bezels on the display if you want to change colors. If you want this ChromeOS version of the laptop, pre-orders begin today in the U.S. and Canada, starting at $999, shipping in early December. YouTube did a cool thing, I think anyway. They announced a beta for Creator Music, which will offer a catalog of songs that can be used for monetized videos. Creators can either pay for songs directly or split monetization revenue with the license holder of that song. Terms of the licensing and revenue split are spelled out when selecting tracks. YouTube also updated its Partner Program to share ad revenue with YouTube Shorts creators. That's a long time coming. Early next year, qualified creators will get a 45% cut of ad revenue from those Shorts videos. Oh, good. Yeah, giving people a path to, you know, legally pay for the music instead of forcing them to break the law if they wanted to use it. I think that's a smart idea. NVIDIA announced the launch of Omniverse Cloud, with services that'll let artists and developers design, publish, and operate what they called Metaverse applications, basically 3D stuff. You get the power of big data centers running NVIDIA GPUs to do your 3D workflow. Pretty cool stuff. 
If you're working in 3D animations and avatars and that sort of thing. NVIDIA is betting on Universal Scene Description, or the unfortunately acronymed USD. Not that USD; Universal Scene Description, as the future standard, or as they called it, the HTML of the Metaverse. Omniverse Cloud is based on the USD standard, and it's NVIDIA's first big software-as-a-service offering. But the big takeaway, if you're like me and want to be in the know on buzzwords and acronyms before they get big, is that USD does not mean US dollars here. In the Metaverse, it's going to mean Universal Scene Description. I'm actually legit excited about this, and I'm sure this will come up on later episodes when it becomes available. But I think it's a bold move in a new place that people like Adobe aren't ready for. Yeah, it's too early to call USD the standard of anything that might eventually be called the Metaverse. And yes, it's really hard to keep my eyes from rolling when I say the word Metaverse. But if you strip all that away, having a standard that's not from NVIDIA, it's an outside standard, and having a cloud service for making 3D stuff, I think is really cool. I'm with you. This is exciting, buzzwords aside. All right, let's talk a little more about Google delivering on a promise. Promises made, promises kept from Google. Let's find out. At Google I/O this year, Google announced Results About You, some of you may remember this, a feature that would let you remove some information about yourself from search results. Well, now that feature is rolling out to users in Europe and the US. So I guess promises kept. We'll see how it goes. In the Google app for Android, if yours has been updated, of course, you can tap on your profile avatar and select Results About You as the menu item. That takes you to a page that explains how you can actually make the request for removal, and that takes it out of the search results. 
If they contain a phone number, home address or other personally identifiable information, like ID numbers, account numbers, that kind of stuff. Yeah, that's a good point. This is not right to be forgotten. That's a European law about, I would like that embarrassing story about me being arrested taken down. It's not untrue, I just don't think it should be up anymore. This is about doxing. Somebody put up my address, my phone number, my bank account number, and I would like that not to show up in search results. You can make a request from a page while you're searching. If you're looking at search results in the app, you can click on the three-dot menu by the result and then choose Remove Result. Once you do that, you can track the progress of your request on that Results About You page that Scott just mentioned. Not all Google app users are going to see this right away. If you don't see it in your particular Google app, I didn't see it in mine on my Android phone, you just hang in there. It'll slowly roll out to everybody. However, if you do have an immediate need to remove results, there's a web page you can go to. We'll have a link in the show notes. It's not real snappily said out loud: it's support.google.com slash websearch slash answer slash 9673730. You could probably search for it, too. I've got a lot of questions about this. They've been swimming around my head all morning, but I do wonder about one thing in particular. This obviously must be fielded by AI and robots and bots and stuff, because this is too much for some individual at some terminal to probably manage, given the potential size of how many requests you'll get per day. 
That only concerns me a little bit, because maybe not that much more than if a human being was doing it, but it feels like this might be a situation that gets a little in the weeds when you say, sure, I can't take down an article that's about me being arrested, because that's just true and I don't really get to control that. But what if it's an article that somebody wrote that was slanderous about you, as just an example? If they said a bunch of things that were absolutely, demonstrably not true, and you have very little recourse to have it taken down. I suppose you go through legal hoops to do it, but maybe you don't have the resources to do that. Can this remedy that at all? I don't know. I don't know what the granularity is, because they don't really detail it. It makes perfect sense to me that if it's personally identifiable, doxable information, super simple and easy. I don't know how the robots decide. You're telling me there's no robot? This can't be people, right? It's too big. Google says it's important to note that when we receive removal requests, we will evaluate all content on the web page to ensure that we're not limiting the availability of other information that is broadly useful, for instance, in news articles. When they say we, I think they include humans. They're doing manual review, my friend. Really? Yeah. That seems enormous. Okay, if they do, I say props to that. I'm not going to say it's entirely 100% manual. There's probably some filtering on one end. You're probably right. But they are definitely putting humans in the process here to make sure. Well, that would be a nice change, because most of my interactions with YouTube and other services where I have issues, it's all robots all the time. So if that's true, that's really good news to my ears, because I think this is a more personal thing. 
This is the kind of thing that needs the touch of another human being, for lack of a better way of saying it, to intervene and say, yeah, this is absolutely what he says it is, or whatever. Because if computers are deciding, then we're going to have false positives, and false negatives are just going to get weird. And this won't be perfect. But I think I'm leaning toward liking this. I just don't know what the granularity is. Like, at what point do I stop getting successful requests? Not that I'd be doing it all day, but, you know, like, what kinds of requests seem reasonable that are going to get chosen, and which ones aren't? Again, it's the weeds. So that's why I made the point that this isn't the right to be forgotten. So this is not about, this article is embarrassing but it's true. Those don't count here. That's an entirely different process, and it's only in Europe. This is, it shows my personal contact info, it shows my contact info with an intent to harm me, it shows other personal info about me personally. This is all about personal info. Or it contains illegal information, information that is illegal for people to show, or it's outdated. That's the one category where I'm like, oh, so if somebody's saying, mail Tom at 608 Maple Street, Arlington, Virginia, I'd be like, yeah, I haven't lived there in years, get that out of there, that's not right. I feel like it's interesting that that's in there as well. But most of this is, there's data about me that shouldn't be seen or is wrong to be seen. That makes sense. I wonder if there's a way to, like, if you had an About Tom page, and on that page you had some personally identifiable information that you wanted there, yeah, for whatever reason you wanted it there, could some, I don't know, outsider go in there, right-click that, and go, hey, this should be taken down, to mess with you and your discoverability? Yeah. 
Well, first of all, you have to do it from your logged-in account. Remember, as you mentioned, you have to tap on your personal profile. So you're already telling Google, here's who I am. So it's going to look and go, like, well, Scott, that information isn't about you, toss it, right? And I think that's a huge part of that. If you were able to, like, create an account where you're pretending to be me, there are still other ways that can give it away, like, this account was created recently. And that's where the review comes in and stops that stuff. So I would never say what you're talking about is impossible, because there are always ways, but I feel like it's unlikely, at least unlikely to happen a lot. I like our chances better now that I know there are humans involved. So that is good to know. Yep. Moving on. Let's check this out. The US National Transportation Safety Board, or NTSB, you may have heard of them, has recommended measures leveraging new in-vehicle technologies that can limit or prohibit impaired drivers from operating their vehicles, as well as technologies to prevent speeding. That's a direct quote. So what are we actually talking about here? Yeah, okay, so here is what they said in their published proposal: incentives to get manufacturers to adopt speed adaptation systems. These are systems that range from simple warnings, you're going too fast, to actual electronic limits that would slow you down. This would not be required. This is also not the first time; the NTSB has recommended it before. Feels to me like they're just tacking this on, like, we're doing this other thing that's going to get a lot of headlines, so maybe we'll run this in front of you again. The one that people are really paying attention to is require. So that's different. Not, hey, we suggest, this is a really good idea. 
Require all new vehicles that are sold in the United States to implement passive alcohol impairment detection, or advanced driver monitoring, or both. That would prevent or limit vehicle operation if it detects that you are impaired. Now, if you're out there saying, like, well, wait, we have vehicle breathalyzers that are attached to starters. A lot of people, if you get convicted of drunk driving, have to use those to get back the right to drive. That's existed for years. The NTSB is recommending a more passive approach than that, not a blow-into-your-breathalyzer-every-time-you-want-to-start-your-car thing, but instead some kind of system that passively monitors you, and if it thinks you look drunk, then it intervenes. Interesting. Well, Volvo announced that its EX90 SUV will include a driver monitoring system that reads facial expressions to detect distraction or drunk driving. It will attempt to warn distracted drivers, and if it doesn't get a response, slow down and stop the car automatically, right over on the side of the road. Yeah, I can see all the sides of this. Where are you at with this one? Oh, man, talk about one I've gone back and forth on. I'm worried, because as I read this, I hear that it's facial detection, or remote detection with other technologies, whatever those technologies may be. What do you do if it turns out it's somebody in the early stages of Parkinson's, and they are still allowed to drive legally, but you are detecting some abnormalities in the way that they drive? I would hate to see these people get lumped into this kind of impairment where you're intoxicated. So I'm worried about false positives and things like that, like with anything, I feel that way. But I also really like the idea of these technologies getting better and better, technologies that can actually detect, you know, motion and translate that to whatever it thinks it means. I mean, it feels a little bit like, well, we're going to decide for you. And that feels bad to me. 
But at the same time, I'm like, yeah, well, what if we do catch somebody who's way drunk, doesn't realize it, got behind the wheel, thought he was fine for the mile and a half home, and isn't actually? Then we're going to save lives, maybe his, and maybe those he comes in contact with, in the same way that mandatory seatbelts save lives. But the difference between these two is that with seatbelts, you can see that through line of, we did all this testing, it saved this many lives, we now have this data, it's easy to find and read, therefore it makes sense to make these a mandate slash law. Where it comes to this, I don't know what the through line is. Like, what's the straight line there? There really isn't one. Yeah, like I said, I go many ways on this. On the one hand, it's a little bit like those edge cases where you're like, well, I don't want the result, right? I don't want someone who is drunk driving a car. I don't want someone impaired driving a car. Where it gets weird is, how good is this at detecting? Because what I also don't want is me getting in a car and the car saying, you're drunk, and me going, I haven't touched a drop, you know, I've been sober for a decade, or all my life, you know, like, what's going on? I feel like those are knee-jerk reactions to any kind of proposal like this, and there are always ways around them, to say, like, okay, well, we'll come up with ways to deal with individual cases where someone has a condition that just tends to set this off, and we'll figure that out. That is important to do. And that's why the NTSB is just proposing this, not putting it in place, right? This is something that they're testing the waters on. But yeah, I like the idea of it detecting and saying, hey, you're drunk. 
I feel like once it does that, if it does that with a very high level of accuracy, it's irresponsible not to have it reduce the ability of the car to drive. Right. Sure. Yeah. Well, it'll be interesting to see how this pans out. And I love the idea of the tech, and I hope the implementation is not, you know, too much of a headache. Let us know, were you following this? Like, are you like, you know what, even if it means drunks get to drive sometimes, I don't want the government requiring cars to have this? Or do you feel like, you know what, let's prove that it's accurate, but if it's accurate, then maybe that's like a seatbelt, like Scott was saying? What do you want to hear us talk about on the show? Another way to participate is to get in our subreddit. You can submit stories and vote on them at dailytechnewsshow.reddit.com. Getty Images told The Verge it no longer lets users upload and sell illustrations generated using text-to-image tools, stuff like DALL-E or DALL-E Mini or Midjourney or any of those. It's the largest image marketplace, by the way, yet to put in such a policy. Newgrounds, PurplePort and Fur Affinity all have similar restrictions. Shutterstock, though, seems to be the only one still letting you upload AI-generated art. It's limited search results, though, so that you can't find AI-generated art. Oh, that's it? Oh, see, I missed that part. That's very interesting. Okay. Getty is doing this because they're concerned about the legal implications. The question of who owns a piece of art generated by an algorithm has not been tested in court. The terms of service of these companies grant ownership to the user in almost all cases. So if your mind jumps to that, like, oh, is Midjourney going to try to claim they own a thing if it gets really popular? It doesn't seem like that's going to be the problem. 
Since the machine models, though, are trained on copyrighted works, there is some concern that the derivative works might be considered to infringe on the original works that the model was trained on. And a similar concern is that if the algorithm can generate something in the style of, say, Scott Johnson, especially if it was trained on Scott Johnson's work, does Scott get to say, like, hey, that's not a fair use of my art? This has not been tested in court. Like I said, collecting the images for training appears to be protected under the same laws that allow you to index websites, like search engines do. So there's nothing illegal about collecting the art and training your model with it. But what is less certain is whether the art that results from being trained on copyrighted works is a fair use of those copyrighted works. And the thing to remember about fair use is, it's not a right, it's a defense. When you're using fair use, you're saying, yes, I infringed on copyright, but I had a fair use case to do so, and I shouldn't be punished for it. It is not certain which way a judge would rule regarding algorithmically generated art based on training from copyrighted works. Well, in the meantime, sites like Getty are playing it a little bit safe, and I don't blame them. Getty will rely on users to identify and report images that violate this rule. It's also working with the Coalition for Content Provenance and Authenticity to create some filters, which I also think is a great idea. One could presume an algorithm could be trained to look for works created by an algorithm, but mostly Getty's avoiding liability by having the policy in place. Yeah, just putting the policy up gives them some defense, and then making the effort really helps if somebody did try to sue Getty. They could say, no, it was against our policy. Here's what we did to try to stop it. Go after the person who made the thing. Go after Midjourney. 
Go after somebody else. We were trying to stop it. So Getty's just trying to take themselves out of the equation right now. W. Scottis1 says, someone should try a lawsuit so we can test it. And that will happen. I think what I would remind people is that if you're an artist, you may want to use these as a tool. In fact, we've talked before on the show about how creating the text that gives you the result you want is harder than a lot of people think. If you haven't tried this, go try it. Scott, I hope it's okay to say, like, if you try to imitate Scott Johnson's art by telling Midjourney to make it, it's probably not going to make it as well as Scott yet, unless you get really good. If you're able to make it do Scott's artwork credibly, you probably will realize you had to put a lot of work into that. That itself is an artistic skill. It's a skill of description, and a skill of imagination, and a skill of being able to work with the AI. But it's not as easy as, like, well, now anybody can just say, make a Scott thing, and then, you know, all of his work comes out. Yeah, if you said, draw a tree like Scott Johnson, you've got to figure out which one. There's a popular Marvel Comics artist named Scott Johnson. There's a bunch of other artists with the same name. So you've got that whole thing. These little three-, four-word sentences you'd see people use on DALL-E Mini on Twitter or whatever? It ain't going to cut it. It's not going to work that way. There's skill in it. You're absolutely right. And there's also so much more to training these AI models to match up with certain styles. And sometimes it's not just an artist's style, it's an era style, like, we want it to be classic, you know, a post-Roman-rule era of art. Okay, cool. What does that mean? Has it been trained to do that? On the other hand, there are a couple of situations where I think there's real value in this. I may have even brought this up on the show before. 
So forgive me if I brought this up, but if you're hearing it twice: the idea that you are, let's say, a small game developer, and you've got a computer RPG planned, a CRPG, that's a lot like the Baldur's Gate games. You've got characters to create, and you've got profiles to choose, and those include avatars. And so when you're setting up your character, you're like, I want them to be a big old, you know, orc-looking dude, or I want this kind of mysterious-looking hooded human figure who's going to be my thief, or whatever it is. Those games give you sometimes hundreds, maybe less, but usually hundreds of choices: different genders, different faces, different races, all these things within the D&D universe and construct. If you told that small game developer, you no longer have to pay somebody to draw and paint every single one of these images, a computer model can do it in a certain era's style, a painterly oil-painting style, and crank those all out in about five minutes, why wouldn't you do that? Like, that's amazing. It doesn't address any of these legal issues, and some of that's got to sort itself out, but that sounds like a huge boon to that level of creator. And it doesn't necessarily mean that at higher levels, artists are going to be pushed out as well. I think a lot of this is just unknown, but I'm optimistic the closer we get to this stuff. You're not worried about people imitating you? I'm not, because they can't. I don't think it's possible yet. I think it will be possible one day. It may not be me they focus on, you know. I wonder if we need a law, and we can argue about where the law should draw the line, but I wonder if there's something in there similar to protecting your likeness. So, for instance, even if I looked like George Clooney, I mean, God bless me if I could look like George Clooney. 
Even if I looked like George Clooney, I couldn't go make a commercial and pretend to be George Clooney, because that would be using his likeness, right, if I implied that I was him. Maybe there's something similar to that that needs to be set for this, like, you can use text-to-image generators as long as you're not trying to pass yourself off as an actual artist whose style it can imitate. Right? I feel like that's the template here, and that everything else should be fine. If I'm an artist who used text-to-image to generate my art, there's an art to that, and maybe I then did a lot of refining on it, which requires artistic skill of a more traditional sort. And yeah, we talked on yesterday's show about the kinds of things like game designers wanting to be able to use it to just speed up things and do more things. So I do think there's going to need to be a law, and I also think that before we get one, there will be plenty of court cases that will shape a version of what should be legal and not, out of the laws that exist, which is never as good as actually creating something new, fit for purpose. Yeah, the only other thing I would add is, it's possible that we'll get to a science-fiction end on this, where a famous artist is about to pass and they want to pass on their skill to another, and that other might be an AI. We do that with real people all the time, so why not let a computer do it? That is interesting. Like, what if, I'm just going to keep using you, I'm sorry, what if Scott says, when I die, Midjourney gets the rights to my style of art, and they can charge people to make art in the style of Scott Johnson? Then you have blockchain payments going to my family and everything's good. Totally. All right, real quickly: Logitech officially announced its G Cloud gaming handheld running Android. 
It has a seven-inch 1080p touchscreen with a 16:9 aspect ratio and a 60 Hz refresh rate. It's not a monster: it runs on a Snapdragon 720G processor, four gigabytes of RAM, 64 gigs of storage. The big draw is the battery, a 6,000 mAh battery; they're saying it'll get 12 hours of battery life. Now, it'll come with the Google Play Store, so you'll be able to play Android games, but the other thing they're positioning it as is a cloud gaming device. They've worked with Microsoft and NVIDIA to optimize Xbox Cloud Gaming and GeForce Now to run well on this. You can order it right now. The retail price is $350, and it arrives October 17th, but there's a hundred-dollar discount if you order early. Yeah, quick comment on this. I'm always interested in these devices; I just picked up a Steam Deck not long ago, and I've been loving that thing. But they're obviously aimed at very different things. The fact that they're aiming this almost entirely as a cloud gaming device, despite the fact you can play Android games on it: it is powered for that. It is not powered for high-end gaming, really, of any sort. These specs are all pretty slow. I think the only thing that will hold this back from being a huge hit is the price. I think that price is ridiculous. It's too high for this thing. I think if this was $199, and there was a $50 discount for ordering it early but your ultimate price is $199, they would sell these things like hotcakes and it would go crazy. I think $349 is already in base Steam Deck SKU territory. So I don't know why you would do this. This seems way overpriced. And I love these sorts of things. I am into these portable gaming devices. Likely this thing will also be great for, you know, retro gaming and other things. But so is a Steam Deck at that entry price. So I just don't know. I'm not sure why this is so expensive. It's my only hang-up. Otherwise, I think it looks like a fine piece of hardware. 
I mean, if it was Windows at $350, then maybe it would make more sense. Yeah, it would be more capable. I don't know, my perception changes. But the fact that it's Android at $350 is kind of silly, since, you know, this is less capable than a Samsung Galaxy phone in many ways, gaming-wise anyway. Yeah, the phone's more capable in other ways, but, you know, that's going to cost you $1,200. So, my expectation is it will be built really well. Logitech knows how to make a nice little piece of hardware. None of those things are concerns to me. This sounds awesome. I just think that price is too high. It's not a loss leader for Logitech. Probably, you're right, they're not going to take a big loss. They're not going to make money off games the way Valve does with Steam. They're not. I mean, they'll make a little money off of affiliate revenue from recommending xCloud and GeForce Now, I suppose, but that's not the same as selling a bunch of games. Yeah. I think you're right. The margins are slim for everyone else at the moment, with maybe Nintendo being the one exception, but they've also got this, you know, advantage of still making the same seven-year-old hardware, or whatever it's been now, for the Switch. So it's just weird. Handhelds are starting to become a big deal again, and with the Odin hitting the market and the aforementioned Steam Deck and devices like this, there's tons of it on Amazon from various Chinese companies. Like, we're in a bit of a renaissance for handheld and portable gaming. Yeah. I just wish it was a little better positioned price-wise, but, you know, whatever, I'll probably get one of these, because I want to see, and I want to review it. I'll let people know what I think. And that sounds like something you might talk about on one of your other shows. What else have you got going on these days? Well, you're right. That show would be The Core. I do actually play retro as well. 
The Core is the show where we talk about all the big goings-on in the video game world, and myself, Beau Schwartz and John Jagger host that show and have so much fun that it's entirely possible we go, like, two and a half hours sometimes. That's how much we love doing it. We do it every Thursday. You might want to check it out if you're interested in any of the goings-on in gaming, including this device. We'll be talking about that and so much more. You can find The Core wherever you get your podcasts; just search for Core. And if you're looking for the website, it's at frogpants.com slash core. Now, usually this is the part of the show where we thank new patrons for joining us. I don't know if we don't have any new patrons, or if Patreon just has a glitch where it's not showing us, because it's not showing me anything happening yesterday at all, and, sad as it is, usually you see somebody canceling even if you don't see somebody signing up. So if you have signed up, hang in there. I'm sure the data will arrive and we will thank you then. Thank you for your patience. If you haven't signed up, well, sign up: patreon.com slash DTNS. In the meantime, I want to thank Matt DeFriedes, one of our lifetime supporters. He's been with us for a long time, supporting at a high level. Thank you for all the years of support, Matt. All right, patrons, stick around. I want to talk to Scott more about this AI art stuff on Good Day Internet. You can get that as a patron. You can also catch the show live Monday through Friday, 4 p.m. Eastern, 2000 UTC. Find out more about that at dailytechnewshow.com slash live. Back tomorrow with Scott Johnson. Nope, Justin Robert Young. Talk to you then. This show is part of the Frogpants Network. Get more at frogpants.com. The Diamond Club hopes you have enjoyed this program.