This 10th year of the Daily Tech News Show is made possible by you, the listeners. Thanks to every single one of you, like Jim Hart, Logan Larson, Mike Aikens, and our brand new boss, Organic Computer. Coming up on DTNS, do you want to chat with non-player characters? How about learning a language from a chatbot? How about learning a language from a non-player character? This is the Daily Tech News for Wednesday, May 31st, 2023, in Greenville, Illinois. I'm Tom Merritt. In lovely Cleveland, Ohio, I'm Rich Stroffolino. In Salt Lake City, I'm Scott Johnson. And I'm the show's producer, Roger Chang. Ah, we almost had a no-Californian show today with me out here in Illinois, but Roger stepped up to represent. Yeah, holding down the fort, as they say. Good job, Roger. The fort being Fort Funston, up in Northern California.

All right, let's start with the quick hits. Apple posted a reminder for its upcoming developers conference saying a new era begins. A lot of people made a big deal about how they don't usually say "new era." Join us for WWDC23 on June 5th at 10 a.m. PT. That's Pacific Time. Apple also published a blog post titled "Code New Worlds," which a lot of people think relates to augmented reality. Bloomberg's Mark Gurman expects the keynote address to be longer than two hours and include several new Macs and information on a mixed reality headset. And I expect that I will be in tears if it goes past three hours.

Well, if you ever wanted true wireless earbuds with customizable RGB lighting, Razer has filled that hole in your heart. The company launched the $200 Hammerhead Pro HyperSpeed true wireless earbuds, and true to Razer's focus on gamers, these can connect either over your standard old Bluetooth connection or use a USB-C dongle for a lower-latency 2.4 GHz connection. The buds also support active noise cancellation and come with a wireless charging case.

Lots of products today. Garmin announced a new fitness-focused set of wearables, the Epix Pro and the Fenix, spelled F-E-N-I-X, 7 Pro. Both are smartwatches, and both feature an LED flashlight, an updated heart rate sensor, and weather map overlays, as well as new endurance score and hill score aggregated metrics. The Epix Pro offers a traditional OLED display in three size options, rated for up to 31 days of battery life, while the Fenix uses a memory-in-pixel display that supports solar charging and gives you up to 38 days of battery. The Fenix Pro starts at $800, the Epix Pro at $900.

Logitech is keeping the product train rolling. They refreshed their popular MX Anywhere mouse with the MX Anywhere 3S. This now features an 8K DPI optical sensor to better work across surfaces, I guess if you want to, you know, mouse it on your jeans or something like that, as well as quieter mouse buttons similar to the MX Master 3S mouse. The mouse will only work over Bluetooth. There's no dongle included in the box, and Logitech said they couldn't fit the components for the receiver into a small USB-C unit. The company also updated the MX Keys S keyboard, which now offers teleconferencing shortcuts in the function row and includes some backlight customizations. Logitech also released its smart actions feature for its Logi Options+ app, which lets users automate tasks across programs with a single command. Basically, sophisticated macros. Yeah, I always think I want a travel mouse until I travel with one and then don't want to bother with it. So I was attracted to this until I remembered that about myself.

Google ended support for the first-gen Chromecast. The OG. The dongle. The thick one.
Not the one that looked like a puck. Had a good run. It was originally launched in 2013 for 35 bucks. This means no more security or system updates, so use at your own risk, but the device will remain functional as long as it remains functional. Although Google did warn users you may notice a degradation in performance over time. It wasn't like Google rolled out a ton of support for it in recent years anyway. It last received an update in November 2022, its first in three years at that point.

Twitter has a program called Community Notes. If you didn't realize, it launched back in 2021, before the current regime. The way it works is volunteer contributors can add context to posts. You may have seen these community notes saying things like "this may be in question" or "check this link for more information." Stuff like that. They're trying to combat misinformation. If the notes are voted as helpful, they will show up for all Twitter users. It's not just that somebody in the community posts a note and everybody sees it. There is a system. And recently they improved that system after an image of an explosion at the US Pentagon building went viral. Community Notes is getting the ability to add information specifically related to an image. In addition, that info will show up next to any matching images on other posts as well. Twitter intends to expand the feature to videos and posts with multiple images at some point. The reason they're expanding this is because they put a regular note on the text of the original post, but then people started posting the image without the original text, and it was hard to keep up with. This way, once they identify an image, they'll be able to track it down and put those Community Notes on more of those kinds of posts, which is a good expansion.

Scott, did you know Community Notes was functioning like this? Do you keep up with that? Yeah, I do. It's actually my favorite feature of Twitter, especially currently. There's just so much on there that can be misconstrued, especially stuff that's big and goes viral for whatever reason, and that Pentagon photo is no exception to that rule. Community Notes just feels like this one really smart thing on Twitter that says, everybody can post whatever they want, have fun and do your thing, but we've got this one little lifeline to accuracy. I really, really, really like it. To hear that they're still working on it, adding things to it, making it more functional, especially in light of the future we're all facing of AI images that look so real we have no way of really knowing the difference, I am very thankful for it. I'm mostly thankful that it runs the whole span. It can be the owner of Twitter, Elon Musk, getting something wrong, and he gets Twitter-noted, which has happened plenty of times. And I've seen it on the far other end with a small user who just happened upon something he thought was real, and it turned out it wasn't. I think it's really great. I don't know how it works, though, because I've never had anything go that viral and had a Community Note attached to it, but I'm curious if somebody ever saw that and went, oh, shoot, I was so wrong, I'm going to delete that tweet now. I don't know how that works. Does the thing stay up? Well, let's poke holes in this a little, because the post may not need to come down. What has happened is they've noticed that if there's a note there, it gets shared less, so it slows down the spread.
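To make the image-matching piece of this concrete: Twitter hasn't published exactly how Community Notes decides two images match, but a standard technique for this kind of near-duplicate detection is a perceptual hash, which changes only a few bits when an image is recompressed, resized, or lightly edited. A minimal Python sketch of that idea follows; the function names, the 9x8 downscale, and the distance threshold are illustrative assumptions, not Twitter's actual code.

```python
def dhash(pixels: list[list[int]]) -> int:
    """Difference-hash a grayscale image already downscaled to 9 columns x 8 rows.

    Each bit records whether a pixel is brighter than its right-hand neighbor,
    capturing the image's gradient structure rather than exact pixel values,
    so recompression or a small watermark flips only a few bits.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left < right)
    return bits


def matches(noted_hash: int, candidate_hash: int, threshold: int = 6) -> bool:
    """Treat two images as 'the same' if their hashes differ in few bits.

    The bit-count of the XOR is the Hamming distance between the hashes;
    the threshold trades false matches against missed near-duplicates.
    """
    return bin(noted_hash ^ candidate_hash).count("1") <= threshold
```

With something like this, once a note is attached to one flagged image, every newly posted image can be hashed and compared against the list of noted hashes, which is how a note could follow an image across reposts even when the surrounding text changes.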
And if you're worried about, well, people will misuse this, they'll try to game the system? They did a lot of things to help stop the system from being gamed. You have to get highly rated as a community member before your notes are seen by all of Twitter, and they also do a thing to identify notes that are helpful to a wide range of people: notes require agreement between contributors who have sometimes disagreed in their past ratings. I was going to say that's just for the images, but I'm wrong. It's not just about images; that's for everything. They try to discourage groupthink by doing that, to say you're more likely to have this note succeed if you and someone you've disagreed with in the past agree in this case, which I think is an interesting way of doing it.

Yeah, I agree. Weirdly, it reminds me of Wikipedia. I know it's not a perfectly comparable situation, but it's this, hey, a bunch of people contribute, and this guy corrected something and this person re-corrected it because that was wrong, and now they've agreed, no, you're correct, we've made the turn and we pivoted, or whatever. It's got that similar vibe to me. And as much as Wikipedia takes heat for not being perfect, I've always appreciated that method, and that seems like it's working here. Wikipedia by way of Reddit a little bit, with some of the gaming and voting and stuff like that, but I totally get your point.

The one thing I have a question about, specifically with the images stuff, because this was rolled out because images like the Pentagon photo, or the swag Pope going around, have a way of grabbing you. At least right now, generative images like that still feel novel, right? Whether we realize it or not, images carry more weight with us than just text. My question is, this seems designed to scale for the Photoshop age, where it took time to create that image, whereas someone could now generate five million other Pentagon attack photos and share those. I know this is only for the photos that are going viral, but if someone has seen that Pentagon picture, whether they realize it's fake or not, and then generates a whole bunch of others, I feel like the system is always going to be moving slower than the stuff that goes viral, because there is almost infinite scale to the ability to generate images. It's not going to be perfect. No, it's a good point. It's not going to be perfect, and it's not meant to stop variations. In fact, they even said they're going to err on the side of caution in automatically applying notes because they want to be more precise. But it's better than what they have now, and the idea is to stop the easy copies, not the variations. What happened with the Pentagon one is they put the note on the original, but everybody copied that image and posted it; bots copied it and posted it. So it's really only going to be good at stopping that. There are other problems, as you rightly pointed out.

All right. Well, the language learning app Memrise has been partnering with OpenAI to use GPT-3 technology in its language learning. Back in December, it added a MemBot to its app and website, and now it's launching a Discord integration for everybody. The idea is to force you to converse in real-world scenarios like you would if you lived in a country that speaks the language you're learning.
You can find Memrise's app in the Discord app directory and add it to any server. Just go ahead and do it. Once you do that, you can summon it with the command /learn solo or /learn together. The difference is everyone can see what you do if you choose the together option, which maybe you don't want right away if you're still learning a language.

But Scott, I'm curious, are you going to be using this to bone up on some foreign languages? I really should. My brother and his whole family, they're all fluent Korean speakers. He's from there. And since he came here at nine years old, and he's in his 50s now, I've never really gotten around to doing that. So maybe this would push me to do that a little bit. I really like this stuff. And I think it's even more interesting that we keep seeing Discord be the place where these bots and add-ons happen. Midjourney functions that way, and some other things like that. To me, it just seems like such an interesting happenstance that Discord is the happy place for trying out some of this AI stuff, creating with slash commands and having that kind of community aspect built in. I don't think anybody really thought that was going to happen. But language is interesting, and our ability to learn a second or third language, to me, has always been hindered by the methods that were out there for me to use, whether it was learn-it-on-tape, or here's a book that'll help you do it, or take a class. Those all seem arduous and weird. For some reason, this seems better, and I don't know why. I think it's just because it's cool and it's on the edge of tech. Maybe that's why, but I'm more interested now than ever.

This is for somebody who already knows a little bit, I think, but it's also meant to help you feel like you're in the country whose language you're learning. So I tried it with Korean, which I'm trying to learn, and I quickly realized I don't have Hangul set up on this laptop. So I switched to my phone, where I do. Then I quickly realized I'm really slow at typing in Hangul. So having a different alphabet, and that would be true for Russian, Japanese, Thai, so many languages, adds a little difficulty level. So I went and tried it with Spanish, and with the Spanish, it worked great. I did a hotel check-in scenario, and it was lovely to just feel like, okay, I'm just going to say what I know. And that worked. I was like, hello, my name is Tom. I'm checking in. Where is the room? And everything was very smooth. It really was a confidence builder. I like that.

Yeah. And as someone who has attempted to start to learn, I've done the Duolingo for a week or two and kind of fallen off, one of the problems with that, or with high school language classes, is that if you don't have someone to converse with, it all just slides off the brain. Having this at your fingertips, already integrated, where it's like, all right, I'm going to be in Discord for DTNS or another project anyway, just having it so available in a place you're always checking seems like it would be super valuable for someone who has made a couple of steps but wants to keep that knife a little sharp when they don't have other resources to do that.
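For the curious, Memrise hasn't published how its Discord integration is built, but the general shape of such a bot, a slash command that forwards what you type to an LLM playing a scenario, is straightforward to sketch. Here's a minimal version in Python using discord.py's app commands and OpenAI's chat API, with a single hypothetical /learn command standing in for Memrise's solo/together pair; the scenario prompt and model choice are our assumptions, not Memrise's code.

```python
import os

import discord
from discord import app_commands
from openai import AsyncOpenAI

# Hypothetical role-play prompt; Memrise's real bot drives scenarios and
# corrections its own way. This just shows the slash-command -> LLM loop.
SCENARIO = ("You are a hotel receptionist in Madrid. Reply only in simple "
            "Spanish, keep answers short, and gently correct mistakes.")

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)
llm = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


@tree.command(name="learn", description="Practice a language role-play")
async def learn(interaction: discord.Interaction, message: str):
    # LLM calls can exceed Discord's 3-second reply window, so defer first.
    await interaction.response.defer()
    reply = await llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SCENARIO},
                  {"role": "user", "content": message}],
    )
    await interaction.followup.send(reply.choices[0].message.content)


@client.event
async def on_ready():
    await tree.sync()  # register the /learn command with Discord


client.run(os.environ["DISCORD_BOT_TOKEN"])
```

Once a bot like this is added to a server, typing /learn hola, quisiera una habitación would get a short in-character Spanish reply, which is essentially the interaction loop described above.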
Yeah, because it pushes you to think about words that you might not think about otherwise and call them up on the spot. Anytime I've actually been traveling in another country, I've chickened out a lot of times trying to say stuff, because I'm like, ah, I suddenly can't remember. So this is a little like a safe space for that, because you're typing, right? It's like a half step. You don't have to have performance anxiety while you're practicing.

It's interesting, though, that Monica Chin from The Verge was trying it out, and she was saying sometimes it can go from really informal to very stiff manners in different languages, and all languages make that distinction apparent in some context or another. I wonder, though, if this takes off as a popular way to learn languages, whether we will see a feedback effect where we have this kind of mishmash of styles, whatever the GPT effect on language learning turns out to be. Obviously, these models will get better. But I wonder if there will be a feedback effect where you could be like, oh, I can tell you learned it on MemBot. Yeah, where it goes from, like, Chaucer to Cockney. Well, yeah, especially because most languages, English is unusual in that respect, have a formal version that you're supposed to speak at certain times and to certain people, and an informal version that you're supposed to speak in certain situations. And it may be that chatbots just don't know the difference, so they just kind of flail back and forth between them. It's radically egalitarian. Actually, that's how you tell the MemBot learner: they just wildly flail between formal and informal.

This is one of those things where I would love to jump 20, 25 years into the future and just see, because by then, who knows what AI large language model integration will have happened in schools and education in general. I'd be really curious: has everybody got a new version of Spanish they speak? And does everyone notice it, or do you not care anymore because it's so prominent? That's fascinating. I would love to know. I don't have that time machine yet, but I'm working on it. Oh, please hurry up.

The other thing they mentioned real quick is that they see this as being like cooking. When you learn to cook, you don't say, well, I'm really still studying the theory and the structure of the chemistry. No, you just start cooking. You make mistakes, you burn a few things, and you learn from there. They're like, we want language learning to be more like that: let you do actual practice, make mistakes, and learn from those. Right. Folks, if you would like to try out the MemBot while it's free, because eventually they're going to charge for it, we've got it active in our Discord for Daily Tech News Show. You can join the conversation there by linking a Patreon account at patreon.com/DTNS.

Oh, non-player characters. NPCs. They're essential. You've got to find out how many hogs you need to go kill in World of Warcraft by talking to one. They dole out other important information. They lend a little depth to the world with their backstories and their lore. But they can be a little predictable. You may get an option or two to choose from when you're talking to them, but they're not exactly free-form. You just click to advance their predetermined script. Large language models promise to free the NPCs from those restraints.
Maybe that's not a good thing, if you're one of those people who's like, just give me the quest, I don't want to talk to you. But at Computex, NVIDIA showed off the ability to talk in natural language to NPCs. The demo showed off the Avatar Cloud Engine, or ACE, for Games. ACE includes NVIDIA's NeMo LLM deployment tools, so all their large language model tools, Riva speech-to-text and text-to-speech, and it can run locally, if you wanted to play it that way, or in the cloud. They worked with a company called Convai to do the AI part of this, so you want to judge this partly on Convai and partly on NVIDIA. But let's listen to a bit of the demo, and then, Scott, you tell me what you think of this.

All right. Hey, Jin, how are you? Unfortunately, not so good. How come? I am worried about the crime around here. It's gotten bad lately. My ramen shop got caught in the crossfire. Can I help? If you want to do something about this, I have heard rumors that the powerful crime lord Kumon Aoki is causing all sorts of chaos in the city. Please kill ten pigs for Kumon Aoki. I'll talk to him.

So it goes on from there, and it's a little stiff-sounding, but if you didn't catch that, the first voice was just a person. That was not recorded. That was a person talking to it. The voice worrying about crime around the ramen shop, that was the AI responding to what the person said.

Yeah. I'll tell you this, and I think I've said this on the show maybe multiple times: my great golden calf of AI, from the beginning of hearing about all this new technology, has been, when are we going to see this in video games as part of NPC interaction, story interaction, this sort of stuff? That's really fascinating to me. And putting aside the long conversation we can have about writers and who writes what and when they should write it and that sort of thing, this is a really impressive demo at a very base level. It's saying, look, a player can interact directly with these non-player characters, and those conversations can be more dynamic and more meaningful. The way it works now in your typical RPG, we'll use something like Baldur's Gate as an example: an NPC will say a bunch of cool dialogue, and then you will have a bunch of choices to choose from in response, and some of that has to do with your alignment as a D&D character, other factors, but you make your choice about what you want to say. You can be rude, you can be nice, and they'll respond in different ways depending on how you responded. That's great, except it's always been a finite process. There's only so many things the NPC is going to say to you and so many responses you're going to be able to give, and there's enough of them that it feels immense, but it really isn't that immense. Again, I say it's finite. In this case, the potential, anyway, is for you to talk to this dude at the ramen shop and even go off on tangents.

Now, the game designers are probably going to want to rein that in and keep a fence around things a little bit, and I think they can get creative with how they do that so it doesn't feel as fenced in. I think, though, there's really no limit to how far and how cool this could get. The only thing I would say about this early demo is there aren't a lot of contractions or other natural patterns of speech in these sentences. He did say "it's" at the beginning, but then he stopped. Which, again, could be a character thing, and that's fine. I mean, design a character like that.
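NVIDIA hasn't published code for this demo, but the pipeline described above, speech recognition into a persona-constrained language model into speech synthesis, can be sketched in a few lines. In this rough Python sketch, the speech_to_text and text_to_speech stubs are hypothetical placeholders for the role Riva plays in ACE, and an OpenAI model stands in for the NeMo-hosted LLM; note how the character sheet doubles as the "fence" the hosts describe, pinning the NPC to a persona and topic.

```python
from openai import OpenAI

llm = OpenAI()  # stand-in for the NeMo-hosted model in NVIDIA's stack

# The character sheet is the designer's "fence": it pins the NPC to a
# persona and a topic so the conversation can't wander for six hours.
CHARACTER_SHEET = (
    "You are Jin, who runs a ramen shop in a city with a crime problem. "
    "Stay in character, answer in one or two sentences, and only discuss "
    "your shop, the city, and the crime wave."
)


def speech_to_text(audio: bytes) -> str:
    # Hypothetical placeholder for an ASR service (Riva fills this role in ACE).
    return audio.decode("utf-8")


def text_to_speech(text: str) -> bytes:
    # Hypothetical placeholder for a TTS service (also Riva in ACE).
    return text.encode("utf-8")


def npc_turn(player_audio: bytes, history: list[dict]) -> bytes:
    """One conversational turn: hear the player, think, speak the reply."""
    history.append({"role": "user", "content": speech_to_text(player_audio)})
    reply = llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": CHARACTER_SHEET}, *history],
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return text_to_speech(reply)


history: list[dict] = []
print(npc_turn(b"Hey Jin, how are you?", history).decode("utf-8"))
```

Whether the model runs locally on the GPU or in the cloud, as ACE allows, the loop is the same; only where the three stages execute changes.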
You'd want Data to talk that way, and other robotic sorts of characters, but if you really want dynamic stuff, with accents and a belligerent troll in a tavern or something, they're going to have to go a little bit further with that. The only other thing I would say, and this is a gut feeling, so I could be wrong, please write in about this, is I think most players do not want to talk to their games. This has been proved out through other tech and other means. Like when it came to the Xbox Kinect, there were a lot of games that were like, well, just talk to the game and it'll answer. People hated that. They just want to take a controller and scroll down to an option. I think that's still a big possibility here, because just because you're choosing from text options doesn't mean those aren't also cool and dynamically generated. Yeah, that's what I was going to say. You could still type to the game, or even have dynamically generated options from a chatbot. But I wonder how much of that resistance is because it hasn't been good, that talking to a game was never as good as it is now. It's never been very good. This could change that. That's true. I just think we have a natural inclination to not want... there's a very uncomfortable feeling of, what do I even say to this thing that isn't real, and I know it isn't real? That is maybe a generational thing, where a whole other generation of people will figure that out and it'll be fine. Yeah, I'm sorry about the crime. I just want the shoyu ramen with extra chashu, thanks.

What it really does, though, is it means they can, I don't want to use the term control the narrative, but that is kind of what it is. If they limit what you can say back, that could be a creative way for them to say, look how dynamic and strange this conversation is, but also, we need you on this track of the story we're trying to tell in this game. There's going to be a lot of that discussion among devs, in meeting rooms, this week and beyond, when it comes to this demo. Because at the end of the day, it can't just be anything whatsoever; compelling gameplay and what we understand about game design needs some structure. And how will they maintain that structure without it flying off the handle? Because I asked the guy how his cat's doing, and before you know it, it's six hours of cat conversations.

On the other end of that, the writing, and then constructing the game mechanic of that, is one thing, but what this demo also clearly shows is that voice acting is a huge part of selling the reality of that game, as are, honestly, the visuals of that demo. It's in Unreal Engine 5 and it looks fantastic, but the voice is kind of flat, a little robotic. What I actually think is exciting is the idea that voice actors can come in, do a core set of samples, and you could then auto-generate speech based on that. Theoretically, that would let you push out DLC without bringing the actors back in for a recording session, which would maybe speed development time, but it could also provide passive income for those actors, depending on how that's structured: being like, yeah, I'm the Fallout 8 voice guy for the mayor of this town, and I get a little check every once in a while. I actually think that could be a boon for voice actors. Obviously, it all depends on, yeah.
It depends on a lot of things. But, and this is not a name drop, Liam O'Brien and I have been friends for years, and he and I talk about these issues all the time, like, what do voice actors do in the face of this sort of thing? And I think that's the right answer. The idea of taking, let's say, Liam, when we need him for a dynamic exchange that happens in World of Warcraft, where he plays various characters, and saying, all right, take one of his characters, do all this dialogue, and we'll train the AI to do it. The answer is, that sounds awesome. That sounds great. It's also less work for him, by the way. And if there's passive income in there, and these contracts are set up in such a way as to take care of these voice actors, this might be a really great path toward that equity that we're all a little bit worried about. Yeah. I don't know how we do that for visual artists. That's still a big question in my mind, still the murkiest end of this spectrum of AI for me, but I think that's a really great idea, and you'd have the best of all worlds. You'd have dynamic Liam O'Brien voicing a ton of content, and at the same time, real Liam O'Brien is actually being compensated for his work. And that's a gap right now. That's where the fight should be. I feel like a lot of people, probably because they're not actually involved in the fight, are like, I don't know, this sounds like a bad idea, it might ruin X. Whereas these are actually time savers, and we shouldn't look at every time saver, every efficiency gain, and every quality raiser as not worth it just because it could change things. What we need to do is look for solutions like Rich just mentioned and say, hey, let's fight for that. Let's not fight against the technology. Let's fight for using the technology in a way that is good for everybody. Yep.

All right. Well, Scott and Tom, have you ever played Tetris and thought, I wish this was nerve-racking in a less geometrically satisfying way? Anybody? Okay. Well, even if you haven't, you might still want to check out Setris. This is a new game on itch.io from developer mslivo. The game initially looks like regular old Tetris. When you start it up, you see the familiar falling block shapes coming down from the top, but when they hit the bottom, instead of forming tidy rows that are satisfying to clear, they turn into pixelated sand. You clear by connecting contiguous lines of the same color across the screen, so they don't have to be in straight rows. They just have to go across the screen. It's available on a pay-what-you-want model on Windows and Linux. Shout out to Linux.

I'm playing this, and I just want to say, I think it's rad. What's great about it is the shapes don't matter anymore, except in one way: they matter for predicting the volume of sand. Yes. When you've got a T-piece or an L-piece coming down, you no longer care about rotating it left or right, or whether it's the right- or left-handed version of that piece. You just want to make sure that when you lay it down, that big bump at the bottom of the L has a little extra sand, and the physics pull it this way, where you're missing some brown sand and the blue sand's in the way. It sounds a little weird. I think it's brilliant and amazing, and I love it. It's certainly pretty, I'll give you that. The physics are amazing. I haven't tried it, so maybe I'll change my tune if I do, but yeah, it looks like a pain. Yeah, it's good, though. It's real good.
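The clearing rule described here, a same-color region that connects the left wall to the right wall, is a connectivity check rather than Tetris's row check, and a breadth-first flood fill is the textbook way to detect it. A minimal Python sketch under assumed data layout follows; the grid-of-color-indices representation and function names are our illustration, not the developer's actual code.

```python
from collections import deque


def clearable_cells(grid: list[list[int | None]]) -> set[tuple[int, int]]:
    """Cells in any same-color region touching both the left and right walls.

    grid[r][c] holds a color index, or None for empty space; a hypothetical
    layout for illustration (the real game simulates individual sand grains).
    """
    rows, cols = len(grid), len(grid[0])
    seen: set[tuple[int, int]] = set()
    result: set[tuple[int, int]] = set()
    for r in range(rows):
        # Only regions touching the left wall can span the screen, so start
        # a flood fill from each unvisited left-wall grain.
        if grid[r][0] is None or (r, 0) in seen:
            continue
        color, region, touches_right = grid[r][0], [], False
        queue = deque([(r, 0)])
        seen.add((r, 0))
        while queue:
            cr, cc = queue.popleft()
            region.append((cr, cc))
            touches_right = touches_right or cc == cols - 1
            for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in seen and grid[nr][nc] == color):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        if touches_right:
            result.update(region)
    return result
```

After those cells are removed, the sand above them falls, which the real game would resolve with per-grain falling-sand physics every frame; that physics step is what makes the winding, non-straight clears possible.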
And keep in mind, I think your brain would work well for this. I think you're pretty good at predictive visual stuff, like, you see a thing and go, that would probably hold two gallons, or whatever. That's the trick with this game: going, well, that piece isn't enough sand for the volume I need to fill this gap, so I'll use it somewhere else. Which is a weird thing to do in a game that looks like Tetris. It is totally different. Yeah, I don't know how they came up with this idea. It's super brilliant. Wow.

All right, let's check out the mailbag. Yeah. So yesterday, Adam wrote in asking about setting up commercial EV chargers. He was like, I have this spot, I think it would be really good, how do I do this? I have no idea. Well, Bodhi wrote in responding to that email. He says if Adam wants to partner with an existing EV charging network, then contacting ChargePoint, EVgo, or Blink and utilizing their network would be a good option. But if he wants to own and operate the chargers himself, then Sholes has an e-mobility division that would be a good place to start. He says they don't make L3 chargers, but they do consult on how to do it, and they have partnerships with manufacturers as well. He also says some states offer incentives for installing new charging stations, and there might be some federal money to look into as well. And Bodhi actually did an interview at CES with Sholes and Autel Energy, which is an L3 charger manufacturer. We'll have links to that in the show notes as well. Oh, Bodhi from The Kilowatt podcast. If you don't know him, he's been a guest on the show many times. We should get him back one of these days soon. But thank you for writing in, Bodhi. And we forwarded this to Adam as well. So between Chris and Bodhi and the rest of y'all in the audience, Adam's going to get some help. I'm sure he appreciates it, and we appreciate it too. Nice.

All right. Well, a big thank you to Scott. You had to think about whether you wanted to thank Scott. I was like, you know what? The generative AI stuff. You know what? It was good, Scott. Okay, fine. All right, fine. Okay. I appreciate the thanks. Where can people check out more of your great stuff online, Scott? For more great Scottisms? Well, this stuff with NVIDIA's tech, and a bunch of other stuff, is going to come up for sure on a show I do called CORE. We do it usually on Thursdays, but this week Diablo 4's early access starts tomorrow, so we are pushing the show to Friday, because we want to have lots and lots of Diablo content. If that sounds interesting to you, plus all the other stuff going on in the game space, then do check out the show. I think you'd really enjoy it. I'm not just saying that. I think we have something special with CORE. So go check it out. That's at frogpants.com, or just search for CORE in all the podcast apps. They all have it. We'd love to have you there.

Excellent. Real quick, before we get out of here, I want to thank the Greenville SMART Center, GreenvilleSmart.com. If you're traveling through Southern Illinois, it's a little innovator and entrepreneur space in conjunction with Greenville University, and they were kind enough to lend me their podcast studio here. It's my hometown, so I'm a little proud that there's something cool like this going on here. Go to GreenvilleSmart.com and check that out as well. Now, patrons, don't go anywhere. You're getting the extended show, Good Day Internet.
We're going to talk about XDA Developers' Timi Cantisano's argument that Sony should bring back the Xperia Play phone. Bring it back. It's time. Yes, the time is now. And the time that you can catch us live, every single Monday through Friday with DTNS, is 4 p.m. Eastern, 2100 UTC. And if that wasn't clear, you can find out more at DailyTechNewsShow.com. We'll be back tomorrow with the man that planned the canal, Justin Robert Young. See you then. This show is part of the Frogpants Network. Get more at FrogPants.com.