This 10th year of the Daily Tech News Show is made possible by you, the listeners. Thanks to all of you, including Brad, Kevin, and Paul Teeson. Coming up on DTNS, we know AI isn't always accurate, so why does that bug us so much? Dr. Maryanne Garry explains. Plus, has E3 gone to join Comdex in that great conference hall in the sky?

This is Daily Tech News Show for Friday, March 31st, 2023 in Los Angeles. I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. I'm the show's producer, Roger Chang. And joining us, Dr. Maryanne Garry, professor of psychology at the University of Waikato. Where are you coming from? I'm coming from Studio Sheep, Tom. Studio Sheep joins Studio Redwood. I love it, yeah. I mean, if we could only make redwoods and sheep work in tandem, Maryanne, you and I have a business model. That's true. All right. We'll talk more sheep in GDI, I have a feeling, or maybe we'll just talk sheepishly. But let's start with the quick hits.

There is a new way to get verified on Twitter. The company launched Verified Organizations globally, which means that a company you work for can pay to be verified and then add a verified badge to any of its accounts, brands, or employees, which might include you. It costs $1,000 per month, plus $50 for each affiliate account. Twitter also published two repositories on GitHub with code, including the algorithm the company uses to populate the For You timelines. And one last thing: Bluesky, the independent project spun out of Twitter, made an announcement. CEO Jay Graber said that Bluesky intends to create a marketplace of algorithms. Maybe Twitter could put its new open source algorithm up there? We're not sure yet. Anything is possible with the modern-day Twitter. A blog post that went along with the release explains that the algorithm takes the best tweets from recommendation sources, ranks those with a machine learning model, and then filters out any from accounts you blocked.
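For the curious, the three-stage pipeline that blog post describes (gather candidates from recommendation sources, rank them with a model, filter out blocked accounts) can be sketched in a few lines. This is purely an illustrative toy: the function names, the dictionary shape, and the like-count "model" are all invented for this example and do not come from Twitter's released code.

```python
# Toy sketch of the three-stage "For You" pipeline described above:
# gather candidates, rank with a model score, filter blocked authors.
# All names here are illustrative, not from Twitter's repositories.

def rank_timeline(candidate_sources, score_model, blocked_authors):
    # 1. Gather candidate tweets from each recommendation source.
    candidates = [t for source in candidate_sources for t in source]
    # 2. Rank candidates, best first, using a stand-in ML score.
    ranked = sorted(candidates, key=score_model, reverse=True)
    # 3. Drop anything written by an account the user has blocked.
    return [t for t in ranked if t["author"] not in blocked_authors]

# Toy usage with a trivial "model" that scores by like count.
sources = [[{"author": "a", "likes": 5}, {"author": "b", "likes": 9}],
           [{"author": "c", "likes": 2}]]
timeline = rank_timeline(sources, lambda t: t["likes"], blocked_authors={"b"})
```

In this toy run, tweet "b" scores highest but is filtered out at the last step because its author is blocked, which mirrors the ordering of the stages in the blog post.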
Virgin Group filed paperwork with the U.S. Securities and Exchange Commission saying its satellite-launching subsidiary Virgin Orbit will cease operations immediately and lay off 675 employees, about 85% of Virgin Orbit's staff. CEO Dan Hart told staff the company could not get funding for the operation. So that's one more competitor out of the satellite-launching business. Virgin Orbit carried out six missions, launching two-stage rockets from a converted Boeing 747; four of those missions were successful.

A Pew Research Center survey shows that around half the people in the U.S. are in favor of banning TikTok, while 22% opposed a ban and 28% were still unsure. But among people who actually use TikTok, only 19% supported a ban. Most people know that TikTok has ties to China, but of those who didn't, 27% still supported a ban. It's worth noting that Pew only surveyed those 18 and older, so TikTok's really large younger base was not factored into these numbers. So if you're old and you don't use TikTok, you want a ban. Got it.

Chinese automaker Geely acquired the smartphone maker Meizu in June 2022, and Meizu has now announced its first phones since that acquisition, the Meizu 20 lineup. They all offer 2023 Android flagship specs. The standard Meizu 20 offers a 144Hz OLED screen. The Meizu 20 Pro and 20 Infinity have larger LTPO OLEDs with variable refresh rates up to 120Hz. And the Infinity offers slim bezels, if that's your thing. Prices are in the $400 to $700 range, although in Chinese currency. Geely also announced its car OS, called Flyme Auto, F-L-Y-M-E. It offers a desktop with customizable widgets, supports multiple windowed apps on screen, and can sync with a mobile phone, displaying video content from a phone on a car screen during a video call, or syncing media playback when you leave your vehicle.

Up until now, Google's chatbot Bard has used LaMDA, that's short for Language Model for Dialogue Applications, which focuses on conversational dialogue.
Today, the company announced it will incorporate PaLM, that's Pathways Language Model, into Bard for improved math and logic capabilities, with coding coming soon. PaLM uses an AI architecture called Pathways that can train a single model to do thousands or millions of tasks, compared to current individualized approaches. Also, I have been told it should be pronounced "Flyme," not "Flim." My apologies to Geely. Tom, you won't have to apologize for this one. It was no fault of your own.

IGN was the first to report Thursday afternoon that the Entertainment Software Association, or ESA, canceled this year's E3. In fact, right after we finished GDI on Thursday's show, we all saw the news. The ESA had contracted out production of the show to ReedPop, but it was still in charge of the thing entirely. The ESA sent out an email to members saying that the 2023 version of E3 simply "did not garner the sustained interest necessary to execute it in a way that would showcase the size, strength, and impact of our industry." You know, kind of boilerplate stuff. The last E3 was held as a digital version back in 2021, so some people were excited about this year, certainly maybe you. The last in-person version, though, was held in 2019. And I've read a lot of death-of-E3 prognostications, even though E3 hasn't quite called it yet, or the ESA hasn't yet. Kyle Orland at Ars Technica wrote one. Jay Peters at The Verge is not terribly optimistic about E3's fate either. His article is called "E3 Isn't Coming Back." Pretty clear there what he thinks. Definitely worth a read, that one. But one of the telling details that Peters cites on The Verge is something ESA president and CEO Stanley Pierre-Louis told GamesIndustry.biz about E3 in 2024. They asked him, hey, are you going to do an E3 in 2024? And he said, "We want to make sure we find that right balance that meets the needs of the industry. We're certainly going to be listening and ensuring whatever we want to offer meets those needs."
And at that time, we will have more news to share. Yeah, in other words, we don't know. That's a lot of words saying no. Yeah, exactly. I don't know. It sounds like Comdex might need to get ready to welcome E3 into conference heaven. Scott Johnson wasn't available to be on the entire show today, but we asked him for his thoughts.

Hey, DTNS, it's Scott Johnson here, everyone's favorite Wednesday talking head. I just wanted to pipe in real quick about this news, the breaking news that, for at least the foreseeable future, E3 is gone. They have not officially said forever, but the 2023 event that was scheduled for June is no longer happening. This after a bunch of developers and publishers, big ones, pulled out, as you are aware. The most recent news was Ubisoft, and then Tencent and Sega; Microsoft, Sony, and of course Nintendo were all out anyway. It was starting to feel like there was going to be two booths, a ball pit, and some COVID at this year's event. So I guess nobody should be super surprised that the axe finally dropped and they decided to cancel this. But my quick take on it is this. We've mentioned this a bunch of times, so we don't have to talk about it a lot, but the idea that this was going to be around forever was never a guarantee. We've been talking for 10-plus years about whether or not its viability would remain in the light of the internet and YouTube and everybody being able to get the information they want directly to their customers in a much cheaper, more efficient, less logistically challenging way. And that's kind of what's happened here. I don't think this moves the needle for any other kind of events. PAX is doing fine with their focus on community. Other events are fine in that same way. If E3 wants to make a comeback, they're going to have to figure out what their role is. Is it more community-based? Is it more focused on devs? Do they become another GDC by changing that focus?
They also have a branding problem. People look at E3 now and go, huh, didn't really work out, did it? Not sure how well it'll do in the future. My personal prediction: we're done with E3. As sad as that sounds, I think we're done. Thanks, you guys. We'll see you soon.

I have to say, Scott's certainly not the first person to say it, and I doubt he'll be the last. Sarah, do you have any hopes for E3? I have one possible route for E3, but I'm curious what you think. You know, I think that, as Scott mentioned, we have talked about events and conferences in general, and when they work and when they don't. E3 had a rough few years, but I know a lot of people were into the idea of E3 coming back in 2023. What's it going to look like? Who's going to exhibit there? And then so many major companies said, fairly close to the show, you know what, we're not going to do it after all. I'm sorry, I'm about to sneeze. Thank you. E3 became something to sneeze at. Kind of, yes. Yes, I'm a poet and I didn't know it. But yeah, if you wanted the E3 of the late 90s or the early 2000s, the E3 that was a big spectacle where lots of things were announced, with so many big companies and so many people to talk to, whether you worked for those companies or were just an enthusiast walking the floor or somewhere in between, you know, that is fun. That's what I still feel about CES, even though I haven't been there in a few years. That's what people might say about Comdex or NAB or PAX. There are a lot of conferences out there that are still going strong. But I wonder how much E3 was just sort of a victim of the content involved, and how that might affect other conferences going forward.
Yeah, that was Kyle Orland's point on Ars Technica. He was basically saying, look, Gamescom does better. Geoff Keighley does better. There's a lot more competition for a video game industry that has changed since E3 was created in that more centralized time you're talking about. So that makes sense. My hope for E3, or at least for the ESA, and maybe when you talk about the brand, this is a way forward: they create something new. Don't put it in Los Angeles; maybe put it somewhere else so that it feels different. Call it something else. Design something that someone needs. Now, you may say there isn't a need, in which case, yeah, they're out of luck. But I'd be curious, if people want to email us at feedback@dailytechnewsshow.com, whether you're thinking, you know, the one conference that nobody does is X. Maybe that's an idea for the ESA. Yeah.

I'm curious before we wrap this up. Maryanne, I know this is not your space, but we have been without conferences because of COVID and lockdowns for a long time. Are you excited to get back to one in your area? Oh, you have no idea. You have no idea. I'm finally going to go to a conference in Japan this year, and I haven't been anywhere since 2019. New Zealand has for a while now been like living in a terrarium with a jar on it. And I just can't wait, right? You know, so much of what happens in science, people think scientists are like Ernest Rutherford in the basement firing electrons at aluminum foil, you know, but it's really very social. And if you don't have the social aspect, a lot of science doesn't happen. Maybe there's an idea for the ESA there.

Well, you might be looking to fill an E3-sized hole in your gaming heart. And if so, you're not alone. The Verge also has a good rundown of other gaming events this summer, including, as Tom mentioned, Geoff Keighley's Summer Game Fest. That's happening on June 8th, or starting anyway.
And we will have a link to that article in the show notes at dailytechnewsshow.com, if you're looking to get your conference life on. Yeah, I just want to see what else is out there. Whether you go or not, you just want to know what's still kicking.

Well, folks, if you hadn't heard, Top 5 is back. It's my fourth attempt at it. I did one for CNET. I did one for Revision3. I did one for TechRepublic. And now I'm doing one for Daily Tech News Show. It's called Tom's Top 5. We break down five things you need to know about the world, mostly about technology. This week, it's all about what to do if, say, your password manager became a victim of a data breach. I mean, hopefully that would never happen to anyone. But if it did, what would you do? Well, you can catch the new episode on the Daily Tech News Show YouTube channel. Just look for the Top 5 playlist.

I'm really sorry, folks, to be the bearer of this news, but AI is not leaving the news anytime soon. Here are just a few examples from today. Italy gave OpenAI 20 days to respond to an order to immediately stop processing people's data locally, on suspicion of violating the GDPR and over concerns about unlimited access by minors. So they're in trouble in Italy. A preprint submitted March 22nd argues that OpenAI's GPT-4 shows early signs of artificial general intelligence. That's when it passes the Turing test, so to speak, although the Turing test is really not applicable in modern thinking. But it's sort of like, yeah, it's showing signs of consciousness or sentience; it's a machine that actually thinks. That's a preprint, and it's early signs, but people are getting upset about that, getting concerned. We talked about that open letter calling for a pause on AI research. A lot of folks who were behind that letter were accused of having suspect motivations. But lots of folks, including myself and Casey Newton on Platformer, have said that maybe some kind of slowing down isn't such a bad idea. But why?
Why are people so worked up about this particular tech? Oh, okay. So large language models, or LLMs, you might see that in the news a lot, are the things that power tools like ChatGPT. They're trained by humans, so they also reflect human biases. Partly because of that, they can't totally be trusted, because humans. They can mislead people with false information stated in a confident and sometimes convincing way. But we know this; it says so at the top of every one of these tools. So why are we still worried, or even fooled? We know that it's machine learning. It's not a human. We can't expect it to act totally human-like, even though that's where it comes from. What is it about being human that makes us so bugged, and potentially fooled? Now, Maryanne, you obviously study how we humans think for a living. So what are your thoughts here?

Yeah, Sarah. Well, from the view of psychological science, and here let's talk about the chat-based models, and I'll use ChatGPT as a kind of container, they have some features that make things really difficult for us, right? First and foremost, ChatGPT's responses are smooth and fluent and confident, which is great, except that there's a ton of research showing that we take smooth and fluent and confident statements as signals that those statements are true, that they're plausible, that they're trustworthy, even when those conclusions aren't warranted. And we've evolved to detect those signals since the Flintstone era, and they get pretty baked in by the time we're about five years old. So in a way, we're really grappling with the chatbot version of an optical illusion. And as you know, illusions are really hard to overcome. Warnings don't work, and we are easily misled.

I mean, also, what you describe, listen, if somebody's really smart, they might be smooth, fluent, and confident, but that's also sometimes an indicator that somebody is lying, and lying very well.
And that's, you know, that's sort of a human thing that we pick up on. I wonder how much we're all adaptable to this idea of saying, hmm, this machine is not acting correctly, is not telling me the truth. Yeah, well, I mean, hopefully we get there, right? But there's also something about ChatGPT that adds this additional layer of difficulty, which is that it does all those things that we were just talking about, but it also follows everyday rules of conversation, right? So if you and I were having a private conversation, which we're not having right now, right? But if we were, we would assume some things about each other. We would assume that we were each being helpful and truthful and clear. And we don't tend to approach most conversations being super skeptical and looking for errors. And, you know, if you've ever had a conversation with somebody like that, they're really irritating, right? Sure, yeah. Yeah. So, of course, these chat-based models follow those kinds of rules, right? And it makes us more comfortable with them. It makes it easier to learn how to use them. Like, you know, when the iPhone came out and it had all that skeuomorphism. But of course, as we've all discussed, right, ChatGPT isn't really trying to be helpful or truthful or clear, because it's not really trying to be anything, because it has no awareness. It's just basically fancy predictive text, and that's the problem. The chat format probably makes it more likely that we're just going to nod along, treating the situation like it's a private conversation and not being as skeptical as we otherwise might be.

So it sounds like, from what you're saying, that the problem with generative AI, with ChatGPT-like stuff, is that it isn't perfect. That it doesn't give you precise answers that would let you say, well, this is obviously a machine. That it does those sort of, they're not nonverbal...
I don't know if maybe there's a word for it, but those sort of subtle cues in the way it says stuff. Was that close to what you're saying? Yeah, it is a machine, but it's sending off signals that it's a human. And so we haven't learned to accommodate this technology yet, right? So that's really the question. The really big issue then is, okay, well, if that's true and that's true and that's true, then you could speculate that we're in a really difficult point as a culture, and the issue is how we're going to accommodate this kind of technology, right? So against this backdrop of all this stuff about how we've evolved to detect credibility and truthfulness and trustworthiness, one possibility is that these consequences might not fully shake out for a generation, right? Because we as adults, you know, we're always going to remember before ChatGPT and after ChatGPT, because we're spending time and effort right now trying to figure out what this thing can and can't do, and we're all fascinated but, I hope, a bit analytical and skeptical. What I'm also worried about is young children who won't know a before-ChatGPT, and they might come to accommodate all this technology, maybe even with some flair, like previous generations accommodated whatever their new technology was. I mean, it's not like the telephone destroyed humanity, right? Even though there were a whole lot of people chicken-littling about it. But here, to the extent that these chat models, which are not human, are giving off some signals of being human, it's possible that children, as they start growing up, might also start ascribing other human-like characteristics to ChatGPT, and not want to offend it or hurt its feelings, and maybe want to give it rights and protections.
I could see it going both ways. I can almost guarantee there's going to be a group that does what you just said, and I think there will be another group, and I'm not sure which group is going to be bigger. One will say, no, no, we've always grown up with these and they're clearly machines; we've always known that, because we grew up in a world where that's possible. The other group, because we didn't grow up in a world where that was possible, I think will be more disconcerted by it, because it's like no machine has ever sounded like this, for the reasons that you give. And that's true, right? I think that's true, and so that's why I said I think we're at this kind of crossroads, and I think we need help navigating through this period of instability and transition. And as much as I would like it to be universities who do this, I don't think it's going to be. I think what we really need is a strong, independent, and tech-savvy media, like DTNS, right, to help us cultivate this kind of skepticism, scaffold us into developing these skills, help us draw together good information at the intersection of technology and psychological science, and help us sift through and determine what's reliable and trustworthy.

Well, thank you for saying we can be an important part of that, but I feel like our role here is to help people understand each other. Is there something that you can give from the world of psychology to help people, when they're dealing with these chatbots, to help them get over that feeling? Yeah, well, it's a challenge, right? Because, as I said earlier, warnings don't work. So if you look on Facebook, right, and they have those labels like "unknown source" or whatever, those kinds of warnings do not work, because once the information is encoded in your memory, you start to lose track of where it came from. Eventually the claim and the source become disconnected, and all you remember is the claim and not the source. So that's a problem.
So I think what has to happen has to happen at the time that you're using the technology, right? It has to be that you maybe slow down, and this is the problem: we have to develop new skills of skepticism that we do not tend to already have on board. And that's where I think research needs to focus. Yeah. No, that's good. That is helpful in and of itself, because this stuff is so new.

Well, Tom, Maryanne, let's go to space, shall we? In fact, it's a real far away part of space. A team of astronomers led by Durham University in the UK has discovered one of the biggest black holes ever found, taking advantage of a phenomenon called gravitational lensing. That's where a foreground galaxy, like ours, bends the light from a more distant object and magnifies it. Supercomputer simulations on the DiRAC HPC facility helped the team examine exactly how light is bent by a black hole inside a galaxy that's hundreds of millions of light years from Earth. It's real far from us, but yet we can see it. And for the first time using this technique, they found a really, really big black hole, one that is more than 30 billion times the mass of our Sun. 30 billion, with a B. Even though this was achieved using a simulation, the path taken by the light from the black hole's galaxy to reach Earth matched the path seen in real images that have already been captured by the Hubble Space Telescope. The astronomers published their findings in the journal Monthly Notices of the Royal Astronomical Society.

You were telling me something about this study, Maryanne, not specifically about the study, because you're not an astronomer, but about this type of study and how it affects us as humans, that I thought was really interesting. Yeah. When you can inspire awe in people, there's some evidence that suggests that people have increased well-being and certainly appreciation for science.
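For readers who want a sense of the geometry behind that lensing measurement, here's the standard textbook formula for a point-mass lens; this is general background, not a formula taken from the Durham team's paper. A mass M sitting between us and a distant source bends the source's light into a ring of angular radius

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
```

where $D_L$, $D_S$, and $D_{LS}$ are the angular-diameter distances to the lens, to the source, and between the two. The key point: the ring grows with the lens mass M, which is why a black hole of some 30 billion solar masses leaves an imprint on the lensed images big enough to measure against Hubble's photos.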
And I think one of the explanations is that you take something that you had no idea how it works, or you didn't think the world worked this way, and now you've got to suddenly reconfigure what you know about the world and how it works. And in the process you feel, in a case like space, smaller, more awestruck, and all of that improves your well-being, maybe not permanently, but at least briefly. So awe inspires research. Yeah, awe inspires people. Yeah, that's great. I like it: it literally inspires people. It just inspires me to stay far away from that black hole, I'll tell you what. Yes, that's a good move. Yeah, well, now that we can find them, it's easier to steer clear. One-way-travel landmines? No thank you. It only feels like it takes a second from here.

All right, let's check out the mailbag. We got a good one from Scott. So on Wednesday's show, on GDI, the after-show for patrons, we were talking about the idea of lab-grown meat, and how we would all feel if a human might be lab-grown. We had differing opinions. Scott says, "I got behind a day, but your discussion about lab-grown humans and the large Tom steaks," because we were talking about eating Tom's legs, "had me picturing Tom as the Dish of the Day from Douglas Adams' Restaurant at the End of the Universe. Thanks for the laughs." You all see what you miss on GDI. Good stuff. Yeah. I'm still laughing about swag Pope. And the travel chalice. Every 90 minutes I laugh. I saw the swag Pope photo and did not realize until days later, okay, that was an AI thing. Got it. Yeah. Yep.

Well, Professor Maryanne Garry, so nice to have you on the show today. Thank you for bringing your wealth of knowledge to us. Yeah, absolutely. Please come back early and often. But until then, let folks know where they can keep up with all that you do. Well, I am Dr. Lambchop on just about everything.
That's a choice I made in graduate school that I now regret. And you can find me on my website at garrylab.com. That's G-A-R-R-Y-L-A-B dot com. Well, again, thank you for being with us.

Also, a special thanks to Monty Marvion, one of our top lifetime supporters for DTNS. Thank you for all the years of support, Monty. If you would like to join Monty in the ranks of the patrons, now is a perfect time. You can jump in, get in line for some loyalty merch, and get the bonus episodes. I just put out an episode yesterday, from the editor's desk, that went to the patrons, answering a question from somebody: have you ever thought about making DTNS a nonprofit? I have the answer. You could be a patron and get that answer. And of course, patrons stick around for the extended show, Good Day Internet. We're going to talk some more with Maryanne about how the human mind reacts to technology, so stick around.

You can catch our show live Monday through Friday at 4 p.m. Eastern, 2000 UTC. You can also find out more at dailytechnewsshow.com/live. We hope you all have a great weekend. We will be back Monday with Nika Montford telling us all about the inventor of the internet's first search engine. Don't miss it.

This week's episodes of Daily Tech News Show were created by the following people: host, producer, and writer Tom Merritt; host, producer, and writer Sarah Lane; executive producer and booker Roger Chang; producer, writer, and host Rich Stroffolino; video producer and Twitch producer Joe Coons; technical producer Anthony Lemos; Spanish-language host, writer, and producer Dan Campos; news host, writer, and producer Jen Cutter; science correspondent Dr. Niki Ackermans; social media producer and moderator Zooey Dettardine; our mods Beatmaster, W. Scottus 1, BioCow, Captain Kipper, Steve Guadadorama, Paul Reese, Matthew J. Stevens, and J.D.
Malloway; mod and video hosting by Dan Christensen; music and art provided by Martin Bell, Dan Lueders, Mustafa A, and Len Peralta. Acast ad support from Tatiana Matias. Contributors for this week's shows include Aaron Carson, Chris Christensen, Scott Johnson, and Justin Robert Young. Guests on this week's show included Blair Basderich, Will Smith, and Professor Maryanne Garry. And thanks to all our patrons who make the show possible.