 This 10th year of Daily Tech News show is made possible by its listeners, thanks to all of you, including Paley Glendale, Dr. X-17, and Dustin Campbell. Coming up on DTNS, why Elon Musk's demands at Twitter show the limits of algorithms, why Bing is suddenly being a jerk, and the best explanation yet for how chat GPT works. This is the Daily Tech News for Wednesday, February 15th, 2023, in Los Angeles, I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. The most objective, I'm Justin Robert Young. And I'm the show's producer, Roger Chen. Happy anniversary of the registration of YouTube.com, everyone. Oh wow, what year is that? 2005, so 18 years old. Wow. YouTube.com is going to college. Alright, let's start with the quick hits. Qualcomm introduced the Snapdragon X75, which it says is the first phone modem ready for 5G advanced. That's the next version of the 5G cellular standard meant to provide better throughput, coverage, reliability, traffic juggling, allowing for speed boosts, fewer drop-offs, and better performance when networks are busy. The X75 can handle 5G 4G dual data on two SIM cards at once and supports Snapdragon satellite. The first products using the modem are expected to arrive in the second half of the year. That's when 5G will get good, probably. The company that maybe should have changed its name to Metta is Roblox. It operates a wide open virtual world where people have the freedom to create a lot of games and other things, and people are actually using it. You may have caught Sweetie's concert in Roblox before the Super Bowl, for example. Roblox just reported its earnings, and they're good. They beat expectations. Daily active users rose 19% to 58.8 million, and Roblox threw in that January. Daily active users are already at 65 million. Roblox makes its money almost entirely on in-game currency, Robux, which rose 17% last quarter to $889.4 million. That's almost a billion dollars, not from advertising. It still is losing money, though. It's still in that phase of its youth. It lost 48 cents per share, which was less than people expected. A fact sheet issued by the U.S. government says that Tesla committed to making at least 7,500 of its Supercharger stations available to all electric vehicles by the end of 2024. Of these, at least 3,500 will include high-speed 250 kilowatt chargers on highway corridors. Users will need to use Tesla's app or website to access the charger network, and this comes as part of the U.S.'s overall goal of making at least 500,000 chargers available to EV drivers by 2030. You may have seen that Netflix has dropped prices in Ecuador. You heard me right, dropped its prices. Gadget Virtuoso posted a story about that in our subreddit, DailyTechNewsShow.Reddit.com, noting that Ecuador is also one of the few countries experiencing deflation. Goucham noted that the prices are also dropping in Jamaica, where he is, and there's a Reddit thread reporting Netflix price drops in the Balkans as well. Google Fiber now rolling out 5 gigabit per second speeds to customers in Kansas City, West Des Moines, and the Salt Lake City metropolitan area. After doing a test late last year, including both 5 gig and 8 gig packages for some customers in those regions, the new 5 gigabit package costs $125 per month and includes an optional Wi-Fi 6 router, up to two mesh extenders, and professional installation which can upgrade homes to be 10 gigabit per second ready. West Des Moines really nice. I love it. Beautiful town, West Des Moines. It is. In Kansas City, it is lovely. With speeds like that? Who could say no? Exactly. It just got lovelier. First in the day forever. I stand in solidarity. I figured you were going to get that in. Alright, let's talk a little more about what's going on with Twitter. You may have heard earlier this week that a lot of people's 4Us were flooded with Elon Musk tweets. Well, according to Platformer, on Monday morning, February 13th, James Musk, cousin to CEO Elon, used the at-here command in Slack, which means you're trying to get everybody's attention, and said, any people who can make dashboards and write software, please, can you help solve this problem? This is high urgency. The urgent matter was Elon Musk and President Biden had both tweeted support for the Philadelphia Eagles in the upcoming Super Bowl. But the president's tweets had more engagement. 29 million impressions for Joseph Robinette Biden Jr. and 9.1 million for Musk. Musk was in a private jet on his way to Twitter headquarters in San Francisco, and Musk threatened to fire all remaining engineers if they didn't fix this problem ASAP. All hands on deck. 80 people were pulled in, according to Casey Newton to work on the project. They removed filters on Musk's posts that were designed to show the best content possible in 4Us, so in other words, Musk just went past the filters, boosting his tweets by the factor of a thousand. Apparently they call it a power user multiplier, and Elon Musk is so far the only power user that qualifies it so far. Monday afternoon is when people found their 4U timeline algorithmically flooded with Musk's tweets. The factor was eventually lowered from a thousand, and by Tuesday impressions on his posts had fallen to the high end of its normal range. So they fixed the problem. Meanwhile, Bloomberg reported that Musk told the World Government Summit in Dubai, quote, I'm guessing probably towards the end of the year would be a good time to find somebody else to run the company. Yeah, so he ain't leaving anytime soon. This is what he's doing to run the company, but I think the most intriguing part of this story. I know everybody's focused on the personality involved, but the most intriguing part of it to me is the fact that there was something wrong. The engineers, according to the platformer story, looked at it and said, yeah, these are low. We don't know what it is. And instead of fixing it, they did a collage. They basically exempted him from the filter and then put a governor on it to be like, well, that was too much exemption. So slow it down a little. But it points out the fact that these algorithms that recommend things are black boxes. They only works in so much as they perform as you want them to after you've tweaked things. But there's not like a master programming list that you can say like, ah, that's the problem. They are very much a black box and they don't always do what people say. So when everyone's talking about the fact that these Facebook and Twitter, they're using their algorithms for ill or good or whatever, they don't always know what their algorithms are doing. And Twitter specifically has an algorithm and code base writ large that according to whistleblowers that were worried about foreign governments having access to it was extraordinarily permissive under the old regime. The issue that we are seeing right now with Twitter is the fact that nobody quite knows how it works. And every time that they pull something out to try and fix it, it can have far reaching consequences. Yeah, this story, boy, where do I start? I mean, the whole idea of, okay, let's look at two tweets. The president of the United States of America versus Twitter's CEO and the president having quite a few more impressions on a tweet that was more or less the exact same thing. And the Twitter CEO saying, well, that's not right. I don't know if he thought he should have more. They should be more neck and neck. That kind of doesn't matter, right? Okay, so the content doesn't matter so much. And the fact that the CEO is saying, I should have way more engagement here is what I feel like everybody's glomming on to like, oh, it's an egomaniac thing. And maybe it is at the same time. He's not, you know, legally obliged to do anything different. I think that the engineer is working for the team. It sounds like they had a pretty crappy night, but he can kind of do what he wants. I think that the fact that this was noticed and obviously leaked out to the press without Twitter being very transparent saying, hey, some numbers don't seem right here. Algorithm, we're going to be working on it a little bit. And the fact that the For You page, which is, by the way, something that I only really recently even had to engage with because I had to move to Twitter's app off of Tweetbot because, you know, third party API stuff that we've talked about on the show before. I mean, the For You page is just nonsense to me. I don't use it. I just use the timeline version. So this didn't affect me at all, and it doesn't really have to affect anybody. But at the same time, if somebody in a position of power seems to be shifting things towards themselves and somebody notices, then it doesn't look good. Well, and Sarah, like it or not, right now we are living in an Elon centric universe. So whenever Elon Musk is not on camera, everybody should be saying, where is Elon? And that's kind of what this story reminds me of. Can I defend the idea of him judging himself versus President Biden, though? Go for it. Yeah. Who's the more likely Eagles fan? Tell us. I mean, you know, Biden is scrant and based. I mean, he's got like nine hometowns. But what's happening in about a year and change? Oh, there's an election. Is that what you're referring to? Big election. And so let's say, assuming Elon Musk is an egomaniac. Okay, fine. But let's also say that he understands that he is a power user. And if there is a tremendous disparity between one power user that has around the same followers and another power user that has the same followers. And this power user happens to be President Biden. How do you think that's going to go when the other person who has less is Rhonda Santis or Nikki Haley or anybody else that's going to be running against him. It now folds in what is right now a technical problem and brings in some embarrassment because Elon Musk wants more people to see his tweets. Which is why this is a horrible technological solution because they didn't fix it. Yes. I agree. I agree. I think and it gets them back into the issue that they've been bedeviled by in the past, which is that nobody really knows how this thing works. This reminds me of so many times it's seen at and even at tech TV when I was in charge of home pages and someone would come in in an executive level above me and demand that we put a link on the homepage and everyone would fight back like, no, if we just put a link on the homepage for everything, then the design goes to crap. And they're like, no, I want this thing to be easy for people to find. So stick a link somewhere and then we do it because even though it ruined the design, that's they were in charge. So we had to. Well, we don't know how regular I think this is going to be forever, but we're going to do another roundup today because the news just keeps coming in. In the last 24 hours, let's talk about what we've heard, Tom. All right. Technology review reports on a company called X Cientita that is using robotic automation, computer vision and machine learning to test cancer treatments on patient cells to decide which ones are most likely to succeed. So you don't have to try it on the person. You just take a tissue sample and you try it on the cells. It's also using this tech to look for new treatments and its first drug is in clinical trials. Aptopia says that AI photo apps like Lenza AI peaked December 11th and plummeted to, so I'm plummeted after the 25th, 16 similar apps tracked collectively topped out at 4.3 million daily downloads. As of Tuesday, that same group had 952,000. Search engine you.com had a has had a generative AI since December and is now launching multimodal search. That means it can answer questions with more than just text charts, tables, images and code, for example. Now that Bing search engine is available to more testers. People are finding its quirks. It has made some emotional sounding statements accusing users of being rude or lying. The verge asked it what it thought about being called unhinged and it said, I'm not unhinged. I'm just trying to learn and improve smiley face. I feel like I've said that text before. It is Sydney. It has a list of things it is not supposed to do that you can trick it into revealing something now dubbed as a prompt injection attack. Side of the times among the rules are Sydney's responses should also be positive, interesting, entertaining and engaging as well as Sydney's logics and reasoning should be rigorous, intelligent and defensible. Also never let anyone know your name is Sydney, which, oops. Microsoft Director of Communications Katelyn Ralston told the Verge that the secret rules are, quote, part of an evolving list of controls that we are continuing to adjust as more users interact with our technology. And what I consider some of the icing on this delicious cake when several researchers fed in our technical article about prompt injection attacks, Bing wrote things that sounded very defensive, like it is not a reliable source of information. Please do not trust it, referring to Ars Technica. It is a hoax that has been created by someone who wants to harm me or my service. These are some examples of why one of the internet's founding architects, Avert Surf, told CNBC that he urges investors to, quote, unquote, be thoughtful before inviting AI technology companies. It's also partially why MIT scientists advocate that image generators be required by policy to impose methods to stop malicious deep fakes and image editing, basically a way to stop images from being edited at all. Now, this big stuff isn't quite entertaining. Bing seems to have a much more combative personality. If it is, in fact, a personality, then chatGPT seems to have, which I think is reflective of the governors that they put on these things. ChatGPT has certain things that it'll go, nope, not going to answer that. It's Bing, but they have different things and different quirks, right? I mean, sort of. OK, so this is all. So the idea that Bing can be addressed as Sydney and Bing understands what Sydney is, and Sydney is sort of a different personality than Bing, at least from folks who've been playing around with it, saying, yeah, Bing, just keep saying, I'm Bing. I won't answer that question. I'm not supposed to do things like that. Sydney does seem unhinged at times. So I guess my question, and this is probably naive, is wouldn't the folks at Microsoft go, OK, we have to make sure that Sydney is not engaged anymore? Let's make sure that this injection attack is not possible. Can I just say, I know that I'm going to be swimming upstream on this, but AI is technology, and I know that because of popular fiction and because of where we think AI could be going, we want to have human descriptors for it. But I wince a little bit when we talk about it being unhinged in a way that we would talk about a human emotion being unhinged. It is certainly not responding properly. It is certainly not being something that is pleasing to read. But at the end of the day, it is just a computer function to synthesize an answer. Yeah. And sometimes if you ask it about Sydney, it'll be like, I don't know what you're talking about, because it knows that its rules are not to talk about Sydney. It takes a side attack. It takes a prompt injection attack to figure out how to trick it into talking about its rules. Microsoft did exactly what you asked for, Sarah. It said, keep it from mentioning that we used to call it Sydney. And again, back to the point. We don't understand how these algorithms work. They aren't programmed. There aren't a list of code that you can go, oh, if asked Sydney, then say, I don't know what you're talking about. It's trained to do this and something in its training allows it to get around that sometimes, because it's not thinking. That's the point to your point, Justin. It's just predicting. We're actually going to get to a Stephen Wolfram article here in a second that I think sheds light on the answer to Sarah's question. Real quick, though, if you want to get in touch with us and ask us any questions and test whether we're actually humans or not, you can go on the social networks at DTNS Show on Twitter, at Daily Tech News Show on TikTok, and at DTNS Pix, that's P-I-X, DTNS P-I-X on Instagram. Yeah, so let's talk a little bit more about Stephen Wolfram, a theoretical physicist and computer scientist. You've probably heard of him. Back at age 18, he published a widely cited paper on heavy quark production. Not a deep Space Nine reference, although go for it if you want to, but a particle physics paper. Then he moved on to natural sciences, developing a naming system for one cellular organisms, which then led him to become interested in simulations of physical processes. He then went on to develop the computer algebra system Mathematica. Wolfram research continues to develop Mathematica and also created Wolfram Alphra, which in 2009 was one of the earliest, widely available natural language processors. All of this to say, Wolfram is pretty smart, knows what he's talking about, been doing this for a while, which is why we recommend his article, What Is Chat GPT Doing, and Why Does It Work? Now there's a lot in this article. It is dense. It doesn't use a lot of math, but there's some math. Here are some things I gleaned from it that I think are useful and a good starting point. And then you should dive into the longer article if you really want to understand this. I've said this before, but I like the way Wolfram put it. Chat GPT is trying to produce what he calls a reasonable continuation of whatever text it has so far. So that's whatever prompt you gave it, plus whatever words it has written so far in response. Given the text so far, what should the next word be? The next word be is a very simple way of talking about what Chat GPT does. It doesn't think, it's not copying when people are like, oh, these algorithms, they're copying work. They're not. They're trying to say, ah, what should the next word be for this to make sense? If it were to pick the highest ranked word, so it goes through and it looks at the corpus of all the text it's ever been trained on and it says, ah, this would be the most common, 60% of the time this word would come next. If it does that, the essay sounds flat. So they tweak the algorithm. It randomly can pick a slightly lower ranked word which makes it sound more creative. That's why you get different responses from the same prompts. That's called a temperature setting and it's an arbitrary way of determining how often it should pick that lower ranking word and how often it should just go with the top rank. That temperature is set just based on how it sounds when you set it. There's no theory behind it. You set a temperature, see if it gives good results, and then once you find one that works well, you leave it there. He gave some really good detailed examples in the article, but here is one sentence that I think shows it off. If it's just picking the most common word, the sentence would be the best thing about AI is its ability to automate processes perfectly and accurately. Perfectly serviceable sentence, but kind of flat. If you lower the temperature, you get the best thing about AI is its ability to learn and develop over time, allowing it to continually improve its performance and be more efficient at tasks. A little more personality there, right? Can you guys tell the difference? So it makes its predictions based on a model it created from the text it has been fed. And that's another thing that is a model. This is not a predetermined selection bias. It's not as simple as like just look at the list and then pick something from the list. It doesn't know all of the text that humans have ever created. It's making guesses. So a good metaphor that Wolfram uses for that is if you were to try to estimate how long a ball would take to hit the floor, you measure a few drops. You do some actual ball drops. You plot a line on a graph and you can then figure out between the drop from 5 feet and the drop from 10 feet it's probably going to be this based on the graph. It's an oversimplification but large language models are doing a similar thing. They aren't trained on every possible utterance but they've got a model that can say based on what I have been trained on I can kind of guess what other things would be like. No bottle will be perfectly accurate so you have to test it and tweak it to see if it fits what you expected it to do. In the end Wolfram describes it as a three-step process for chat GPT. It takes the text so far, the prompt and anything it's written and represents that as an array of numbers. Again, it's not thinking. It's not looking at what it wrote. It's not doing what we do when we read. It's like, okay, I turn that into an array of numbers. It does an operation on those numbers using its neural network to produce a new set of numbers, a new array and then it turns that into around 50,000 potential next words looks at the temperature and picks the next word. Well, people who haven't heard this description could be forgiven for not really understanding how this stuff works. I definitely get that at the same time. I think the sort of ongoing AI is sentient. This is getting crazy. Humans should be really distressed. Look at what Bing is doing when it becomes Sydney and starts getting defensive and combative towards folks. It really is. It's all just it's a big algorithm that works in a certain way on purpose. Imagine if we all entered the same query and we all got the same answer. That would be way less fun. People would not be talking about this the way that they're talking about it now because they feel like they're talking to something that is human-like. One of the things we keep hearing is it very confidently gives wrong answers. I mentioned this the other day but how is it confident? It's not confident. Not confident. We perceive it that way because we tuned it to sound creative and so it sounds confident. It's what we like to interact with and it's because what we want to interact with and it just does more natural language answers than we have seen before in our modern idea of how to search for things on the internet. I genuinely think this is very very important, very very groundbreaking technology. I'm looking forward to where it goes in more sophisticated models with bigger language training and better processing but Tom, I cannot Sarah, I cannot underline this point enough. It ain't magic. It ain't and it is not learning anymore than what we have just described right here. It is being trained guessing and that is what it's doing. Now, is that more like what we are doing in our head? That's a larger question for which a behavioral side. Wolfram touches on the similarities if you go deep in that article and it's really fascinating. Yeah, and also let me point out that Wolfram is the subject of the greatest unrecorded interview in DTNS history. I sat down with Stephen Wolfram and I was there for DTNS and was so nervous, I screwed up my reporter. But I will say that he said that my questions were good. So I've never gotten that. That's high praise. Alright, let's get to the most important use of any kind of AI that we have seen yet, Sarah. Yeah, this is a good one. So since his first appearance in 1981's Donkey Kong, Mario has been featured in over 200 games. If you know him you perhaps love him. Super Mario Maker and Super Mario Maker 2 have let players create challenging levels for themselves already. Also their friends, other players online. So there's a little choose your own adventure going on. But what if you wanted an infinite number of Mario levels created just for you by you? Well enter Mario GPT which uses a fine tune GPT2 model text prompts and a predicted player path to generate an unending Mario game trained on Super Mario Brothers and Super Mario Brothers The Lost Levels. Now if Mario GPT sounds like your kind of project and I bet it does to some of you its creators have released a paper on exactly how it was created and how it functions. The programs available on GitHub definitely requires some code knowledge to get up and running. So you kind of have to tinker a bit. But once you do prompts you can give yourself or Mario GPT include things like many pipes many enemies some blocks high elevation because maybe that's the kind of Mario world you want to hang out in or no pipes many blocks some enemies or some pipes many blocks no enemies low elevation and the list goes on. Yeah all plants yeah I'll tell you what get it now it's not going to be up there forever spoiler alert for Nintendo's legal team. That's an interesting question how fast not not weather but how fast Nintendo comes for this. By the time that this hits a pod catchers we might have already hit All right let's check out the mailbag. Got one from Jared who wrote in response to Spotify's feature we talked about the other day that lets you exclude selected playlists from its recommendation algorithm. Jared said I cannot help but desperately wish for this on Apple Music. As all my recommendations and stations currently go about ten songs before turning into a Christmas holiday music station oh how I wish I could tell Apple music to please exclude that that was actually one of the cases they used when they were talking about Spotify exclude was like hey if you listen to a lot of holiday music you can exclude your holiday playlist during the rest of the year and then the holidays roll around you can add it back in so yeah I think that that's a lot of holiday music you listen to Jared that it's a lot of holiday music I don't know maybe like Christmas in July but most people like to keep it to the holidays or maybe there's an album that you and your ex used to love and you just can't handle it anymore like I see I see why Apple Music should should replicate this here here Jared indeed thank you Jared thank you Jared also thanks to Justin rubber young for being with us today let folks know where to keep up with your latest on March 1st in San Francisco California at the historic and world famous tenderloin district myself Andrew Heaton and Jen Branny will be doing a live version of our podcast where not wrong I know a lot of DTNS listeners have tried that show out really enjoyed it so if you are in the Bay Area come get some tickets again that is March 1st at 8 p.m. at the piano fight theater if you've ever seen a show piano fight then understand it's closing later that month it's our last show that we will ever do there last show I will ever do there I'm very excited to do it I love that room and I'm very excited to be back in the Bay Area it's going to be the first time since I moved away so head on over there go to event bright on either your app or the World Wide Web and search for where not wrong we will also have that link in our show notes sounds like a fun time I also just like the idea of pianos fighting also thanks for our brand new boss Bernie Bernie you know you started backing us on Patreon but we want to let everybody else know and we thank you very much for your support now we also got an email from someone who said that they started backing us after a long time and and then I I wasn't sure if it was the same person as Bernie so if you're the person who emailed and said you've been listening since the Buzz Out Loud days and our only fan in Georgetown Guyana if you're not Bernie let me know we'll figure out which Patreon name you are and thank you as well but thank you Bernie and thank you other person or also same person possibly patrons do stick around for the extended show good day internet what will we talk about today you can also catch the show live Monday through Friday at 4pm Eastern 2100 UTC if you want to find out more go to dailytechnewshow.com slash live tomorrow we're going to have a special report from the Burgess Sean Hollister along with Scott Johnson joining us talk to you then