This 10th year of Daily Tech News Show is made possible by you, the listeners. Thanks to all of you, including Paul Teeson, Ollie Sanjabi, Andrew Bradley, and lifetime supporter Michael Alt. On this episode of Daily Tech News Show, Dr. Nikki explains whether it's a good idea to use AI instead of people in some scientific experiments, plus Meta open sources its large language model at Microsoft's Inspire conference. We can explain all of this and more. This is the Daily Tech News Show for Tuesday, July 18th, 2023. In Los Angeles, I'm Tom Merritt. And from Studio Secret Bunker, I'm Sarah Lee. From Sweet Home Alabama, I'm Dr. Nikki Ackermans. And from where I usually am, I'm the show's producer, Roger Chang. Roger Chang, he is wherever he is. E-bike maker VanMoof, we talked about it Friday, has been officially declared bankrupt. It's now under administration, looking for solutions. Let's see what else is in the quick hits. Maybe it's better news. Well, back in April, the 9th Circuit Court of Appeals upheld a decision in Epic's lawsuit against Apple, finding that Apple's closed App Store did not violate antitrust rules. However, the ruling also said that Apple couldn't maintain its anti-steering rules. That's what prevented developers from pointing users to alternate payment options. The 9th Circuit has now granted a motion by Apple to hold off enforcing that ruling for 90 days, so that Apple can file a request for the US Supreme Court to take up the case. So Apple says, we are not letting go. Microsoft has a July 18th deadline to finish its merger with Activision Blizzard. Oh, that's right now, as I'm recording this. However, it's still waiting for UK approval. It could ignore the UK, but that would risk a big fine. It could try to carve out the part of Activision Blizzard in the UK and acquire everything else in the world, but that would be very complicated. It could also just give up on the acquisition, which does not seem likely.
Or it could get Activision Blizzard to agree to extend that July 18th deadline. Now, that may sound easy, but deadlines exist for a reason. With the deadline passed, Activision Blizzard's shareholders could be free to entertain other offers. Sounds like a headache for other companies, but it's a possibility. They could also ask Microsoft to pay the breakup fee and just say forget it, but it doesn't sound like they want to do that. More likely, they would negotiate an extension. And if they negotiated an extension, the shareholders might say, well, since we're giving you an extension, maybe increase the price a little bit. Activision Blizzard shareholders seem to want the merger to close, so they might just agree to an extension without asking for a price increase. But this is why you have the deadline, so you don't have to deal with all these questions. Right now, as of this recording, Microsoft is in negotiations with Activision Blizzard on an extension. And it's also talking to the UK Competition and Markets Authority about a way to get approval there. The UK extended its deadline for making a decision from today, July 18th, to August 29th. So conceivably, if they can get Activision Blizzard to agree to August 29th, then it could all line up, except there's the outside factor of the US FTC having an evidentiary hearing on its objections to the acquisition, and that is set for August 2nd. So if they get an extension from Activision Blizzard, August 2nd is the next date to pay attention to. The 16 gigabyte variant of the GeForce RTX 4060 Ti is now available from Nvidia's add-in board partners. Apart from having twice the memory, the new model is more or less identical spec-wise to the 8 gigabyte RTX 4060 Ti that was released back in May. Nvidia shared benchmark results at 1080p, showing an increase in performance in several games over the 8 gig model, but didn't make pre-release review boards available.
The 16 gig RTX 4060 Ti lists for $500, which is $100 more than the 8 gig version. Alright, we've finally got that one out of the way. The US announced a new thing called the Cyber Trust Mark, which is meant to signify that smart home devices and fitness trackers meet security standards set out by the National Institute of Standards and Technology, or NIST. Devices with this label should start becoming available in 2024. The Federal Communications Commission will administer the voluntary program, and a lot of retailers and device makers are taking part in it: Amazon, Best Buy, Google, Logitech, Samsung, the Connectivity Standards Alliance, aka the Matter standard folks. Devices will come with this mark on the box, as well as, and I found this part interesting, a QR code that you can scan to see if the device is still certified. Maybe it falls out of certification. You can check on that. The FCC has not determined how often it will go through and re-certify devices. Instagram head Adam Mosseri announced that due to an increase in spam on Threads, the Twitter competitor, the platform would get tighter on things like rate limits, saying it may unintentionally limit heavily active people. Mosseri also said any legitimate users impacted by rate limits should contact Instagram support. All right, and that is a look at our quick hits. Speaking of Microsoft, they're having a little conference today, aren't they, Sarah? They are. So at Microsoft's Inspire conference, the company announced details on some chatbot features coming to Microsoft 365 business users. The Bing chatbot is getting support for visual search, meaning that you could upload an image as a prompt for more questions or discussion with the chatbot, that kind of thing. It's available on desktop and mobile first, coming to enterprise later, although Microsoft didn't say exactly when.
But speaking of enterprise, Bing Chat Enterprise is going to give subscribers the same GPT-4 powered chatbot that you get in Bing search, but with added protection for commercial data. So chat data isn't saved, and it's not used to train the models, the company says. Companies have been concerned about that when working with other companies on these sorts of things, when employees use the public version of Bing in Microsoft's situation in particular. It is available in preview now for Microsoft 365 business users at no additional charge. And Microsoft says it might sell it as a standalone product in the future for, let's say, $5 per user per month. That's what they said, yeah. Microsoft is not including Copilot for Microsoft 365 in Microsoft 365 for free. You have to pay to add it. If you don't remember, Microsoft 365 Copilot is powered by GPT-4. It can generate text in Office documents, create PowerPoint presentations from your notes, help create things like pivot tables in spreadsheets, stuff like that. Yeah. So Microsoft also announced that Copilot is coming to Microsoft Teams. So it can do real-time summarization of things like ongoing voice calls: dates, names, key points that you were taking from a meeting, that type of thing. It can also suggest next steps. In Teams text chat, for example, it can pull out key points from chat threads and create lists or tables if you want it to do that. That sounds good, but it's not going to be free, right? Nope. Not going to be free. Microsoft 365 E3, E5, Business Standard, and Business Premium customers can get Microsoft 365 Copilot for $30 per user per month. That is on top of the existing cost of Microsoft 365, which for Business Standard is $12.50 a month, on up to $36 per user per month for the E3 folks. So more than double the amount for some users of these Microsoft 365 products. Now, 600 enterprise customers have been in paid testing of Copilot as an early access program.
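To put that "more than double" point in perspective, here's a quick back-of-the-envelope sketch using the per-user monthly prices cited in this episode (the plan prices are the ones quoted on the show, not an authoritative Microsoft price list):

```python
# Back-of-the-envelope math on the Copilot add-on, using the per-user
# monthly prices cited in the episode.
COPILOT = 30.00  # Microsoft 365 Copilot add-on, per user per month

plans = {
    "Business Standard": 12.50,  # per user per month, as quoted
    "E3": 36.00,
}

for name, base in plans.items():
    total = base + COPILOT
    print(f"{name}: ${base:.2f} + ${COPILOT:.2f} = ${total:.2f} "
          f"({total / base:.1f}x the base price)")
```

So a Business Standard user's bill more than triples, and even at the E3 rate the bill nearly doubles, which is where the "more than double for some users" observation comes from.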
Still no official release date for widespread availability. They're just telling us how much it's going to cost when it does become widely available. I'm curious what Google, Zoom, and Salesforce are going to charge for these chatbot additions, because this is pricey, and it makes sense. It's expensive to run large language models, and it shows. Nikki, how useful are these kinds of productivity tools? You work in a large setting. It's an enterprise-like setting in an academic institution. Does it seem like these are worth the money to you? Well, large enterprises sometimes have money to waste and sometimes just try out stuff like this to see if it'll increase productivity. I assume if we ever get something like this, it will have guardrails: even though they say they're not using your data, we might not be allowed to use it for any kind of federal grant writing or things that haven't been made public yet. I could see it being super useful for things like automating email responses or, like you said, summarizing. And then I can see it in other instances where the stuff that we do is so precise that an LLM won't be able to answer it, because it's not something that other people have talked about before, if that makes sense. So I could see it on the advancing-productivity front for really basic stuff. I don't know if the university is going to shell out for it. Yeah, university budgets can be wildly varying, so it kind of depends on which university, what department, all of that. We won't have to wait to see what Meta is going to charge for this kind of stuff. Free is going to be Meta's price. At Microsoft Inspire, Meta announced it will open source its large language model Llama 2 under a Meta-created license. So it's not one of the standard licenses, but it is an open source license. It'll be free for commercial and research uses. Qualcomm announced that it's going to take advantage of that and bring Llama to its chips.
So laptops, phones, and headsets starting in 2024 will be able to take advantage of Llama on Qualcomm hardware, so that apps running on those devices could use Llama without having to go to the cloud for a large language model. That keeps all your data local. That keeps it out of anybody else's hands. Llama 2 will be available through Microsoft Azure as well, plus AWS, Hugging Face, and a bunch of other providers. So again, you won't go to a service like you do with OpenAI or with Bard. You'll just go to Microsoft Azure, AWS, whatever you already have your account with, and you can install Llama 2 on those services. I guess that's one way to catch up to the competitors, right? Yeah, I mean, definitely. The whole sort of on-device, let-me-give-you-the-answer-you-were-looking-for type thing. I was trying to think about this earlier, like, so what scares me about this? I mean, nothing really. I think we're in such early days that we're all just trying to figure out how these tools are making our lives better rather than worse. But yeah, I could see a few years down the line a scenario where it's like, well, why'd you tell me that? Was that a Llama thing? Give me a human response. Reading your email and knowing you, as part of the device that you have in your pocket that you use for so many other things, which we'll talk about later in the show. Llama won't be the only one available in your pocket. Certainly not. No, it's going to be coming to that: talking to AI. I think what's more interesting about this to me is that Meta's Llama is, by all accounts, definitely not as good as Bard or GPT-4. So Meta decided to say, let's throw it open. Let's get some transparency around it. We can say it's about safety and reliability, which sounds great, and it will be, but it's also a way to try to make up ground, to get a bunch more eyes on it. The company wants people to use it so they know how to make it better. Exactly. 100%.
Well, speaking of things that have possibly gotten better over the years, the Wall Street Journal's Ann-Marie Alcantara wrote a story called People Have Begun to Love Apple's Most Hated Product. Okay. So you might say, a hated product? What would that be? In this case, it's Apple Maps. Apple Maps launched back in 2012. Seems like a long time ago, but yeah, that's when it launched. Suffice it to say, people were not initially thrilled. Quite a bit of backlash, to the point where Apple CEO Tim Cook issued an apology to customers in response to people saying this does not work as well as Google Maps or other maps products. He also fired his head of software at the time. Now, Alcantara mostly uses anecdotal evidence of people switching, not hard numbers, in this particular article, but she does mention that Canalys, which does a lot of data crunching, says that an overwhelming majority of iPhone users have installed Google Maps over a period of time. You know, you've got Apple Maps, but Google Maps is better. For a long time, Google was the map to use unless you wanted to get lost or be late. I can remember a time when Tom and Veronica Belmont and I were meeting up, and I was late, and I used Apple Maps, and Veronica was like, why'd you use Apple Maps? I was like, I don't know. But Apple Maps isn't necessarily awful anymore. In fact, some users think it's pretty great, has competitive features with Google Maps, looks good, has a nice design, very Apple. Apple's also adopted some functionality that Google Maps has had for years, Street View-like features, not exactly the same Street View that you would expect from Google Maps, but some kind of bird's-eye stuff. Also, the company announced at WWDC just last month that offline maps were coming to Apple Maps later this year. That's something that's really, really important, depending on where you are in the world. So smartphone maps have been essential for years now, but we also have maps that are part of our car software.
You might be using maps on a desktop. So Nikki, I want to start with you. What do you use and why? So I have a Google phone, so I like to keep it in the same brand, so I use Google Maps. And the last time I tried out Apple Maps was a few years ago, I have to admit, but I found that it didn't work when I was in the middle of nowhere, when I was in a foreign country and I didn't know where to go. It kind of just defaulted me to the highway instead of the little country roads. So from then on I just kind of stayed with Google Maps. I've been told by some people that Apple Maps is okay. And I have a good anecdote: I've seen both the Google Maps car and the Apple Maps car, so I'm on the maps for both of them. You're literally on the map. I'm literally on the map, yeah. Yeah, I use Waze mostly. Actually, I go back and forth. If there's traffic, which in Los Angeles is most of the time, and I'm curious like, okay, I don't know what the best route is going to be with traffic and I want it to adjust on the way, I'm using Waze. If I'm just not sure how to get there, but it's close and I'm not worried about traffic, maybe it's 10, 15 minutes away and it's a new place, I will tend to use Google Maps for that, because I don't need Waze to save me 30 seconds by, you know, routing me through six right turns around stuff, and Google Maps tends to not do that. But a few times I've had iOS prompt me. I remember one time I was coming back from the vet, and it was like, ah, would you like to head back home? And I was like, oh, yeah, sure. And I found myself using Apple Maps on the way back home instead of Google Maps. One of the cool things about Apple Maps is if you're wearing an Apple Watch, it will kind of rumble and give you directions on your watch when you're near turns. Google Maps just started doing that too on the Apple Watch, though, so they're starting to catch up on that feature parity. And the other thing that was cool is it was doing lane guidance better earlier on.
I think Google Maps is getting better at that too, but like, get into the second lane from the left to turn left kind of situation, or after this stoplight, at the next stoplight, turn left. Yeah, I mean, one of the things, you know, I mentioned CarPlay because I have CarPlay in my car and I am an Apple ecosystem person. You know, if you ask capital-S Siri to do something for you, you can say and do that in Google Maps. And, you know, that works. But I often forget, especially when I'm in transit type thing. So I inadvertently was sort of like, oh, I'm actually using Apple Maps. Huh, weird. I didn't really even think about that. And it was fine. It was fine. What I did do, just because I was sort of curious about how it works: I have not opened Apple Maps on macOS, you know, on the desktop, kind of ever. I mean, I probably have, but it's not something that I use. And I mean, I really think that Google Maps still has Apple Maps beat by a long shot when it comes to Street View, and just the, you know, putting in a variety of like, how do I get from here to there? And from what I got from it, I felt like Apple Maps got a little confused. But again, most of the time, I suppose, unless you're planning a trip ahead of time, you're on the go anyway. So you're probably on mobile or in a car. Well, folks, let us know what your map experience is. Feedback at dailytechnewsshow.com, or just join the conversation in our Discord. You can join that by linking your Patreon account. Sign up at patreon.com slash DTNS. People worry a lot these days about whether AI chatbots can replace people. Usually that's imagined in a bad way, taking your job or something like that. But what if chatbots could replace people as subjects in experiments? An article by Matthew Hutson on science.org describes some efforts to use chatbots as replacements for human subjects in scientific research. Now, this isn't physical research.
This isn't like poking the chatbots with needles or anything. How would you do that even? Yeah, yeah. Nikki has been following this story. Can you explain to us what's going on here? Yes, I can. My battery almost died. If only AI could take care of that. No, so I've been following this story a little bit, and we'll start out by saying this is a new article that came out in the journal Trends in Cognitive Sciences this month. So we're talking about trends, not necessarily something that's happening exactly right now. But researchers from the Allen Institute for AI and the University of North Carolina started looking at this idea of whether you could replace people with a chatbot. And initially, they were kind of doubting the capacity, as was I when I read this article. And they asked GPT-3.5 some questions about moral scenarios. So they said, would you save someone who's about to be hit by a car, for example? And surprisingly, they got about a 95% correlation with the way that humans would answer this type of moral question. And they were honestly taken aback. They weren't expecting this at all. And especially, the big point to this is that GPT-3.5 was able to, instead of having a black and white moral judgment of this is moral and this is immoral, kind of do more of a gray scale. For example, it could say murder is bad and lying is bad, but one of them is less bad. And until now, LLMs haven't really been great at that. So this is a pretty surprising finding. So would an AI be a good replacement for a human as a participant in a psychological study then? So of course, as with everything AI, it would have to be a very specific scenario with very specific guidelines. And I'll give you an example. Let's say there was a study that was usually taking all data from tweets, or skeets if you're using Bluesky, all of the human data that's been generated forever on Twitter.
An AI that was trained on all of the Twitter data since the beginning of Twitter would be equally as good at answering this as someone mining human data, for example. Another example is using AI for customer surveys. This is something that people have less ethical ickiness about. If you're just asking GPT, is this a good price for my sneaker brand? and it comes up with that average answer of what people would say, people don't really seem to have an issue with that kind of application. That's still kind of a research survey to a certain extent. And it wouldn't avowedly be lying the way people might lie. So yeah, that's a question that came up. It wouldn't intentionally be lying, but because people do lie when they do things online sometimes, and it's using that data, it might unintentionally be lying. But because it's making an average, it might even out, because, yeah, it's being trained on those kinds of data. A question for you, just because you talked about something that might not be that much of an ethical issue for folks: what is the biggest ethical issue when going through some of these scientific studies? I would assume it's probably medical data from humans? Yeah, you don't want to get personal data. You want to make sure that you have consent. You want to make sure that people can't get doxxed. There are a lot of guidelines in place for psychological things. You want to make sure you're not causing psychological harm to people by asking certain things. Those would be some of the standards. I don't do human psychological research, but I would assume those are some of the things that they have in place. I mean, I'm sure there's a huge book of guidelines of what not to ask people. And you could train GPT on that too, I guess. No, we're not talking about harm to people as far as like, oh, they're taking a drug that could have unforeseen side effects and damage them. So why replace people in studies at all?
Well, like you mentioned, Sarah, if you're trying to avoid being unethical, you need to get ethics approval. And this is a large amount of paperwork. It's extremely time consuming. I'm actually trying to get ethics approval to use animals right now, and it's taking months and months and months. For humans, it's way more complicated. And if you're just trying to ask about sneakers, sometimes you maybe want to avoid that. So it will definitely make things faster. You should take into account, though, that an LLM can only act as one single participant, since it kind of just averages a bunch of answers but forms a single opinion. You can't have multiple copies of the same opinion, right? But it would help in these sorts of impractical and unethical situations. You could ask an LLM really unethical stuff that might traumatize a person, but it doesn't matter because it's an LLM. All right, is anyone doing this? How far along are we? Well, we're not super far yet. So, like you said, the robots aren't taking your job if you're someone who's doing human test subject stuff. But it's to the point that you could probably implement this when you're testing out your study. If you're doing a pilot study and you're trying to figure out what questions to ask people, you could compare some of your data against the LLM data and see if it tracks, or even just see if it makes it glitch, or if it can give the answer that you maybe would want to get. Researchers are comparing this to the jump from doing in-person surveys to online surveys, and how nobody thought it would work because you can't get reliable data from the internet. So this might be, you know, in very specific scenarios, this might be a possible thing. Well, yeah. Go ahead. Just to wrap this up, one of the things that I thought was most enlightening to me, as a human who is odd, unreliable, and biased, is that we all are.
So if you are trying to get, for example, some FDA approval for something, a language model might not necessarily be better at this, but it might be something that pushes the project forward. Depending on who you are, you would either like that or not, but this could make a lot of things happen in a more streamlined way. Yeah. And you can even use it as a check or as a control too. You wouldn't do your experiment all on LLMs unless that's what you're studying, right? But it would be a helpful check. I think where we have to be careful is with all of these things that we're putting in place to stop it from offending people in public, which I'm not saying are bad. You need to be able to turn them off when you want to simulate people, because people are offensive. And so if you want to simulate actual people and all the horrible things they say, you shouldn't be filtering the things that a lot of these companies are filtering in their public versions. Now, there are differences between public and research versions, but that was a part of this article on science.org that I hadn't thought of before. That's an interesting thing to consider here. Yeah. And I really went into this thinking, this is so dumb. Why are we using AI to do psychology? And then reading more about it, I was like, you know, this could save some researchers a ton of time. And I kind of wish I had an AI goat to test my model on before using the real thing. So I can see this happening for sure. Well, continuing our AI talk on the show: more than just chatbots or image generators or helping Nikki with her science, or even your future replacement, it's also being used to help endangered species. And Chris Christensen is here to explain more. This is Chris Christensen from Amateur Traveler with another Tech in Travel Minute.
Thanks to Travel Weekly, I have a travel story that involves AI, but strangely enough it doesn't involve ChatGPT. In South Africa, they've started to track rhinos with collars that are not just GPS-enabled, but also have AI to learn the behavior patterns of a specific rhino. And then when they see that rhino behaving differently, it can trigger an event that might mean a poacher or a birth or something else odd going on, and where that rhino is, so that rangers can respond rapidly. And that could save the rhino's life. And so next time you go to Africa, if you see rhinos, it might be because of AI. This is Chris Christensen from Amateur Traveler. All right. Thank you, Chris Christensen. Good to get at least a little bit of a good news chaser in there. You know, it turns out I'm actually going to do something similar to that with my goats: we're going to have a machine learning program learn their behavior and flag it for me, so I don't have to go through it individually. So maybe I should move to rhinos, actually. Maybe. That's... Dr. Headbutt doing rhinos makes sense to me. Yeah, start with goats, stay for the rhinos. All right, let's check out the mailbag. All right, Marty wrote in about the concept of ChatGPT, because we can't seem to get off the subject, getting worse over time in his experience. Marty says: I've used it daily since it came out. I pay for Plus, and I've been feeling in the last month it's become worse, to the point that it's making me Google things again. It seems to have a shorter memory span within the last month. I used to be able to say I'm working with framework XYZ and carry on a conversation debugging and troubleshooting things for the length of that conversation. But what keeps happening lately is after two or three messages, it'll forget one of the three frameworks that I mentioned. I find about once every three messages or so it ends up saying sorry for making a mistake, where that would happen every few conversations a few months ago.
Marty says, finally, it started suggesting code that's basically the same thing that I gave it initially, but it thinks that that's the solution. All right. It is interesting to me that most of the people who find that it's worse are the people who paid. That could be because when they're paying, they get GPT-4. GPT-4 is powering Bing. That's where I was going next, exactly. Yeah. We should do a psychological study using GPT. We need a little double-blind here. Nikki, the goats. Yeah. Get them to weigh in, work it out. Well, Dr. Nikki Ackermans, goats and all, we're so glad to have you on the show. Let folks know where they can keep up with the rest of your work. Of course, as usual, I am on the Cool Ackermans dot com. That's my website. And I'm on the opposite, Ackermans To Go, on Twitter, and the same name on Bluesky for now. Excellent. Hey, you. Yes, you, the awesome listener of the Daily Tech News Show podcast. You know you love this show, right? You know you can't get enough of the latest and greatest tech news delivered by the most knowledgeable and entertaining hosts in the business, right? Well, guess what? You can make this show even better by becoming a patron. Patreon is a way for you to show your appreciation and support for the show with a small monthly pledge. And it's not just a pledge. It's an investment in your own happiness and satisfaction. Because by becoming a patron, you will unlock a ton of amazing rewards and perks that will make your listening experience even more enjoyable and rewarding. For example, you get ad-free episodes, bonus segments, behind-the-scenes access, and more. You can join a Discord community of fellow tech lovers who share your passion and curiosity for all things tech. You can chat with the hosts and other patrons, ask questions, give feedback, suggest topics for future episodes. Sounds awesome, right? Well, it is. And it's super easy to join. Just go to patreon.com slash dtns.
Pick a tier that fits your budget and preferences. You can cancel or change your pledge at any time, no strings attached. So don't wait any longer. Join the Daily Tech News Show Patreon today. Help make this show the best it can be. Thank you for listening and supporting the Daily Tech News Show, the show where you get independent tech news from people who love tech as much as you do. Patrons, stick around. We're going to be talking about a cheap way to turn a Quest Pro into an Apple Vision Pro, at least from the outside. Yeah, but just a reminder, we are live. You can catch the show live Monday through Friday at 4 p.m. Eastern, 2000 UTC. You can find out more about how to do that at dailytechnewsshow.com slash live. We're back again with Scott Johnson, probably some gaming stuff. Talk to you then.