This 10th year of Daily Tech News Show is made possible by its listeners, thanks to every single one of you. Mike Aikens, Norm Fazakis, Chris Allen, and brand new patron, Peter. Everybody welcome in, Peter. Hello, Peter. Hey, Peter.

On this episode of DTNS, Google releases Gemini to developers, Apple's new anti-theft protection for iPhones, and a hybrid computer that uses human brains.

What's up, what's up? This is the Daily Tech News for Wednesday, December 13th, 2023 in Los Angeles. I'm Tom Merritt. And from Studio, well, I'm here for another day or two. I'm Sarah Lane. And I'm the show's producer, Roger Chang.

Yeah, for those who don't follow GDI or Sarah's other outlets, like Have Such a Good Day, Sarah, Sarah's moving studios at the end of this week. I'm moving studios. I mean, you're moving studios, but yeah. Yeah, I'm moving. That's what I'm doing. I'm moving zip codes, my peeps. But yeah, if you have ideas for the studio once I get up and running, it'll hopefully be a more permanent studio. So that'll be fun. Yeah, in order to form a more perfect studio is the Declaration of Independence that Sarah has signed now. That's right. We, the people of Daily Tech News Show, will now start the quick hits.

Last week, it was revealed in a public letter from US Senator Ron Wyden that Apple had been ordered by multiple governments to release push notification data for users. Without handing over the actual messages, this can still act as metadata to deduce things about a user. Without a formal announcement, Apple changed its terms of service to let users know that a search warrant is now necessary for it to hand over that data. Previously, Apple only required a subpoena, which is easier to obtain.

On Monday, the US National Highway Traffic Safety Administration issued a recall notice for Tesla models, almost all of them, after a two-year investigation into several collisions that happened while Tesla's Autopilot was active.
In response, Tesla is delivering a software update to more than 2 million of its cars to improve Autopilot's driver monitoring, which checks whether the driver is paying attention or not. The word recall is a little confusing here. It's kind of used for any product safety fix these days, but Tesla owners should be able to update over the air. You're not going to have to take your vehicle into a place for service. The update adds more controls and alerts to keep a driver in full control of the car even when Autosteer is engaged, and it restricts Autopilot in more of the cases where conditions are considered unfavorable to Autopilot.

European Union lawmakers have decided on new classification rules for the Platform Workers Directive. A list of five indicators is designed to determine if there is an employment relationship between a company and a gig worker. If any two of the five on the list are met, the EU is going to consider it employment, although a platform can push back with proper contractual evidence disputing that. There are also personal data and transparency requirements. The text of the bill isn't public yet, but it now begins the process of parliamentary and then EU Council approval.

OpenAI agreed to pay publisher Axel Springer to use its content for OpenAI's language model training. Axel Springer operates a lot of publications. You probably think of them related to journals, maybe, but they're also the folks who operate Politico and Insider, all the Insiders, Business Insider, et cetera. Among the terms of the deal, ChatGPT will now give you links to Axel Springer sources if it detects that that is where the information may have originated from. This is something Google's Bard already does, and Microsoft's Bing actually does it in its implementation of ChatGPT. Also, they're not the first one to sign an agreement like this with OpenAI.
The Associated Press, the AP, has a similar deal, although theirs is only a license to use the text in training, not one that gives the links at the end.

Mark Zuckerberg said in a Threads post today that Meta is testing a feature to show Threads posts on Mastodon and other networks that support the ActivityPub protocol. He noted this helps interoperability, improves interaction choices, and helps content reach more people. Now, this is good news for those of you who don't just hate Meta already and wondered if Threads would keep its commitment to the Fediverse, which it did pledge to do when the platform launched back in July.

Yeah. All right. There's the quick hits. And now the big hits. The large, slow hits, the opposite of quick. This is the big news of the day.

Google is bringing features of its Gemini models to developers. MakerSuite is now called AI Studio. It's still a web-based tool, but it now uses Gemini Pro to help developers with text and image prompts and with making chatbots that they can then integrate into their apps. When you integrate them, you use an API to call the Gemini Pro model, and that API is free for up to 60 requests per minute. So that's good enough to test your app for sure. It's probably good enough for some lesser-used app features or maybe some smaller apps, but you're going to have to pay if you have a full popular app. Google reviewers will be able to see de-identified input and output as well, so some developers may not want to let that happen. Google said it's only doing that to improve product quality. Next year, the AI Studio tool will get access to Gemini Ultra. That's the top level of Gemini they announced, considered to be competitive with GPT-4.

Yeah. I mean, I guess my first question would be when next year, because GPT is, I'm sorry, OpenAI is working on GPT-5, or who even knows how far along they are. I know that there's a bit of that AI race to the finish line, or race to be the best.
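For developers wondering what staying inside that free tier looks like in practice, here's a hedged sketch: a tiny client-side limiter that keeps calls under 60 requests per minute before you hand a prompt to the API. The actual SDK call is not shown; only the throttling logic is real, and the numbers here just mirror the free-tier limit discussed above.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: allow at most `max_calls` per `period` seconds."""

    def __init__(self, max_calls=60, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None):
        """Return True if a call may proceed right now, recording it if so."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

# Example: simulate 61 requests arriving at the same instant
limiter = RateLimiter(max_calls=60, period=60.0)
results = [limiter.allow(now=100.0) for _ in range(61)]
print(results.count(True))   # 60 allowed
print(results[-1])           # False: the 61st is throttled
```

In a real app you would check `limiter.allow()` before each model call and either queue or drop requests that come back `False`; once the free tier is exhausted you'd move to a paid quota instead.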
So if you're a developer and these Google tools are really interesting to you already because you're in a Google developer system of some kind, I'd be interested to know why, because the market has gotten really crowded. As a developer, you have more options than ever. So I'm curious to know who says, no, this is the right one for us.

Yeah. Gemini Pro is pretty good. I also think it could easily be exaggerated how much farther down the road OpenAI is toward GPT-5. Maybe I'll be surprised next year. Tune in for our predictions show to find out if someone else thinks that. But I do think that with Gemini Ultra coming next year, you're not wrong, it's sort of like, well, wait, so Gemini Pro is great, but we really would like the best you have. When do we get that? I think a lot of developers who already work in the Googleverse, and there are a lot of them because of Android, probably look at this as convenient, like, oh, Gemini Pro will be good enough for what I want to do. I'll be able to add this kind of checking or suggesting or things like that. I'll actually be curious to see what developers make of this because of that Android pipeline, right? Whereas OpenAI is available for anybody to use, and granted, Gemini Pro will be available for iOS developers as well. But it's certainly going to be appealing to people who already work with Google.

Yeah. And as our emailer said in yesterday's show, a lot of this stuff is going to become part of an app that you like because you like the app, and you don't really care what's under the hood. The developers care what's under the hood, because that's how they develop. Or even if you can point it out, it's just part of an experience. It's not a phone experience.
I would love it if developers in the audience, and I'm not asking you to violate your NDAs or reveal trade secrets, but if there are good examples of what you know people are working on, yourself or others, like, these are the kinds of things that I think Gemini Pro will be good for and you will see show up in apps, let us know. We'd love to share that with everybody: feedback at DailyTechNewsShow.com.

Google also announced a healthcare-focused model called MedLM, designed to help clinicians carry out studies and perform logistical back-office work. HCA Healthcare is one of the companies that's been testing MedLM to improve workflows on time-consuming tasks. The healthcare models are built on Med-PaLM 2, which was trained specifically on medical data. It's available to Google Cloud customers in the U.S., with a medium and a large model available at different prices. The larger one is better at clinical studies. The medium model is trained for doctor-patient summaries and other real-time functions. Google says it will also offer Gemini-based healthcare tools in the future.

Yeah. So this one is not Gemini, because this needs to be trained specifically on medical information to be particularly useful, and they already have Med-PaLM 2. So what they're saying is, we're going to adapt Gemini to be able to do what Med-PaLM 2 does in the future, but we haven't done that yet. So it's going to get better. It was interesting to look at this and understand some of the details that are going on, like with HCA Healthcare. They were giving feedback, according to CNBC, that the diagnostic assistance that you always hear about in the headlines was not particularly what they needed. They needed doctors to get help with filling out forms. One of the examples they gave was an emergency room doctor who has to fill out a form of what was said to the patient, what was the summary of the responses, and all of that. And Med-PaLM 2 was able to cut that down.
It still wasn't 100% accurate, but it was able to cut down the time they took to do that. And they were saying doctors spend four hours a day just doing paperwork. So if this can cut that down, it allows doctors to actually work on the things that you want doctors to do, which is helping you get healthier.

Absolutely. With my medical system, I always read all my after-visit summaries, and the medical center that I've been to quite a bit over the last few years has given me plenty of good content. And sometimes you read it and it's like, what? This could be better. You know it took them a while, because it's sort of like, they're taking notes while you're sitting in the exam room sometimes. And sometimes it's stuff that they've got to do afterwards, or enter into some system in a certain way. And all that must be extremely tedious when you're a busy operation that wants to see as many patients as possible.

No, it's interesting. Weird Ami and Reverb Mike in the chat are both suggesting, why not just hire a person to do this? First of all, that person would have to be trained as well as a doctor to do it, so you're not really getting the efficiency. This tool is much faster than hiring a person to do this. And that person would be trained at such a level that it would be a waste of their time; you'd have to pay them way more than you would need to. What this does isn't replace the doctor, it speeds up the doctor. They're saying it's like 60% accurate. Then the doctor looks it over and corrects things and makes it accurate. It's just saving them time. It's a time saver for the doctor in a way that I don't think hiring another person to do it would be, frankly.

Finally, Google Cloud users who use Vertex AI can now get approval to try Imagen 2, the second version of Google's tool that creates images from a text prompt. Imagen 2 was launched in preview at Google I/O earlier this year.
Among its new capabilities are creating text and logos, bringing it in line with DALL-E 3 and Amazon's Titan image generator. And when you're the largest advertising company in the world, like Google is, that's a help. It can also render text in Chinese, English, Hindi, Japanese, Korean, Portuguese, and Spanish. And it can answer questions about what's in an image now. They also added SynthID, which is a watermark so that you can identify anything made by Imagen 2 as machine generated. And Imagen 2 users, just like Imagen 1 users, are indemnified against copyright lawsuits.

Well, that's a plus, I guess. Well, with all the uncertainty around copyright law and whether you have the right to use these images or not, that's a big encouragement for people to want to use these models, yeah. Yeah, that's what I'm saying. Even though everything that we've talked about of the Google announcements before is impressive, this is the one where I don't really have a burning need to use the latest version of this Google tool. But it's kind of the most fun to me. The idea of creating some sort of a prompt, whatever it is, to be able to make, you mentioned text and logos, and just various images. That isn't all of it. I think a lot of people go, like, oh yeah, AI images, it's lots of fantasy stuff, and being able to create Sarah sitting on a spaceship that's sitting on an apple that's in a vortex. It's like, okay, well, that can be true. It's just going to get into a Chili's, yeah, yeah. Yeah, that's just fun. And in many cases, that's actual work for folks. But to do a little bit more, I guess, pedestrian stuff that, again, would be time consuming. Maybe you're creating some sort of a slideshow that you're going to pitch to a potential client, et cetera, et cetera. It sounds more like that would be where a lot of Google Cloud users would find this less of a, oh, just say we did it, and more of a, oh, this actually was a really great artistic use of my time.
Yeah, and if you've used these image generators, you know they've been horrible at text. You can't do it. So the fact that these models are now starting to get text right is a big advantage. And like I said, for advertisers, that's going to be great. Like, I want to create my logo with certain messaging, and you can do it fast. Google's going to build that into things like AdSense eventually, with Imagen. That's what Amazon's doing with it; I can't imagine Google wouldn't.

Google released a video last week that got a lot of attention. It had been edited to look like Gemini was responding to queries in real time. A Google blog made it clear what was actually going on, so they weren't trying to pull the wool over anyone's eyes. But a lot of people noticed the discrepancy, and some even said it's kind of a fake demo. So the YouTube channel Greg Technology decided to replicate the Gemini video. He does a lot of walkthroughs in real time, using OpenAI's GPT-4 with vision. It's not as polished or as quick as Google's version, but it is in real time. And we'll have that link in our show notes. Meanwhile, during our live stream, we got a super chat. We never get super chats, so thank you, Tony, for that.

Apple is testing an update called Stolen Device Protection for a future version of iOS, probably in the 17 area, 17-point-something. It can delay the ability to alter critical information on your phone. That delay will last an hour and will also require biometric authentication, either your fingerprint or Face ID, at both the start and end of that hour. It will go into effect for things like changing your Apple ID password, removing Face or Touch ID from the phone, changing the phone's passcode, turning off Find My, updating various account security settings, et cetera. The phone will not introduce the delay if you're in a recognized location, like your home or a work location that you're normally in.
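As a rough mental model, and definitely not Apple's actual implementation, the behavior described above could be sketched like this: sensitive changes require a biometric check, then an hour-long wait, then a second biometric check, with the whole delay skipped at a trusted location. Every name in this sketch is made up for illustration.

```python
from dataclasses import dataclass, field

DELAY_SECONDS = 3600  # the one-hour security delay Apple describes

@dataclass
class StolenDeviceProtection:
    """Toy model of the Stolen Device Protection flow (illustrative only)."""
    trusted_locations: set = field(default_factory=set)
    _delay_started_at: float = 0.0
    _first_biometric_ok: bool = False

    def request_change(self, location, biometric_ok, now):
        """Attempt a sensitive change (e.g. changing the Apple ID password).

        Returns True if the change is allowed right now."""
        if not biometric_ok:
            return False  # biometrics are always required; a passcode alone fails
        if location in self.trusted_locations:
            return True   # no delay at home or work
        if not self._first_biometric_ok:
            # First successful biometric check starts the one-hour delay
            self._first_biometric_ok = True
            self._delay_started_at = now
            return False
        # Second biometric check only succeeds after the delay has elapsed
        return now - self._delay_started_at >= DELAY_SECONDS

# A thief with the passcode but not your face gets nowhere:
sdp = StolenDeviceProtection(trusted_locations={"home"})
print(sdp.request_change("bar", biometric_ok=False, now=0))    # False
# Even the owner, away from home, waits an hour between two checks:
print(sdp.request_change("bar", biometric_ok=True, now=0))     # False (delay starts)
print(sdp.request_change("bar", biometric_ok=True, now=3600))  # True
```

The point of the two-checkpoint design is exactly what's discussed below: a stolen passcode alone never clears the biometric gate, and even a coerced first check buys the owner an hour to lock the account from another device.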
It's meant to deter a thief who not only gets your phone, but has also somehow been able to see over your shoulder or something and get your passcode.

So, yeah. Joanna Stern over at the Wall Street Journal did a great video recently about the rise in iPhone-specific theft, and theft rings, and how a lot of this stuff happens. Partly because, yeah, let's say you're at a bar and maybe you're even chatting up the person next to you. You're being friendly, and they've got their phone out, and maybe they've unlocked it a couple of times. So you've got the passcode, and then things can get a little bit strange. I love the idea of, okay, let's say someone goes, all right, well, I know what Sarah's passcode is, and now I'm going to take the phone, quickly open the phone, quickly add a... what do they call it? It's like a separate verification code that can override things like your Apple ID password. And then that way, even if I go, oh no, they stole my phone, okay, let me get to the quickest computer and go into Find My on iCloud, then I'm locked out. That is a scenario that thankfully never happened to me. It's probably happened to at least somebody you know, because it does happen fairly often, and it sounds like it's on the rise.

This is a great idea, because if someone has my passcode and they take my phone, they go, crap, all right, well, it's not her face. And they have to wait another hour and then try my face again. I mean, this becomes exponentially harder for someone, getting into my phone to steal various things, not only passwords, but, I don't know, being able to go into my Venmo. There's no money in there, but if there were, you know, that sort of thing is how people have gotten a lot of money stolen from them as well.

Yeah, I've been watching another YouTuber called Anna Lee, and she was in London, talking about how in the past her phone had been stolen multiple times.
And it's that exact kind of scenario you're talking about, right, where someone kind of observes you until they can figure out your passcode, then secretly grabs the phone, or does some kind of social engineering where they, like, put a magazine on your table and pretend to sell you a magazine and then steal your phone from underneath the magazine, which is one way. Or they just straight up take it from you. Yeah, yeah, however they get it from you. They get it and quickly unlock it and change all the settings. And so this is meant to stop them from being able to do that.

I know Reverb Mike said, great, it makes it harder to get into my phone. No, it doesn't make it harder to get into your phone, because it requires biometrics to activate. And if you're at home or you're at work, and I don't know how it's going to work, if you can set other places for it, it won't activate when you're at home. You'll never see it. When you're at work, you shouldn't see it. Now, maybe you walk to work, maybe you work at a place where you're at risk of phone theft, but it's only going to happen on changes. It's not going to happen on getting into the phone. It's going to happen on things that would stop you from being able to find the phone if it was taken. How often do you need to change your passcode, or change your Apple ID, or remove fingerprint or Face ID, when you're not at home and you can't wait an hour? I think this is a pretty reasonable way to protect you. Yeah. Because it's not going to come up much. It's an inconvenience for... I mean, I'm sure anybody out there can go, okay, well, here's a scenario where I would need all of these things, and so this makes my life less convenient. It is going to be an option. You don't have to do it. Yeah, you don't have to use it. That's right. But, you know.
The other side of the coin, you know, the worst case scenario is pretty bad. Maybe all you want to do is just wipe that data, wipe the phone. There's stuff on there that might be in iCloud. You can get it back. It's not the end of the world. I mean, there are so many scenarios where theft is going to happen. If someone wants to physically take your phone, if someone comes up to me with a gun and says, give me your phone, they're getting it. But to have control over how that phone could then be used, so it's really very unlikely it could be used to get into anything that is extremely sensitive information, makes a lot of sense to me.

And also, just kind of going through all of the features of Stolen Device Protection, and what it will offer, and how exactly it will work, was a helpful reminder to go into my settings and look at some stuff. I mean, I've set everything up the way that I wanted to, but there are certain settings I haven't played around with in quite a while, like an alphanumeric passcode, you know, the passcode itself. I know that that's an option. Absolutely. Yeah, remember, it used to be that the passcode was only numbers. Yeah, it was four numbers. And then they made it six, and then they made it custom, and then they made custom alphanumeric. And I remember going, eh, that sounds kind of annoying. Well, today I changed my tune. There are lots of little things that are designed to keep us more secure. And so, you know, 'tis the season. Everyone be more secure. Yeah.

And to Weird Ami, who's asking, is there any option other than biometrics? Not a secure one. Like, this is the thing, right? If you want it to be secure, then you have to make sure that people can't get into it.
So if you don't want to use biometrics, which, by the way, are very well audited on device and very secure, but if for whatever reason you don't want to use them, then your next best option would be to use an extremely long passcode, something that would be very hard for someone to observe and repeat, and then make sure that you hide it when you use that passcode. Your other option is to DIY it to make passcodes more secure. Most people aren't going to do that. Biometrics are extremely reliable, extremely secure, and extremely well audited. So for most people, I think that's probably going to be a good option. I'm glad Apple did this, and I'm glad they made it opt-in, so that you don't have to do it if you don't want to, like Weird Ami.

You can discuss these kinds of options with other folks in our Discord. The way to get into the Daily Tech News Show Discord is by becoming a patron. Go to patreon.com slash DTNS.

A study published in Nature Electronics describes a hybrid biocomputer system called Brainoware that can identify voices with 78% accuracy using human neurons. Yep, you heard that right. The scientists combined clusters of human neurons derived from stem cells into brain organoids. Then an organoid was connected to thousands of electrodes. A machine learning algorithm learned to interpret the signals from the organoid, and then that processing power was combined with more traditional silicon. The combined system was trained on 240 recordings of eight different people and then tested with new statements from those people. As if to say, all right, hybrid biocomputer, is that Sarah or is that Tom? 78% accuracy is pretty high.

Yeah. And I can already hear and see the jokes about using a brain to be a computer. The perspective of the scientists making this was to learn how brains work, not to make a brain-computer hybrid as a practical device. This is apparently really hard to make.
It's hard to keep the cells alive, especially if you want to use it in different situations. But they think it's worth it, because it could have implications for treating neural diseases like Alzheimer's, et cetera. And so that's why they want to do it. That's not to say somebody couldn't take their research and try to figure out how to take advantage of the processing power of human brain cells, because these scientists have said they would like to make it easier to grow these organoids, et cetera. Is that what they called them? So who knows? Maybe down the road, this will lead to little Petri dish brains in the middle of our computers.

Well, one of the scientists quoted in the study also mentioned this could supplement, and even replace in certain circumstances, animal models of brains. You know, animal testing is a part of science, and if something like this can be as effective, then that's a plus for me. Yeah, because you're just growing the stem cells. I mean, some people don't want you to even harvest a stem cell, but harvesting a stem cell is pretty non-invasive as things go, and pretty harmless. And then you take the stem cells and you just grow other cells. I think this kind of situation is also very far from the other objection I could imagine, Sarah, which would be someone going, what if the organoid becomes self-aware and it can feel? And yes, I would be uncomfortable with that. These things aren't that complex yet. Right. But, you know, something down the line, I guess, if they got more complex, that you'd want to be worried about. I mean, if they can recognize your voice, they know who did this to them. Right? No, this actually does not give me the willies at all. I think, you know, obviously the scientists say this is hard to do, early days, promising stuff, we're going to continue to research.
Wouldn't it be great if we understood things that have been historically extremely hard to understand, like, you know, types of dementia? I can't argue with that. Yeah. As someone who's got Alzheimer's in my family, I would like lots of advances to be made on that before I ever have to worry about it. And that day is getting closer and closer. So, yeah, this is interesting stuff. It's headline-grabbing stuff, because brains, but it's also good for medicine. And I'm not going to rule out that there might be some interesting computer-related discoveries out of this sort of thing.

All right, let's check out the mailbag. Let's do it. Norm wrote in, hi, Norm, and said, I love the discussion of chicken CRISPR gene editing and the follow-on about GMOs. I appreciate how Dr. Nicky brought up how GMO has technically been around as long as we've been breeding plants, and it isn't the boogeyman that people try to make it out to be. Norm says, that last part is my humble opinion. My wife's background is in plant physiology, and she gets these types of questions all the time. And it was really refreshing to hear Dr. Nicky so succinctly explain them, at least it's refreshing since she's agreeing with my thoughts on the subject exactly. Hope you didn't get too many disparaging responses. And actually, Norm, we didn't. Maybe somebody just didn't feel like writing in, or maybe they haven't heard the show yet. Or maybe Dr. Nicky just explained things so well that nobody got mad at the GMO conversation. But I agree, I learned a lot yesterday, and it's always nice to have somebody who can lay this out in terms that make a lot of sense to people, when those terms can sometimes be fear-mongering on their own.

Yeah, the thing I love the most about Norm's email is that he acknowledges that it's refreshing because she agrees with him. That's good self-awareness for us all.
That's a good reminder that, of course, we always like it when somebody is saying something that we already agree with. The test is when Dr. Nicky or anybody else says something that you don't agree with, and then you think about it and maybe alter your opinion. Maybe not even fully agreeing with them, but that's how we all learn. So I like to highlight that part of it. We need more of that, don't you think?

Patrons, stick around for the extended show, Good Day Internet. We're going to talk more about the end of E3. You may have noticed we thought Scott Johnson was going to be on the show today talking about this, but Scott is sick. He's okay, but he's not feeling up to stuff. So he had to cancel the Morning Stream, and he also wasn't able to be on Daily Tech News Show today. So we are going to talk about it without him in Good Day Internet. So become a patron and find out what we think at our wake for E3. We're all going to share our memories and drink, I don't know, Jolt? What do you drink in memory of E3, Sarah? I don't know. Mountain Dew. Mountain Dew. So yeah, stick around for that next. I don't have any Mountain Dew, but I'll have some in spirit.

Reminder, our show is live Monday through Friday, and you can catch it live at 4 p.m. Eastern, 2100 UTC. Find out more at DailyTechNewsShow.com slash live. We're back tomorrow with Justin Robert Young joining us. Don't miss it. Talk to you soon. This show is part of the Frogpants Network. Get more at frogpants.com. Diamond Club, hope you have enjoyed this program.