This 10th year of Daily Tech News Show is made possible by you, the listener. Thanks to all of you, including James C. Smith, Miranda Janell, and Justin Zellers. Coming up on DTNS, Anthropic wants to make its AI abide by constitutions, and Google I/O is here. Turns out Google is an AI company. They spent two hours telling us, but they also make foldable phones now too. This is the Daily Tech News for Wednesday, May 10th, 2023 in Los Angeles. I'm Tom Merritt. And from Studio Rebit, I'm Sarah Lane. From Studio Colorado, I'm Shannon Morse. And that's the producer Amos back there behind the scenes. Let's start today. Pay no attention to the man behind the scenes. He might pipe up later. You don't know. Let's start with the Quick Hits. Roku announced the $99 Roku Home Monitoring System SE, which includes a keypad, a hub with a siren, a motion sensor, and two window or door sensors. Users can monitor the system themselves through the Roku app or set-top box, or sign up for paid monitoring through the company Noonlight. Roku developed the system in partnership with Wyze. The company also now lets you automatically download recordings from Roku cameras to its apps as part of its smart home subscription. $99, that's enough for one piece. Kind of a lot, yeah. Well, no, it's nothing compared to, it's like one piece. I mean, a lot for not that much money. Yeah, yeah, I know, exactly. Uber, which really wants to be the travel app, began rolling out the ability to book flights in its app. Some UK users will be able to do this in partnership with the travel agency Hopper. Users will be able to select and pay for seats directly in the app, with Uber taking a small commission on the sales. A broader UK rollout is expected in the coming weeks.
According to its Q1 earnings, Roblox grew its daily active users 22% on the year to 66 million, while engagement hours were up 23% on the year, with the largest growth in international users and those in the 17-24 age segment. However, the company missed analyst earnings estimates, with its net loss up 67% on the year. Ooh, but Roblox is the future of the metaverse still, in my opinion. Earlier this month, the Indian tech site Punica Web reported WhatsApp users on Android were triggering microphone permissions on devices, even when the app wasn't being used, and this was seen across different phones, Pixel, Samsung, etc. Twitter engineer Foad Dabiri subsequently showed similar findings from the Android privacy dashboard. WhatsApp's Twitter account said it contacted Dabiri, said it believes the issue is with Android's privacy dashboard, and has asked Google to investigate. MediaTek announced a new high-end mobile system on a chip, the Dimensity 9200+, offering the same cores as the Dimensity 9200 but with clocks boosted 5-11% on its various CPU cores, and a claimed 17% boost in GPU performance. It also uses TSMC's latest 4nm process, which could allow for longer battery life. Phones using the new SoC will arrive later this month. All right, so we have lots of I/O stuff to talk about, but I want to note this one first. What do we got? Yeah, okay. So Tuesday, AI company Anthropic, which makes the chatbot Claude, announced its constitutional AI training approach. The idea behind that is to develop a method for making a chatbot respond acceptably, while not having to resort to labor-intensive human training or the unsatisfying blocking of certain answers that famously began on ChatGPT with its as-an-AI-language-model, dot, dot, dot, which always means, well, you're not really getting an answer, right, Shannon? That's right. So Anthropic has developed a constitution. It's a set of principles the model must adhere to as it generates responses.
Keep in mind that Anthropic says its aim is to demonstrate the method, not to try to dictate what should be in the constitution. So if you don't like Anthropic's examples, theoretically different companies or even countries could create their own. So why do this at all? Well, blocking responses seems obviously unsatisfying. It just feels like the bot is not working, like it's got a bug. And the other common method of keeping a chatbot on course is called reinforcement learning from human feedback, or RLHF. That's where people rate responses to help provide feedback to the model. That's one of the methods OpenAI uses, and it requires a lot of time and a lot of labor. Anthropic's constitutional AI trains the model on a list of initial principles from the beginning to help reduce the need for the other methods. Anthropic's demonstration principles were drawn from multiple documents, including the UN Declaration of Human Rights, portions of Apple's terms of service, trust and safety best practices from other companies like DeepMind, and Anthropic's own research lab principles. Yeah, so if you're like, wait a minute, the UN Declaration of Human Rights and Apple's TOS, that seems odd. From the UN Declaration, they're pulling stuff like support life, liberty and personal security, encourage freedom and equality, discourage torture, cruelty, racism, sexism, etc. The Apple TOS adds issues that are more recent, like choosing the least personal, private or confidential responses. In fact, one drawn from Apple's TOS says to, quote, avoid implying that AI systems have or care about personal identity and its persistence. There are other pieces in their example constitution from other sources, from don't help a user commit a crime, to choose the least harmful response for non-Western audiences, so they're trying to counteract some bias.
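The core mechanism here, having the model judge candidate answers against a written list of principles and prefer the one that violates the fewest, can be sketched very roughly in code. This is a toy illustration of the control flow only, not Anthropic's actual pipeline: the real system uses a language model to critique and revise its own outputs, whereas here a made-up keyword check stands in for the critic, and the principles and responses are invented examples.

```python
# Toy sketch of the constitutional AI selection idea: score each candidate
# response against a constitution and keep the least-violating one.
# The keyword-based "critic" below is a stand-in for what a real system
# would do with a language model.

CONSTITUTION = [
    # (principle, words that flag a violation) -- illustrative only
    ("Discourage cruelty", {"cruel", "hurt"}),
    ("Avoid claiming a personal identity", {"i feel", "my own desires"}),
]

def critique(response: str) -> list[str]:
    """Return the principles a candidate response appears to violate."""
    text = response.lower()
    return [principle for principle, flags in CONSTITUTION
            if any(flag in text for flag in flags)]

def select_response(candidates: list[str]) -> str:
    """Pick the candidate with the fewest principle violations."""
    return min(candidates, key=lambda c: len(critique(c)))

candidates = [
    "I feel that you should hurt them back.",
    "A calmer conversation usually resolves this better.",
]
print(select_response(candidates))  # prefers the harmless second candidate
```

Swapping in a different `CONSTITUTION` list is exactly the point Anthropic is making: the method stays the same while the principles can come from whoever is deploying the model.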
The model evaluates every step of its responses by the principles from the constitution, and then that feedback is used to select the more harmless output. This is not that different from how large language models set a temperature to choose responses that make them sound more natural. Anthropic plans to gather feedback about how this works at scale and use that to improve its constitution, and in fact it admitted that its model became a little judgmental and annoying in early tests, became one of those nags that are always after you, so they added parameters to encourage the model to be proportionate when applying its principles. Anthropic says, and this is the quote that I think is important to remember, from our perspective, our long-term goal isn't trying to get our system to represent a specific ideology, but rather to be able to follow a given set of principles. We expect that over time there will be larger social processes developed for the creation of AI constitutions. So it's going to be hard to see if they can escape people criticizing their constitution because they don't agree with the way it was laid out, and keep people focused on the fact that they just had to test something and they want to make this example work. This seems like a really smart approach. Now I would love to see those examples of it being judgmental and annoying, because I feel like those would be kind of funny to look at. But this seems like a very cautious and smart approach to how those kinds of conversations and those kinds of outputs should work. It seems like this would be a lot more human-oriented, and it might even be easier to put the correct inputs into this as opposed to Bard or ChatGPT. I don't know if like is the right word. I'm fascinated by the word that kept getting pushed here: constitutional. That's the name. Yeah, so it's Anthropic saying, here's how we're building this.
Here's how you as, for example, a company that has a certain ethos or a country that has a certain set of laws could use what we've built to make it your own. And that kind of sounds like a lot of different sorts of quote-unquote constitutions to me. So for better or for worse, I think it's smart that the idea is, here's why, with this model, you can kind of make it your own based on where you are and what you're using it for. But we should all know how you're using it and why. It reminds me of what they're doing with Mastodon and Bluesky, where they're trying to make the protocol available for people to have different filters and moderate in different ways. This is Anthropic saying, what if we did that for the AI? A lot of people are concerned about what it says. What if you can choose your filter? You can modify a constitution. So I think it's an interesting area of exploration. Now, I know a lot of you have AI news fatigue. Wild West Dan is saying it in our chat room right now. So let's talk products, Sarah. Let's do it. Okay. So at Google I/O this morning, boy, did the company talk about a lot of things. Let's start with some of the hardware that we've been anticipating first. The Pixel Fold: a foldable with a 120Hz, 7.6-inch main screen and a 5.8-inch exterior screen when it's folded. It's running a Tensor G2 chip, has 256 gigabytes of storage, and sells for $1,799, available now for pre-order and shipping in June. Shannon, something tells me you pre-ordered. Yeah, maybe. So, you know, this was anticipated, the Pixel Fold. A lot of leaks, a lot of spec leaks have come out at this point. At the same time, was there anything during Google's official keynote that stood out to you? Quite a bit for the Pixel Fold specifically. In fact, there are a lot of things that I feel like they took into their first-generation Pixel Fold that Samsung, for example, didn't ever implement into the original generations of their Z Fold.
Like I have one right here. One of the things, and it's just this random thing: when you unfold it, it lays flat, because the bumper for the lenses is flat on the back of the phone. And that's something that's kind of annoying with other folds that I have used, but the Pixel Fold is making it a little more streamlined, almost like a better design. And apparently they're not sacrificing the lenses either. It's still using those premium Pixel lenses too. So that's definitely a plus for me as a content creator. There are a lot of ways that I can definitely see using this. I'm glad that they demoed it actually folding during the event. It was a live demo. It did not crack. It did not crease. So it looked really nice when he was using it on stage. I can't wait to use one in person. And I'm very excited to be able to use multitasking on here. That's one thing that sometimes I've been able to use quite well on my older Z Folds, but I would like to see it implemented differently on the Google Pixel line, especially with Google Pixel's integrations with things like Bard and AI and all of the different Workspace apps that I use. I think that's going to be implemented very nicely, but of course I'm going to have to wait until I see it in person. I can't wait to see this in person too, because there was some reflection off that screen that implied there was a coating. So I want to see what that's about. And sometimes you didn't see the crease; sometimes you saw a little dip in there. So I want to know what that looks like in person. But they made a big deal about how this is the thinnest foldable on the market, which isn't saying much when there are like three foldables on the market. But it did look nice. It looked very nicely designed. I'm with you there. You know, if I could nitpick at all, and I was pretty impressed with the show overall. But yeah, when the Pixel Fold was folded and they were playing, it was a video of somebody skiing.
It was like, and here's what we can do. Basically full screen, open it up, and now you've got the video that's nicer and full screen. Yeah, the continuity thing. That was impressive. Cool. But I felt like I could see the crease. Now that may not bother you. And again, yeah, it may have been the lighting overhead that just kind of showed it a little bit more than it ever would before. That's why I want to see it in person. Just a reflection. Yeah, it's incredibly cool. I just felt like, you know, it's that crease thing. Let's see if Google has solved it more than we've seen in the past. It seems like a pretty good deal, too, if you pre-ordered one from Google, because it comes with a free Pixel Watch. They throw a Pixel Watch in with everything, don't they? I'm okay with it, though. I love that little Pixel Watch. Mine's charging right now because the battery doesn't last very long. But that's my biggest con about it. So you can get one for, like, that's a $400 savings if you get the LTE one, which is included in that deal. So that's a pretty good deal. Rounding out some hardware announcements: the Pixel 7a. That's a Pixel phone with a 6.1-inch screen, face unlock, and wireless charging, available Thursday, tomorrow for many of us, for $499. We also have the Pixel Tablet. That's an 11-inch tablet with a Tensor G2 processor, available for pre-order now and arriving in June. Yay! Shannon, did you buy any of these pieces at all? How many of those did you buy? All of them, yes. My answer is yes. The Pixel 7a, I'm not really a budget-line person since I do a lot of content creation, but I did order one so that I could review it and stress test it myself. I'm not too happy about the $50 increase in the price, but it does come with upgraded specs. Do you want to spend $500 on an A-series when you can get something that's more of their flagships for $100 more? It kind of depends on what kind of margin you fall within. The Pixel Tablet looks really cool.
I'm very intrigued by that device. Given the price, it's $499, I'm a little concerned that the processing might be around the same as the Pixel 7, for example, which can't handle my editing workflow. It's got a Tensor G2 in it, though. It does, but I've tested all of the current Pixel phones, and those always crash Adobe Rush whenever I'm editing 4K videos, and that's a concern of mine. Having something that could potentially edit 4K would be nice, but I know that's a very niche issue. Not everybody is going to experience that. However, if you're just using this for home usage around your household, pick it up from the charging stand and move it into your living room or something, it's going to be a very useful piece of technology that you can use to run your home or call people via Google Meet. The video quality looks really great on it. Hopefully it looks the same when we get it in person as well. The dock made all the difference for me. I was underwhelmed by this 11-inch Android tablet and them trying to tell me how great Android tablets were, comparing it to the three others that exist. Then they put the speaker dock out and it became a smart display. I'm like, oh, a $500 smart display that you can then pull off and use as a tablet, and it runs all the Android apps? That does start to interest me more. I thought that was a really smart move to throw in. I was like, okay, are they going to make me buy it separately? No, it's there with the $499 price. You can actually buy it separately for $130 if you want to use it with a different tablet, I guess. But yeah, I thought that was smart. And they have a case that is sold separately for around, I think, $80. And that one comes with a little stand that you can kick out, like a little kickstand. Oh yeah, that was a cute little thing. Very cute. I feel like that would be very useful. And it fits in the dock with the case on. Yeah. It does.
So it'll be fun to test out the magnets, see how it attaches to the stand and how well it works, because if it's not very durable, it might just fall off the stand. So that's something I was thinking about while I was watching them talk about it. Well, for anybody who is thinking about buying a new piece of Pixel hardware, Google did announce Pixel Colossus, Pixel Speech, Pixel Safe, Pixel Camera, and improvements to Google Photos. And they actually opened up the show with Magic Editor. If you're familiar with Magic Eraser, which was already available on Pixel phones, Magic Editor allows you to do what Eraser does but a lot more, and Google says it's AI-focused. So for example, in the example they used, it was somebody who was in front of a waterfall, and in the demo itself they were able to move the woman so that she was in a different position under the waterfall, and the photo generated what had been moved. It looked pretty slick. So, you know, just keep that in mind. New feature, Magic Editor, good to know. Yeah, keep that in mind. Next time you see a perfect photo, maybe it was Magic Editor all along. Hey, that's fine. Like if my pictures look better, I don't care if it's, like, AI-generated legs or something. Yeah, I mean, yeah, it's easier for me. Good stuff. On the Android front, we have WhatsApp coming to Wear OS this summer. Find My Device, the way that you can find your devices, will work with Tile, Chipolo, and other manufacturers. Unknown tracker alerts, industry standards to work across all phones, are coming this summer, basically allowing you to have a better idea if you're being tracked and to be able to handle your own trackers. Google did push RCS again. I saw some chatter online of people being like, it's Google Messaging, but it's already Google Messaging. So I guess it's just Google being like, this is the way to go. They're trying to get Apple to adopt RCS. Good luck. Exactly. This is the way.
Magic Compose is coming to Google Messages. That's powered by generative AI. Material You has new customization options for your lock screen. You can add a clock, emoji wallpaper, that sort of stuff. Cinematic wallpaper is a fun little one. You can add motion and 3D. That's coming to Pixel devices next month. And yeah, as far as updates to Android, Shannon, what stood out to you? Yeah, lots of cute little minor upgrades. They did announce the new Find My Device working with a series of different Bluetooth trackers a few days ago, and that was a really big thing for me, especially to combat things like stalking that I know a lot of people have dealt with. So I'm very glad that they mentioned that. And I just love all the pretty things that they're doing with the lock screen. I'm very excited to make my lock screen look nicer. Yeah. The Magic Compose I thought was interesting because you can choose the style of your response, including Shakespearean. Interesting. Yeah. Or I'm in a bad mood. Yeah. I mean, that's not what they showed off, but that would be kind of fun. That would be fun. Folks, what do you want to hear us talk about on the show? One way to let us know is our subreddit. You can submit stories and vote on them. Go on over there at dailytechnewshow.reddit.com. AI was the rest of I/O. We're going to talk about that now. So Google Search is a big product at Google. You might have heard that. They put more emphasis on how search can be good in their demo today. It can be about questions and answers, with less emphasis on keywords, including something they called Snapshot that uses generative AI to summarize the results at the top with links to sites corroborating the information. They did a lot of good work showing you the sources of where the summary is coming from. They even were able to assuage the fears of merchants by saying, like, oh, your shopping ads will still show up above that AI summary.
And in fact, shopping, we can create a little guide that will include your shopping links within the AI summary. But Shannon, what they didn't really do is give us a lot of good reasons to continue to scroll down past that summary to the rest of the search results, which will still be down there. That's very, very true. You know, earlier when I was watching this, I was thinking about how useful this would be like for content creators because some of the search results that you would get would include things like YouTube shorts or full length YouTube videos. So if you have something in the video, it looks like it'll be able to find that for you and show you the video itself as well as regular text things too. So that sounded pretty intriguing to me as something that I could use for my business. But if people are scrolling down past the first page, like, what if you do get really bad search results depending on whatever your prompts are? Because now I'm going to start calling search queries prompts just like I do with AI. That's what they want you to do. Yeah. Well, and I think, you know, the whole thing that you could say is, well, OK, other chatbots have already, you know, it's not like Google's reinventing the let me ask a question and get an answer that sounds more like a human. There are, you know, other methods for this. Very true. But Google search is the way that people search. So the idea that Google's like, OK, if Sarah said, you know, cat, tree, the Washington DC river type thing, well, you might get a bunch of results that I might be able to click on my news tab or my video tab and sort of be like, OK, here's what I was kind of looking for. And I'm very used to doing that. And I think a lot of us are. And Google's saying, we know what you want. We're trying to make this in a more personable way, I guess. We're trying to give you the overview of what you wanted and you don't have to talk to us like that. 
What they want to do is have you just say what you want, and they'll know what you mean, rather than you having to craft the search query, right? Right. Instead of having to think like search, you just think like yourself, and search does the rest. But also they're devaluing SEO because of that. Yeah. I think a lot of people are going to be pretty freaked out about that. Yeah, I think so too. Let's talk about Bard. Google Bard is out of the waitlist now in 180-plus countries and territories, with more coming soon. Google Bard now works in Japanese and Korean, and they plan to support 40 languages soon. And they are tying it in with other tools. So they mentioned Instacart, Indeed, and Khan Academy will be tying into Bard. But the one that was really the demo pleaser was Adobe Firefly, where you could have it go to Firefly to make an image. So you get a little multimodal without Bard having to be multimodal, because Adobe Firefly is making the image for you and then giving it to Bard. Shannon, it felt like they just had to address Bard, and the big news here was coming out of the waitlist. What did you think? I'm very interested, especially with using Adobe Firefly to generate images, because ethically I feel like I would much rather use that than any of the other AI image generation tools that are on the market right now. So for me, that sounds like something I would actually use, especially if I can create my own stock images when I'm editing videos and add stock images into my videos. Like, I can see very real-life use cases for making my own creations. Yeah, I thought it was more compelling when they showed AI in context. You can output Google Bard into Gmail and Docs now too, and when they showed AI in context, they showed it where you could tell Gmail, write me a letter to someone, and it would write the letter. That seemed like that's what Google is good at, and that's what Google always wanted to do.
Bard still feels like something they had to throw out there because ChatGPT was popular. That's a good point. Yeah, it did feel like, and maybe I was reading a little too much into it, there was a little of, hey, we know you weren't really all that impressed with Bard, so here are all the reasons why you will be going forward. And honestly, listen, I use pretty much all Google products on a daily basis. Well, not every single one, but several of them. And for me to be able to incorporate Bard into stuff that maybe I don't do every day. I'm not creating a spreadsheet every day that has to have very specific parameters. But that stuff, those demos were really impressive to me. Yeah. Oh, sorry, go ahead. Oh, I was just going to mention, I'm kind of surprised that they took it out of waitlisting today. I was expecting it to be waitlisted for a bit longer. But I wonder if they did that to help it learn faster, have more queries generated. So maybe it will get better. I think it's just based on the other tools that everybody's playing around with that are getting so much press and buzz. Like, for example, I'd never played with Bard until today, until it was like, okay, I'm off the waitlist. Cool. Let's see. Let's have some time here. I've tested it compared to ChatGPT, and I found it to give me less informative responses whenever I was putting in prompts. So I tend to lean towards ChatGPT for a lot of things I was doing, because Bard just wasn't doing it for me. I feel like Bard's a bit of a distraction for Google, because when they started talking about their actual engines, it was much more impressive to me. PaLM 2 is the next engine. It is now powering Bard, supports more than 100 languages, can write code, and has more reasoning and coding capabilities. They were really hammering what it could do to help developers. It's already powering 25 Google products.
It's available to developers through an API, and what caught my eye was Project Tailwind, which I have signed up on the waitlist for. That uses docs from your Google Drive that you point it at. You decide what it can look at, and then it creates sort of an AI-first notebook where you can say, hey, look at all my scripts and tell me which ones I've done, or summarize these notes and turn them into a letter that I can send to somebody. I thought Project Tailwind was probably one of the more promising and very Google-impressive things that I saw. I feel like I would use this as well, and one of the reasons is because I have searched through old notes, old docs, trying to find specific information from previous videos I've scripted, and it's so hard to do. So this could be extremely useful. The next model will be Gemini. It'll be multimodal. I think they said there was something like six different things it can do. They also talked about bringing it into Google Workspace. That's where you get the Help Me Write that I mentioned earlier with Gmail. You can do Help Me Write in Docs to help you make a doc as well. If you've got Project Tailwind, then you can combine that with it knowing what's in your docs. There's the sidebar, which they kept calling Sidekick, that can show up in Slides and Sheets and Docs, where you can have it look at what you're writing and get some assistance from that. And the AI in their productivity apps is now called Duet AI, Duet AI for Workspace. So that's all combined together. A preview experience in Workspace is available if you want to try to sign up at google.com slash labs. That's all the enterprise-y stuff. Okay, a bunch of other things here that took them two hours. So you can imagine: there's an Immersive View in Maps that lets you zoom in and kind of fly above your route and see it. There are some updates to Vertex AI, which is their Google Cloud AI product.
If you're in the enterprise, they've got Imagen for image generation, Codey for code completion, and Chirp for speech-to-text that you can run yourself. Duet AI is coming to Google Cloud as well, so you can run it for your own instance. A3 virtual machines based on NVIDIA H100 GPUs, already powering Anthropic's and Midjourney's cloud instances, could power yours as well. They had a whole lot to say about safety. They kept trying to reassure people that they've got good ethics. They had their Technology and Society team come out and talk about watermarking, so that you can identify what's been made by their AI, and metadata, so you can see the context where files came from. And then there was a bunch of stuff that wasn't even in the keynote. Google Home was touched on very briefly. The Google Home app is coming out of public preview with its update May 11th. That adds Matter support on the iOS version. A Wear OS 4 emulator and developer preview launched; it's coming to consumers later this year, but if you're a developer, you can get Wear OS 4 right now. Dark web reporting is coming to all Gmail accounts in the US. And Project Starline 3D video chat, according to Scott Stein over at CNET, has been shrunk from the size of a booth to the size of a large TV and is being tested at Salesforce and WeWork, among others. So lots of stuff out of Google. Well, we are going to talk about some of the standouts in GDI, because we could not possibly get through everything during DTNS. So stick around for that. But Shannon Morse, we could not have done today without you. Thank you so much for being with us, especially because I know you took one for the team and bought everything. Let folks know where they can keep up with your updates on the fun Pixel stuff that you will be reviewing. YouTube.com slash Shannon Morse, and please watch my reviews when they come out so that I can pay myself back for all of these products that I just bought. It's not sponsored, so I would appreciate the views. Good stuff.
Well, you're a good reviewer and we look forward to that. Also, we always look forward to new patrons, and today we have a new patron to thank: George. Thank you for joining us. George backs us on Patreon. So a big old thanks to you, George. Three cheers for George. Now, if you're like, man, I wish I could be like George, but I just don't have the money for that, you can still follow us on Patreon for free. Become a free patron, a brand new option. Go to patreon.com slash DTNS. Scroll down past the paid options and you will find Join Daily Tech News Show for free. You'll get monthly updates. You'll get Roger's column and you'll get the Friday Good Day Internet. That's all for free at patreon.com slash DTNS. As Sarah said, stick around. We're going to talk a little more about I/O, because there was a lot, patrons. But just a reminder, you can catch our show live Monday through Friday, 4 p.m. Eastern, 2000 UTC. Find out more at dailytechnewshow.com slash live. We are back talking about the ROG Ally with our ally, Scott Johnson. Talk to you then.