This 10th year of Daily Tech News Show is made possible by you, the listener. Thank you, including Jeff Wilkes, Paley Glendale, and Dr. X-17. Coming up on DTNS, one of the founders of neural networks has some serious concerns about the future of the models he helped create, but some of those models are helping doctors. Plus, can you be too afraid of public USB chargers? This is the Daily Tech News Show for Monday, May the 1st, International Labor Day 2023. In Los Angeles, I'm Tom Merritt. From lovely Cleveland, Ohio, I'm Rich Stroffolino. And from the D.C. area, I am Chris Ashley. And I'm the show's producer, Roger Chang. My friends and neighbors, it is good to be back in the saddle again. We got some good news, we got some bad news, and maybe it's good and bad news. So let's start with the quick hits. ARM disclosed it confidentially submitted a draft F-1 form to the Securities and Exchange Commission, which is its first step towards a U.S. stock market listing. They're going to finally IPO ARM. Remember, Nvidia wanted to buy ARM, but that fell through last year. So after that, SoftBank said it planned to take ARM public. ARM designs are, of course, used in just about every mobile device you can find on the market, including a bunch of laptops. Last month, Microsoft began rolling out chatbot integration into its SwiftKey Android keyboard. I mean, it's putting its chatbot in a lot of places, SwiftKey among them. And now it's coming to the Galaxy. The SwiftKey team confirmed that the chatbot is coming to SwiftKey on all current Samsung devices. SwiftKey is integrated as part of One UI, and it's the default keyboard. You can, of course, switch it, but it's on every Samsung device. So they're tailoring SwiftKey to Samsung? Hard pass. Amazon's free, ad-supported Freevee streaming service will add more than 100 Amazon original series and movies from its Prime Video service over the course of the year.
In some instances, Amazon will not provide a full season of an original show on Freevee, just kind of a taste. And don't worry, the originals are staying on Prime Video. So if you pay for Prime Video, or if you get it with your Prime subscription, those will stay there and they will be ad-free. CEO Elon Musk tweeted that in May, Twitter will allow media publishers to charge users on a per-article basis with one click. He said the price would be higher per article versus if you just paid for a full subscription, but would still give you some access. More details are yet to come, including kind of major things, like whether Twitter would take a commission, or what kind of accounts would be able to offer the feature, whether they need to be verified, what badges they would need to have, all of that yet to come. Meta will hold its third annual Quest Gaming Showcase on June 1st, promising over 40 minutes of content related to new VR games. The stream starts at 12:45 Eastern on YouTube, Twitch, Facebook, and inside the Horizon Worlds platform. All right, well, toot toot, all aboard the generative AI hype train. It's still very much accelerating. We're seeing more of it every single day. Yes, thank you, conductor Tom, for making sure we're all registered here. But one of the things that we keep hearing about this technology, beyond new models, is use cases. That's kind of where this stuff is going to actually really start impacting our lives. One place that could make a big difference: the healthcare sector. And this isn't just idle speculation and kind of navel-gazing. Doctors at UC San Diego Health and the University of Wisconsin Health have been testing GPT-3 integration since last month. So what are they using it for? Well, to generate draft responses to a limited number of patient questions, with the ability to pull in patient information.
These often do require some editing, but they provide a good starting point most of the time, at least according to the reporting that we've seen. There's also a new study in the journal JAMA Internal Medicine that took questions already answered by verified doctors on the subreddit r/AskDocs and answered them using ChatGPT. So the ChatGPT responses were compared to the human ones by a team of five medical professionals who didn't know which ones were machine-generated. It was completely blind. They just rated the quality of those responses. And it's really important to note the AI answers were not submitted to the subreddit; they were only used for the study. The study not only found more AI responses ranked well for quality, but interestingly also for empathy, compared to the human ones. So, you know, Chris, it's AI, it's health. Should this scare people? You know, often I find myself conflicted when we see some of these articles about the direction they're traveling in, and this one is no different. So, on one hand, I think everybody listening to this show has had somebody, or themselves, look at WebMD and feel like they were dying. Like, WebMD said it's over. So, first, the fact that this can probably provide a much more intelligent answer and a much more streamlined answer to common questions, as well as help with the efficiency of getting answers from your doctors, I think that aspect of it is really, really cool. But of course, on the other side of it, you know, we find ourselves looking at our healthcare system more and more and seeing how there's a lot more profit motive embedded in there. And so you can't help but hope that this is not being used to reduce the number of doctors on hand, but rather to provide better answers and better services for people.
So, I am once again conflicted over this, but, you know, overall, so far I do like what I've seen as far as the study is concerned. As far as I remember, the healthcare sector has a shortage of workers. So, I'm not doubting that profit is a motive, of course it would be, but this is going to help fill in gaps in coverage, because they don't have enough people, too. So, there's an upside there. I think it's worth pointing out that the study that used the Reddit stuff was just an evaluation, to say, let's see if the responses are good. And it turned out they were better than expected. That isn't the same as saying, now let's use them to treat people. It was more like, okay, now we know they're good. Let's figure out what we should use them for. And the San Diego and Wisconsin situation is separate, saying, let's use them for the non-medical stuff. Let's use them for, I need to fill out this form. Where do I get this form? Can I get my prescription renewed? You know, where do I get my prescription, etc. The stuff that is more procedural, which then would free doctors to actually spend more time treating patients. Yeah, that aspect I do like. Well, and one of the things that really stood out is that whole question of empathy, right? Because that is extraordinarily important in any kind of clinical setting, at least the perception of empathy, right? Where we don't doubt that the doctor, you know, wants our best medical outcome or something like that, but the interaction feels rushed, or whatever. Obviously, this is just a first step to seeing how these tools can be used. But the Wall Street Journal had some interesting stats about what the medical field looks like post-pandemic: we're seeing burnout in 62% of physicians. And this came from a Mayo Clinic study.
And so that's significantly up from pre-pandemic levels, combined with more and more patients also looking for electronic medical records, using MyChart and stuff like that; queries are up like 60% over the last couple of years. So those two things: even given the same number of doctors available, more people are seeking quick answers to these questions. Chris, to your point, you know, with WebMD kind of being the internet comment section of health, that distressing hellscape, actually getting reliable medical information could be extraordinarily valuable, not just in terms of time, but also for health outcomes. And we're going to need a lot more of these studies. We have one study that shows these theoretically can do this. The next step is to say, okay, how can we design these? Yeah, what can you do with that, right? The study isn't about the design. The study is about, okay, it's worth something. Now everybody figure out what it's good at. And that's what I like about the Wisconsin and San Diego health demonstrations. The way they described it was, they get a message from one of their clients, one of their patients, and the chatbot would suggest a response based on that message, using, again, locally accessed health records. So there's no HIPAA violation. It's not going out on a network. It's the doctor that already has it. And the chatbot is running locally and says, based on this patient's history, here's what I would recommend telling them. The doctor shouldn't just press send, and doesn't just press send. The doctor then looks at that and says, okay, this is a great draft. Let me adapt it. But it's still faster. It's an aid in answering these messages.
And I could see a better WebMD coming out of this, where WebMD is always going to show you every single disease, right? If you had one that said, you probably don't have these because we know your medical history, I think that helps. Yeah, and I think the aspect of this that I find somewhat exciting is, I did an interview with Brandon Watson, a good friend of our podcast, a couple of weeks ago. And one of the things he talked about is how he's been programming against ChatGPT to help with his interviews at a company. And essentially what he was finding was that the AI was picking up nuances in the interviews that he was missing. So I could definitely see that sort of thing take place here as well, where maybe a doctor missed something in what the patient is presenting to them, but the AI can pick up on what they missed. So as long as they work these things in conjunction, doctor and AI, I really like where this could potentially go and what the study is showing it could do. And the other thing to think about is that the JAMA study was looking at ChatGPT, the thing that you can log in and use right now, this very generalized tool. This is not something that was trained to be used in a medical setting. And that's what I, again, think is exciting. ChatGPT, again, is the tech demo of all of this stuff. Yes, right. A staggeringly powerful tech demo. But when we can take this and say, we need this to... We know how clinical encounters work. We have tons of studies about what leads to good health outcomes, what leads to people not telling the doctor stuff. And the idea is we could build very specific models to take those kinds of things into account. We'll probably come up with all sorts of other shortcomings as well. ChatGPT is just asking, is it any good at all? That's what this study found out. Like, oh, it actually is. Imagine if we tried to make the tool for this purpose.
Yeah. Yeah. All right, let's move on to Ars Technica. Dan Goodin has a nice piece on Ars Technica on the prevalence of juice-jacking stories since the FBI issued its rather usual and unremarkable warning, which it gives pretty much every year, against charging over public USB ports. We passed it along because we thought, hey, this is a good reminder that those things can be compromised. Now, that warning isn't bad, or I wouldn't have said, let's put it in the show. It's not a bad warning. You should be wary of those ports. But every local news outlet in the U.S. jumped on it this time as a new trend, probably because the FBI tweeted its regular warning, everybody was talking about the tweet, and then the FCC issued its warning again, and the local news was all over it. But it's not new, nor is it a trend. Goodin points out that you probably don't need to worry about it unless you're the target of a nation-state hacker. There are no documented cases of juice jacking ever taking place in the wild. It's been demonstrated at DEF CON, very famously, to prove the concept. But no one's caught anyone actually doing this out in an airport. Most Android and iOS phones now warn you if an external device wants to send you files or copy yours. That's because of the demos they did at DEF CON, which is why you do those demos at DEF CON: to raise awareness about how to defend against these attacks. So what Dan was saying is, sure, should you be wary? Yes. If you're down to 2% and you're in an airport and you only have your USB cable, should you not charge your phone? No. You're probably fine charging your phone. I look at this as the equivalent of, yeah, if you've got an alternative to SMS for your second factor, use it, because SMS isn't great. But if you don't have any other choice, SMS as a second factor is better than no second factor at all. Chris, what do you think about this? Is the warning itself a bad idea?
Well, it sounds like what Dan is saying is not that your data is not that interesting. It's that the risk is incredibly low because no one's ever done it, and you probably won't be the target the first time somebody tries. Yeah. Yeah. So on a more serious note, I agree that the likelihood of this happening to the regular person is, yeah, probably not much, because not only are they going to have to hack the station where you plug in, they're going to have to figure out how to make you plug into that station in the first place, unless they just want random people. But with that said, the one aspect that I do like about this being reported and put out there is that it helps folks stay on their toes. Oftentimes, people that listen to this show and other tech shows are the help desk for their family members, and it's hard to keep them vigilant about protecting their data and their information. So when you have a story like this that kind of pops, and people may come across it, it does serve as a reminder to just not trust everything you do with your phone and with your computers and stuff like that. So from that perspective alone, I'm okay with them probably overdoing it. Sort of the halo effect of it, right? Yeah, yeah. This for me is about understanding your threat surface, right? Because if you try to account for every possible vulnerability, you're either going to be paralyzed by indecision, or you're going to be worried about something like this, which is an extremely low-probability event, and miss the phishing email that just landed. You've got to be thinking about, okay, let's guard against, to Tom's point, okay, SMS two-factor authentication is flawed, but still better than nothing.
So it's like, okay, let's take care of the most table-stakes stuff. And the problem with these warnings is, for people that aren't necessarily tech-savvy, they can read that and think, I can't do anything. I can't even charge my phone. And it leads to that kind of resignation in a weird way. So yeah, it's about understanding that where you're probably going to be compromised is definitely going to be an email you clicked on, or it's going to be a password you reused. It's not going to be the charger, if you prioritize properly. Exactly. Yeah. Yeah, yeah. Well, folks, if you have a thought about this or anything we talk about on the show, send us an email: feedback at dailytechnewshow.com. Geoffrey Hinton was born in Wimbledon, graduated from King's College, Cambridge in 1970 with an experimental psychology degree, worked on some of the most important algorithms used in neural networks, and won a Turing Award. Geoffrey Hinton looms large in the history of the development of what we call AI. He teaches at the University of Toronto now, and in 2013, Google bought his research company. You probably heard about that research company. They're the ones that made AlexNet in 2012. AlexNet got a lot of attention as the neural network that could recognize cats and dogs and flowers. It seems kind of quaint now, given what we can do, but it was a big deal at the time. It was pivotal in Google's development of transformers. That's the T in GPT, and that led to Google Bard, ChatGPT, and more. Hinton just left Google Monday in order to, in his words, freely speak about the risk of AI without having to consider how his comments will or won't affect Google. Now, Hinton says he believes Google has acted very responsibly, but he told The New York Times that the competition between Bing's chatbot and Google's Bard has him worried.
The intense competition might cause the companies to disregard consequences and lead to a world where nobody will be able to tell what's true anymore. And longer term, he fears that AI could eliminate jobs and possibly the need for humanity itself, as AI can write and run its own code. Yeah, he also told The New York Times that he thinks scientists should be working just as hard on ways to control AI, saying, I don't think they should scale this up more until they have understood whether they can control it. So, this is a pretty large, looming, respected voice in AI that's sounding a little worried. Is this the time we pay attention to this type of warning? Chris, is your radar up? My radar is definitely up, and it's up all over the place, because when I see something like this, my first inclination is to challenge the person and say, okay, you're the one who helped invent this. So the idea that you invented this but never saw any of these possibilities as a thing, and only see them now, doesn't make any sense to me. But then, I can address that if you want. I've got an idea on that, because of some of the things that he said here. He said, I thought it was progressing too slowly. He says, yes, I always knew there would be risks, and he's not saying we should stop developing it. He's saying we should prepare for the consequences. But it is only recently that he has noticed, and he says this in his interview with The New York Times, that something he thought was 20 or 30 years down the road has become closer. So it's not that he didn't know the risks. He thought the risks were worthwhile, that they could guard against them, and that they were coming farther down the road. And now he's worried that the risks are happening faster than he expected, and at a time when nobody is paying attention to the consequences.
I have a question about this, because Hinton is an academic. That is his background. Entrepreneur, sure, he has started companies and stuff like that, but he is an academic. So his interest is in advancing the science of this, right? And his problem is kind of the rubber meeting the road here, right? The competition, and this technology operating where people are rapidly trying to productize it. Does that explain, or diminish, his worry at all? Or is it surprising that someone who is supremely smart is only now realizing the effects of this market competition? Yeah, that's a great question. So essentially you're asking, is that a valid motivation? Are his motivations pure? When I started going through what he was saying, and what he was repeating, and some of his history, it led me to give him the credibility, at least up front. You know, one, he was offered money from the DoD to send his technology to them. He turned it down, which a lot of people wouldn't have done, and he says he does not believe in using AI for war machines. So I was like, okay, you know, game on. And then, on top of that, he had a ton of comments. He held his comments so as not to have them hindered by having to worry about how they would affect Google, which I thought is also an honorable thing to do. So when I put those two pieces together, I think he's pretty honorable in his worries. Now, you know, is he overreacting? I don't know, but I like the fact that the guy that invented it is actually concerned and putting those concerns out publicly instead of behind the scenes. Yeah, I think, I mean, he's also 75.
So at a certain point, he may realize, you know what, it's better for me to get on the record now than to spend the rest of my life in silence. And there's plenty of younger minds doing great things that can pick up the baton now, so I don't feel like I'm leaving the discipline in the lurch. Because, again, he's saying, look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary. As I get older, I realize that you start to think, oh, I don't have a whole lot of time left, and things are moving faster than when I was young, that kind of attitude. The counter to that is what OpenAI's Sam Altman has been saying, which is, we're about to hit a plateau. It won't keep getting better at the same rate that it has been getting better. And I'm not saying one of them is right and one of them is wrong, because I think what Hinton is doing here is not saying, let's stop. He's saying we need to be working on how to counter this stuff. We should not stop working on it, but we shouldn't only be pushing it forward. We should also be working on the safeguards at the same time, and that's what he's not seeing enough of. And from that aspect alone, I find that extremely important, if only for the fact that we do not have a great history of our politicians regulating technology properly, right? In fact, we watch some of these interviews, and it's clear they have no idea what they're seeing, what's going on, and how to talk about it. So, you know, raising these types of alarms, and, to Microsoft and Google's credit, they have started putting it out there and saying, hey, you guys need to regulate this stuff. So to their credit, I think it is a positive thing overall to start getting people aware that we should start looking at this. I think it's significant that Hinton didn't sign either of those letters warning of the risks.
Now, he was still at Google at the time, yes, but he is taking a more nuanced approach to this. And he could have come out in this interview, if he wanted to, and said, I wanted to sign on to it, but I didn't. He could very easily have said that. I do wonder, Tom, to your point about reaching a point in your career and in your life and kind of looking back, I do wonder if this is a realization. Obviously, AI development like this has had tons of money in it for years now, and he's certainly aware of that. But there's this idea of this specific generative AI going to almost a multimodal kind of arrangement, right? He was at the forefront of this. He was the leader in this for decades, right? At the absolute bleeding edge. And not to say that Google is not still there, but there are now numerous other parties. And it's one thing when you're at the bleeding edge and you are leading it, and you realize, I have a grasp of what I think are the moral implications of this. And then to realize, I am dealing now with not just other researchers, but other companies that have, you know, agendas that I don't know about. He's very clear to say Google has been a good steward of AI up until this point, up until I'm leaving the company, basically. But it's not just Google, right? So I think Kenwarfo 4 has a good way of putting it: you should know how to stop the train you started. Speaking of the hype train. Toot toot. Nice one. Yeah, toot toot. All right, well, late last month, the journal Proceedings of the National Academy of Sciences, one of my favorites of the publications, published a paper detailing the world's first wooden transistor. Don't take wooden nickels, but you can accept a wooden transistor. It was created by researchers at the Wallenberg Wood Science Center in Stockholm, Sweden. The researchers created conductive channels inside the pores of balsa wood and used a penetrating
electrolyte to modulate its conductivity. Now, don't expect to see this in a laptop with turbo boost clocks anytime soon. It's pretty big, about three centimeters across, and it switches at less than one hertz, as opposed to, you know, gigahertz. This being a proof-of-principle design, though, the researchers do say smaller transistors with higher currents should be possible. And it could find uses in simple things like on-off switches for solar cells or sensors, and it could be incorporated into wood products and living plants. Some, you know, biodegradable tech might be a lot more feasible with wooden transistors. Yeah, it's some stuff that could look nicer, too, you know. Yeah, you know, home security stuff, transistors that are wood. I don't know if they'll get this fast enough to be able to do anything like that, but it's certainly interesting that they can do it at all, and I think that's what this paper is about, right? Yeah, you know, if I can get a better pepper out of my garden, then I'm on for it. Now I just want IKEA to make one that looks mid-century modern, and that would be perfect. Yeah, and they will. All right, let's check out the mailbag. Yeah, Matt wrote in regarding the story that GM is ending its Chevy Bolt line. He wrote in and said, the good news is that there isn't much that breaks on an EV, so in a few years, hopefully the people who are looking for a commuter EV instead of an ego stroker will be able to buy a, quote, unfashionable old EV at an affordable price. And he says an internal combustion engine is a Rube Goldberg monstrosity in comparison to the simplicity of an EV. This is a great point. If you are someone who wants a more affordable electric vehicle, you might be looking for a used Chevy Bolt, although I don't know, the prices might be up since they don't make them anymore. And I will say, at least anecdotally, my brother owns a Bolt, and my Uber driver on the way home from CES picked me up in a Bolt, and they
both complained, not about the EV stuff, but that the GM car components are very specific to that car, and supply has been an issue. Now, we've been through supply chain hellscape, so, you know, full context there. But going out of production feels like it's maybe a little short for that kind of stuff, so it might be hard to find components going down the road. We don't know. Yeah, the good news is, you know, they have the truck coming out soon. They have the Equinox coming out as well. The Equinox looks pretty awesome from what I've seen so far in an early picture. So, you know, I'm okay with them dropping the Bolt. That's because you drive a truck, though. What about us sedan drivers? Free charging. That's my answer to everything: free charging. Mic drop. All right, thank you, Chris Ashley, for being with us, of course, as always. Let the folks know what you have got going on these days. Yo, come check me out on Barbecue and Tech, especially this week, because we had an awesome interview with a young fella who started a barbecue food truck with his family, and we dig into how they got started and some of the challenges they go through to make that happen. So yeah, definitely check out this episode. It was really, really cool. I've really enjoyed that episode, partly because I've eaten at a bunch of food trucks, and it kind of opened my eyes to what goes into making them and why you make them and stuff. It's an excellent, excellent episode. Well done. Thank you, thank you. We had a lot of fun doing that one. Well, we are dancing for joy, because we have a stately quadrille of new bosses to thank here on Monday. That's right: Lewis, Ryan, Conrad, and Justin all joined the patron ranks. They started backing us on Patreon. So get on your dancing shoes and thank Lewis, Ryan, Conrad, and Justin. You rock. Wow, we got four over the weekend, and there were only three days. I don't know what we all said on Friday, but yeah, well done. All right, those four people are now patrons, and all
the rest of the patrons are going to welcome them into the club, big, you know, pats on the back and smiles. And they get the extended show, they get the longer version of Good Day Internet, where today we're going to talk about the fall of Poparazzi, yet another authentic social network that is closing shop, and we're going to talk about whether we're finding out that maybe people really don't like authenticity as much as they say they do. And your weekend, right? Yeah, we're going to talk a little bit about what I ate on Friday. Yes, or Saturday. Yeah, yeah, absolutely. Well, remember, you can catch the show live Monday through Friday at 4 p.m. Eastern, 2000 UTC, and you can find out more at dailytechnewsshow.com/live. We'll be back tomorrow talking about a Black innovator in the AI space with Nika Monford. I can't wait. We'll see you then. This show is part of the Frogpants Network. Get more at frogpants.com. Diamond Club hopes you have enjoyed this program.