Daily Tech News Show is made possible by its listeners. Thanks to all of you, including Steve Aderola, Jeffrey Zilx and Michael Bullock. Coming up on DTNS, Dr. Kiki is here to talk brain-computer interfaces. What are they really and who are they for? Plus, Nvidia's got new GPUs and AI art is becoming normal. This is the Daily Tech News for Tuesday, September 20th, 2022. In Los Angeles, I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. I'm the show's producer, Roger Chang. And joining us, the host of This Week in Science, Dr. Kiki, joining from Portland, Oregon. How's it going over there in Portland, Oregon? Oh, it's wonderful. It's wonderful. Weather is good in the fall. We haven't yet hit the depressing rainy time of year. Which is the rest of the year until summer. Which is the rest of the year, yeah. Okay.

Well, folks, the Windows 11 version 22H2 update, AKA the 2022 Update, is now available if Microsoft says your machine is eligible, which it hasn't said mine is. So while I wait for that to happen, let's talk about a few tech things you should know.

Amazon sent out an invite to a virtual event taking place on September 28th. The company didn't give details on what it will announce, only saying it would involve devices, features, and services. Which means it could be pretty much anything. Last year at this time, Amazon announced new Echo and Ring devices, as well as the Halo View fitness tracker and the Always Home Cam drone. So again, could be anything. Yeah, it's gonna be stuff like that, though. Devices, features, and services. Amazing.

AMD says it'll launch its Ryzen 7020 series of mobile processors in Q4. Laptops with those little chips are in the $400 to $700 price range. Lenovo's IdeaPad 1, Acer's Aspire 3, and a 17-inch laptop from HP are all gonna have one of those 7020 series chips inside. The 7020 series uses AMD's RDNA 2 graphics architecture. We will get Radeon 7000 series GPUs later, one would suspect, because AMD Radeon SVP and GM Scott Herkelman tweeted Tuesday morning that AMD will launch RDNA 3, that's one more than two, on November 3rd.

Mozilla researchers say that after parsing video recommendation data from more than 20,000 YouTube users, they found buttons like "Not interested," "Dislike," "Stop recommending this channel," and "Remove from watch history" are mostly ineffective when the goal is to stop similar content from being recommended. Mozilla called on volunteers who used its RegretsReporter, a browser extension that overlays a general "stop recommending" button on YouTube videos viewed by participants.

US grocery store chain Wegmans is discontinuing its self-checkout mobile app. Remember we were talking about Instacart doing the mobile app that can let grocery stores do that? Wegmans was not using the Instacart one, but it's no longer gonna use the one that it was using. The Wegmans SCAN app let you scan each item you put in your cart, then scan a barcode at the self-checkout register to get your total amount and pay. Wegmans said in an email to customers that, quote, unfortunately the losses we are experiencing prevent us from continuing to make it available in its current state. So people were just forgetting to scan everything that was in their cart. Sad. Very sad.

Apple will increase App Store prices across Europe and some Asian markets beginning October 5th, also probably sad for some folks, affecting both regular apps and in-app purchases. In Japan there'll be more than a 30% hike, while countries using the euro will see a 20% hike.
Other countries affected include Sweden, Chile, Egypt, Malaysia, Pakistan, Vietnam and South Korea. Developers can change the price of their apps and in-app purchases, including auto-renewable subscriptions, at any time, but the minimum price will now be higher. For example, in the Eurozone, the minimum charge of 99 euro cents was raised to 1.19 euros. So yeah, and you can still do free. They haven't increased the price of free. Right, yeah, like you don't have to charge, but if you do, you have some restrictions now.

All right, let's talk about these Nvidia announcements. Nvidia officially announced its 4000 series of GPUs, or the 40 series, you might hear it called that too. This is the one based on the Ada Lovelace architecture, named after Ada Lovelace. The RTX 4090 will arrive October 12th for $1,599. That's the top of the line. Then there's the RTX 4080. That'll come later in November, starting at $899. The 4090 is gonna ship with 24 gigabytes of GDDR6X memory, enough to make you drool. I mean, enough for Nvidia to claim it is two to four times faster than the 3090 Ti at the same power consumption. Nvidia recommends a PC power supply of at least 850 watts when using it with a Ryzen 5900X-class processor. The RTX 4080 will come in 12 gigabyte and 16 gigabyte models, both also GDDR6X. The 12 gigabyte model will be $899. The 16 gigabyte 4080, $1,199. All three of these cards include updated ShadowPlay support. They can capture 8K video at 60 frames per second in HDR. They support hardware AV1 encoding. And the 4000 series is gonna support the PCIe Gen 5 16-pin power connector without the need for a custom solution as required in the previous gen. Cards will also include an adapter to connect with three standard eight-pin power connectors as a nice option. Power supplies are coming in October from ASUS, Cooler Master, FSP, Gigabyte, iBuyPower, MSI, and Thermaltake. You can expect to see RTX 30 series cards still on the shelves though, because Nvidia has said it made too many of those. You might see those at a discount. The 4090 and the 16 gigabyte 4080 are also going to come as Founders Editions from Nvidia, as you might expect.

Roger, what do you make of the new line? We've got the brand new top of the line from Nvidia now. How do they look to you? They look very impressive. I mean, we won't know until actual third-party benchmarks come out, but it really looks like Nvidia has decided to go all in on ray tracing, to the point that they're not just improving it, they're advancing it with new features, as well as improvements to DLSS, essentially to give you all the added visual benefits. What's interesting is the price points they come in at now. I mean, they're really not targeting your average gamer anymore. They're looking to kind of go the next step up, whether it's a Twitch streamer who does gaming but wants to make sure they put out at least a 4K stream to their audience, or at-home creators who might have other uses for all that GPU horsepower. It's impressive, it's very impressive, but I will also add it's a little too rich for my blood at that price point. Yeah, and it may not even be targeting those people, maybe that's just what they have to charge, and so that's the market they're gonna have to go for.
Well, speaking of announcements, Nvidia also announced a processor for autonomous vehicles called Drive Thor, based on Nvidia's Hopper GPU platform, that's optimized for processing algorithms at two quadrillion operations per second. That's eight times Nvidia's Orin processor, and with 77 billion transistors, Thor can replace multiple chips, saving on expense and saving on power consumption. Nvidia says it uses CPU cores from Nvidia's Grace processor and borrows some elements from the Lovelace architecture as well. Thor will be able to run Linux, QNX, and Android simultaneously to serve different parts of the car. Drive Thor will also have lower-end versions meant for driver assistance systems that don't need all the processing power that fully autonomous systems might need. It'll ship in 2024 and show up in cars in 2025, starting with China's Zeekr 001 EV.

And Nvidia also gave us a look at DLSS 3, the next version of its deep learning super sampling technology. That's the one that can upscale graphics and allegedly quadruple performance over native resolution. It's an algorithm. It'll add bits to either increase frame rate at the same resolution or upscale the resolution without losing performance. So for example, a game could run at 1080p, but DLSS can use machine learning to make it look like it's 4K. DLSS 3 can now generate entirely new frames using an optical flow accelerator that tracks and calculates on-screen object motion vectors, not just pixels, which should reduce stutter. And it'll work with Nvidia Reflex technology to reduce latency and improve responsiveness. In a demo, they boosted Cyberpunk 2077 from less than 30 frames per second to around 100. DLSS 2 could only get that to 60. Only the new RTX 40 series cards are gonna support DLSS 3 at first, because it needs the new fourth-gen tensor cores and the optical flow accelerator. But there are gonna be a bunch of games. They've already announced titles. More than 35 games are gonna integrate support for DLSS 3, with some launching as early as October. So they'll be ready before you can get the cards that can take advantage of them. But yeah, I don't know if anybody here is gonna plunk down for one of the new cards or not, but they look impressive. And it looks like we're gonna have pretty decent support for them out of the gate. I'm also, it's very impressive that they've really upped the ante with the machine learning in the card, because before it was like, well, are they just gonna use it so people can build an array of deep learning servers based on these cards? But they have also leveraged that technology to upsample images, which is great, because it's gonna be a key feature moving forward, just as ray tracing has been, for GPUs to integrate that. I mean, it offers so many benefits that I don't think you can have a competitive high-end card without it.
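To make that frame-generation idea a bit more concrete, here is a minimal, purely illustrative sketch of interpolating an in-between frame from dense optical flow using OpenCV. To be clear, this is not how DLSS 3 works under the hood; DLSS 3 relies on Nvidia's hardware optical flow accelerator and a trained neural network, while this toy version just estimates motion vectors with the classic Farneback method and warps pixels halfway along them.

```python
# Toy illustration of flow-based frame interpolation: estimate per-pixel
# motion vectors between two frames, then warp the first frame halfway
# along them to synthesize an approximate "in between" frame.
# This is NOT Nvidia's DLSS 3 algorithm, just a conceptual analogue.
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Synthesize an approximate frame halfway between frame_a and frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense per-pixel motion vectors from frame_a toward frame_b.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Crude backward-warping approximation: sample frame_a half a motion
    # vector "behind" each output pixel (good enough for illustration).
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Naive warping like this smears pixels around object edges and can't fill in areas that become newly visible, which is exactly the part where the learned model in DLSS 3 is supposed to earn its keep.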
All right, one last Nvidia announcement, regarding large language models. Who doesn't love a large language model? Let's call them LLMs for short. Nvidia just announced its NeMo LLM service and its BioNeMo LLM service, with the promise to make it easy to adapt LLMs and deploy apps for all kinds of uses. One LLM that we hear a lot about these days is GPT-3 from OpenAI. They also make DALL-E, and Sarah, we have some news about DALL-E today as well. And indeed we do. OpenAI is now allowing AI art generator DALL-E to edit images with human faces. You might recall that was previously banned due to fears of misuse.

OpenAI now says the change follows improvements in the filters that remove images containing things like sexual, political, and violent content. Face images can be edited to change hair or clothing, and you can even generate other variations. In a letter to users, OpenAI said the company is also minimizing the potential for harm from deepfakes. Now, it could also be doing this because there are a lot of other text-to-image systems out there. Imagen and Craiyon, Stable Diffusion lets you do pretty much everything you want because you run it locally, Midjourney is very popular. They all allow different, and as I mentioned, sometimes more latitude in what you create.

In fact, John Herrman at nymag.com has an excellent read called "AI Art Is Here and the World Is Already Different." How we work, even think, changes when we can instantly command convincing images into existence. Some of you have been asking about when we would get the next truly new tech advance. I think this may be one of them. Yeah, so a lot of Herrman's article focuses on Midjourney because that's what he's been using. Midjourney is unique in that it doesn't have VC backing, only 10 employees, a small team, but very popular. Users pay $10 to $600 per year for image generation, depending on what they wanna do: new features, licensing rights, et cetera. Its Discord server has two million members, though. People are interested. Free users get a limited number of requests before they have to pay. Paid members get their images delivered by private message in Discord, and then the money is used to pay for the cloud servers and 10,000 or so GPUs that process those requests. But what Herrman has noticed in talking to other Midjourney users is that text-to-image generators are used for a lot of different reasons. Some predictable, some not so much, and it's moved out of the surprising and kind of fun, jokey phase of making a weird image into something that he calls competent and plausible.

Yeah, so here's a few examples from the article, and Kiki, you and Sarah both, I'd like you to think about whether any of these are surprising to you or if they spark other ideas of what people might be using these for. One example is just showing something to somebody from your head. If you have an idea, you don't need to sketch it. You just describe it and Midjourney can actually make a picture of it. It could be a prototype or a decorating idea, whatever. A game designer is using it to make the between 600 and 1,000 unique pieces of art they need for a game, something they as an individual couldn't afford to hire artists to do. What they're getting is good enough for self-publishing, though, and if they get a deal with a distributor, it potentially speeds up the final work from professional artists. A children's author is using it to create pictures for their book, something that would have taken them much longer in the past. They're replacing themselves by saying, well, let's get the AI to do it. A designer for the state of California is using it for pamphlets, saying it's better than the clip art that they would be forced to use otherwise, without the expense of having to pay someone, which they don't have the budget for. An Australian ad agency is looking into it for broader creative options for customers that don't have large budgets, especially global customers. You don't wanna pay for a designer? We can do this for you. There's a design director, you might think, oh, those are the ones who are gonna be mad about this.
A design director is using it to make concept art that can use photography and illustration they wouldn't normally have time for in concept art, and it means their concepts don't look like everybody else's, because they're not pulling from the same stock art that all the other design directors are pulling from. DeviantArt has been flooded with a bunch of Midjourney stuff. It's certainly controversial in the art world. But Kiki, what do you make of this? It does feel like this is expanding rapidly from something that was brand new and novel not that many months ago. Yeah, but I think it really is, it's a tool. It's an advance in the technology that allows creatives to be able to be more creative. It's not creative on its own. And I think that's the big delineation there, right? That's how we get, oh, humans are creative. We come up with the ideas. We're just asking this tool to help us fine-tune things and to give us products that we can use. Yeah, because the creativity's in the prompt, right? It's just shifting the creativity. Go ahead, Sarah. There was probably, you know, in the, I don't know, mid to late 90s, we all had this conversation, like, well, if you're really a good photographer, why would you need Photoshop? Kind of the same conversation, right? It's like, well, why would you need this? If you're a creative person who could generate really wonderful art on your own, this is a tool. Like you said, Kiki, this is 100% a tool, and we're still in the early days. So we don't totally know how the tool is gonna be used or misused. But I think if you're an artist, and I certainly am not, I'm a terrible artist, but I know a lot of people who are a lot better at this than me who are pretty pumped about this, because it allows you as an artist to kind of go to the next level as far as creativity goes, based on the AI giving you that first step.

I think what's so fascinating about this is when it first came out, there were two big reactions, which frankly, from where I sit, are usually the reactions to anything new: this is gonna be misused and it's horrible, or this is dumb and it's overhyped and it's never gonna be useful. Like, those are often the two knee-jerk reactions. And what we're seeing now is that neither of those is true. Well, maybe it's being misused, but certainly not at the level that people were afraid of. More often it's being used, as we all have said, as a tool. And I imagine there are people already saying this, and it's gonna be more common for people to go, yeah, I thought the text-to-image generator might be good for this, but it was just faster for me to do it myself. Like, we'll hit that limit where we'll go, oh, it's good for these things, but it's not good for those things. Totally. As someone with absolutely no artistic skill, I can tell you that if I can tell a computer to draw something for me, it's going to be better than me trying it myself. Yeah, it's always gonna be better for me too. It's always gonna be better. But there are those like Scott Johnson who are gonna be like, oh no, I can do that faster myself. Yeah, it's always better.

What do you wanna hear us talk about on the show, folks? We've got a subreddit that's full of great ideas and yours could be among them. Submit stories and vote on them at dailytechnewsshow.reddit.com.

All right, so over the past few months, we've covered several stories about brain-computer interfaces, or BCIs. How can they help patients move their limbs?
How can they help patients speak again after being paralyzed due to an accident or some sort of illness? But how do they actually work, and what happens between the electrodes of the interface and the brain? Dr. Kiki, we know you've thought a lot about this. Give us a little bit of insight on what you know so far.

Well, brain-computer interfaces generally are any kind of technology that takes signals from the brain and connects them to a computer, and this can be either unidirectional or bidirectional. And most often when we're hearing about these BCIs nowadays, it's in relation to these kind of hard, needle-like electrode arrays that get implanted invasively into the brain. And the issues that we want to be worried about, or that we wanna be thinking about moving forward with these kinds of electrode arrays, are related to the usability, the safety, and the longevity of these devices. So currently you might hear about one individual using a brain-computer interface to allow them to go about doing daily-life kinds of skills and things, because really these devices are trained individual by individual. Our brains are distinct enough that at this point in time, we can't just have an off-the-shelf, hey, put it on your head and/or in your ear or whatever. Not all brains are alike. Not all brains are alike, exactly. Our brains are soft and electrodes are generally hard. So that's another technological challenge. There are a few new developments that have been moving forward to make these invasive devices a little bit less invasive. So, softer electrode arrays. But then you have the problem of, do they actually connect with the soft cells? Do they make contacts that are going to be viable long-term to get the signal across? Some 15 to 20% of people don't even have brain signals that are good for being picked up by these electrode arrays. So they can't even be used by everybody.

And then in terms of the cool, cutting-edge kind of stuff, there is a neural dust that is being developed, which is ultrasonic in nature and would allow using ultrasound signals and other radio-based technologies to pick up signals from this nano-sized dust that could be sprinkled on the surface of your cortex. However, so far it's only been studied with the skull of a mouse open. So if you're a person, you don't necessarily want to be walking around with your skull open to the air so that your nano-dust can be read by whatever device. Well, but for anybody who, and I've had some brain issues in the past and we don't have to get into that right now, but for me, it's like anything that requires, like, you really gotta look at the brain, you gotta do an MRI. So the idea that an ultrasound, AKA a much less invasive version of something, could get you the results that you're looking for is remarkable.

Yeah, there's another technology that's based on stents. So similar to the placement of stents that's used in cardiac surgery to open up arteries and veins going to the heart or coming from the heart, they're also practicing the placement of these electrode arrays within the arteries that go into the brain. So it makes it a lot easier for a relatively less invasive procedure to occur to get electrodes to the place where they can pick up a signal and be reliably used for whatever purpose is necessary. So does that work like a pipe cleaner, kind of, where it's folded up through the artery and then when it gets to the brain, it can kind of open up?
Right, and it would open up, but not enough to actually block any blood flow, because as we know, blood flow to the brain is incredibly important. Yeah, can't, can't, can't, don't block that. Don't block the blood flow to the brain. So, but the device has so far been very successful in mouse models. But what I'm hearing is a lot of, if you have a severe enough condition, this might be worth it. If you're a mouse, you've got the cutting-edge stuff. This is always the case. Mice are always on the cutting edge, yeah. But otherwise, for practical everyday use, we're still in the realm of those things that can kind of try to read things from outside your skull. And those are limited. Right, so, and that's the thing. Invasive surgery is never the road you wanna go down. And at this point in time, the electrodes don't last long enough to really last the lifetime of a person. So if you're young and you have a disability, or even if you wanna be a superhero and just talk telepathically to your computer, having the surgery in which you have to place this electrode array in your brain, you don't wanna go through that multiple times. And there's the chance of infection. There are always problems with the placement of that electrode array. And the placement of the electrode array itself could cause neural problems. So it's not something you wanna take lightly. So people who have a severe disability, that's usually where it starts. And this is where the research is really starting to be helpful, successful, and have a lot of impact. And as the technologies progress, where we can have increased numbers of electrodes in the arrays to make the signals more robust and more accurate, we're gonna be able to read people's brains, quote unquote, a little bit more easily. But we still have to deal with the fact that our bodies don't want foreign objects put inside them, and the immune system will do what it can to get rid of whatever that is. Not to mention, reading people's brains doesn't mean you'll understand. No, it doesn't. It doesn't mean you'll understand it at all. Kiki, I knew what you were thinking about me this whole time. That's not what we're doing. Yeah, so you've got the system components that go into the brain and those get the signal. The signal needs to be read, but then you have to process it and figure it out, yeah, the interpretation has to be accurate. And that again is on a person-by-person basis, because our brains are different, and we haven't gotten enough understanding yet to really be able to have mass production of these devices that can, like I said, be off the shelf.
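As a rough illustration of that read-process-interpret loop on the software side, here is a minimal, hypothetical sketch of a per-user decoding pipeline: band-pass filter the raw multichannel signal, reduce it to simple band-power features, and fit a classifier on labeled recordings from that one individual. The sample rate, channel count, labels, and data are all made up for the example; real decoders are far more involved.

```python
# Hypothetical sketch of a per-user BCI decoding pipeline:
# band-pass filter the raw neural signal, extract band-power features,
# and fit a classifier on that one person's labeled recordings.
# All data here is synthetic; this is an illustration, not a real decoder.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 1000           # sample rate in Hz (assumed)
N_CHANNELS = 64     # electrode count (assumed)

def band_power_features(trials, low=70.0, high=150.0):
    """Average high-gamma band power per channel for each trial."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)   # (n_trials, channels, samples)
    return (filtered ** 2).mean(axis=-1)         # (n_trials, channels)

# Synthetic stand-in for one user's recorded trials and intended actions
# (e.g., 0 = "rest", 1 = "imagine moving the cursor right").
rng = np.random.default_rng(0)
trials = rng.standard_normal((200, N_CHANNELS, FS))   # 200 one-second trials
labels = rng.integers(0, 2, size=200)

X = band_power_features(trials)
decoder = LogisticRegression(max_iter=1000).fit(X, labels)

# New activity from the same user gets decoded into an intended action.
new_trial = rng.standard_normal((1, N_CHANNELS, FS))
print(decoder.predict(band_power_features(new_trial)))
```

The per-user training step is the point made above: a decoder like this only means anything for the brain it was fit on.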
Well, speaking of interpreting things, many of you are familiar with Shazam. You hear a song, you go, I like this song, but what is the song? You hold up your phone, you get the information. Haikubox is Shazam for bird songs. It's a four-by-six-inch box that's about two inches thick, it's pretty small, with a small microphone recording and identifying bird sounds. Weather resistant, designed to be outside, obviously. Wired reports that the company recommends keeping it out of direct sunlight and not submerging it in water, so you have to take some care, but in good conditions, you plug it in, connect to your Wi-Fi network via the Haikubox Connect app, and then it starts recording bird audio. Then it sends those recorded sounds to servers at the Cornell Lab of Ornithology, which has thousands of bird song samples and a neural net to process those. Cornell's library of bird song recordings can tell the difference between actual bird songs and non-bird garden activities. Maybe you're watering your garden and it kind of sounds like a song, but it's not a bird. For those who aren't gonna buy a Haikubox, which is $399, by the way, they can install Cornell's Merlin Bird ID app, which uses a small subset of the data and an AI processor similar to what Haikubox uses as well. Haikubox creator David Mann told Wired that the Haikubox uses a modified version of that same data set. Kiki, I have to assume you love this. Oh, I love this so much. Yeah, I've been waiting for something like this to become available, and the Merlin Bird ID app is amazing, and if you just want something on your phone and you hear a bird singing, it is Shazam. You have your phone, you turn on the app, and it can identify bird songs very reliably. This is exciting because it's passive, and you don't have to always be out in your yard to go, I wonder what that bird is, and have to turn your app on. Since it's recording all these sounds, it can be like, oh hey, this bird flew through your yard and let out a few chirps, and suddenly you know that you've got a migrant species that's passing through, and this is so exciting and interesting for people who are into birdwatching and knowing the animals that are passing through your environment. Yeah, even when you're not around. It's like logging the birds in your neighborhood. Yeah, and the Cornell Lab of Ornithology has been on the forefront of all of this for a really long time, pushing the recording of these bird songs and learning from them, similar to the way Google learns languages and various voice recognition stuff. And so they are on the forefront of these bird song identification tools, and they've got such a huge library. Their data set is massive, but getting these in your yard can also help them to continue to improve their library, and it'll get better and better and better.
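For a flavor of what the identification step can look like in code, here is a minimal, hypothetical sketch of the common spectrogram-then-classify pattern: turn a short clip into a log-mel spectrogram and hand it to a trained model. This is not Cornell's or Haikubox's actual software; the model file, label list, and parameters are placeholders.

```python
# Hypothetical sketch of the spectrogram-then-classify pattern used by
# bird-song identifiers. The trained model is a stand-in; Cornell's and
# Haikubox's real networks and label sets are not reproduced here.
import numpy as np
import librosa

BIRD_LABELS = ["American Robin", "Song Sparrow", "no bird"]  # placeholder labels

def mel_spectrogram(path, sr=32000, clip_seconds=3.0):
    """Load a short clip and convert it to a log-mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr, duration=clip_seconds)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def identify(path, model):
    """Run a (hypothetical) trained classifier over one recorded clip."""
    spec = mel_spectrogram(path)
    scores = model.predict(spec[np.newaxis, ..., np.newaxis])  # batch of one
    return BIRD_LABELS[int(np.argmax(scores))]

# Usage sketch, assuming some previously trained Keras-style model:
#   model = tf.keras.models.load_model("birdsong_classifier.h5")
#   print(identify("backyard_clip.wav", model))
```

In a passive listener, a loop like this effectively runs over whatever the microphone picks up, which is why it can log a bird that only passed through for a few seconds.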
All right, let's check out the mailbag. This one comes from Laurent in unseasonably wet Montreal. Laurent, I hope you're staying dry, who says the use case that you explained, pardon me, for translating content reminds me of something that MrBeast currently does. Laurent is referring to us talking on the show yesterday about the idea of having an avatar who maybe looks like you, sounds like you. You can give them speech and they, for the most part, act as you. Laurent says, if you have a YouTube video that you would like to translate, MrBeast's company does it for you using real voice actors, so you don't get that robotic-sounding voice like many text-to-speech tools. From what I understand, says Laurent, you don't pay for the translation, but he gets 30% of the YouTube ad revenue for that video. Why use that, though? Well, he uses the same voices that people are used to hearing for movie dubbing. So, for example, a video translated to Brazilian Portuguese would use the same voice people are used to hearing on the big screen. That might keep people listening longer than they would with AI voices, plus more jobs. Well, I'm sure if you can afford to share the revenue, yes, you could always pay someone to do the thing that technology could do, so that's interesting, Laurent. Thanks for passing that along. I'm sure there's always a human version. You could pay someone to make the art that we were talking about earlier, but this is an interesting way to monetize it in a way that might make it more accessible.

Yeah, I think- In a world with more money. It just sort of, I don't know, I guess it kind of goes to show that this sort of thing, people want it. Maybe it would be an AI solution. Maybe in other times, you'd want to pay for it to be a little bit more personalized, but this is something that people want as a service. Absolutely.

Well, Kiki Sanford, we're so glad to have you, as always. Give folks a sense of what you do all week when we're not hanging out with you. Oh, well, I wish I could hang out with you all much more often, it's always such a good time. What I normally do is the This Week in Science podcast. You can find us at twis.org, and the Twitter for that is @TWIScience. We broadcast live at 8 p.m. Pacific Time on Wednesday evenings. My personal Twitter is @drkiki, D-R-K-I-K-I, and I am also working with the Association of Science Communicators to develop the professional community of science communicators, and we are currently accepting submissions for speakers for our 2023 conference, which will take place April 6th and 7th, 2023. Well, we're so glad to have you today. Please come back early and often. I would love to, thank you.

Of course, special thanks to Jeff Stark. We sometimes just say we'd like to thank a top lifetime supporter for the show, and you know what? Jeff Stark is the person that we're thanking today. Thank you, Jeff, for all the years of support. Could be you tomorrow if you become a new patron. Could be you if you've been supporting us for a long time. This time it's Jeff. Thank you, Jeff. Indeed. Thank you so much, Jeff, and thank you to all our patrons. Speaking of patrons, stick around for our extended show, Good Day Internet. What will we talk about? Only the, nobody knows. You can catch the show live Monday through Friday at 4 p.m. Eastern. That's 2000 UTC. You can find out more at dailytechnewsshow.com slash live. And we are back doing it all again tomorrow, Scott Johnson joining us. Talk to you then. This show is part of the Frog Pants Network. Get more at frogpants.com. Diamond Club hopes you have enjoyed this program.