Daily Tech News Show is made possible by its listeners. Thanks to all of you, including Steve Aderola, Jeffrey Zilx, and Michael Bullock. Coming up on DTNS, Dr. Kiki is here to talk brain-computer interfaces. What are they really, and who are they for? Plus, NVIDIA's got new GPUs, and AI art is becoming normal. This is the Daily Tech News for Tuesday, September 20, 2022, in Los Angeles. I'm Tom Merritt. And from Studio Redwood, I'm Sarah Lane. I'm the show's producer, Roger Chang. And joining us, the host of This Week in Science, Dr. Kiki, joining from Portland, Oregon. How's it going over there in Portland? Oh, it's wonderful. It's wonderful. The weather is good in the fall. We haven't yet hit the depressing rainy time of year. Which is the rest of the year until summer. Which is the rest of the year. Yeah. OK. Well, folks, the Windows 11 version 22H2 update, AKA the 2022 Update, is now available if Microsoft says your machine is eligible, which it hasn't said mine is. So while I wait for that to happen, let's talk about a few tech things you should know. Amazon sent out an invite to a virtual event taking place on September 28th. The company didn't give details on what it will announce, only saying it would involve devices, features, and services, which means it could be pretty much anything. Last year at this time, Amazon announced new Echo and Ring devices, as well as the Halo View fitness tracker and the Always Home Cam drone. So again, could be anything. Yeah, it's going to be stuff like that, though. Devices, features, and services. Amazing. AMD says it'll launch its Ryzen 7020 series of mobile chips in Q4, destined for laptops in the $400 to $700 price range. Lenovo's IdeaPad 1, Acer's Aspire 3, and a 17-inch laptop from HP are all going to have one of those 7020 series chips inside. The 7020 series has integrated graphics based on AMD's RDNA 2 architecture. And we will get more 7000 series GPUs later.
One would suspect, because AMD Radeon SVP and GM Scott Herkelman tweeted Tuesday morning that AMD will launch RDNA 3 (that's one more than two) on November 3rd. Mozilla researchers say that, after parsing video recommendations data from more than 20,000 YouTube users, buttons like Not Interested, Dislike, Stop Recommending This Channel, and Remove From Watch History are mostly ineffective when the goal is to stop similar content from being recommended. Mozilla called on volunteers who used its RegretsReporter, a browser extension that overlays a general Stop Recommending button on YouTube videos viewed by participants. US grocery store chain Wegmans is discontinuing its self-checkout mobile app. Remember we were talking about Instacart offering a mobile app that could let grocery stores do that? Wegmans was not using the Instacart one, but it's no longer going to use the one that it was using. Wegmans SCAN let you scan each item you put in your cart, then scan a barcode at the self-checkout register to get your total amount and pay. Wegmans said in an email to customers that, quote, unfortunately the losses we are experiencing prevent us from continuing to make it available in its current state. So people were just forgetting to scan everything that was in their cart. Sad. Very sad. Apple will increase App Store prices across Europe and some Asian markets beginning October 5th, also probably sad for some folks, affecting both regular apps and in-app purchases. In Japan, there'll be more than a 30% hike, while countries using the euro will see a 20% hike. Other countries affected include Sweden, Chile, Egypt, Malaysia, Pakistan, Vietnam, and South Korea. Developers can change the price of their apps and in-app purchases, including auto-renewable subscriptions, at any time, but the minimum price will now be higher. For example, in the eurozone, the minimum charge of 99 euro cents was raised to 1.19 euros.
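A quick sanity check on those numbers (just illustrative arithmetic, not anything from Apple's announcement): the jump from €0.99 to €1.19 works out to right around the 20% hike quoted for euro countries.

```python
# Eurozone minimum app price rose from 0.99 to 1.19 euros.
old_minimum = 0.99
new_minimum = 1.19
hike_pct = (new_minimum - old_minimum) / old_minimum * 100
print(f"{hike_pct:.1f}% increase")  # roughly the 20% hike quoted for euro countries
```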
So yeah, and you can still do free. They haven't increased the price of free. Right. Yeah. Like, you don't have to charge, but if you do, you have some restrictions now. All right, let's talk about these NVIDIA announcements. NVIDIA officially announced its 4000 series of GPUs, or the 40 series, you might hear it called that too. This is the one based on the Ada Lovelace architecture, named after Ada Lovelace. The RTX 4090 will arrive October 12th for $1,599. That's the top of the line. Then there's the RTX 4080. That'll come later, in November, starting at $899. The 4090 is going to ship with 24 gigabytes of GDDR6X memory, enough to make you drool. I mean, enough for NVIDIA to claim it is two to four times faster than the 3090 Ti at the same power consumption. NVIDIA recommends a PC power supply of at least 850 watts when using it with a Ryzen 5900X class processor. The RTX 4080 will come in 12 gigabyte and 16 gigabyte models, both also GDDR6X. The 12 gigabyte model will be $899, the 16 gigabyte 4080 $1,199. All three of these cards include updated ShadowPlay support. They can capture 8K video at 60 frames per second in HDR. They support hardware AV1 encoding. And the 40 series is going to support PCIe Gen 5 16-pin power connectors without the need for a custom solution, as was required in the previous gen. Cards will also include an adapter to connect with three standard 8-pin power connectors as a nice option. Power supplies are coming in October from Asus, Cooler Master, FSP, Gigabyte, iBuyPower, MSI, and Thermaltake. You can expect to see RTX 30 series still on the shelves, though, because NVIDIA has said it made too many of those. You might see those at a discount. The 4090 and the 16 gigabyte 4080 are also going to come as Founders Editions from NVIDIA, as you might expect. Roger, what do you make of the new line? We've got NVIDIA's brand new top of the line now. How do they look to you? They look very impressive.
I mean, we won't know until actual third-party benchmarks come out, but it really looks like NVIDIA has decided to go all in on ray tracing, to the point that they're not just improving it, they're advancing it with new features, as well as improvements to DLSS, essentially to give you all the added visual benefits. What's interesting is the price points they come in at now. I mean, they're really not targeting your average gamer anymore. They're looking to go the next step up, whether it's a Twitch streamer who does gaming but wants to make sure they put out at least a 4K stream to their audience, or at-home creators who might have other uses for all that GPU horsepower. It's very impressive, but I will also add it's a little too rich for my blood at that price point. Yeah. And it may not even be targeting those people. Maybe that's just what they have to charge, and so that's the market they're going to have to go for. Well, speaking of announcements, NVIDIA also announced a processor for autonomous vehicles called Drive Thor, based on NVIDIA's Hopper GPU platform, that's optimized for processing algorithms at two quadrillion operations per second. That's eight times NVIDIA's Orin processor. And with 77 billion transistors, Thor can replace multiple chips, saving on expense and on power consumption. NVIDIA says it uses CPU cores from NVIDIA's Grace processor and borrows some elements from the Lovelace architecture as well. Thor will be able to run Linux, QNX, and Android simultaneously to serve different parts of the car. Drive Thor will also have lower-end versions meant for driver-assistance systems that don't need all the processing power that fully autonomous systems might. It'll ship in 2024 and show up in cars in 2025, starting with China's Zeekr 001 EV. And NVIDIA also gave us a look at DLSS 3, the next version of its deep learning supersampling technology.
That's the one that upscales graphics and allegedly quadruples performance over native resolution. It's an algorithm that adds bits to either increase frame rate at the same resolution or upscale the resolution without losing performance. So, for example, a game could run at 1080p, but DLSS can use machine learning to make it look like it's 4K. DLSS 3 can now generate entire new frames using an optical flow accelerator that can track and calculate on-screen object motion vectors, not just pixels, which should reduce stutter. And it'll work with NVIDIA Reflex technology to reduce latency and improve responsiveness. In a demo, they boosted Cyberpunk 2077 from less than 30 frames per second to around 100. DLSS 2 could only get that to 60. Only the new RTX 40 series cards are going to support DLSS 3 at first, because it needs the new 4th-gen Tensor cores and the optical flow accelerator. But there are going to be a bunch of games. They've already announced titles. More than 35 games are going to integrate support for DLSS 3, with some launching as early as October, so they'll be ready before you can get the cards that can take advantage of them. But yeah, I don't know if anybody here is going to plunk down for one of the new cards or not, but they look impressive, and it looks like we're going to have pretty decent support from games out of the gate. It's also very impressive that they've really upped the ante with the machine learning in the card, because before it was like, well, are they just going to use it so people can build an array of deep learning servers based on these cards? But they have also leveraged that technology to upsample images, which is great, because it is going to be a key feature moving forward, just as ray tracing has been, for GPUs to integrate. I mean, it offers so many benefits that I don't think you can have a competitive high-end card without it.
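To make the frame-generation idea concrete, here's a toy sketch (my own illustration, not NVIDIA's actual DLSS 3 pipeline; the function name and the simple forward-warp approach are assumptions for the example): given a rendered frame and per-pixel motion vectors like the ones the optical flow accelerator produces, you can synthesize an in-between frame by pushing each pixel halfway along its vector.

```python
import numpy as np

def generate_intermediate_frame(prev_frame, motion_vectors):
    """Toy motion-vector-based frame generation.

    prev_frame: (H, W) grayscale image.
    motion_vectors: (H, W, 2) per-pixel (dy, dx) motion toward the next frame.
    Returns a synthesized frame halfway in time, made by moving each pixel
    half of its motion vector forward (a simple forward warp, or "splat").
    """
    h, w = prev_frame.shape
    out = np.zeros_like(prev_frame)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination coordinates: each pixel travels half its motion vector.
    ny = np.clip((ys + motion_vectors[..., 0] / 2).round().astype(int), 0, h - 1)
    nx = np.clip((xs + motion_vectors[..., 1] / 2).round().astype(int), 0, w - 1)
    out[ny, nx] = prev_frame
    return out

# A bright 2x2 square moving right by 4 pixels per frame ends up
# shifted right by 2 pixels in the generated in-between frame.
frame = np.zeros((8, 8))
frame[2:4, 1:3] = 1.0
motion = np.zeros((8, 8, 2))
motion[..., 1] = 4.0  # everything moves 4 px to the right
mid = generate_intermediate_frame(frame, motion)
```

Real frame generation also has to fill the holes a forward warp leaves behind and blend against the next rendered frame; DLSS 3 uses a trained network for that part, which this sketch leaves out.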
All right, one last NVIDIA announcement, regarding large language models. Who doesn't love a large language model? Let's call them LLMs for short. NVIDIA just announced its NeMo LLM service and its BioNeMo LLM, with the promise to make it easy to adapt LLMs and deploy apps for all kinds of uses. One LLM that we hear a lot about these days is GPT-3 from OpenAI. They also make DALL-E, and Sarah, we have some news about DALL-E today as well. Indeed we do. OpenAI is now allowing AI art generator DALL-E to edit images with human faces. You might recall that was previously banned due to fears of misuse. OpenAI now says the change follows improvements in the filters that remove images containing things like sexual, political, and violent content. Face images can be edited to change hair or clothing, and other variations are permitted too. In a letter to users, OpenAI said the company is also minimizing the potential for harm from deepfakes. Now, it could also be doing this because there are a lot of other text-to-image systems out there. Imagen, Craiyon, Stable Diffusion, which lets you do pretty much anything you want because you run it locally. Midjourney is very popular. They all allow different, and as I mentioned sometimes more, latitude in what you create. In fact, John Herrman at nymag.com has an excellent read called "AI Art Is Here and the World Is Already Different: How we work, even think, changes when we can instantly command convincing images into existence." Some of you have been asking when we would get the next truly new tech advance. I think this may be one of them. Yeah, so a lot of Herrman's article focuses on Midjourney, because that's what he's been using. Midjourney is unique in that it doesn't have VC backing and has only 10 employees, a small team but very popular. Users pay $10 to $600 per year for image generation, depending on what they want to do: new features, licensing rights, etc. Its Discord server has 2 million members, though. People are interested.
Free users get a limited number of requests before they have to pay. Paid members get their images delivered by private message in Discord, and the money is used to pay for the cloud servers and 10,000 or so GPUs that process those requests. But what Herrman has noticed in talking to other Midjourney users is that text-to-image generators are used for a lot of different reasons, some predictable, some not so much, and it's moved out of the surprising and kind of fun, jokey phase of making a weird image into something he calls competent and plausible. Yeah, so here are a few examples from the article, and Kiki and Sarah, I'd like you both to think about whether any of these are surprising to you, or if they spark other ideas of what people might be using these for. One example is just showing somebody something from your head. If you have an idea, you don't need to sketch it. You just describe it, and Midjourney can actually make a picture of it. It could be a prototype or a decorating idea, whatever. A game designer is using it to make the between 600 and 1,000 unique pieces of art they need for a game, something they as an individual couldn't afford to hire artists to do. What they're getting is good enough for self-publishing, though, and if they get a deal with a distributor, it potentially speeds up the final work from professional artists. A children's author is using it to create pictures for their book, something that would have taken them much longer in the past. They're replacing themselves by saying, well, let's get the AI to do it. A designer for the state of California is using it for pamphlets, saying it's better than the clip art they would be forced to use otherwise, without the expense of having to pay someone, which they don't have the budget for. An Australian ad agency is looking into it for broader creative options for its customers that don't have large budgets, especially global customers.
You don't want to pay for a designer? We can do this for you. Then there's a design director. You might think, oh, they're the ones who are going to be mad about this. But a design director is using it to make concept art that can use photography and illustration styles they wouldn't normally have time for in concept art, and it means their concepts don't look like everybody else's, because they're not pulling from the same stock art that all the other design directors are pulling from. DeviantArt has been flooded with a bunch of Midjourney stuff. It's certainly controversial in the art world. But Kiki, what do you make of this? It does feel like this is expanding rapidly from something that was brand new and novel not that many months ago. Yeah, but I think it really is a tool. It's an advance in the technology that allows creatives to be more creative. It's not creative on its own, and I think that's the big delineation there, right? That's how we get, you know, oh, humans are creative. We come up with the ideas. We're just asking this tool to help us fine-tune things and to give us products that we can use. Yeah, because the creativity is in the prompt, right? We're just shifting where the creativity goes. Go ahead, Sarah.
There was probably, you know, in the, I don't know, mid to late 90s, we all had this conversation: well, if you're really a good photographer, why would you need Photoshop? Kind of the same conversation, right? It's like, well, why would you need this if you're a creative person who could generate really wonderful art on your own? This is a tool, like you said, Kiki. This is 100% a tool, and we're still in the early days, so we don't totally know how the tool is going to be used or misused. But I think if you're an artist, and I certainly am a terrible artist, but I know a lot of people who are a lot better at this than me who are pretty excited about this, because it allows you as an artist to kind of go to the next level as far as creativity goes, based on the AI giving you that first step. I think what's so fascinating about this is, when it first came out, the two big reactions, which frankly, from where I sit, are usually the reactions to anything new, were: this is going to be misused, or this is dumb and overhyped and it's never going to be useful. Those are often the two knee-jerk reactions, and what we're seeing now is that neither of those is true. Well, maybe it's being misused, but certainly not at the level people were afraid of. More often it's being used, as we all have said, as a tool. And I imagine there are people already saying this, and it's going to be more common, for people to go, yeah, I thought the text-to-image generator might be good for this, but it was just faster for me to do it myself. Like, we'll hit that limit where we go, oh, it's good for these things, but it's not good for those things. Totally. As someone with absolutely no artistic skill, I can tell that if I can tell a computer to draw something for me, it's going to be better than me trying it myself. It's always going to be better for me, too. Always going to be better. But there are those, like Scott Johnson, who are going to be like, oh no, I can do that
faster. What do you want to hear us talk about on the show, folks? We've got a subreddit that's full of great ideas, and yours could be among them. Submit stories and vote on them at dailytechnewsshow.reddit.com. All right, so over the past few months we've covered several stories about brain-computer interfaces, or BCIs. How can they help patients move their limbs? How can they help patients speak again after being paralyzed due to an accident or some sort of illness? But how do they actually work, and what happens between the electrodes of the interface and the brain? Dr. Kiki, we know you've thought a lot about this. Give us a little bit of insight on what you know so far. Well, brain-computer interfaces generally are any kind of technology that takes signals from the brain and connects them to a computer, and this can be either unidirectional or bidirectional. Most often when we're hearing about these BCIs nowadays, it's in relation to these kind of hard, needle-like electrode arrays that get implanted invasively into the brain. And the issues that we want to be thinking about moving forward with these kinds of electrode arrays are related to the usability, the safety, and the longevity of these devices. So currently you might hear about one individual using a brain-computer interface to allow them to go about daily life skills and things, because really these devices are trained individual by individual. Our brains are distinct enough that at this point in time we can't just have an off-the-shelf, hey, put it on your head, or in your ear, or whatever. All brains are alike. Not all brains are alike, exactly. Our brains are soft, and electrodes are generally hard, so that's another technological challenge. There are a few new developments that have been moving forward to make these invasive devices a little bit less invasive, so softer electrode arrays, but then you have the problem of
do they actually connect with the soft cells? Do they make contacts that are going to be viable long-term to get the signal across? Some 15 to 20 percent of people don't even have brain signals that are good for being picked up by these electrode arrays, so they can't even be used by everybody. And then, in terms of the cool cutting-edge kind of stuff, there is a neural dust being developed that is ultrasonic in nature and will allow using ultrasound signals and other radio-based technologies to pick up signals from this nano-sized dust that could be sprinkled on the surface of your cortex. However, so far it's only been studied with the skull of a mouse open, and if you're a person, you don't necessarily want to be walking around with your skull open to the air so that your nano dust can be read by whatever devices. Well, but for anybody who, you know, and I've had some brain issues in the past, and we don't have to get into that right now, but for me it's like, anything that requires you to really look at the brain, you've got to do MRIs. So the idea of an ultrasound, aka a much less invasive version of something that could get you the results you're looking for, is remarkable. Yeah. There's another technology that's based on stents. So, similar to the placement of stents used in cardiac surgery to open up arteries and veins going to or coming from the heart, they're also practicing the placement of these electrode arrays within the arteries that go into the brain. That makes for a relatively less invasive procedure to get electrodes to the place where they can pick up a signal and be reliably used for whatever purpose is necessary. So does that work like a pipe cleaner, kind of, where it's folded up through the artery, and then when it gets to the brain it can kind of open up? Right, and it would open up, not enough to actually block any blood flow, because as we know, blood flow to the
brain is incredibly important. Yeah, don't block the blood flow to the brain. But the device has so far been very successful in mouse models. What I'm hearing is a lot of, if you have a severe enough condition, this might be worth it. If you're a mouse, you've got the cutting-edge stuff. That's always the case. Mice are always on the cutting edge. But otherwise, for practical everyday use, we're still in the realm of those things that try to read signals from outside your skull, and those are limited, right? And that's the thing. Invasive surgery is never the road you want to go down, and at this point in time the electrodes don't last long enough to really last the lifetime of a person. So if you're young and you have a disability, or even if you want to be a superhero and, you know, just talk telepathically to your computer, the surgery in which you have to place this electrode array in your brain, you don't want to go through that multiple times. And there's the chance of infection. There are always problems with the placement of that electrode array, and the placement of the electrode array itself could cause neural problems. So it's not something you want to take lightly. So people who have severe disability, that's usually where it starts, and this is where the research is really starting to be successful and have a lot of impact. And as the technologies progress, where we can have increased numbers of electrodes in the arrays to make the signals more robust and more accurate, we're going to be able to read people's brains, quote unquote, a little bit more easily. But we still have to deal with the fact that our bodies don't want foreign objects put inside them, and the immune system will do what it can to get rid of whatever that is. Not to mention, reading people's brains doesn't mean you'll understand
it at all, Kiki. I knew what you were thinking about me this whole time. Yeah, that's not what we're doing. Yeah, so you've got the system components that go into the brain, and those get the signal. The signal needs to be read, but then you have to process it, and the interpretation has to be accurate. And that, again, is on a person-by-person basis, because our brains are different, and we haven't gotten enough understanding yet to really be able to have mass production of these devices that can, like I said, be off the shelf. Well, speaking of interpreting things, many of you are familiar with Shazam. You know, you hear a song, you go, I like the song, but what is the song? You hold up your phone, you get the information. Haikubox is Shazam for bird songs. It's a four-by-six-inch box that's about two inches thick, so it's pretty small, with a small microphone recording and identifying bird sounds. It's weather resistant, designed to be outside, obviously, though Wired reports that the company recommends keeping it out of direct sunlight and not submerging it in water. So you have to take some care, but in good conditions, you plug it in, connect it to your WiFi network via the Haikubox app, and then it starts recording bird audio. Then it sends those recorded sounds to servers at the Cornell Lab of Ornithology, which has thousands of bird song samples and a neural net to process them. Cornell's library of bird song recordings can tell the difference between actual bird songs and non-bird garden activities. Maybe you're watering your garden and it kind of sounds like a song, but it's not a bird. For those who aren't going to buy a Haikubox, which is $399 by the way, they can install Cornell's Merlin Bird ID app, which uses a smaller subset of the data and an AI process similar to what Haikubox uses. Haikubox creator David Mann told Wired that the Haikubox uses a modified version of that same data set. Kiki, I have
to assume you love this. Oh, I love this so much. Yeah, I've actually been waiting for something like this to become available, and the Merlin Bird ID app is amazing. If you just want something on your phone and you hear a bird singing, it is Shazam. You have your phone, you turn on the app, and it can identify birds very reliably. This is exciting because it's passive, and you don't have to always be out in your yard to go, I wonder what that bird is, and have to turn your app on. Since it's recording all these sounds, it can be like, oh hey, this bird flew through your yard and made a few chirps, and suddenly you know that you've got a migrant species passing through. And this is so exciting and interesting for people who are into bird watching and knowing the animals that are passing through your environment. Yeah, even when you're not around. It's like logging the birds in your neighborhood. Yeah, and the Cornell Lab of Ornithology has been on the forefront of all of this for a really long time, pushing the recording of these bird songs and learning, similar to Google learning languages and various voice narrative stuff. So they are on the forefront of these birdsong identification tools, and they've got such a huge library. Their data set is massive. But getting these in your yard can also help them continue to improve their library, and it'll get better and better and better. All right, let's check out the mailbag. Yeah, this one comes from Loran in unseasonably wet Montreal. Loran, hope you're staying dry. Loran says, the use case that you explained, pardon me for translating content, reminds me of something that Mr. Beast currently does. Loran is referring to us talking on the show yesterday about the idea of having an avatar who maybe looks like you, sounds like you. You can give them speech, and they for the most part act as you. Loran says, if you have a YouTube video that
you would like to translate, Mr. Beast's company does it for you using real voice actors, so you don't get that robotic-sounding voice like many text-to-speech tools. From what I understand, says Loran, you don't pay for the translation, but he gets 30% of the YouTube ad revenue for that video. Why use that, though? Well, he uses the same voices that people are used to hearing for movie dubbing. So, for example, a video translated to Brazilian Portuguese would use the same voice people are used to hearing on the big screen. That might keep people listening longer than they would with AI voices. Plus, more jobs. Well, I'm sure if you can afford to share the revenue, yes, you could always pay someone to do the thing that technology could do. So that's interesting. Loran, thanks for passing that along. There's always a human version. You could pay someone to make the art we were talking about earlier, but this is an interesting way to monetize it that might make it more accessible. It kind of goes to show that people want this sort of thing. Maybe it would be an AI solution, maybe in other cases you'd want to pay for it to be a little bit more personalized, but this is something that people want as a service. Absolutely. Well, Kiki Sanford, we're so glad to have you, as always. Give folks a sense of what you do all week when we're not hanging out with you. Oh, well, I wish I could hang out with you all much more often. It's always such a good time. What I normally do is the This Week in Science podcast. You can find us at twis.org, and the Twitter for that is @TWIScience. We broadcast live at 8 p.m. Pacific time on Wednesday evenings. My personal Twitter is @drkiki, and I am also working with the Association of Science Communicators to develop the professional community of science communicators. We are currently accepting submissions for
speakers for our 2023 conference, which will take place April 6th and 7th, 2023. Well, we're so glad to have you today. Please come back early and often. I would love to. Thank you, of course. Special thanks to Jeff Stark. We sometimes just say we'd like to thank a top lifetime supporter for the show, and you know what? Jeff Stark is the person we're thanking today. Thank you, Jeff, for all the years of support. Could be you tomorrow if you become a new patron. Could be you if you've been supporting us for a long time. This time it's Jeff. Thank you, Jeff. Indeed, thank you so much, Jeff, and thank you to all our patrons. Speaking of patrons, stick around for our extended show, Good Day Internet. What will we talk about? Nobody knows. You can catch the show live Monday through Friday at 4 p.m. Eastern. That's 2000 UTC. You can find out more at dailytechnewsshow.com/live, and we are back doing it all again tomorrow, Scott Johnson joining us. Talk to you then. This show is part of the Frogpants network. Get more at frogpants.com. Diamond Club hopes you have enjoyed this program.