Hi. Oh, friends, what's going on? I thought it'd be fun for the office hours today to try to determine what the three biggest tech stories of the week are. We can try a couple of ways to do this. See, I'm going to look at my Substack newsletter. So, last Friday. My goodness, Maga Raymond, good to see you. Robots get closer to roaming your neighborhood, that was embodied AI. An AI spins a pen. OK. Matter, I think the Matter one. Hey, Clinton. Hey, BioCal. Hey, Kilbot 404. Hey, Zoe brings bacon. Matter announcing its new updates. I feel like that is one of the biggest of the week. Yeah. Biggest development in AI in months, that's the Nature story: they figured out how to teach an algorithm to generalize language and even be inaccurate in the way humans are. I think that might be one of the keys there. What do you think? What do you think is the biggest tech story of the week? Automattic acquiring Texts, I think, is under-reported.

Of course, we could also do this. I'm looking at the wrong one. Let's see. I was like, why is that not showing? It should be this. There you go. All right, so a collaborative editing. No, a little more. That's long. That should have a break in it. Meta earnings better than expected. Sony plans on shipping the PS5 Slim's optical drive needing updates. That was a pretty big one. That's the one about the box saying you have to be connected to the internet in order to activate your optical drive. Qualcomm's new chip lines. Pretty big story. But do we care? The Amazon image generation tools for advertisers. I guess so. The big one would be the fact that we had a TikTok video go viral on the Daily Tech News Show account. And it was about the poisoning. You know what? That's probably the biggest story of the week, actually.
The poison, the Nightshade, giving artists the ability to poison the data set if you use their art without their permission, or even with their permission, frankly. Yeah, I'll put that up there. Qualcomm chips. Artificial wombs. That was more of a, you know, an evergreen discussion. Posse. NVIDIA to start making Arm CPUs for Windows PCs. Matter. Spider-Man 2 beats records. So let's go. Matter update. Maybe not. It doesn't even show up in Google News. Qualcomm chips. AI Nightshade poison. OK, so where's that? Let's just get to the Matter. Let's get right to the heart of the Matter. I was thinking Matter personally, but I don't think it is. I think it's Qualcomm's new chips, just because it's significant and because of that headphone chip: they announced a new headphone chip that's going to use Wi-Fi to improve audio. So you can do lossless audio and improve range. So it'll hand off seamlessly, they say, between Bluetooth and Wi-Fi while you're listening. We'll see how seamless it actually is. I feel like that's a good one. This one is the data poisoning tool, so that you can make sure that your images aren't productive for use in a training set. And then Matter expanding compatibility. Is that a big one? Hey, Lucky. I really like the Automattic one.

You know what, I might go with this instead of Matter: the new training method. This is hard to explain, but what they did, if I'm understanding this correctly, is with GPT-3 and GPT-4, they pointed the model at a lot of text and said, OK, figure out the patterns here and be able to imitate them. And that's, very much oversimplifying of course, how ChatGPT works. It knows what the most likely next word is. And so when you ask it a question, it goes, OK, given that pattern of words, the most likely next word should be X. And then the next one after that should be this.
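The next-word loop described there can be sketched with a toy bigram table. This is a minimal illustration with made-up vocabulary and counts, not how GPT-3 or GPT-4 actually work; real models run neural networks over tokens rather than lookup tables.

```python
import random

# Toy bigram "language model": for each word, the counts of words that
# followed it in some tiny imaginary training text. (Invented data,
# purely to illustrate the idea of next-word prediction.)
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "dog": {"ran": 4},
    "sat": {"down": 3},
    "ran": {"away": 3},
}

def most_likely_next(word):
    """Greedy choice: always pick the single most likely next word."""
    followers = bigram_counts[word]
    return max(followers, key=followers.get)

def sample_next(word, rng):
    """Sampled choice: pick a next word in proportion to its count.
    This is why asking the same question twice can give different answers."""
    followers = bigram_counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, steps, rng=None):
    """Extend `start` one word at a time, greedily or by sampling."""
    out = [start]
    for _ in range(steps):
        word = out[-1]
        if word not in bigram_counts:
            break  # no known continuation
        out.append(most_likely_next(word) if rng is None else sample_next(word, rng))
    return " ".join(out)
```

The greedy version always emits the same continuation; the sampled version is the loose analogy for getting an entirely different answer to the same question.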
And then it doesn't know what it's doing, which is why you can sometimes ask it the same question again and it'll give you an entirely different answer. It's why it hallucinates facts: because it's not looking things up, it's just drawing from what it knows about what's likely to happen next.

This trained a system that could understand novel and complex expressions. So it was doing something closer to thinking, in that it was able to construct sentences based on being trained on how sentences should be constructed. Now, it gets a little wonky, because what they did is they trained it on a made-up language just to keep things simple. So it was like colored dots and symbols; none of it was real words. But it was able to do it and do it well. They also trained humans to do this fake language. And they found that the optimized neural network responded 100 percent accurately while humans did not. Large language models were bad at this; they were only 58 percent accurate. But, and this is the key, when they trained the neural network more, it got more human-like. So it got less accurate. It made mistakes, but it made the same mistakes a human makes, which indicates it was thinking, quote unquote, more like a human.

Now, granted, this is not self-awareness. One of the biggest differences between a true artificial general intelligence and a human would be self-awareness. If an algorithm is self-aware, can contemplate itself, then it becomes like us, then it's sentient. So this is far from that. But its mimicry is much more like us, which may not be good for accuracy, but could be good for personality. This training method could theoretically provide an alternate path to better AI. Once you've fed a model the whole Internet, there's no second Internet to feed it.
So I think strategies that force models to reason better, even in synthetic tasks, could have an impact going forward. So in other words, they didn't have to train this on everything available. They didn't have to use anybody else's intellectual property. They just trained it to reason. Then you can feed it facts and say, like, here, look at Wikipedia, and it could actually talk about it the way a human would. It's pretty interesting. Here, I'll put this in the chat room for you. Oh, it's too early to tell if this is being underreported, because it's too early in the day, but I have a feeling this is going to come around to be a big deal. And I don't fully understand it yet. Now, I'm sure I've made some small mistakes in trying to explain it to you, but I feel confident I got the majority of it right. Oh yeah, I think that might be the biggest story of the week. And then Qualcomm, and then the data poisoning. I know a lot of you are still skeptical that AI is real, that it's worth all of this attention. But I think it is.

Shall we see what has been breaking news lately? Threads got polls and GIFs. Google Fiber at 20 gigabit per second. Wow. A new Windows 11 Insider build. You watched an hour-long interview with whom? Oh, about the AI stuff. Yeah, I get you. All right. Shall we call it then? Any other nominees? Nightshade, the Qualcomm announcements, and this New York University... was it just New York University? Let's see. Pull up the actual Nature article. Brenden M. Lake and Marco Baroni. NYU and, oh, the Catalan Institution for Research and Advanced Studies in Barcelona. Oh, OK. So it's two professors at different places. "The power of human language and thought arises from systematic compositionality, the algebraic ability to understand and produce novel combinations from known components." That's a really overwritten statement. I think I understand it.
In other words, human language is really good because we can swap things in and out and it still makes sense, and we know how to do that. "Fodor and Pylyshyn famously argued that artificial neural networks lack this capacity and therefore are not viable models of the human mind." They can't compose things and produce novel combinations just from knowing how to compose. "Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here, we successfully address Fodor and Pylyshyn's challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. To compare humans and machines, we conducted human behavioral experiments using an instruction-learning paradigm." That's why they did the weird color-coded balls and symbols we were looking at earlier: they wanted it to be a baseline comparison. The humans are learning it from scratch, and the machine is learning it from scratch. "After considering seven different models, we found that, in contrast to perfectly systematic but rigid probabilistic symbolic models and perfectly flexible but unsystematic neural networks, only MLC achieves both the systematicity and flexibility needed for human-like generalization. MLC also advances the compositional skills of machine learning systems on several systematic generalization benchmarks. Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison." Yeah, basically you train it to do it this way and it acts like a human. There you go. All right. Yeah, it is complex stuff, Clinton. I'll tell you what. So there you go.
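The instruction-learning setup being described can be made concrete with a tiny interpreter for a made-up compositional language. The vocabulary below is invented for illustration and is not the actual word list from the Lake and Baroni paper; the point is just that knowing the primitives and the function words is enough to handle brand-new combinations.

```python
# Hypothetical primitives: made-up words that each map to a colored-dot
# symbol, loosely in the spirit of the paper's instruction-learning tasks.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(instruction):
    """Map an instruction like 'dax thrice' or 'dax after wif'
    to a sequence of colored-dot symbols."""
    words = instruction.split()
    # Function word 'thrice': repeat the preceding primitive three times.
    if len(words) == 2 and words[1] == "thrice":
        return [PRIMITIVES[words[0]]] * 3
    # Function word 'after': emit the right-hand item, then the left-hand item.
    if len(words) == 3 and words[1] == "after":
        return [PRIMITIVES[words[2]], PRIMITIVES[words[0]]]
    # Otherwise, just translate each primitive in order.
    return [PRIMITIVES[w] for w in words]
```

Systematicity is the ability to handle combinations you were never shown: once you know what "lug" means and what "thrice" does, "lug thrice" follows, even if that exact phrase never appeared in training. That is the kind of generalization the paper tests humans and models on.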
I got to the top three stories rather quickly. Unless anybody wants to make a nomination. Big story of the week. You know, another way to look at this. What happened to my... oh, I didn't have it open. Let me open the DTNS doc. Another way to look at this: what was our A block? Our A block is generally the buzzy story. So it was the Qualcomm chips yesterday. It was Nightshade, which, there you go, too. It was Matter, which is another one I was considering. And then on Friday it was NVIDIA. Oh, it was all these different AIs: NVIDIA's pen-spinning tricks, Meta doing embodiment. There are no coffee tech stories this week, Zoe, I'm sorry to say. And then, I don't want to spoil things too much: Meta earnings. It looks like that's probably going to be the A block. Trish Hershberger is going to be on DTNS talking about TwitchCon, too. Let's look, though, shall we? Coffee tech news: coffee with Craig and James. Coffee tech startup brings a fresh perspective. That's from June, though. So yeah, not a lot.

What else is new before I get out of here? I will answer 20 questions. Fire away. Let me know. Yeah, if you don't have an idea about the source, it's somewhat irresponsible to spread it around. 20 questions. Here we go. One, two, three. What shelf is left, Zoe? You want a shelf tour? I can't remember which shelves we've done. Is it the upper? No, I think I already did that. Did I do that? Shopping for healthcare plans. Good times. I feel like I did all the shelves. I did the hieroglyphics. The sun. At least we know how reliable the sun is. That's a good, fine technology. Not the most reliable, but, you know, it generally doesn't fully fake things. The desk tour would not be very interesting. There's not much on my desk. Have I done all the shelves? Did I do this one? Did I do that one? Yeah, I did that one. I did that one. Did I do that one? I think I did that one. Floor tour.
Yeah, I don't have the camera set up where I can easily add it. But if you were to be able to see my shelf... should we play around with settings live on the air? All right, let's do it. Let's see. Can I actually even change this? Let's stop sharing the screen. Oh, look at that. Okay. Yeah. There we go. Shelf cam. Or not shelf cam. Desk tour. Wow. It's really hard to actually tell what I'm looking at, but okay. There's my RODECaster Pro that I use to play all the sounds. Here's the "good day, internet." Here's the mailbag. This is the mid-show break. That's applause. I don't use that one anymore. Same Frogpants. Diamond Club. I have two DTNS mouse pads. There's my Logitech mouse. I've got the DTNS one, and then I've got the new one kind of hidden underneath, but I couldn't bring myself to get rid of that one. Also the iPad, which is very dirty. It has all my sound effects. I use this mostly on Fridays when we do the mini game. Wrong. Also that keyboard, very dirty. My apologies. But it's a Logitech K20. Stream Deck, which I don't really use. I have a Chrome button. I need to make better use of that, but I haven't really found a good use for it in my daily life. A lot of pens, including one given to me by Zoe. My coffee mug. My water glass. My baseball. My Pixel Fold. My little desk pattern thing. That's it. There you go.

I tried to disconnect and it didn't let me do it. So why am I trying to disconnect? Okay, there we go. Yeah, that worked. Look at that. Yeah, I don't use it for the sound effects, because the RODECaster is so much faster. And the sound effects... here's the problem. I can't have sound effects coming from my Mac into StreamYard when I'm talking to other people, because I'm using the Mac to hear those people. And so if I run sound effects off the Mac into the RODECaster, I will also be running them through the RODECaster back into StreamYard, where they already are. If that makes any sense, right? So Sarah's connected to StreamYard.
You can hear her through her StreamYard connection. If I run sound off my Mac, you'll also hear Sarah through my Mac, in which case you get a horrible echo. Yeah, using it for video editing in Premiere would be great, Bill. That makes perfect sense. I don't use it much. I don't use this for video editing much at all. So yeah, my audio routing... that's one of the reasons I have the iPad for external sound effects: because I can't fit everything on the RODECaster, and I don't want to fit everything on the RODECaster, frankly. Although, well, maybe I do. It just doesn't have enough buttons. It's easier to swipe around through the iPad. But yeah, when I'm alone, right? So when, for instance, on It's a Thing, I can use the sound I'm getting from them on the Mac to pipe into Discord, because they are not streaming from there elsewhere, if that makes any sense.

So what did I do this morning? I had a meeting that I can't really talk about too much. The idea is... it's something around... yeah, I probably just shouldn't say anything, but I'm excited about it and I'm working on it. So, the secret thing. I don't want to Johnson myself. I published the Substack edition for today and did a little work on Know a Little More, and then I did this. So that's what I did today. Anything else? I like doing the desk tour. Now that I know that I can use the phone as a camera, I should start doing that more often. I'm going to do all kinds of fun stuff with that. I could also get myself in big trouble if I, like, you know, filmed my bank statement or something. So I have to be careful. But yeah, did you like the picking of the three big stories of the week? I thought that was kind of fun. Looks like folks are in the laid-back mood today rather than the fast-chat mood, which is totally fine. Wasn't there an office hours question you said that you were going to answer from someone who wrote in? Possibly. I don't remember.
I don't remember. I'll have to look for that. Well then, I'm going to head on out and start working on some more things. But thanks, y'all. It was lovely streaming with you today. And I will be doing the editor's desk. So if you are a patron at that associate producer level and up, keep an ear out for that soon. And thanks for streaming. Good talking to you. Bye.