Daily Tech News Show is made possible by you listening right now. Thanks to all of you, including Tim Deputy, Brandon Brooks, Hector Bones, and new patrons Allison, Tony Adorno, and Katlo 54. Welcome, everybody. We love new patrons. Also, Bramerica, your shout-out was sponsored by Matt Zaglin. On this episode of DTNS: why UK banks object to the Apple Card, the top payment processor in Nigeria isn't somebody you've heard of worldwide, and why AI is so misunderstood. No one, seriously, nobody can fully explain what makes AI work. This is the Daily Tech News for Monday, March 4th, 2024, in Los Angeles. I'm Tom Merritt. From the suburbs of Atlanta, I'm Nika Montford. And finally, under sunny skies, I'm the show's producer, Roger Chang. Oh, my friends, it's good to be back for another week of technology news. And we've got some good stuff for you today. Let's start with the quick hits. The European Commission fined Apple 1.8 billion euros Monday for abusing its dominant market position in the distribution of music streaming apps. Basically, Spotify complained. Apple got in trouble for restricting app developers like Spotify from informing users about, or providing instructions for finding, cheaper ways to pay for a subscription outside the app. Apple doesn't like to let people do this. They say, look, you use our in-app payment system, or you keep quiet about other places where you could pay for a subscription. Spotify filed a complaint about the practice in 2019. The EC investigated and has now ordered Apple to remove the provisions imposing such restrictions on music apps. But it's kind of a moot decision in a way, because starting March 8th, Europe's Digital Markets Act will prohibit Apple from placing such restrictions on any app, at least in Europe. Google modified how you select filters for file search in Google Drive for iOS. You can now filter by file type, owner, and last modified from a drop-down menu. The feature will come to Google Drive for Android very soon.
Google feature coming to iOS first. You don't see that every day. Interesting. No, not at all. Large language model maker Anthropic released Claude 3 in three versions: Opus, Sonnet, and Haiku. They came out Monday, and of course Anthropic says they're faster and more powerful than the previous models. Opus is the most powerful. That's your GPT-4 equivalent. It's also the most expensive if you want to use it. Sonnet is more compact, and Haiku is even more compact and coming soon. The Claude 3 chatbot can summarize up to 200,000 words, compared to ChatGPT's 3,000. Anthropic will also let Claude 3 users upload images and documents for analysis. In AI ethics news, OpenAI, Salesforce, Hugging Face, Scale AI, and a few dozen other companies signed a pledge to build AI for the good of humanity. Meanwhile, India's Ministry of Electronics and IT issued an advisory Friday requiring significant tech companies to make sure their services and/or products do not permit any bias or discrimination or threaten the integrity of the electoral process. This is largely focused on AI models. And Microsoft announced it's going to show off new games coming to the Xbox in a Partner Preview event starting at 1 p.m. Eastern Wednesday, March 6. This time the event is going to feature around a dozen new trailers from Capcom, EA, Nexon, and a few others. And that is a look at our quick hits. Another big announcement today: Apple announced you can now order the 13- and 15-inch MacBook Air laptops with the M3 chip inside. Those will start shipping on March 8th. Bloomberg's Mark Gurman indicates this is the first of a couple of months' worth of new product announcements that Apple is going to make on the web without holding events. Among the announcements Gurman expects to follow are two new iPad Pro models with the M3 chip, and new iPad Air models, one of which is a 12.9-inch size, the biggest iPad Air they've ever released, if that ends up being the case.
A new Apple Pencil and a few other things. That's not all the new products Apple might have in store that are making news, though, right, Nika? Right. Another product Apple might want to roll out is its credit card, but in the UK. UK banks don't like that idea very much and have called for an investigation into what data Apple collects about spending habits and how it uses that data. UK users can already link their bank accounts to the Apple Wallet to monitor recent transactions and balances, as well as use their cards for Apple Pay. The banks are concerned, however, that Apple could collect that data and use it to inform its competing products. Apple says all information for Apple Pay user profiles is stored locally, but it does not specifically comment on data brought into the Apple Wallet when users connect third-party banking accounts. I don't know about you, Nika, but it sounds like the banks protest too much here. Yeah, I agree. I don't think the issue is what they make it seem. It sounds to me like they are a little bit jealous that they don't get their hands on all of that wonderful, wonderful data that Apple has collected, and they want access to that so that they can, in the words that I just said, inform their competing products. So they want to be able to use that data, I think, to upsell their current customers, and see what the trends and the habits are, to see what more they can get from these customers in that way. Yeah, I think it's fair to investigate, honestly. I think that's good due diligence before launching into a market, asking the government to say, hey, make sure that's staying on device. They say most of the info stays on the device. Let's just make sure of it. I'm into that. I would like that to be proven. But the banks aren't asking this out of concern for you or me, are they? They are asking out of concern for themselves.
They were having a conversation in the DTNS Discord earlier today about the fact that your credit card companies will share information about your purchases with other advertisers. That's what these banks are concerned about: that their monopoly over that charge information will be taken away, because they don't protect the privacy of this information as much as a lot of people assume Apple does. Right. And one of the biggest selling points of Apple is the way they care for and handle their users' data. It's usually locally encrypted on the device, so that even Apple doesn't have access to some of that data. And it's one of those things where it's like, well, the other guys are giving it to us. Why won't you allow us to have access to this as well? So I think the crux of this is just that they want access to that data to be able to use it however they see fit. And they're just using this complaint and this new launch as a way to say, hey, let's see if there's anything we can do to get that data for ourselves. Not so much that the Apple Card's a competitor, right? Absolutely. They don't want another competitor there. If I try to think about this from the other side, what is legitimate is that Apple has been getting more and more into advertising, as have all technology companies, through the App Store mostly, placing advertisements for app developers within the App Store so that Apple can collect a little money from them and show you some promoted apps and such. That's going to introduce a little conflict of interest, where it would be tempting for Apple to say, well, gosh, we do have all of this purchase information from people. If we anonymized that and then linked it, we could target ads more effectively. So making sure they're not doing that, that they're not giving in to the temptation, is fair, because that is something that would improve Apple's advertising. Yeah, it would give them an unfair advantage. Yeah, yeah.
Completely agree. Yeah. But I think, you know, time will tell to see how this all pans out. I think there are some conflicting motives here on either side. Again, we live in a capitalistic society, so everybody is trying to find the edge on how they can increase the revenue and the market cap for their companies. Yeah, I guess where I end up coming down is, just because it might be a selfish motivation on the credit card companies' part doesn't mean it's not a fair question. Absolutely. So yeah, let's investigate and find out. And Apple can be a little bit reticent to be transparent at times, so sometimes you've got to push them a little, and I'm good with that as well. Yeah. Keeping on the payment technology side of things: in Nigeria, which, many of you may not know, is the largest economy on the continent of Africa, stores may have as many as 10 payment machines to accommodate the many different bank cards issued on different payment systems. But restofworld.org reports that a lot of people are starting to ask for their cards to be read by the Moniepoint terminal instead of the one from their bank, because Moniepoint works with multiple cards. As of January, 2.3 million businesses in Nigeria use Moniepoint, and it is widely considered to be the most reliable of the payment systems in the country. Customers say it has lower decline rates. Even when you're using your own bank's terminal, sometimes cards will be declined when they shouldn't be. It also instantly reverses a transaction if a payment fails, which is better for the businesses. Some of the other terminals will have a delay, and that means customers could walk out thinking the payment succeeded, only to have it reversed later. And then the merchant's left going, oh, well, now I don't have their money, but they have the stuff and they already left. Right. The company makes most of its money on transaction charges, but also offers business loans.
Its biggest competitor is China-backed OPay, which has a 37% market share to Moniepoint's 20%. But Moniepoint has banking licenses that OPay just doesn't have, which let Moniepoint offer services like collecting deposits and offering its own POS terminals. Moniepoint's business managers are also well-known members of the communities that they serve. The managers get paid commissions for every new customer they sign up, as well as ongoing payments based on those terminals' transactions. Yeah, I thought this was fascinating. And it's another example of restofworld.org doing great work to spotlight this, because Moniepoint is doing a similar thing to a company in another Rest of World article we talked about recently, where a streaming service that was a subsidiary of a traditional cable and broadcast company was using the relationships it had developed over the years, while giving people the room to innovate, to dominate streaming in the region. Moniepoint was founded in 2015 as a company called TeamApt, providing software to financial industries, and then got a banking license in 2019 so that it could start acting as an intermediary between banks and customers. And that's when it was able to get the point-of-sale license, make its own machine, and take multiple bank cards, and it realized, well, wait, we're a local company. OPay is coming from China. They don't know people around here. We know people around here. So it's another great example of how you succeed as a startup: work within the community you're from. And I thought it was really interesting, what you were saying about the business managers. They're finding people in the community, folks you go to church with, folks you hang out at the market with, so you're going to have a better chance to convince merchants and convince people that Moniepoint is a good bet. And it turns out it looks like it's working. Yeah.
And if we're being honest, if you think about the things you do in your everyday life, referrals are golden. If you get some information on a product or a service from someone that you know and trust, you're more likely to be open to going with that particular service or product, especially if it's, you know, quote unquote homegrown and it's from your region, from your area. It gives you a little bit more comfort when it comes to making those decisions, especially when it comes to finances, when it comes to your money, when it comes to your bank. It gives you a certain level of comfort and assurance in how you go about your everyday life when you're making purchases, just being able to live and move throughout your own community. One of the reasons I like stories like this is I think we tend to assume that the big companies, you know, the Amazons and the Microsofts and even the Walmarts of the world, are the ones who are going to provide the services everywhere eventually. That sure, there might be a local company, but it'll get run out of business when the big folks come in. And it's not always true. And granted, you probably aren't familiar with OPay if you're not in China, but that's the heavyweight, right? That's the one from overseas that has the big backing and all the money behind it. And a local business is showing, no, if you want to be from out of market and succeed, you have to act like you're from the market, or you have to hire people that have relationships within the market and empower local control, because that's how Moniepoint is doing it. And once again, it is interesting to see the innovation that you get in places like Nigeria, and Kenya, where M-Pesa came from back in the day and revolutionized payments. So I would keep an eye on what Moniepoint does next. They solved a problem that is very specific to Nigeria in this case, but they seem to be very efficient and very good.
And I'm curious, if they start expanding outside of Nigeria, what we'll see. Right. And if you think about it, everything is cyclical. If you think about small towns, when the big box stores came in, when you got a Walmart, when you got, you know, a Publix or a Kroger, it was like, oh, wow, this is great. We have so much access, so much more. But on the flip side of that, when you have large companies come in, as you mentioned, they don't know the community as well. They don't know the way that people move throughout that community. And it can be a negative. And a lot of times you see these huge companies, like a Walmart, like Amazon, just get bigger and bigger and bigger. And sometimes they don't necessarily treat the local economy, the local environment, the best. And people notice that, and they say, hey, now we have this option of something that's a little bit closer to who we are, to what we're used to, and we're going to support that. Yeah. And David Grizzly-Smith in our chat brings up a good question. He's a little snarky about it. He's like, how long until there's a security breach from Moniepoint as it gets big? Because the bigger you get, the bigger of a target you are. So that is a question. Like, okay, now that they're successful, and people are going to be coming for them, how is their security? And I'm not going to jump to the conclusion that it's not good, because I don't know. But that is something that these companies also have to pay attention to. And I'd be interested to see what their strategy is there as well. Yeah, it's a valid point. But at the same time, there have been some pretty large companies that have had data breaches as well. So you can't always necessarily pin it on a smaller company not quite having its security together. Though it is a great point that you would hope they have their security locked down and ensure that their customer data is safe and protected before they become the big company.
Well, folks, Apple has lots of power users and hipster users. But what about the rest of us? Sarah Lane and Eileen Rivera do the Apple Vision Show every week on Mondays to talk about whether Apple's vision matches what they want. You probably have checked it out. But if you haven't yet joined in on the fun, you can watch it live at our YouTube and Twitch channels, or you can enjoy it at your leisure. Get subscribed right now at applevisionshow.com. Check it out. MIT Technology Review has an excellent article by Will Douglas Heaven called "Large language models can do jaw-dropping things. But nobody knows exactly why." As always, we recommend you read the whole thing, but let's talk a little bit about what's in the article. Tom, the underlying puzzle is that we know what large language models do, but we don't know why. Yeah, I think it's something that some people are aware of, but not everybody. Advances in AI haven't come from a grand design of the internals, but from trial and error. You try stuff out, and you see if it works. And if it works, then all the other scientists copy it, with permission, because that's the way science works, and then try other things that they think might improve stuff. The ideas that don't work get discarded; you never hear about those. The ones that do work are kept. It's an experimental approach, not a theoretical approach. Large language models train on data and somehow have been able to generalize from that data to other data. They're really complex versions of what's called a Markov chain. That's something that's just really good at predicting what comes next. A Markov chain could maybe be trained to do something like understand English, but the large language models can understand French even when they're only trained on English. That's the generalization we're talking about. And we don't know why. We just know that it happens.
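To make the Markov chain idea concrete, here's a minimal sketch in Python. The training sentence and function names are just made up for illustration: the chain records which word followed which, and "predicts" by sampling from what it has seen. Anything it was never trained on, it simply can't predict, which is exactly the generalization gap that large language models somehow cross.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def predict_next(chain, word, rng=random):
    """Sample a likely next word; None if the word was never seen in training."""
    followers = chain.get(word)
    return rng.choice(followers) if followers else None

chain = build_chain("the cat sat on the mat the cat ran")
# "the" has been followed by "cat" twice and "mat" once,
# so predict_next(chain, "the") favors "cat" two-to-one.
```

Real language models replace this literal lookup table with billions of learned parameters, but the job description, predict what comes next, is the same.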
And the longer you train on data, the better these things get, up to a point, because of a statistical principle called overfitting. Now, I've heard of overfitting, but let's explain what that means. Sure. Overfitting is when a model is too precise. It's got too much data, and it can't generalize from that. So I was trying to think of a good example of this. Let's say you're plotting temperature against the day. So you've got days along the bottom axis and temperature up the side. If you do that, spaced out across a year, you could draw a curve, right? And you would see hotter temperatures in the summer, and you'd see colder ones in the winter. And then you could generalize from that. Like, oh, well, okay, if you ask me what the temperature is in the winter, I'm going to guess colder, because that's the way the line looks. But if you have lots of data, let's say for every hour, and you look closely at that data, now you'd have a wiggly line that goes up and down every day, right? And so unless you back out and view less of that data, you won't be able to see the trend of the months, because you're confused by the trends of the days. Now, I'm mixing a lot of metaphors here, so statisticians, don't get too upset with me. But in general, that's overfitting. You have too much data, and so you see the wiggly line; you can't see the trend. And once you have too much data, the model can't predict outside the confines of the data it has. Except large language models have beaten overfitting, right? Yeah, they have. Scientists appear to have been wrong about overfitting, or at least they were thinking about it in the wrong way; there's a little bit of debate about that. Heaven opens his article for Technology Review with an example of folks at OpenAI trying, years ago, to teach a model to do basic addition, to just add two numbers. And the model could do anything it was trained on. You trained it on two plus two, it could do four. Even three plus two, it could do five.
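As a quick aside, that wiggly-line idea can be sketched numerically. This toy is our own illustration, not anything from the article, and every number and function name in it is made up: we generate a year of hourly "temperatures" (a smooth seasonal trend plus hourly noise), then compare a model that copies the nearest readings against one that averages over a wide window. The too-local model chases the hourly wiggle and predicts held-out hours worse, which is the overfitting failure in miniature.

```python
import math
import random

random.seed(0)

# A year of hourly "temperatures": a smooth seasonal curve plus hourly noise.
HOURS = 8760
def season(h):
    return 15 + 10 * math.sin(2 * math.pi * h / HOURS)

temps = [season(h) + random.gauss(0, 2) for h in range(HOURS)]

# Even-numbered hours are training data; odd hours are held out for testing.
def predict(i, window):
    """Predict hour i by averaging training (even) hours within +/- window."""
    neighbors = [temps[j] for j in range(i - window, i + window + 1)
                 if 0 <= j < HOURS and j % 2 == 0]
    return sum(neighbors) / len(neighbors)

def held_out_mse(window):
    """Mean squared error on the held-out odd hours."""
    errs = [(predict(i, window) - temps[i]) ** 2 for i in range(1, HOURS, 2)]
    return sum(errs) / len(errs)

wiggly = held_out_mse(1)    # hugs the nearest couple of readings
smooth = held_out_mse(99)   # averages away the noise, follows the season
```

The narrow-window model faithfully reproduces its training points yet does worse on hours it never saw; widening the window "backs out" to the monthly trend, just like the example above.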
But if you hadn't trained it on three plus three, it couldn't generalize to new addition. Until one day they forgot to turn off the model's training run. Usually it got to that overfitting limit and they're like, okay, it's actually getting worse, turn it off, right? Makes sense. They forgot to turn it off, left it training, and it figured it out and started to generalize. It started getting better again after a certain point. It worked past the overfitting. Other scientists have reported similar situations. They call it grokking. But it's similar to another phenomenon called double descent. That's when increasing a model's size, instead of the time it's trained, shows it reducing errors, then increasing errors because it's getting too big, then reducing them again. And some folks are working on trying to uncover what's going on inside of these models, right? Inside of what's going on in the black box. Yeah. So they're taking an experimental approach, trying smaller, sometimes older, models on more specific tasks under controlled conditions that you can vary, to see, like, okay, if I change this condition, what happens to the model? This is like doing an experiment in physics, right? Like, let's collide these two particles and see what comes out of it. Except it's weirder, because in this case, we created it. We didn't create physics, but we created the large language models. We just don't know how they work. So you've worked with this stuff, right, Nika? Yeah, I have. Not on the large language model side, but I have created algorithms before; I have implemented algorithms before. And one thing to note in the beginning of the article: when they realized that they had left it running for a longer amount of time, they talked about how it was like a light bulb went off. And as I was reading it, I was like, you know, this is kind of what people think AI is, right? It's going to become sentient.
And I was like, is there any more sentient-seeming behavior than that? If you think of a three-year-old, a four-year-old, a five-year-old learning addition, and they have the simple model, two plus two is four, and it just doesn't make sense, the child just can't get it, can't get it. And then one day, over the course of time, the light bulb just goes off. And I had a little chuckle, because I was like, this is probably the closest to sentient behavior that we've seen from a model. Because, again, we have our thoughts on what a model should do, how long it should take. You have these HPCs that are just churning out data left and right. I mean, these huge amounts of data that they're burning through to train on, to be able to try and learn and figure out what's going on. And what they realized is, the longer time you give it, the better it becomes. And sometimes, if you just kind of step outside of what you assume should happen, a mistake, or a lack of, I guess, focus, or whatever you want to call it, led them to the point where, oh, we just forgot because we were doing something else, and the model continued to run, and they saw that it corrected itself. And that is, you know, just one of the things of science. We're creating all these new things, and this is just sometimes how it happens. A happy accident resolves the issue and gives you a little bit more insight. Yeah. And I think it does make me understand a little more why some very respected scientists in this field are concerned about artificial general intelligence arriving faster than we expect, right? Most of the scientists working in this field say there are limits to how good these things are. You can see those limits in your daily life. And we're not going to get to, you know, the sentient AI that goes rogue anytime soon.
But like you said, when you see it do this and you're like, okay, but I didn't expect that, what else do I not expect it to do that it's going to do? It does kind of creep in, you know, emotionally make you go, well, but I don't know. I can't guarantee what kind of behavior it will display, because I'm actually not able to tell how it works. So that's why the experimental approach that's trying to understand why it does things the way it does is, I think, really important. Because the more we can understand how it works, the less uncertainty we have that it might just go rogue someday. Right. Again, when you're entering a new frontier, it's one of those things where it's just like, I thought I knew what it was. I had a certain, you know, reasonable expectation of what it would do and how it would turn out. And then it completely goes in the opposite direction. It's like, oh, I didn't expect that. That one kind of caught me off guard. Yeah. And this is the big thing that I keep trying to explain to people: the copyright issues with training are real, but they aren't what a lot of people think they are. And I think this is another way of understanding that. There's not a database of the books it was trained on in there. And someone wrote in and was like, yeah, but it had to make a copy to train on it, so there is a copy involved. And that's right. That's where the discussion should be focused. But when the model works, I think a lot of people think, well, it's got a copy of, you know, the New York Times in there and it looks at it, so that's infringement. It doesn't. It's a big Markov chain. It doesn't even know what the New York Times is. It's just predicting what the next word should be. And we don't even know how it does it. We just know that it does, and it works better than it ought to. So it's kind of freaky. Yeah, let's check out the mailbag.
There's lots of AI stuff in our mailbag, but let's start by giving you a break from AI. Robert wrote in in response to Spotify adding an audiobook-only option to its subscription. We talked about that last week. Robert says: This Spotify deal is tempting. I have an Audible account. However, some of my favorite authors are intentionally not releasing on Audible due to their prohibitive revenue-share drop-off when not exclusive to Audible. Brandon Sanderson is one such author. So the Spotify deal would be tempting if it has the books I can't get on Audible. However, a lot of the books I listen to are toward the 20-hour mark. So would I need to time my listening to the end of the month so it rolls over into the next month? Because they only give you 15 hours a month on Spotify. Yeah, I don't know. It's strange. I'm not a Spotify user. I used to be; I dropped the service a while back. But it's interesting. I have an Audible account, and I personally like reading the books better. I can kind of use my own imagination with what's on the page. But I get it, a lot of people love to listen to books. So yeah. Then Pepe Kevin pointed out that the Doctor Who audiobooks are only on Spotify, in the United States anyway. This plan change makes him think more seriously about adding a Spotify plan, even if it's not just the audio plan. Interesting. Yeah, thank you for that. And back on the AI front, we have Martin, who wanted to add to our discussions last week regarding Google AI: Google was not ready for Gen AI to take off the way it did, and it's not the type of AI that they had been perfecting. The recent updates to Google Maps, for example, walking directions, pointing the camera and being able to figure out your position and direction, is unique AI, but it's not Gen AI. Text-to-speech: even ChatGPT uses Google's AI service.
Second, Google can't be the best at providing responses from searching until someone else does it first. Otherwise, every government is going to pass laws stopping them from doing it. Multiple countries have also had so many problems with Google News. Gen AI is an actual threat. Keep up the good work. Yeah. OpenAI is going to start having those same problems. But yeah, the bigger you are, the more scrutiny on everything that you do. Absolutely. That's why we're small, and we're happy to say that. Thank you, Nika Montford, for being with us today. As always, where can folks go to find more of what you do? You can find me at TechSavvyDiva pretty much everywhere on the internet. You can also find me on the Snobo West podcast, which I co-host with Terence Gaines, who is also a contributor to DTNS. It's an Apple-focused podcast where we talk all things Apple, and then some. So definitely check us out over there. Yeah. End your week with the Snobo West podcast, and then you can begin your week with the Apple Vision Show. You know, get the multiple perspectives. Indeed. Special shout-out to the Colchester Little League 2023 team, and a big thanks to Margo, who dropped this plaque by our house. We sponsored a Little League team last year, because Matt was like, well, I can't sell you a team, but you could sponsor my Little League team. So we took him up on it. And it was really nice. It says: With thanks to our sponsor, Daily Tech News Show. Colchester Little League, 2023. So thank you, Matt. Thank you, Colchester Little League. Thank you, Margo, for the plaque. That's very nice. Patrons, stick around for the extended show, Good Day Internet. Nika and I are going to talk about Apple's increasing habit of just announcing stuff on the web instead of holding an event. We just talked about the M3 MacBooks; that's an example. Do we like that better? Stick around, we'll discuss. You can also catch the show live Monday through Friday, 4 p.m.
Eastern 2100 UTC. Find out more at dailytechnewshow.com slash live. We'll be back tomorrow. Talk to you then.