Hello and welcome to Daily Debrief, brought to you by People's Dispatch. I'm Pragya. Let's go straight into our first story today. It's on the Pulitzer-winning journalist Seymour Hersh's big exposé. His latest story claims to reveal the United States' hand in the explosions that decommissioned the Nord Stream pipelines last September, during the thick of the Russia-Ukraine war. These gas pipelines are a direct link between Russia and Germany and the rest of Western Europe. Prabir Purkayastha is in the studio with us to discuss the wide-ranging implications of this story. Prabir, thanks very much for joining us. Prabir, I'll start with a cliché of sorts, but they say the truth will out. Is that what's happening with Seymour Hersh's story, which basically says the U.S. was involved in the explosions which led to the destruction of, or at least holes in, the Nord Stream pipelines? So is the truth out? Is the U.S. now supposed to answer more questions? What happens next? You know, the question that I have is not this. The question is: what has Germany done? Why have they not asked this question? And if they have, have they received answers? And if they have received answers, why have they kept quiet? So is it a conspiracy of silence which has been punctured by Seymour Hersh's story? Seymour Hersh is not just a journalist. That's right. He was the one who exposed, or brought to notice, the My Lai massacre. He has talked about the Iraq war and given reams of inside stories on that: how it happened, what happened. And there is a ton of other exposés that he has done. Interestingly, Seymour Hersh's exposés have become rarer and rarer in mainstream media, because now he's forced to go to places which are not so well known, not so public. And finally, even those places do not accept his pieces anymore. So he has a Substack account, on which he has put this exposé.
So I think one important aspect of the story is that Seymour Hersh has no platform today where he can write these stories. That's a big, big question, shall we say, for any news organization to think about: why is it that he doesn't have a platform? This story, and this is something we have discussed earlier as well, is that it was an obvious sabotage by an interested party who did not want Russian gas to reach Germany. To say that Russia sabotaged its own pipeline, which has been the refrain of the Western media organizations uniformly, that Russia is the key suspect: why should it sabotage its own pipeline? It has invested money in it. It is sending gas to Germany, with the potential of sending more gas to Germany through Nord Stream 2, which wasn't open at that time. Germany didn't want to take gas from Nord Stream 2, but it was there, and there was rising pressure in Germany that Nord Stream 2 should be allowed to open. So why would Russia destroy both its pipeline and its leverage on Germany? Why would it do that? That was never explained by the Western media, but they kept on saying Russia is the key suspect because this was European infrastructure. Yes, it was European infrastructure, built and owned by Russia for the purpose of sending gas to Germany. Primarily Germany, and from there it could go to others. It bypassed other pipelines, particularly the major pipelines going through Ukraine, which of course was playing games with them. So there are other pipelines as well, but this was the key one. And the other pipelines were with NATO allies, who were also willing to cut the supply of gas, except to themselves perhaps. So given all of this, why would Russia actually sabotage its own pipelines? That was never explained. Seymour Hersh has explained what we have been saying all the time: the obvious suspect is one or more of the NATO powers, either together, singly, or one or two of them.
Seymour Hersh's argument, or what his exposé says, is that it was planned by the United States and jointly done with Norway. Now, Norway has one major stake in destroying the Nord Stream pipeline: it has reserves of gas and it is supplying gas to Western Europe. Russia is therefore a competitor. If they take out the Russian gas pipeline, they obviously get the market, and the price of gas goes up, which benefits Norway even more. So Norway had this stake, and it was an obvious suspect. The second important part of this story is that all of it was arranged and executed at a time when NATO was holding a set of naval exercises off an island which is actually under Denmark's jurisdiction. During the course of those exercises, the pipelines were actually charged, meaning the explosive devices were set there. And apparently, at the last moment, again per Hersh's article, they said: don't cause the explosion too soon after this; we should be able to pick and choose when we set off the explosions. Now, I'm not going to get into American law, what is legal and what is illegal, where the President's powers lie, whether a declaration was needed; all of that is really left for the American people to decide. But the fact is that this was a planned explosion, and that it was one of these countries: the UK, Ukraine, the United States, Norway. These were the possible suspects. Now it appears that the lead was the United States, which we had always felt would have been in the leadership, and we had thought the UK could be a participant, Ukraine could be a participant, Norway could be a participant. Now it is confirmed that Norway was a participant, and it also benefited from the rupturing of the pipeline.
The interesting part of it, and I think we have discussed this earlier: there are four holes. Nord Stream 2 had one hole, Nord Stream 1 had three holes, and the argument is that they made a mistake. They wanted two holes in each, because each of these pipelines has two strings, which means there are really two pipes each. So there are four pipes in all, of which Nord Stream 2 had two and Nord Stream 1 had two. By mistake, they actually put a depth charge, an explosive device, on the same pipe twice, and one of the Nord Stream 2 pipes is still actually functional, if it can be used. But Germany after this has learnt its lesson, has given up the demand that, okay, we will take gas from Nord Stream 2, and has now fallen in line. So I think they got the message that if they try, another explosion would probably take that pipe out too. Right, Prabir, thanks a lot for joining us. And next, for the second time on the show, we have ChatGPT, the artificial intelligence that's making waves among users. It's a machine that's responding to questions in the most human-like way ever, but business interests are behind some of the hype around it. Artificial intelligence has come a long way, but does it have a much longer way to go? Bappa Sinha joins us in the studio to discuss the advances in AI and its limitations. Bappa, good to have you back on the show. You know, Bappa, I always hear about ChatGPT, and people I know make me think that this is all hype. Is that what it is? Is it all hype? I don't think so. It is a fairly interesting development, right? It marks a significant advancement of AI technology. Well, not by itself, but this class of AI algorithms is a significant advance of AI technology, right? Why is this generating so much hype, as you say? This is probably the first time that common people are coming face to face with such a technology, right?
I mean, it's not the only application, or the only AI model, of what I'm calling the new generation of AI, but it is definitely the first time where common people, non-technical people, are directly interfacing with them. They have a neat website where basically there is just a prompt, and you can then ask any questions you like, and it gives wonderful answers, right? I mean, it can write poetry, it can write Shakespeare-like prose, it can do medical diagnosis, write software programs. It is fairly impressive. So it definitely is not just hype. Is there hype about it? Yeah, I mean, with any such new development, people extrapolate and project marvelous things. So of course, there is a lot of hype around it. There's a natural excitement as well. There is natural excitement. But you also have to understand that a lot of these new developments in computers and technology have a very peculiar model of how they are funded, right? They are funded through this VC model. And the whole VC game, so to speak, is that you make these huge investments or bets and you want to effectively cash out in five years, right? And so for you to be able to cash out, you need to create a hype, you need to project that the world is changing because of this, your technology, get a few billion dollars from somebody, and cash out. So this hype is not just natural. It's intentionally generated for the VCs and the initial investors to cash out. But this by no means is like cryptocurrency, which in my book is all hype. There is a core to this, which is real progress in technology. Okay. Bappa, now the interesting part is that there is not a guy or a woman sitting behind this software typing up those answers. The machine is doing it. So does that mean human beings will become less necessary, or unnecessary?
So I think the real question is: not today, but in the near future, and the near future is let's say five years, because beyond that we can't really predict, right? So in the near future, let's say five to ten years, will it achieve a human-like intelligence? There is a technical term for that, which is artificial general intelligence. So you have artificial intelligence or machine intelligence, and then there is a term called artificial general intelligence. Artificial general intelligence is effectively human-like intelligence, whereas when people talk about machine intelligence, or machine learning, you're talking about the machine doing a specific task very well, right? So we already have machines which do that. I mean, you have Google Translate, which does a reasonable job of translation, not perfect, but reasonable. You have speech-to-text. You have vision, where the AI has got pretty good: if you show it a picture of a cat, it will say, oh, this is a picture of a cat, right? So they've got pretty good at these things. But these are specific things that they can do well. Human intelligence is far more than that. Even a two-year-old or three-year-old child not only can point out what is a book, what is a cat, what is a dog, and do that without any special training; it can speak, it can start reading. So even a two- or three-year-old child can start doing some of this stuff, but it can also do something more: when it comes up against a novel situation, a situation the child hasn't faced before, it instinctively knows how to react. That is not something which you sit down and teach in class. That's something the child almost instinctively knows how to do, right?
The way our current generation of AI models works, we haven't even begun to approach how that works, right? So we are in no way getting close to any of that. So is AI going to match human intelligence in the next, let's say, five to ten years? I think the answer should be a fairly categorical: I don't think so. But will AI become far better than what we have been used to? I think so. And this ChatGPT is basically an algorithm from a class of algorithms which are called transformers. See, in ChatGPT, the chat part is a chat program, right? The GPT is the core technology behind ChatGPT, a technology owned by this company called OpenAI, which is a Microsoft-funded startup. GPT stands for Generative Pre-trained Transformer. Generative, because this is AI which can generate stuff: it can create poetry, it can create passages, sentences, prose, software programs. There are generative AIs which can do art very well; if you look at that art, it looks like it was done by a top-notch artist. So these are a category of AIs which can create stuff. Pre-trained, because all these AIs need to be trained first. But the transformer part, that is the new technology. Effectively, this is not how AI used to work before. There is a paper by Google in 2017 which introduced this technology, and it has since become the thing in machine learning; now everybody is doing it. But before that, just to give a very layman's understanding of how things worked: in what we call normal programs, the programmer knows the rules, right? The programmer basically tells the machine: if you encounter this, you do this; if you encounter that, you do that.
So it has a finite set of rules that you can tell the machine: do these things if you encounter these situations, right? But how do you go about telling the machine what a cat looks like? A cat could be a fuzzy cat; there are different kinds of cats, different colors, different shapes; there are fat cats, thin cats, big cats, small cats. And to come up with all the rules, when cats are just one category of images, right? Right. And animals similar to cats. They need not even be animals, right? The kinds of things we recognize as humans, it's difficult to even list all the things we can recognize. Absolutely. And so to actually try to code that down as rules, if this, then that, is just not feasible. So the first generation of AI, or rather what AI used to do before 2017, was this: they have, effectively, think of it as a set of equations, and the equations have constants in them; these are called weights. What you try to do is feed it, say, a million images, and these images are all labeled. So you tell the AI: this is a cat, this is a dog, this is a mountain, this is a car, a bike, whatever. And what the AI tries to do is adjust the weights of its equations to get the correct answer. You give it the image and you tell it the answer, and the AI tries to say: okay, I have a bunch of equations; can I adjust the weights of each equation to somehow get the correct answer? It's effectively a regression model. So I feed it a million images and it tries to do this curve fitting. Is this a cat or not? Yeah. And if it guesses wrong, then that feedback goes back to it: okay, you got it wrong, so you need to tweak the weights of these equations a little bit and try again, right?
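The guess-and-tweak loop just described can be sketched in a few lines of code. This is a minimal illustration, not any real vision system: the "images" are just two-number feature vectors, the "set of equations" is a single weighted sum, and the hidden labeling rule (y = 2·x1 + 3·x2) is invented for the example. The point is only to show how feedback on wrong guesses nudges the weights toward the right values.

```python
# A minimal sketch of the pre-2017 supervised recipe described above:
# equations with adjustable weights, labeled examples, and a feedback
# loop that nudges the weights whenever the guess is wrong.

def predict(weights, features):
    """One 'equation': a weighted sum of the input features."""
    return sum(w * x for w, x in zip(weights, features))

def train(examples, steps=2000, lr=0.01):
    """Repeatedly guess, compare with the label, and tweak the weights."""
    weights = [0.0, 0.0]
    for _ in range(steps):
        for features, label in examples:
            error = predict(weights, features) - label  # how wrong was the guess?
            # Feedback step: move each weight a little against its error.
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights

# Labeled training data generated by a hidden rule, y = 2*x1 + 3*x2;
# the trainer never sees the rule, only the (input, answer) pairs.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0),
        ([1.0, 1.0], 5.0), ([2.0, 1.0], 7.0)]
weights = train(data)
print(weights)  # should end up near [2.0, 3.0]
```

This is exactly the "curve fitting" the discussion mentions: real image models do the same thing with millions of weights instead of two, which is why they need so many labeled examples.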
And it does this over and over again, over a million images. After that, it has fine-tuned its weights so that it is able to recognize these categories of images. And so when you give it a new image, one it hadn't seen before, hopefully the weights are tweaked just right that it will be able to say: oh, this is what the new image is. So it's basically a regression model. That used to be how it worked. The problem with this is that not only do you have to provide it a million images, which is easy enough now, since Google has millions of images, so providing them to the AI is not the problem; the problem is that a human has to sit down and manually label each image. So the amount of training data you can give the AI is limited by how many humans you have to label each image. That becomes the limit. And the more data your model is trained on, the better the model gets; that should be obvious, right? But you're limited in data because you're limited by the human capacity to label these images. So in 2017, this new paper came up with a novel way of doing this training without humans being required to label, right? In a very simplistic way: if you have sequential data, like words in a sentence, what the model is trying to do is get the statistical correlation of different words in a sentence. It is trying to say: if this word comes, what is the statistical probability of this other word coming together with it, either close to that word in the sentence or away from it? At a very coarse level, that's what it is trying to do. Now, because it doesn't require labeled data, it is just doing the statistical correlation of different words in a sentence, you can pretty much feed it the whole internet, right? Petabytes of data.
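A toy version of that self-supervised idea can be written without any labels at all: the raw text itself supplies the training signal, because each word's neighbours are the "answers" the model learns from. The sketch below just counts which word follows which (a bigram model); real transformers learn far richer correlations, but the no-human-labeling property is the same. The tiny corpus here is invented for the example.

```python
# Self-supervision in miniature: no human labels, just raw sentences.
# Each word's next word is the training target, read off the text itself.
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count, for every word, which words tend to follow it."""
    following = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def most_likely_next(model, word):
    """The statistically most frequent word seen after `word`."""
    return model[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the cat chased the dog",
]
model = train_bigrams(corpus)
print(most_likely_next(model, "cat"))  # "sat" (follows "cat" twice, vs "chased" once)
```

Because nothing here needs a human annotator, the corpus can grow as large as you can store, which is exactly the leap the discussion describes.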
So it is now trained on much bigger data than what was previously possible. And once you do that, the AI starts performing far better than it used to earlier. So that is this new leap in technology, and all the big players are now experimenting with this new technology. Great, Bappa. Thanks a lot for joining us. Thanks. And that is all we have for you today. Thank you for watching Daily Debrief. Do come back to us tomorrow. You can visit our website for more People's Dispatch stories, and watch our regular updates on Facebook, Twitter and Instagram.