So today, the topic is vulnerable infrastructure and black and grey swan events. The lecture will take around 45 minutes and then we will have around 15 minutes of Q&A, so if you have interesting questions, feel free to ask them, of course. Today we really want to look into the idea that sometimes it becomes very evident that we live in a very vulnerable society. Recently that became very clear with the Nord Stream gas pipelines and the leaks that occurred there. And as we use technology more and more in many domains of our society, we become very dependent on the complex technological infrastructures that we have made. How should we actually prepare for such unlikely events, which, when they do occur, have very grave consequences? Someone who can tell us more about this is here today: Professor Henk Ackermans. He holds a chair in supply chain management at the Tilburg School of Economics and Management, and his research focuses on technology- and innovation-driven sectors, but also on healthcare and public infrastructure. So please give a big applause to Professor Henk Ackermans.

Thank you. Oh, these lights. Thank you. Good late afternoon, good evening, what shall we say? It's my pleasure to walk you through some pretty horrible stuff, but the pleasure is not in the horrible stuff itself; it's in the lessons that we may learn from it together. This lecture does not end with an exclamation mark, as in "this is the right answer". There are quite some question marks in between, as you would expect from a university professor and also from a Studium Generale course. There are very few definite answers. But with that disclaimer, let me see if I can start. Is this the next one? Yeah. Okay.

So this is the picture that Hannah was — well, at least the picture shows what Hannah was talking about. This is what you could see from the sky: gas bubbling up from the bottom of the sea, where a broken gas pipe lay. And this, I suppose, was the trigger for asking me to tell a bit more about this. The full title is not just critical infrastructure, but also grey and black swans, and I'll tell you more about those swans later on.

But let's start with something else — let's start with this one. A very nice, sunny, friendly picture. Does anyone know where this is? It's not far away from here. No frequent, long-time Tilburg residents, apparently. Where should I point? A bit closer. Okay, this is the Stappegoor swimming pool in Tilburg, on the other side of the city — perhaps 15 minutes by bicycle, 20 minutes by car. On a sunny day this looks like a very happy place, but it was a very unhappy place in October 2011. What happened then, indoors? Children were playing there. There is also the kikkerbadje, the shallow pool for small children, and a mother was there playing with her baby. And out of the ceiling came falling down a large piece of sound equipment. It killed the baby, and the mother was severely injured. That is, of course, incredibly sad.

What was the reason? The obvious question is: why did it come down? There was no act of sabotage, as you might expect — no, not as we think there has been with Nord Stream. Actually, it was something as simple as corrosion — rust. This equipment was attached to the ceiling with bolts, those bolts are made of metal, and metal corrodes over time.
And when it corrodes, it especially tends to corrode in acidic environments, with the chlorine coming up from the water, and in hot, humid environments with a lot of moisture — precisely the kind of environment you expect to be present in a pool.

So technology killed the baby? Well, not quite, actually — we'll return to that in a minute. But here's a more spectacular one. This is Grenfell Tower, in a fancy area of London. Last week in the UK, the report came out which concluded that all the deaths that occurred here — and you can see that not everybody came out of this alive; there were 72 of them — all 72 were avoidable. Yes, of course, in the end there is a technological cause: the cladding was flammable, and there was a lot of deferred maintenance, maintenance not done on time. But like in the Tilburg case, all this had been known.

In the Tilburg Stappegoor case, it had been known for quite some time that these bolts needed to be replaced, but they weren't. In the end there was even a purchase order to replace them — we now know that it would have cost €495 to replace those bolts with better ones, at least new ones. But it was cancelled by one department, because they said, well, that's the responsibility of the other department. And when you're in this field of maintenance and asset management, like ICAM, you can sort of understand it: if a wall falls down, that's the responsibility of the part of the organization that builds buildings; but if, I don't know, the toilets overflow, then that's not the responsibility of the party that built it, but of the one that operates it. And changing bolts sits somewhere in that boundary area. Anyway, ten years later nobody has been convicted for this, though there have been some lawsuits, et cetera. With Grenfell Tower, the same thing, basically.

So what you see is a tip-of-the-iceberg phenomenon. These technological infrastructures do, of course, in the end physically collapse and cost human lives in these examples, and there is a technical cause in the physical world. But in the end, it is often the way we organized the management of these technical infrastructures that makes them collapse. In the case of Stappegoor, the swimming pool: an unfortunate combination of responsibilities — not so much a gap as an overlap, really — but then who is to blame? In Grenfell Tower: repeated requests, et cetera — I can't go into every individual detail there — but again, it was well known that there was deferred maintenance, that it was dangerous, that people complained, and nobody did anything. So in the end it was humans, it was organization, that led to a situation in which atoms, in which the physical world, actually produced a disaster. Underneath the tip of the iceberg is a whole deep area of culture, attitudes, cognition — the soft side, the part of society that we at this university mostly focus on — that in the end led to the collapse of these structures.

Is that also true of Nord Stream? I wouldn't be surprised. I'm not an expert in this particular area, and I do not know whether we have had sabotage, a conscious demolition of critical infrastructure, before. Of course we had it in the medieval ages, et cetera, but have we had it in the last 50, 60 years?
Have there been people who said, you know, we should reinforce these structures better because they may be subject to terrorist attacks — and were those concerns, once known, and other internal reports then neglected because acting on them was too costly? I don't know yet; perhaps they weren't, and perhaps we'll never find out.

What is important, I think, as a difference between this case and the two previous ones, is that this is what they call a black swan event. The name comes from Nassim Nicholas Taleb, a writer on risk, and it is a name that sticks. The idea is that for a long time people thought all swans are white, so there is no such thing as a black swan — that can never happen. A black swan event is an event that is extremely unlikely to happen, but when it does happen, it is, in retrospect, completely explainable why it happened. Risk management people talk about low-likelihood, high-impact events. You could say it could have been a meteorite that came from the sky and hit us, and nobody knew about it. For now, this has happened only once in living memory — I'll return to memory later on. So it doesn't seem like there's an organizational issue underlying this technical failure. Warfare, probably, but we'll just have to find out. There are several examples, later on in this talk, where initially it seemed like, oh, this is just a very unfortunate accident, and later on — the 737 MAX of Boeing, I'll return to that one — it turned out that some very different things were happening.

Excuse me if I don't look you all in the face: I can't see a thing when I look out there, because the lights are pretty much in my eyes.

So this black swan — well, perhaps in many cases, like this topical one, sorry for my clumsy phrasing, perhaps they were grey swans. Grey swans are also mentioned by Taleb, and those are things that are still very unlikely, but we've seen them before. They have happened before, within living memory, only so long ago that we've almost forgotten about them. And how long is that? That depends. For us, COVID: the last time we had something like that, we have to go back to the big influenza after the First World War. But for Asia, SARS, et cetera, was within living memory. Perhaps not surprising that in Asia the preparation for, and reaction to, COVID was a lot faster than with us. So a grey swan is something that rarely happens and is still very unlikely, but we've seen it before, and you could take measures for it. Deferred maintenance in apartment buildings: pretty sure it happened before. Also the incident at Stappegoor: pretty sure that if you look back far enough, there are many corrosion incidents. Indeed, I think that in the process industry, of all major incidents in the Netherlands in the last 15 years, at least 30% have been caused by corrosion. That's of course an industrial setting, but we know that stuff rusts.

Here's another famous example. You are looking at a long island, almost three miles long, and that's why it's called Three Mile Island, I think. This was a serious accident, and it could have been so much worse.
In fact, the later analysis of this suggests that we were about half an hour away from a meltdown; one of the two cores actually had a really big problem. And some two million people around this area — it's in Pennsylvania, not New York; we'll get to New York later — received a higher dose of radioactive material than they should have. We came so close. And again, a combination of bad luck, sure: whenever these disasters occur, there is never just one thing that goes wrong. There are usually two or three things that go wrong at the same time, each of them pretty unlikely, and that combination even more unlikely. But always, in retrospect, they could have been prevented if some organizational countermeasures had been taken.

The word for this is the Swiss cheese model. I don't have a picture of a Swiss cheese, but the idea is that between us and something bad happening there is a screen, and like a slice of Swiss cheese, there are some holes in it. If you throw stones at it, the likelihood that one goes precisely through a hole is not so big. If you put another screen behind it, again with different holes in different places, the likelihood that a stone goes through both holes is even smaller; add a third screen, smaller still. That's the Swiss cheese analogy. So yes, every individual layer of defence — managerial measures, training of people, et cetera, as well as technical measures — has its limitations, but combined they make it so safe that the likelihood that something will happen is really, really small. And here we got pretty close.

The analysis of Three Mile Island led an American sociologist, Charles Perrow — let me check; Charles, thank you. Ladies and gentlemen, may I introduce Achille: not only the man who knows all the names and all the books, but also my collaborator in most of the research that I'm about to present in the coming slides. So this Mr. Perrow, from his analysis of this incident, came to a conclusion about what he called tightly... sorry, first of all, he said: accidents will happen. These infrastructures are so complex that sooner or later, if you wait long enough, something will get through these various Swiss cheese defensive layers; in actual fact, it's unavoidable. The more so, he said, as these kinds of technical infrastructures become what he calls tightly coupled. Here is an example of a tightly coupled structure: all those spheres — what do you call that thing in Brussels? The Atomium, thank you. If one of those spheres breaks down, the whole thing probably comes down. If you put the spheres nicely next to each other on the ground, then probably nothing will happen: that would be a loosely coupled structure. And his point is that our technological infrastructures — a nuclear power plant, or indeed an aeroplane, or a chemical plant — are tightly coupled, with very little redundancy to compensate. (By the way, a plane is full of redundancy, but let's not digress too much.) He said: in tightly coupled systems, accidents will happen. And the thing is that Achille and I agree with Mr. Perrow, but for a different reason. We agree that accidents will probably happen, but not for a technical reason — probably for an organizational one. That may be because we're organizational researchers, but we'll return to that. So there is this thought that these kinds of incidents will happen.
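To make the Swiss cheese intuition above concrete, here is a minimal sketch: each layer of defence fails to stop a hazard with some small, independent probability, so the chance that a hazard gets through every layer is the product of those probabilities. The layer probabilities and the helper function breach_probability are made-up illustrations, not figures from the lecture.

```python
# A minimal sketch of the Swiss cheese idea (all numbers are assumptions, not
# figures from the lecture): each defensive layer independently fails to stop
# a hazard with some small probability, so the chance that a hazard passes
# through ALL layers is the product of those probabilities.

def breach_probability(layer_failure_probs):
    """Probability that a single hazard slips through every defensive layer."""
    p = 1.0
    for q in layer_failure_probs:
        p *= q
    return p

layers = [0.05, 0.10, 0.02]   # assumed per-layer failure probabilities
print(f"{breach_probability(layers):.6f}")          # 0.000100 -> very safe, not zero

# Drift in action: quietly drop one layer (say, a warning system switched off
# because it "cries wolf" too often) and the breach probability jumps ~50x.
print(f"{breach_probability([0.05, 0.10]):.6f}")    # 0.005000
```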
Well, take Nord Stream. It's a very long, interconnected stretch of hundreds of kilometres of pipe on the bottom of the sea. Sooner or later something might happen. If not this, then perhaps some vessel that is busy doing something totally different — dragging dredging equipment over the seabed, or installing part of an offshore wind farm — you don't know; something may happen. And the idea was: we'd better get ready, because this will happen every now and again.

We have to talk about humans, because a lot of these incidents start with a human mistake, an error. So the idea is: if we automate more, we can take away many of these incidents. And that is true. If you look at the beginnings of what they call the human factors field: you would have two switches in a plane, one meant for going up, and the other one looking almost the same but doing something else. There was another type of plane where these controls were clearly separated, and the pilots who only flew the type with the nearly identical controls crashed much more often, because they mistook those two switches — which is something that I would do too. So there is much to be said for human error in many cases. Still, after 50, 60 years of commercial flying, half of the fatal incidents in aviation are attributed to human error.

Sure, but humans can also be heroes. This is — who knows where this is? Thank you, indeed. You can't join in, actually — you know the story, that's cheating. "I was there." Yeah? Oh, OK. In the plane? "I was in New York on that day." Oh, on that day, OK. Well, it's still a big place, bigger than the whole of the Netherlands, but still — OK, I see your point. Now why do I put this slide here? Well, you've got humans as a hazard, but you've also got humans as heroes. So if we say, ah, it's all caused by people making stupid mistakes, then let's not forget that in this case the pilot was a hero, avoiding a fatal disaster by landing his plane — well, not crashing it, but more or less landing it — on the water, which is great. So we can't just say it's individual humans with their cognitive problems and mistakes, although many of these big accidents are caused by that. There is recently some new evidence on the crash of the Air France plane that left Brazil, where a couple of unlikely events combined and in the end led to the complete loss of the plane. But humans can also be heroes. So humans are both hero and hazard; I'm quite hesitant to say it's simply the people who have done it. That doesn't really do it for us, I think.

So it's not technology alone. We said in the beginning that technology is what fails, and the more complex the technology becomes, the more likely it is to collapse if you don't take countermeasures — there are so many opportunities for these unlikely events. Humans can make mistakes, but they can also fix things in great ways, so I have to put humans in as a neutral factor. The real problem, I think, is drift. And this is my attempt at finding a picture of drift: drift is gradually moving, step by step. What is drift? Well, let's take this example. Achille, you can't say anything: who knows what this could be? It's not in New York. Well, OK — it's in the Gulf of Mexico. Deepwater Horizon; another great movie. This happened in 2010.
What you see then is that we have to talk about regulators and operators. All these dangerous industries — aerospace, nuclear energy, also, by the way, the transportation of gas; for us that's Gasunie, though they weren't involved in Nord Stream directly, not in that part at least — have their oversight. Because they are so dangerous, we have founded companies or institutions that actually check whether they are operating correctly. The thing is, though, that the people who have to keep this oversight work together with the people they oversee. They come from the same schools, they have a similar background. It's a job, not their life. Their agency is also often funded by the industry, or at least by the parts of government that subsidize it. And yes, we also need to make progress in the industry, we also need technological progress. So it's very difficult for a regulator to keep being a very annoying pain in the backside. Gradually you get more and more collaboration. And collaboration is good — here before you stands somebody whose scientific publications are mostly about the joys and benefits of collaboration, how wonderful it is; look me up. But sometimes it does more harm than good, especially in this relationship between regulator and operator.

The regulators in this case agreed to too much. Some of the safety systems were turned off because they gave warnings too often — false warnings, perhaps, though what is a false warning anyway? And still, many risks were taken in the drilling activity. See the movie and you'll see much more of that detail. So Deepwater Horizon is an example, I think — not just I think; we think, Achille and me — of too much collaboration between the companies actually running the operation and the institutions keeping the oversight.

It's also true of that horrible burning apartment building, Grenfell. There too, the institutions that had to keep oversight were negligent. And it's also true, by the way, of Stappegoor, the swimming pool. There the investigators found an urgent notice — the third one in a row — saying that this thing really needed to be fixed and that there had to be an immediate physical inspection. And initially the people at the swimming pool — we're talking Tilburg here, just down the road, not the Gulf of Mexico — said no, no, no physical inspection had taken place. Until somebody actually found the invoice for the hire of the high crane that had been used precisely to inspect it. So there was no longer any denying it: just two or three weeks before, it had been inspected, and the conclusion was that it was very dangerous. Nothing happened, because the budget wasn't there — because of some of my fellow neighbourhood members, I would almost say, because I live in the nice part of Tilburg where all the nice people live who also work at the municipality, and the friendly children go to field hockey, et cetera. But still, the decision to postpone those 495 euros led to the death of that baby. And that is a regulating role that was missed there.

So too much collaboration is a bad thing — and yet we think of collaboration as a great thing, and of course it is. These kids in the picture look like they are, quite rightly, collaborating very well. But there is this point where it becomes too much, where you forget to say: wait a minute. Yes, it's nice that we get along well, and in 95% of cases it actually works well — but in those 5%... And speaking of percentages:
Quite often — and I forgot to put that slide in — when you go to a big chemical plant, like Shell Moerdijk nearby, you see a big sign outside: so many days without a fatal accident, so many days without a major incident. They are really proud of the number of days on which nothing happened. But that says nothing. Taleb uses the comparison with turkeys — not chickens; the big birds that are roasted at Christmas. He says: if you made a graph of how happy and how confident the turkey is in his or her life, and also of how much they trust their owners, it would go up continuously from January all the way to mid-December, because for a whole year they only got great food and were cared for, et cetera. But it says nothing about what happens a few days before Christmas.

So this is the problem: the longer these things go on — the longer the great collaboration is very successful, the companies make good profits and no incidents really occur — the more you think, well, we can relax a little bit. And that's when the drift happens. Drift away from what you first said wasn't allowed. Well, actually, perhaps it isn't so unsafe, so let's do half of it. Let's not inspect all the time; let's lengthen the inspection interval; we don't have to go there physically every time. We can also let go of all these little things that initially were thought to be really important. The longer things go well, the more people are lured into the idea that it actually is well, that we can collaborate nicely, everybody's happy, everybody's making money. That is not ill will, that is not corruption, that is not sabotage; that is just human behaviour: when things have been going right for a long time, you think it's OK to leave them that way — because Christmas only happens once a year, and all the time before that you're safe, you're happy and everything is going well.

Time? I don't know — fine. The 737 MAX: this one crashed twice. The 737 — we've all flown the 737 — this is the new version, the MAX. Boeing is, together with Airbus, the biggest commercial aircraft manufacturer, with roughly the annual revenue of a smaller country; it's a really important company. When the first 737 MAX crashed — I think it was in Southeast Asia — it was unfortunate, but strange: a new plane. There was some talk about the pilots not being able to handle the plane, but that subsided soon enough. When the second one crashed — that was Ethiopia, I think, some months later — then it was clear. Initially this was all treated as a technical accident: we don't know what happened, probably pilot error, actually, because, you know, those countries and those pilots. Gradually it became clear that at the root of this was a fundamental design flaw. In the redesign of the 737 MAX, the distribution of weight in the plane became different. You could correct for that, but you had to correct it in software, in the automatic systems for taking off and landing — and that correction could also go wrong; I would have to look up precisely what happened, but it could go wrong. And if you were to say, no, that's not good enough, then they would have had to redesign the entire plane, and that's billions that you then lose. Somehow the regulators went along with it a little too long. This was a bit of a grey area.
Can you compensate for one design flaw with compensating measures in the software, or not? Yeah, well — is it a design flaw? It's a characteristic; it's not a flaw; the plane just behaves differently. And for a long time — in retrospect too long, as is now commonly agreed — the American aviation authorities went along with Boeing's statements that they were working on it, that there would be a patch. Then the second crash happened, and for a while all the 737 MAX planes were grounded, which of course is much more expensive than the original modification would have been — but at least very few people flew in those planes, and died in those planes, in the meantime. So: too close a collaboration. Again there is a technical cause, but also the collaboration, especially between the regulating authorities and the operator.

Whoops, that's a bit too fast. So what can you do about this? That's not so easy. As I said in the beginning, I have more question marks than exclamation marks. One of the few things that Achille and I see actually working is whistleblowing — blowing the whistle. The Dutch word is... what is the Dutch word anyway? Sorry? Klokkenluiden, thank you, yes, klokkenluiden. Same idea: you make a lot of noise, which alerts people. So if the regulator and the operator are collaborating too closely, then thankfully we still have all these individuals within the organization who can say: this is really going wrong. First you try it with the regulator, but that doesn't work, because they don't see it — and then often they leak to the press.

Our attitude towards whistleblowers is kind of ambiguous. In my work I also collaborate with many industrial organizations, for instance with NS. Two weeks ago I met them, and one of them said: yeah, we have this problem with a whistleblower; he's saying that we don't have our maintenance records in order — it was in a big newspaper — but actually he's wrong. I don't know whether that particular whistleblower is wrong; I'm pretty sure there are whistleblowers who are wrong. Sure, there are lots of such cases in the world, but it's still interesting. I said: yeah, but whistleblowing is good, isn't it? Yes, whistleblowing is good — there was no doubt that whistleblowing in general is good; it's just this particular one. Of course, I don't know if the people at Boeing would have said the same thing. But in general we agree that it's good that somebody, from their own conscience, speaks up when it's clear that the regulating mechanisms don't work. We've seen a lot of this with MeToo, et cetera, and with the recent TV revelations about sexual harassment: when you complain about what's happening to the people who should take care of you, and they don't, the only alternative is to go to the press. There are many examples of this, and that's an organizational issue. You see how far away we are from corrosion caused by acids and moisture in the air, or from terrorist attacks; we're really looking at how people behave in groups — organizational stuff.

So the idea we are getting at so far is that we have found that accidents will happen.
If, over time, the normal stuff goes well, then regulators and operators will always become more and more friendly with each other, as a result of which the low-likelihood thing — the grey swan, probably — the thing that very rarely happens, will eventually happen; then disaster occurs, and as a result of that, action may be taken. But can we prevent the disaster before that? Well, perhaps only through a whistleblower — although we find it very hard to find examples of whistleblowers who were actually believed before a disaster happened, rather than around or after it.

There are some counter-examples, though. Here is an example of someone I could have called a whistleblower. Does anyone recognize this very famous picture? The Challenger — excellent. Sorry, you're an expert. Oops. This comes from a book about the O-rings, by... I forget the man's name. Anyway, O-rings. What are O-rings, and what is the Challenger? The Challenger exploded shortly after launch — you can see some of the smoke rising — killing everybody on board, of course. And the reason was something as simple as the fact that it was chilly that morning. It was cold. Because it was cold, some of the rubber rings that seal one part of the rocket to the next became less flexible. There is a wonderful short video of a Nobel Prize winner in physics — Feynman, I believe — who illustrates this in a public hearing: he takes an ordinary O-ring, probably something from a hardware store, puts it in ice water for ten minutes and then shows the committee that it no longer works; it has lost its elasticity.

The O-rings were, of course, part of the design done by a subcontractor, not by the main contractor. And they worked perfectly in every test — but not in cold weather; there had not been a test in really freezing weather. There was a committee that in the end decides whether to launch. (We now have delayed launches again, of the latest NASA project, but this was back then.) In that room, with six people, they have to decide in the end: will we do this or not? And this man from the subcontractor said: no, because we do not know if it will work in cold weather. That's not the same as saying we know it will not work — we do not know; it hasn't been tested. So there was a lot of pressure on this man, because the launch had already been delayed. He also got phone calls from his own boss, saying: I just got a phone call from our customer, who is paying us several billions, and he knows you are being difficult on this topic — will you please be a bit more flexible on this? But in the end the guy wouldn't sign off, and they went along with it anyway. They went along with it. The next day, of course, a horrible accident.

And for a while the story was that everybody had agreed on this — we all agreed there was no problem — until in the investigation it became clear that a signature was missing. Then it gradually became clear what had happened, because the man at the centre of this story had been silenced by his own company and wasn't allowed to speak. So you can be a whistleblower — although he didn't go to the press with it — but you are disbelieved until it happens. And when is it believed? Afterwards, yes, afterwards.
So whistleblowers, we think and hope, are very hopeful and should be encouraged more. Perhaps what we can do as a society is improve the protection for them — and perhaps also get better at identifying whether they really are whistleblowers or just people holding a grudge against the company, because you have those too. But make the protection for whistleblowers better, like the witness protection programmes for organized crime that you see in the news and in TV series: when people testify against the head of some criminal business, great pains are taken not just to prevent them from being killed, but to make sure they stay alive long enough to give their testimony. So perhaps we should do more there. We don't know yet, and we are eagerly looking for examples where whistleblowers really made a difference before the disaster happened.

But even then: will the process stop completely? Will that lead to a situation in which there is no longer this gradual build-up of mistaken confidence, this drift into more and more collaboration where it really shouldn't be? Well, that depends on how often we let those whistleblowers do their thing. Oops — and there comes this picture, which came up in my Google search for men forgetting. So here somebody is forgetting something; I could of course easily have put a picture of myself up there, because I tend to forget things as well. But organizational forgetting is problematic — that is really the point here. If things go well for long enough, will we then automatically find them less important? We've seen that repeatedly. Also — because I teach, of course, in the business school — many of the boom-and-bust cycles we've seen have happened so often before, but every time, when the cycle has run long enough, the people who actually push the buttons in the trading rooms weren't around the last time it really hit big. I've seen this in many businesses: if it's more than a decade ago, it's lost; nobody who was there is still around. A nice thing about being a researcher, by the way: a while back I was called by the CEO of an industrial company — Henk, can you come over please, because we now have this big challenge, and the last time it happened, twelve years ago, you and I were the only ones who were there; perhaps you can explain to my current board members what was actually happening then and what we did. And I did. Then you're sort of an external brain pack for the organization, because the rest of them, apart from the CEO, have already forgotten about it. So, organizational forgetting: the longer organizations remember that this stuff happened, the more likely they are not to make the same mistakes again. Suppose the turkeys could live on after they were eaten — I know it's a silly example — then perhaps next year they would be less confident about what happens in early December.

Here is another way of making sure that memory stays concrete. This is the stormvloedkering, the storm surge barrier. I've witnessed it once in this type of weather, and that is great. We close the storm surge barrier only once or twice a year, and most of the time just to test it; it is rarely really needed. It cost billions, of course, to build. And why? Because over 2,000 people died. Why did they die? Was it because of the high tide, or because of people? Because of people, yes.
The maintenance of the dykes in this region had been very much deferred in the post-war period; the condition of the dykes was much worse than it should have been. So the likelihood that they might fail — a low likelihood, and it hadn't happened in a hundred years — was still there. And because it was so long ago, and because money was scarce — this was the 1950s; 1953, actually, thank you — there was very little interest in keeping everything very well organized. Then came this major catastrophe in Zeeland, and of course afterwards there was interest. And I think what you do here is really pour memory into concrete. Even if the people who designed and built these things are long gone, the structures are still there. So if we can make changes to our infrastructure that are so long-lasting that they outlive organizational memory, that is good news. I'd be interested to see whether, every time from now on that we put critical infrastructure into the sea, we will still completely ignore the possibility of sabotage — that would be interesting, but also pretty bad. By the way, don't forget that in 10 to 15 years we want two thirds of our entire energy supply to come from the North Sea, from our offshore wind farms. So will we, after these events, not look at any possibility of securing against sabotage, at backup systems? I think we will. But if it takes long enough, the people who designed it will probably have left — hopefully the structures are still there. So this could be a way of really lengthening the period in which it is sort of safe: by ingraining it into the physical infrastructure.

And here is my last suggestion: training. Let's not forget that war is very infrequent — we have one now, we had the Second World War, we had one in Europe in Yugoslavia — but the military trains every week, every day, for that event. Firemen train every day; they keep practising. The likelihood that your place or my place will burn down is very small, but the likelihood that no building at all in the Netherlands will burn down is very low indeed. In fact, yesterday I got a warning message that there was a big fire at the Nemostraat. There are tens of thousands of buildings in Tilburg; any one of them has a really small likelihood, but the collective likelihood is pretty high. You could say, of course, that the fire brigade trains because there is a fire all the time — and that is exactly the problem: the more unique an event becomes, the more difficult it is to train for it. But then perhaps training, remaining aware that something once happened, and always being on the lookout for it — a bit paranoid, you could say — is perhaps better. Although, I don't know: perhaps the turkey had a lovely life, not concerned by anything, until the final axe came, and being paranoid would not have changed that.

But in the context of critical infrastructures, I think training, making sure that you over-invest in safety, finding ways of limiting this organizational drift and this organizational forgetting, and promoting whistleblower activity — all these things together may perhaps make our world a little bit safer. And then still, of course, there is this big "still": we may have somebody in a submarine or a boat doing something we had never thought of. So I suppose, in the end, the question about Nord Stream — could we have prevented that? I think not. But perhaps in the coming years we will get more information showing that actually we could have, but we didn't, for some organizational reason.
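As a brief aside on the fire example above — each individual building has a tiny chance of burning, yet a fire somewhere in the city is near-certain — here is a minimal sketch of that collective-likelihood arithmetic. Both values (p_single and n_buildings) are assumed for illustration only, not figures from the lecture.

```python
# Hypothetical numbers: each building has a tiny yearly chance of a serious
# fire, but across tens of thousands of buildings a fire somewhere becomes
# almost certain -- the "collective likelihood" mentioned above.

p_single = 1e-4        # assumed yearly probability of a serious fire per building
n_buildings = 50_000   # assumed number of buildings in a city the size of Tilburg

p_none_anywhere = (1 - p_single) ** n_buildings   # no serious fire anywhere this year
p_somewhere = 1 - p_none_anywhere                 # at least one fire somewhere

print(f"Per building:          {p_single:.4%}")     # 0.0100%
print(f"Somewhere in the city: {p_somewhere:.1%}")  # ~99.3%
```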
So that's my presentation, my story. Questions? Yes.

All right, thank you very much. Are there any questions? Anyone? No? It was a very interesting lecture, right — maybe no questions have come up yet. I do have a question that I thought about when I heard your talk. You hear a lot these days about data-driven decision making, and about AI making big decisions, in the future or even now already. What do you think of that in relation to your story? Is that a good development, or should we also be very wary of it? Because then we are maybe not taking action into our own hands; we are also shifting away the responsibility.

Indeed, decision making becomes much more data-driven, and in the sense that we can now collect more information, more widely and better, that is better. However, it doesn't really change things, because the problem is still that we are looking at something which is very unlikely to happen — and how can you search for all these hundreds of grey swans? It remains an attitude that you need to have. And these systems can also have problems themselves. One example — an environmental example; we keep forgetting those, which is sad, because they worked. Remember that the spray you get from all sorts of spraying devices contained a gas, and as a result of that the ozone layer started to disappear — sorry that I don't have the English terminology at hand right now. Where did that hole in the ozone layer suddenly come from? Because suddenly there was a hole as big as France in the ozone layer near the pole. Well, the ozone layer was routinely monitored across the Earth. The thing is, because of a data issue, whenever there was no measurement, the algorithm said: let's take the measurement of the nearest point next to it. That was of course meant for small holes in the data — some data incompleteness, a pigeon flew over the satellite, I don't know — but after a while the algorithm was doing that for a hole the size of France. So after a while a human — and we are talking about the 1980s here, so no AI — a human actually noticed this, and as a result we became very alarmed about the ozone layer, and as a result of that we banned the use of these particular gases in our deodorants, et cetera.
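The sketch below is a toy reconstruction of the data issue as described in the answer above: a rule that fills a missing reading with the nearest available neighbour is harmless for an isolated dropout, but when a whole stretch of readings is missing the filled series looks perfectly normal and the anomaly disappears from view. The function fill_missing_nearest and all values are hypothetical; this is not the actual satellite processing code.

```python
# Toy illustration (made-up names and numbers): nearest-neighbour gap filling
# hides a large region of missing data instead of flagging it.

def fill_missing_nearest(readings):
    """Replace None values with the nearest available reading (left first, else right)."""
    filled = list(readings)
    for i, value in enumerate(filled):
        if value is None:
            left = next((filled[j] for j in range(i - 1, -1, -1) if filled[j] is not None), None)
            right = next((filled[j] for j in range(i + 1, len(filled)) if filled[j] is not None), None)
            filled[i] = left if left is not None else right
    return filled

# Ozone-column-like values along a transect; None marks "no measurement".
isolated_gap = [300, 305, None, 310, 308]           # one dropout: filling is harmless
large_gap    = [300, 305, None, None, None, 308]    # a whole region missing

print(fill_missing_nearest(isolated_gap))  # [300, 305, 305, 310, 308]
print(fill_missing_nearest(large_gap))     # [300, 305, 305, 305, 305, 308] -- looks normal
```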
Coming back to your question: AI and data-driven decision making still have implicit biases. It's not good, it's not bad; it's a fact of life. It's very convenient — every day, your day and mine is full of AI. Whenever we try to figure out where something is happening, or where we should go, or who actually said something, we use AI all the time. Cars are full of AI, and that's very convenient: automatically they see all sorts of things, and sometimes they are very stupidly wrong. So I see AI in this case as a special case of technology: it makes things more complicated, and it becomes more and more important that you understand what it is actually doing. The Air France flight that was lost completely, that fell out of the sky off Brazil, was an example where the pilots didn't really understand what the automated safety systems were doing.

But now another example — sorry, I have so many aerospace examples today, but they are very well documented; whistleblowing and accident analysis are very well developed in aviation. In the Netherlands, a decade or so ago, we had a plane from Turkish Airlines come down near Schiphol. I believe the cost in human life was limited, but the reason was an error in the system that had to calculate the height of the plane. The pilots knew that, so they ignored it: there was a wrong height reading in the plane. What they didn't know, however, was that at a certain height, automatically, to help the pilots, the flaps would come out and the landing gear would come out, preparing the plane to land. So here they were flying at 300 metres, the system said it was 80 metres or something like that, the flaps came out, and they simply fell out of the sky. So the problem with more and more technology — AI-driven or all this kind of control stuff — is that the control system may also be wrong, and then you need the human again. And we get used to this: look at how you and I use our iPhones, and you will find several examples where we know the device is probably wrong, so we ignore it — and ten seconds later we use it and trust it blindly, because this time it's probably right. That would be my long answer to your simple, short question.

All right, thank you. Did any other questions come up in the meantime? Yes: there is a lot of research into these big disasters, and a lot of reports and learning — does it actually help to prevent the next disasters?

There are numbers for many industries, and they flatten out. This is true for aerospace and for the process industries. If you start these statistics in the war period, then of course — but shortly after, fewer and fewer planes actually crashed, and fewer and fewer major chemical incidents occurred. But it flattens out; there is a level that perhaps we have to accept we can't get below. And that's probably because by then we've caught all the usual suspects, all the obvious things that can go wrong. If you visit a Shell or BP facility and you are walking around with a sandwich from the canteen, and you take the stairs without putting your hand on the railing, a stranger will come up to you: please put your hand on the railing — whereas the likelihood that something will really explode... They have that in the culture.
So very few people actually fall down or break a leg because they didn't keep their hand on the railing. But the more complex failures, the ones that involve the interaction of many very low-likelihood events — at Shell Moerdijk we had two major incidents, some five to eight years ago. One was a case where a new piece of equipment, a new cracker, was being started up, and they started it up the way they had done in the past, but it blew apart. Fortunately nobody died — luckily also because there was a shift change, so nobody was outside — but huge pieces of equipment were flying around. The other one, also in the news, was where one of the exhaust valves had actually been left open and some toxic material had been flowing out for weeks or months until they finally found out. It then turned out that the person who should have checked it hadn't done so, but had said yes — and the person who should have checked him had said yes too. Those two people were fired. But it's the whole idea of close collaboration again: of course these valves are closed all the time, no doubt they had done it 500 times before, and every time it was fine — and now they forgot.

So yes, those reports and analyses do help, but only to a certain extent. They do not help us with the grey and black swans; there we need organization and constant alertness. But many of the rules and regulations that come out of these analyses also improve designs, of course. If there is a technical cause, a 737 gets redesigned; it may fail in different ways, but no longer in the way it did before. So it helps, but up to a point.

All right, thank you, thank you for your question. I think we have to end it there, because it's already a quarter to six. Thank you very much, and please give a big applause for Henk Ackermans.