Well, thanks everyone for showing up. I know it's late in the day, you've had a lot of conversations and talks, and your brain is chock full of good information. So what better way to end the day than to talk about failure? We're going to talk about failure: how to fail successfully, reliably, and look good while doing it.

Now, we're all here because we care about free and open source software. That's the name of the conference. We are careful, compassionate, and passionate users of it. Many of us are contributors to it. All of us are advocates. And it's no coincidence that we are here at the Free University of Brussels. My French pronunciation is awful, but I know that libre is not the same as gratuit, and I appreciate languages where free as in free beer is distinguished from free as in freedom. Free software encompasses both of those meanings. It is free in the sense that you don't have to pay for it, but I think it's more important that we dwell on the second meaning: it is liberating. It allows you to experiment. It allows you to fail. And it's that notion of freedom that I want to emphasize and ask you to keep in mind. Because we know that in open source software, success and failure are essentially just two directions on the same road. We fail as often as we succeed. In fact, that's an understatement: we fail quite a bit more often than we succeed. And that's par for the course. It is just another turn in the road. There is nothing earth-shattering or even that odd about failing in open source software. We fail all the time. And of course, we are open about it. We share those failures, and we learn from them. That's the key thing about the open source community.

In that spirit of openness and sharing, what I'm going to ask you to do now, and we'll take a minute or so, is to write on those index cards. I passed some index cards out; if you didn't get one, or if you have a stack of them in front of you, please pass them around. Take a minute or so and write down a failure or two that you've had in your professional life. We'll keep it safe there; we're not going to get into failures in your personal life, although if you're feeling particularly jazzed up this late in the afternoon, you may do that as well. But on those index cards, write a failure, or more than one if you have them, that you encountered yourself. It has to be something personal. So let's take a minute to do that. You might have to share them with your colleague, but we won't publish them in the conference findings, that I can tell you for sure. Of course, if you read it out loud, it might get recorded and live streamed.

Well, thanks for indulging me. Hopefully some of you had something to write. Actually, let's do a quick straw poll. How many people found this exercise a little bit difficult? Show of hands? Okay. How many people found themselves thinking about stock examples of failure from the industry rather than something from their own experience? Anybody? Okay, that's good; at least people were able to concentrate on their own experience. Now, thank you for indulging me again. But you wouldn't be wrong to accuse me of being a little bit presumptuous in asking you to do that.
Because I asked you to do an exercise without first taking the liberty, and this is a scientific conference, obviously, of defining what failure is. So we have to come to terms, right? How would you even write down an example if you haven't yet decided what failure is? The Oxford Dictionary gives two or three definitions of failure. I don't like the first one; it's a circular definition: failure is lack of success. I guess success is lack of failure. That doesn't get us very far. I like the second and the third better: the omission of an expected or required action, and a lack or deficiency of a desirable quality. Those seem plausible; we can work with them. So allow me to use those two definitions from the Oxford Dictionary, add another phrase, one about a decisive action, and offer this working definition for the purpose of this conversation: a lack or deficiency of a desirable quality caused by a decisive action. You don't have to agree with this definition 100%, but let's use it for the remainder of the hour and see where we can go with it.

So let's say there is some measurable quality. No scales on the horizontal or vertical axes, because we want to keep this as general as possible. And let's say it's trending mostly in a positive direction over time; it's something you find valuable. And you have to make a decision. That's the key. If it trends up and down by itself, then really you could pass or fail at somebody else's whims, or, I guess, based on the vagaries of quantum mechanics, but you can't do anything about it. We're only interested in things you can control, over which you have some agency. So let's say you make a decision at some point. As a result of that decision, one of several things can happen. That thing you value can keep trending up, it can continue more or less along the path it was presumably going to follow anyway, or it can trend down. Those are essentially the three possibilities in general. And when the downward trend gets into negative territory, as in the value drops below some predetermined zero or break-even point, that is what you would call a failure. So failure is essentially something that you value going into negative territory as the outcome of a decision you make. That's the graphical representation of this working definition.

Now let's tease this definition apart into its components. And let's start with the one that I, frankly, somewhat gratuitously added at the end: a decisive action. The first thing you might say, especially since it's the part I added, is: well, duh, don't make stupid decisions and you won't fail. Decide better in advance. That's plausible, although some might call it naive, because you seldom know the complete outcome of the decisions you make. But let's say you know some outcomes, or ought to be able to know them. Here's the deal. I don't know what you all were doing almost three years ago, on February 28, 2017, and I actually don't know whether the Frankfurt region or the regions here in Europe were affected. I know what I was doing on that day: I was dealing with the outage of the us-east-1 region, which is region 1, by the way, the primary one; you come in first, you get named region 1, in AWS parlance.
That region was down, like out, as in dead. And it was so dead that Amazon's own health page suffered: if you went to AWS's health page, it was all green, because it was serving a cached copy from when everything was green. So of course, the first hour or so we spent trusting AWS's health page and looking far and wide into our own software for where it was failing. Only then, because while we're at work we don't usually check Twitter that much, did we catch up with the entire maelstrom that was happening on Twitter. Twitter, of course, does not tie itself to the us-east-1 region, so we were able to catch up. The point I'm making here is that you can only take this notion of "I will not make stupid decisions and fail" so far. AWS is not open source in the sense that we value here, but they're pretty big, and if they can fail, we should be humble and say we will, too. If you don't make a misstep, somebody on your team will, or somebody you depend on will, case in point, and failures will happen. So failures are a fact of life. You can't simply say "I will avoid them." It's not a workable strategy.

So then the thing to talk about is these two terms from systems engineering: mean time between failures and mean time to recovery. If you accept the notion that failures are a fact of life, then the thing we have to talk about is what we are going to do when we fail. The ideal system is the one that never fails. There is no such system; you'll accept that, and it's only up there as a point of reference. Given that the ideal will not happen, there is one kind of system you can imagine that doesn't fail very often, but when it does, it takes a long time to bring back up. It has a high mean time between failures, but also a high mean time to recovery. You can contrast that with a system that is low on both counts: it fails often, it does go down, but it takes, metaphorically speaking, a snap of your fingers to bring back up again. So, given that you can't flip a coin or make a wish and get the ideal, which one would you rather have? Any takers? Who would vote for the system with high mean time to recovery and high mean time between failures? One; sorry, "it depends on your use case," exactly. It's acceptable, why not? Exactly, it depends on the use case. There are no absolute answers. But assuming you could tolerate failure, would you rather have a failure of that type, or, any takers, the low MTTR and low MTBF? A couple more hands. And you're absolutely spot on: the use case determines it. If you can live with an hour-long outage and the failure happens very infrequently, maybe that's the right trade-off. That context is very important.

However, there are some human aspects to this that we should be aware of. A system where failure is easy to recover from favors experimentation and improvement. The use case you're talking about, very rightly, you can call the production behavior: what would you do in production? But think about it as developers, as people in the technology sector: how do you approach a system that exhibits each of those behaviors? Let's first put some rough numbers on the trade-off.
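Here is a small sketch in TypeScript, with hypothetical figures that are not from any real system, comparing the two profiles by steady-state availability, which is MTBF divided by MTBF plus MTTR:

```typescript
// Steady-state availability from MTBF and MTTR (both in hours).
// The figures below are hypothetical, purely for illustration.
interface SystemProfile {
  name: string;
  mtbfHours: number; // mean time between failures
  mttrHours: number; // mean time to recovery
}

function availability(p: SystemProfile): number {
  return p.mtbfHours / (p.mtbfHours + p.mttrHours);
}

const profiles: SystemProfile[] = [
  { name: "rarely fails, slow to recover", mtbfHours: 2000, mttrHours: 8 },
  { name: "fails often, recovers in seconds", mtbfHours: 20, mttrHours: 0.01 },
];

for (const p of profiles) {
  console.log(`${p.name}: ${(availability(p) * 100).toFixed(3)}% available`);
}
```

On paper, the system that fails often but recovers in seconds can even come out ahead. What the raw numbers don't capture is how people behave around each kind of system.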
A system that takes a long time to recover, you approach gingerly, with trepidation, with some fear: hey, if I bring it down, it's going to take me hours to bring it back up. A system where failure is difficult to recover from creates a culture of change minimization. You don't approach it at all; you just say, I'm not going to touch it. This is where your change request boards and all of that bureaucracy come in, because everybody is scared of failure. Knowing this distinction, and you can just look at your own psyche, or at how your team or your friends in technology, or even outside technology, behave: the people who say "if it ain't broke, don't fix it" are the people who know that when it does break, it's a pain in the neck to bring back up. The people who know they can fix a failure won't usually say "if it ain't broke, don't fix it"; they will ask, how can I improve it? It's human nature, and this brings us to our first ethic. If you don't think you can trip: you will, your friends will, your teammates will. We all trip, and what we want to learn is how we react to tripping. Error recovery is greater than error prevention. It's like the Agile Manifesto: both sides have value, but we favor the left side over the right. It's not that error prevention isn't a worthwhile thing, or that we should just fail wantonly; that's not the implication. But given a choice, we weigh error recovery over error prevention. We should invest time in recovering from errors, perhaps more so than in preventing them, okay?

This is a bike shed, and not a very impressive one. How many people think they can improve on it? I have no idea of your knowledge of building things, but a few hands are up, a few more hands, right? I mean, it's not great. I didn't make it, but it's rusty, I don't even know how it's standing up, and water has clearly caused damage, so whatever you store inside, including bikes, is going to be exposed to the weather. We can all see ways to improve it, right? So can I. So you're with me, those of you who raised your hands.

Now here's an analogy. This is the man page for sleep, which basically puts your current thread to sleep. How many people would like this sleep function to work so that you could put the thread to sleep for any value: milliseconds, microseconds, seconds, hours? Is that a reasonable behavior? People who think that's reasonable? Okay. Well, sorry to disappoint you: the Linux sleep function takes an argument which is a number of seconds. So even though you have ever faster clock cycles and whatnot, using this call you cannot make it sleep for less than a second. How many people think they can improve on that? Honestly, with today's computers? Of course you can, and you would not be surprised. This is an email from more than 20 years ago, from 1999; Linux obviously borrows this behavior from BSD. And in it, this notion of bike shedding is described. Everybody had an opinion on sleep(1), because it was easy to conceive of a sleep behavior where you could put it to sleep for a fraction of a second. That description is where the term bike shedding comes from. The idea is that, given a choice, and this actually happens, between asking for improvements to a nuclear reactor and improvements to a bike shed, more people have opinions on how to improve the bike shed than the nuclear reactor.
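And to be fair to everyone with an opinion, the sub-second sleep really is trivially easy to imagine. Here is a sketch of the behavior people wanted, written in TypeScript rather than C, purely as an illustration and not the actual API under discussion:

```typescript
// The behavior everyone could easily picture: a sleep that accepts
// sub-second durations (milliseconds here) instead of whole seconds.
function sleep(milliseconds: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, milliseconds));
}

async function demo(): Promise<void> {
  const start = Date.now();
  await sleep(250); // a quarter of a second
  console.log(`woke up after roughly ${Date.now() - start} ms`);
}

demo();
```

Everyone can picture building that, and that is exactly what makes it a bike shed.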
You don't have to be a nuclear scientist, or any kind of scientist, to know that mistakes or failures in building a nuclear reactor have a rather bigger impact than failures in fixing a bike shed. Yet the converse is true of human psychology. We tend to have more opinions about things we can all conceive of improving, like sleep(1); we can all very easily imagine how to improve it. But who will go and do all the things we heard about in the morning's keynote, the real hard problems of Linux? Probably very few of us; that would be a fair statement. And we have to remember that this is part of how we react to complexity. We react to complexity by stepping back, but when something looks simple we like to step in and have opinions. This brings us to the second ethic. When we talk about decisions that lead to failure: if a decision is merely an opinion trying to find a voice, curb it. Knowing the difference between a bike shed and a nuclear reactor is key to dealing with failure. It doesn't matter much if you fail to make a bike shed, even a leaky one like we saw. It doesn't really matter, the world has moved on, if sleep still can't take a fraction of a second; that's not the biggest problem confronting us. We have bigger problems. The ability to recognize where to pursue improvement and where not to is key to learning from failure. Okay?

Let's go back to the definition and look at a different part of it: the lack or the deficiency. That's the part that came directly from the dictionary. So how does everybody feel about some of these statements? This is a pretty savvy community, so show of hands; I'm going to run through a few, and just raise your hand if you agree. We favor smaller commits. Everybody should be raising their hand, really. More frequent feedback? Easy one. These are easy gimmes, right? Fewer manual tests? There might be some opinion there. Okay, that's good to see. Which, to be clear, doesn't mean less testing; it means more automation. More time for refactoring? You all must be doing a lot of refactoring already if only about five hands went up. Okay, these are all easy ones, maybe not that third one, but everybody more or less agrees in general. But here's the question: how would you know what "more" or "less" or "fewer" means unless you actually measure it? So measurement is key. Sometimes we make decisions and then question their value, but we have no data to back anything up. It's all gut feel and heuristics. That's not scientific at all.

For example, on the project I was on until December, branching strategy was a key question. There are different opinions on it: many people are vehement proponents or opponents of trunk-based commits, many people like feature branches, and then you get into the length of feature branches, short-lived, long-lived, et cetera. We decided to document the decision for our team, and we did something that made sense. Of course, I'm not advocating Confluence or Jira or any particular tool, certainly not a closed-source one; that's just what we used on that project. You can use an architecture decision record template based on Michael Nygard's work, which is openly available on GitHub. Actually, all things being equal, I would favor that, because I can keep it very close to my source code. It lives in the source tree, I can commit it, and my decisions stay versioned together with my source code. So it doesn't matter which tool you use.
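To give a flavor of the format: a minimal decision record along the lines of Nygard's template fits on half a page. The branching decision below is an invented example to show the shape, not the actual record from that project.

```markdown
# ADR 7: Use trunk-based development with short-lived branches

## Status
Accepted

## Context
The team is split between trunk-based commits and long-lived feature
branches; merge pain and delayed feedback are the qualities we care about.

## Decision
Commit to trunk behind feature flags; no branch lives longer than two days.

## Consequences
Smaller commits and faster feedback, in exchange for the overhead of
maintaining feature flags and keeping unfinished work safely disabled.
```

The record lives next to the code it describes, and it gets reviewed and versioned the same way.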
But documenting decisions is key. Why? Because software, and the decisions around software, like almost everything we do in life, involves a lot of storytelling. There's narrative, there's history, there's context. And this is not a knock on software, certainly not on open source software, which, by the way, is a lot better than closed source software because of all the eyeballs on it. But if you read a line of code, anybody who can read code can figure out what it does. Very seldom can you figure out why it does that. Why? Because it's a question of exclusion: why does it do this, as opposed to the ten other things I can imagine? Those ten things aren't listed, hopefully not even commented out; that would be bad code. They might be in the version history if you go back. But you might have a novel idea, and you will never, simply by reading the code, figure out why it works this way and not the other way you can think of. That lives in a campfire story in somebody's head. And if we don't tell and share those stories, they never get transmitted from person to person. That is key. That is the one thing in the open source community's favor: documentation. One thing we learned in the keynote is that there are too many mailing lists. I was actually clapping when that was said, because I've been on the other side, in closed source software, where there is no way to find out. If there's too much information, my problem might be that I have conflicting data and have to decide which to trust. It will never be: I don't know why it works this way, there's nothing documented, there are no tests, there's no person I can find, and I'm just lost. Then what do I do? I'll seethe inside, but I will never learn how not to make that mistake, how not to fail.

So for failure recovery, you have to measure and document the impact of your decisions first. If it's just a heuristic, just wishful thinking, just a gut feeling, it's not scientific, and it's not going to survive the test of time. And if you're wondering why measurement is so important and why I keep dwelling on it, you might be thinking, well, how do you measure developer happiness, how do you measure those things? You can, and I encourage you to. There's a book by Douglas Hubbard; he talks about how to measure anything, and he takes on the hard case: how to measure the value of a human life. Tough problem. You may think, well, that's verboten, that's forbidden, you shouldn't measure it, all lives are valuable. Well, then you have a triage situation in a military hospital. They do assign a value to human life. They divide the injured into three categories: people who will survive whether or not they get care, people who will die no matter what care you provide, and a third category, people who will survive if you provide care and die otherwise. And it's that third category that field hospitals and field medics address first. What is that, if not putting an actual value on human life? He starts with that example, and many others. So this notion of "I can't measure happiness, I can't measure satisfaction": I call nonsense on that. Try. It might be hard, but please do it. That is what makes your decisions traceable, and the value they provide compelling, as opposed to a mere opinion.

Okay, so let's go to the last bit of the definition, the notion of causality. Causality is important.
So let's say you're building a website, to take a very silly example, and you have two different looks and feels: one blue, one green. And for some reason you find that the green one elicits more clicks from users: a 72% click rate versus 52% when you change the user interface. Knowing that, you still don't know the causality, but at least knowing the correlation helps you. In the last talk, from Mukash, we heard about correlating tests, a brilliant idea, and I really want to encourage you to look at his work. Because correlation is the first step towards causation. It doesn't prove causation, but it at least hints at it. Every test that you write is a step towards determining at least correlation, if not causation.

Here's a user interface test from one of the projects I worked on, and if you don't like JavaScript, don't read anything other than the highlighted part. Basically it says: when I click on the left button, I expect that particular path to be emitted. Which is the graphical equivalent of saying: when the user clicks on this button, they will be taken to this URL. If that's the behavior I want, I can test it, in a provable style, and thereby form the correlation: action A happened, result B was observed. Or, if it's not observed, the test fails, obviously. Now, you can do better than that. These tests will probably run before you push software to production, but you can also do testing in production as a thing. Here's how you really do testing in production. My former colleague Dan Able: at his company they have a responsive website, or a mobile app, with a fallback. The login screen defaults to an old-style, HTML-1.0-looking screen when the primary one fails. And they track how many users are using that old screen, and they have an alert on it. In this particular instance, when 50% or more of the people were logging in using the older-style page, they knew, and they could go and fix it. This actually happened; it's from his blog, and I encourage you to read it. They were able to catch an actual production failure before users complained, because users probably just noted that the login page looked funnier than last time but they could still log in. The team knew something was wrong with their system because too many people were being redirected to the old page. That's how you make causality apparent: the indicator is 52% of people using the old-style page; the cause might be that the modern web page doesn't work, or the app is down. So I encourage you to look at that. This is the high-value stuff for determining causality.

Now, a word about causality. Those among you who are math aficionados, which should be all of you really, know that correlation and causation are not the same thing. However, correlation is a superset of causation. That means that if there is no correlation there cannot be causation, but the reverse is not true.
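To make that concrete with the silly blue-versus-green example, here is a rough sketch, using made-up sessions that roughly match those 72% and 52% click rates rather than any real data, computing the correlation between which color a visitor saw and whether they clicked:

```typescript
// Pearson correlation between "saw the green page" (0/1) and "clicked" (0/1).
// The sessions are fabricated to mirror the 72% vs 52% click rates above.
type Session = { green: number; clicked: number };

const sessions: Session[] = [
  ...Array.from({ length: 100 }, (_, i) => ({ green: 1, clicked: i < 72 ? 1 : 0 })),
  ...Array.from({ length: 100 }, (_, i) => ({ green: 0, clicked: i < 52 ? 1 : 0 })),
];

function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, varX = 0, varY = 0;
  for (let i = 0; i < n; i++) {
    cov += (xs[i] - meanX) * (ys[i] - meanY);
    varX += (xs[i] - meanX) ** 2;
    varY += (ys[i] - meanY) ** 2;
  }
  return cov / Math.sqrt(varX * varY);
}

const r = pearson(sessions.map(s => s.green), sessions.map(s => s.clicked));
console.log(`correlation between color and clicks: ${r.toFixed(2)}`);
```

A coefficient of zero would have ruled causation out entirely; the modest positive value this data produces merely leaves the door open.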
So if you can't establish any correlation between two things, if the coefficient is zero in mathematical parlance, then of course there is no causation. But the correlation could be one, or close to one, and it might still be happenstance, pure coincidence, or two things that co-vary because of some third, as yet unknown, cause. Here's the useful direction, though: if you can establish that you are outside the correlation circle, then clearly you are outside the causation circle as well. In simple terms, if you change the color of your website from green to blue and no change is observed, 50% of people were clicking before and 50% are clicking now, then there is no correlation and therefore no causation either. Outside the outer circle, by definition, means outside the inner circle as well. Causation is hard to pin down in real life because there are always so many factors. Especially with live testing, you never have the laboratory where you change only one variable; even in A/B testing it's really hard, because you never know about the extra factors you weren't measuring. But if you have excluded correlation, you have excluded causation. And this brings me to my fourth ethic: at minimum, establish correlation to understand the impact of decisions. You may not be able to establish causation in the real world, as I said; too many moving parts, too much complexity. But you should be able to establish correlation. Vary one thing and see what happens. It may or may not have caused the change, but at least you are better off than measuring nothing. And of course the limit to this is: skip the inconsequential decisions. Don't do this for things that are trivial.

So now I want to take a turn and talk about the virtue of sloth. Okay? This is a very famous quote from Donald Knuth. Even if you don't know the whole of it, I'm sure everybody has read this part: premature optimization is the root of all evil. There's a whole context around it. What people do not know is literally the next sentence in that book, which is that we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning; pardon the old use of gender, he will be wise to look carefully at the critical code, but only after that code has been identified. So 97% of the time it may not matter, but that 3% is very critical, and that brings me to the next ethic: you want to defer decisions until the last responsible moment, but you want to know when that is.

Here's an example some of you might be grappling with: should you upgrade your JDK from version 8 to a more recent version? It's been a very pressing problem for years now, right? I know 11 and 12 are out by now, but people still have 8. Well, you can put some scientific basis under that decision. This is from Oracle's website, the end of life for Java SE 8. It's coming at the end of the year, so it's literally counting down by months now, ten or eleven months and counting, maybe ten. And we can see that there are no plans to deprecate it on the desktop, so you might say, okay, maybe I've got a little more time. Well then, what about the security implications? You can go to, for example, the CVE details for the JDK, and you see there are a lot of bugs in the JDK, and some of them are high severity, but then you can see that some of these affect other versions of the JDK as well.
Some are low priority and are also present in JDK 11, 12, and 13, so upgrading to 11 or 12 may not solve those; but then there are others, at the bottom, that apply only to that particular older version, JDK 8. So there's a lot of information there, and while the decision matrix, should I or shouldn't I, now, later, by December or after, is still going to be somewhat subjective, there are data points you can use. It doesn't have to be literally seat-of-the-pants flying.

So this is what it might look like. You're deriving some value out of something; the value is going up, and at some point it's going to go down and eventually go negative. Where you make the decision will have an impact. If you make that decision here, whether that's changing the JDK or whatever, the curve might dip initially and then go up. If you do it a little later, you'll derive a little more benefit from the upward part, but it will cost you more. And if you do it really late, you're very close to the value actually going negative; you may have had all that time to decide, and you may have used it, but you'll be very close to zero or even negative value after acting on your decision. That variation in when you decide will have an impact on what transpires. And that is the ethic of deferring decisions until the last responsible moment. But it comes with a lemma: and no later. How do you know when that is? Well, if you're that close to the iceberg, there's really no point in turning the steering wheel; you might want to hit it head-on rather than scrape it. And there is a scientific notion for this, which is satisficing. It's a portmanteau, a word combined from "satisfy" and "suffice." It comes from a paper by Herbert Simon, for which he won the Nobel Prize in Economics, several decades ago in fact. It's a long paper; there's a link in the conference slides, which I think will be made available later. But here's the quote that matters, the money quote as they say: decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world; neither approach, in general, dominates the other, and both have continued to coexist. So you can defer, make your model more realistic, and find a more ideal solution for it; or you can simplify your world, replace the horses with circles as the joke goes, and decide earlier. Neither approach is perfect, and certainly neither dominates the other. And that is the key to determining what to do.

Now, bringing this to a close: the decisions we make, even the failures we make, are really a fork in the road. It's not perfect versus imperfect. Perfect is not just the enemy of the good; it's actually the enemy of progress, especially in open source land, because we are compelled to make decisions, individual and collective, every minute of our lives. And we're not alone in this. Financial statements come with that caveat as well: past behavior is no indicator of future success or failure.
If you look at any of your financial statements, there's fine print at the bottom; all the stock market disclosures say it: past performance is no predictor of future results. And that brings us to perhaps the most important ethic. When you're making a decision based on what you know, and looking at one of the paths that decision might take, what you really don't know at that point is how things would have turned out had you just, laissez-faire style, let things roll. That only comes with the benefit of hindsight, and nobody has 20/20 foresight; everybody has 20/20 hindsight. At the point where you're making the decision, you have no idea what's going to happen. You make a decision and you measure its impact; of course, that's what you do. But here's a mistake we make because we are human: the what-if thinking. What if I had made a different decision, or no decision at all? We end up comparing real costs with imaginary benefits. After the fact, especially when a decision doesn't turn out the way you wanted or wished, you say: ah, that decision was the culprit; if I just hadn't made it, or had made a different one, I wouldn't be where I am right now. You can't do that, because you know the reality of this world, but you don't know the reality of the alternate world, right? So beware of comparing real results with imaginary what-if benefits; opportunity costs are a real thing, and there are many other reasons it's not a fair comparison. Okay?

So now, with this hindsight, in the remaining ten minutes, what I would like you to do is take your neighbor's card, so hopefully what you wrote there is at least safe for your neighbor to see, and talk about what you would do differently after having heard this. And if you didn't write anything, or if you came in late, here are some failure scenarios you can seed your conversation with. Is it a failure that OSS licenses didn't anticipate the services that get built on them? Is it a failure, and an irony, that my personal information can be locked behind a proprietary database built using open source software? I don't own the data, but the software used to capture my data was open source. Is it a failure that many open source projects, by some estimates 90% or more, have no committers after a year? Or the easy one: is it a failure that I'm showing this presentation to you using closed source software on a closed source operating system? Don't pick that one; the answer is easy: I should have used LibreOffice on Linux, right? Okay, so take a minute and let's do that. That's my last exercise, okay?

I think the decibel level in the room is telling me that people are having interesting conversations. Can you hear me? Okay, so thanks for that. It's really good that you're talking about failures in such an energetic and vociferous way. That's what I wanted to get you into: talking about failures. Because as I said, success and failure in open source land are really two sides of the same coin; perhaps the coin is even heavily weighted toward failure, because we fail more often than we succeed. So I'm very heartened and very pleased that people were talking about, well, I'm assuming failures, because that's what this whole talk is about. Keep doing that. Keep talking about failures, sharing the whys, the whats, the campfire stories, as I said. All of those things are important.
Failure is important. But there is a distinction, a slightly sober one, that we need to make. How many people know what statue that is? Yeah, people think it's something else; I think there is a similar one. This is actually Abel. And Abel obviously failed, but here's the deal: his failure was not recoverable. That's why all he had was regrets. And that's the key thing to avoid. Fail often, but avoid regrets. Regrets are the things that eat you up inside, where you say, I wish I hadn't done that. Time machines, I don't think, are possible, and even if they are theoretically possible, we don't have them yet, so you literally cannot go back in time and change a decision you made in the past. There's no way to do that. If that's the source of your regret, let it go: fail, but don't have regrets.

So here are the seven ethics, all on one slide, if you want to take a picture; hopefully it's legible from the back, and I'm sure the slides will be shared later on as well. And of course, continue the conversation on the website. Here are the various ways to stalk me: I'm not particularly active on Twitter, but please do reach out to me on LinkedIn, follow my blog, or email me. Thank you so much; you have all been awesome. Give yourselves a round of applause. And I believe we have a few minutes for questions; I will repeat the question. There are only two things that might have happened: either I convinced everybody or I convinced nobody. I'm going to be an optimist and go with the former. But if you do have questions or criticisms, by all means approach me now or later. Otherwise, I know we have to end about five minutes early for people to clean up. Thank you again for being a wonderful audience.