I'd like to, actually, appreciate the organizers. I mean, the flood was something else, but I don't think everybody realizes how thoroughly they took care of the problem. When I got to my hotel room over the weekend, there was already a sign warning about Southeast Queensland's droughts. So, you know, great job, guys: country of extremes, drought and flood, all right. So here's kind of a list of the stuff I'm going to go through. I'm not sure we'll get to all of it; that's okay, we'll have another chance later this afternoon. There's been kind of a divide in the past several decades between the people who do concurrency theory and the people who actually write code, and there's been some pain over this on both sides. Of course, I'm a practice guy, so I'll be presenting from that viewpoint, but hopefully we can learn something from this. So the theoreticians have some correctness criteria, that's what they call them. Linearizability is a very key one, one they hold very dear. The idea is that everything happens in an order. You might not know exactly what the order is, but everybody observing the system will agree on some subset of the orderings; there will be a partial order that everybody agrees on. And this is a really nice thing, it makes your proofs simple, but it comes at a price. This shouldn't be controversial. In fact, Herlihy and Shavit are of course eminent theoreticians; I've met Herlihy, he's a very impressive guy, actually very practical as well. And this is from a '96 paper, a long time ago: "Finally, we proved that these trade-offs are inescapable: an ideal linearizable counting algorithm is impossible. Since ideal nonlinearizable counting algorithms exist, these results establish a substantial complexity gap between linearizable and nonlinearizable counting." So if you're going to insist on linearizability, you're going to pay for it in terms of scalability, performance, and energy efficiency.
So it's worth asking what we do and how far we go. Let's start with the universe. Is it linearizable? Is the universe linearizable? That's an interesting theoretical question. My opinion, and I'm not a physicist, is no. I mean, the planets waiting around for each other to move? I don't think so, okay? Let alone the galaxies. So is linearizability useless? Is there any purpose for it, aside from making the theoreticians' proofs easier? So let me make sure I understood you. What you're saying is that linearizability is valuable because it allows ordinary programmers to reason about what they're doing. Did I get that right? Okay. So not just proofs for theoreticians, but also confidence of correctness for practitioners. Okay. Any other thoughts? Yeah? Okay, so maybe not the universe, but in software we can enforce linearizability if we want to, if it makes things convenient. That's certainly true as well. Okay. And yeah, of course it's not useless. Where it applies, as the gentleman here said, it simplifies analysis and verification for theoreticians and practitioners alike. And most of the time it doesn't cost much; it works out pretty well. If you have things that are totally independent, then no matter what you do with them, they're linearizable, because there's no way to argue about what order they happened in, so there's no need to worry about it. You get the benefit of linearizability at no cost. And when you get something for nothing, that's a very good price, right? So the way I see it, in the concurrent programmer's toolbox, linearizability is kind of like a hammer. Really useful tool. A lot of really fun uses, too, actually. And if you have nails, it's a great thing. If you're trying to thread nuts onto bolts, it may not be your best choice. And if you're trying to repair one of these guys, well, if it's mine, please don't, all right? The thing is this. Yeah, go ahead.
I was thinking that maybe the better question would be: is linearizability necessary? Is linearizability necessary? Hopefully not; maybe I'll establish that. We have some things that actually work without it. Again, if you don't have a hammer, you can probably get by. I mean, if you have a big, heavy wrench, you can make it work fairly well. But it's a useful tool, and I'm okay with it being a tool. I'm not okay with it being a religion. So if you have small critical code paths, then the combinatorial explosion that non-linearizability can hit you with is less important. If n is small, then the fact that you've got to raise something to the nth power is not that big a deal. If you interface to the outside world, then linearizability may be a very expensive fiction. It may not be helping you at all, and it may be hurting your performance and your scalability greatly. And the poster child for this is routing tables, all right? I feel kind of funny giving this example with Dr. Cerf in the crowd. But the thing is, you've got these internet routing protocols, and something happens way over there that's going to change where you send your packets. And it may take tens of seconds or minutes for that news to get to you. And there may be several such changes happening; by the time they reach you, you probably have no clue which one happened first. And you've been sending packets the wrong way for tens of seconds or minutes, so what's an extra millisecond or so? Probably not a big deal. Statistics gathering is another big one. So we've got this thing where we're counting the number of packets coming in, since we're on networking, or maybe counting packet bytes, which is more important. Why does it matter exactly which one came in which order? We don't know. We can't know. Okay, what does it even mean for a packet to arrive? Does it mean the first bit hit the interface? Does it mean the interface card received the packet?
Does it mean we got the interrupt? Does it mean the packet was delivered to user space? Who knows? So why bother trying? Just say, you know, it arrived some time, and here it is. So linearizability is often the right tool for the job, but not always. So when do you not want linearizability? Extreme real-time response, where the overhead and the scalability may hurt you pretty badly. I'm not going to go into this in much detail, but a recent paper being presented this week in Austin, Texas talks about something called strong non-commutativity. And if you have that, if you have a pair of primitives where the action of one influences the result of the other, in both directions, then either you're slow, or you're non-linearizable, or you're non-deterministic. Those are your choices. And of course, statistics gathering is a big one where linearizability is something you drop on the floor. And yes, you can often transform a non-linearizable algorithm, change its semantics so you get a different semantics that is linearizable. You can do that. You can also describe planetary motions using epicycles, all right? You laugh. This guy named Ptolemy, a long, long time ago, came up with this totally bogus model for how the planets move, based on a lifetime of observations. And it was a thousand freaking years before anybody knew he was wrong. We should all be wrong like Ptolemy, I mean, you know, come on. Also, what do we need going forward? A major motivation, as the gentleman pointed out, not just for theoreticians but for practitioners as well, is that this simplifies proofs in practice. It simplifies knowing that you got it right, however you do it. But you know, I do proofs. I do them mechanically, all right? I use a thing called Promela. It may not be the best thing in the world, but it's been around for 20 years and I've gotten used to it.
I had a problem a few years ago involving a real-time primitive in the Linux kernel where somebody was saying, look, I don't believe that this thing is partitionable; I want you to prove it with four CPUs. And I'm going, look, my machine only holds three CPUs' worth of state space. I don't care; I don't believe you. So fortunately, there was a hardware guy who said, hey, I think I can help. He translated my Promela code into HDL and ran it through some of the hardware validation suites. Those things had absolutely no problem with it; it was just a total non-event to prove it. And so I think that one of the ways we can tolerate making use of non-linearizability where we need it is to take some of the hardware techniques for proof of correctness and apply them to software. I think there's a lot of ground we can gain there. Another thing is that a lot of the focus has been on looking at very small things and making them parallel. This is not very helpful. If you have a very small thing and you split it up, you're very likely to have the communication overhead overwhelm the parallel gains. So working at the application level, where you take something very large and break it up, is more likely to give you benefit. And this is not just me: there's an article by Patterson, I wouldn't say last summer, it was last winter here, where he talked about that and advocated it, which is very nice to see. So what happens? I work down at the OS kernel level, sort of in the middle there. The farther down you go, the more general you have to be, the more you have to worry about performance, and the less you worry about productivity, because of the number of users you have. The more users you have, the more it makes sense to invest a huge amount of effort into improving things a little bit for a huge number of people. Back when I was working my way through college the first time, I was at the very top, at the application level. I had two users. Two.
They were about the same age as my mother, now that you mention it; there were a couple of ladies who typed input into the housing program I maintained, which did room assignment and billing for the students at the university. And so I wasn't paid exactly a huge amount as a student programmer, but there was a limit to how much it was worth investing in improving the world for those two users. And so what we'll have is that as we go up the stack, we'll want to focus more on productivity. We'll want things that allow people to quickly reason about their programs, probably much, much higher-level things than linearizability. Linearizability is something that's way down here for most things, okay? So more application-specific facts, theorems, and properties are what people will need to pay attention to. And I don't think we've really scratched the surface there, because people love generality, especially academics. And that's, I think, been a big obstacle. Okay, I'm going to skip commutativity because I'm running long, and lock freedom and wait freedom too. This is one of my pet peeves right now. They call these things correctness criteria, and what that usually implies is that if you're not linearizable, or if you're not commutative or whatever, you're incorrect. And I disagree. How about "properties"? I mean, these are valuable properties, I'm not saying they're worthless, but they're properties, not correctness criteria. I'll get off my soapbox now. It's easy to forget simple things; we have funny names for these. One of them is that you need to understand the properties of the underlying hardware and software. And that's somewhat of an acquired taste in many places. But look at other fields that have been around for millennia. If somebody designs a bridge and they don't realize that concrete is weak in tension, do you want to drive across that bridge? I don't, okay?
And so, if we wouldn't accept that in these ancient professions, why would we trust an algorithm designed by somebody who didn't understand the underlying software and hardware it's based on? How's that going to work? And yes, these two properties do change with time, but these changes really have dramatically affected how you design your software. On the first computer I used, you'd kill yourself to get a floating-point operation out of a loop, because they were expensive. And the favorite trick was to turn them into table lookups. Well, these days, the floating-point arithmetic is probably faster than the table lookup, if the table lookup involves a cache miss. I mean, you know, these things do change, and it does make a difference in how you work. You know, get used to it. And there are, I'm not going to go through these, a bunch of really simple things you can do. Partitioning gets called "embarrassingly parallel," and I'm telling you it's an embarrassment of riches. The only thing embarrassing about it is if you don't use it when you should have. I think the biggest thing we could use a lot of help with is tooling to find bottlenecks and other problems. The Linux kernel is actually in fairly good shape compared to most applications: it's got things like lockdep, it's got perf, and a bunch of other things like that. But there are still a lot of things that are hard to track down in the kernel, and the application level usually doesn't have anywhere near this kind of instrumentation. Could you explain your comment that pipelining is simple but can greatly reduce synchronization overhead? I would have thought you'd say greatly increase. Could you just clarify? So on pipelining, it's a mixed bag. You can get in trouble; you can make things hard. But in the classic case, where you have an embedded system, you have this one processor that's just doing its thing, and it drops its output into a buffer.
And the synchronization is fairly minimal for just this straight single-producer, single-consumer thing. If you have that sort of regime, you can end up with reasonably small synchronization overhead. It can get complicated, it can get hard, but there are really important, really common special cases where it's dead simple and pretty fast. Especially if the processing you're doing at each node is really big compared to the amount of data, of course; that's the wonderful case. But if you're just doing a single add to a byte and passing it on, yes, you'll be in trouble. Agreed. OK, I'm running short of time, so what I'm going to do is jump ahead to here. Let's suppose there's a randomly selected human being. You don't know anything about them, except that they're a human being. Might be somebody in this room. Might be somebody in Africa. Might be a head of state. Might be anybody. What one change would you make in this person's life? There was one answer right here: more memory. More memory, OK, that's one. There's this thing, drinking to forget; I'd like to remind you of it. It's not a bad choice, but it may not be universal. Any other guesses? Say hello? Speaking to them may be the wrong thing to do, depending on the language they speak, and where they are, and what the culture is where they are. Greet them? We don't know what language they speak. OK, so the change you would make is you would greet them. If they don't speak your language, will that help? Get out of the conference room? Out of the comfort zone. Out of the comfort zone, OK, that's fine either way you take it. Well, it's sunny outside, so my first interpretation might be pretty good for today. But get out of your comfort zone: that depends on what getting out of the comfort zone means. If they're living at subsistence level, they've got to keep doing what they're doing; their margin of error is extremely thin. Give them the wisdom of over 30 years of experience?
When you were a teenager, would you really have wanted that? Are you sure you would have wanted it then? Or would you rather have it now? Just asking. Give them a way to live carbon neutral? Might be good, depending. I apologize for my expression when I heard that, but I have to admit that I very clearly remember the 70s. I don't know if you're aware of what the prognosis was in the 70s: we were entering an ice age, it was all over the headlines. So I'm a little skeptical in that sense. Yeah, maybe a good thing. Yes. Give them more time? Give them more time, unless they're bored. Yeah, or in jail. Or in jail, yeah. I have one: I would help them eliminate the need for sleep, because it's a waste of time. Thomas Edison was right there with that. I could identify with that, except that I've found that sleep is the greatest debugging aid I've come across. I think you really can't do anything unless you know something about them. So, I like my contributions to the Linux community, but the only reason I've been able to make them is because I've lived among that community. Not that I've gone to every one of their houses and all that. But I think if you're really going to make a meaningful change in somebody's life, you have to have some understanding of the person you're helping. And so I'm hoping that get-togethers like this will help. I think we're mostly practitioners here. But hopefully you get to know us, and we get to know you, and we can actually help each other out. I think the biggest problem we've had between the theoreticians and the practitioners over the past several decades is that there's been almost no understanding and contact between them. So hopefully we'll fix that problem. Anyway, I think that's pretty much it, as far as what we've got. But, read these out? Oh, OK, if you insist. Bullet number three: identify non-strongly non-commutative algorithms. Oh, OK.
In other words, what the heck are non-strongly non-commutative algorithms? I have a list of them. OK, so for example, if you are doing set membership, putting things into bags or heaps or whatever you call them, don't return whether the element was already there, and don't say when you did the add or delete. In some cases, that can allow you to use more scalable and more efficient algorithms. Because the thing is, if you have two things going on, one adding something and the other deleting something, and they don't have to report whether or not the thing was there, then they don't have to coordinate with each other as much. And therefore you can, in some cases, use cheaper implementation strategies. Although I think that in that case, avoiding linearizability may be more helpful, but I've got that on the next bullet item. So Attiya, and one of the other authors is Maged Michael; I've worked with him a long time. It's a really cool paper. It takes a little bit of work getting through it, and they've got a few places where they overreach a little bit. But it's really, I'm not going to say it captured my intuition as a parallel programmer, but it's the first thing I've come across that came within a time zone or two. One thing that Paul said a few minutes ago was about getting to know each other. And when we organized the first mini-conf in Wellington last year, what really made me feel proud was not only the caliber of the presenters, but also that someone told me (an anonymous guy; I don't know why geeks don't carry business cards, but whatever), he told me, hey, we have a community here. So I honestly expect to see you all together in 2012, and you are all invited to speak too, assuming that the mini-conf is accepted again. That's exactly how we are trying to get together. I even invited David Patterson to the mini-conf. He just said no. Are you saying something nice about him? I don't understand. Precisely. So, questions.
So we will have Paul again at half past one for his main talk, which is, exactly: is parallel programming hard, and if so, why? And I believe that if you don't have the answer then, you can keep asking him at five at the panel. We will also skip the lightning talk we would have had now at 11, which will instead be a little bit condensed at half past two or so. It's by a bunch of clever guys from a startup called Open Parallel, and it will also be presented on Friday, I think at 11-something, in room 101. It's about the integration of Intel Threading Building Blocks into Facebook's HipHop. So that will be presented briefly in the lightning talk at exactly 25 past two.