So you can only do the main session of the day. Olve is here for a week. He's got a fairly fascinating set of slides. I'm sure some of you must have seen them on Slideshare. How many of you actually looked at the slides before you came here? So that's a fair number of you. So I trust you're going to have a fairly interesting 90 minutes ahead of you. Over to you. Excellent. Thank you very much. Kiran, thanks for inviting me here. You know, for me, C and C++ has become kind of a mission: we have discovered something and we need to tell the world about it, because most people don't know this. Even experienced C programmers struggle with some of the concepts that I'm going to present today. So, as a missionary, I take every opportunity I can to spread the gospel. So thanks very much. So more than half of you raised your hands when asked if you had seen the Deep C slides on Slideshare. But just checking, how many C programmers in the room? Who programs C? But first, how many have ever written a C program? Okay, that's almost everyone. How many programmed C during the last week? Okay, a few. Cool. And how many have done C++ in the last week? Okay, so there are a few C++ programmers as well. I will add, this is not so much about C++, but everything I talk about today, or actually almost everything, also applies to C++. And I have a few slides at the end which are more or less only about C++, but I'll come to that. So, just to get started: as programmers, we feel that we are masters of the universe. Oh yeah, oh yeah, I know how to command this machine to do whatever I want. But the more experience you get, the more often you see what happens once in a while: you get bitten. And sometimes you get bitten by plain, unintentional bugs.
And sometimes you get bitten because there was something you didn't really know that you should have known. If you had known it, you wouldn't have run into the problem in the first place. And this is the motivation and the reason why we developed this slide deck. I'm doing this work together with Jon Jagger. Because we realized that most programmers are not able to reason about a piece of code like this. So, I'm now going to ask you to make sure that you sit next to someone so that you can discuss together what you think will happen if you compile and execute this program. So, make sure you discuss with someone close to you. I would like to hear something, so please start the discussion with the person next to you. Try to reason about the execution flow and the values you can relate to this slide, and so on. And please find pen and paper. If you don't have pen and paper, for you people in the back row we have some paper at hand, so that you can write down what you think it will be. Write down what you think it will print. And for those who are not discussing with partners now, I really encourage you to do so. You'll take so much more from the rest of the exercises that we're going to do. So, please start discussing. Okay, you've had the chance to look at this example now. Anyone want to give a comment about this code? No return. You say it doesn't return? It doesn't have a return. Oh yeah, that's a valid comment, but this is C99. So for the last 14 years you haven't had to add a return at the end of main, actually. C99 adopted this from the C++ standard: if you don't return from main, it will always return success to the execution environment. So this is perfectly valid code, when it comes to the return, yes. All right. Anyone want to suggest a number? What is going to be printed out, do you think? Nine. Nine? Over here? 11. 11? Anyone better?
What did you say? 10. 10? 7. 7? 13. 13. Oh, you are the top bidder now. 13. Anyone better than 13? 15. 15. Oh, no, no, I don't think that's going to work. I was just trying to compete with them. All right, okay. Does anyone want to say more than 15? Because then you win. The thing is, this code can print out absolutely anything. And absolutely anything can happen with this code. If you write this in Java, you are guaranteed a particular result. If you write it in C, all bets are off. Anything can happen. Now, I happen to have three different compilers on my machine. If I compile it with the GCC version that comes with Mac OS, the operating system, which is GCC 4.2.1, a fairly old one, but still the one that ships with the operating system, I get 12. If I compile it with Clang, which is now kind of the default favorite compiler on the Mac, I get 11. If I compile it with the official, deeply serious Intel compiler, I get 13. What is the problem here? What is it that we, or most programmers, don't understand? Does plus plus have a defined order? Yes, you are onto something there. I have another suggestion: when the compiler makes tokens out of it, does it go by precedence, or left to right, or right to left? Whether it traverses left to right makes a difference. You are certainly onto something. It has nothing to do with precedence, but it has to do with evaluation order, which is what you are onto now. The evaluation order in C and C++ is not specified. That means, exactly as you say, you will not know when the side effects take place. There is no rule that dictates when the side effects take place while evaluating an expression. Well, there are hardly any rules. So, how can we... This is dangerous, isn't it? There is no warning here. What should we do?
C programmers know that we have to try to get some warnings. How can we do that? -Wall. -Wall is your friend. And not only -Wall: you have to turn on -Wextra and -pedantic as well. And also, if you compile with optimization, you get even more warnings. So, let's give it a try. But you cannot rely on actually getting warned about this. There is no requirement in the C and C++ standards that a compiler has to diagnose this kind of code. And the problem is that this is undefined behavior. If you have undefined behavior anywhere in your code, the whole program is basically undefined. And this is what we are going to talk about today. Another thing, and this is what I suspect adds to the confusion: it seems like most programmers treat C and C++ as high-level languages. And I argue that they are not really high-level languages. Not high-level as you'd expect from Java, C#, Ruby, Python. And we can also mention Clojure, Lisp, or Haskell. All of these languages sit at a high level of abstraction above the underlying architecture. These two, C and C++, should be treated as portable assemblers, because that's what they are. They sometimes manage to hide the fact that they are portable assemblers. But if you forget about it, then you end up with the problems we just saw: a piece of code that gives all kinds of results, and we don't know why. So, without a deep understanding of the language, its history and its design, we are doomed to fail when we program C and C++. And this concerned us a lot. We had been working three years already, teaching C and C++. I had been training hundreds and hundreds of developers. And I saw this recurring pattern of not deeply understanding the concepts, and some strange explanations of what was going on. So, we published this slide deck.
And we thought that someone would be interested in it, but we were quite amazed when we saw the reaction: it was something like 20,000 downloads in the first hours after we published it. And I think it has now flared up again; I think it was viewed 10,000 times today due to some Twitter storm, et cetera. But it's a slide deck that has now been read by many, and we have received lots of email in response where people are asking, where can I find more information about this? And initially we thought, oh, that's no problem, we can tell you. And then we started to look. There are no books out there that talk about these things, and there are hardly any courses you can attend that cover this. The best book is probably Peter van der Linden's Expert C Programming: Deep C Secrets. And then there are a few books about security that also touch on these things. But apart from that, there is not much. So that's where we felt we needed to spread the word. So, let's start with the basics now, okay? We are not only going to talk about the problems; we also start with the basics. What is this one going to print out? No, it's not a trick question. It's exactly what you expect. Excellent. Four, four, four. Very good. What is this one going to print out? Four, five, six. Four, five, six. Good. What about this? One, two, three? One, two, three? Yeah, good. Larry, as we call him, he's not sure. Because he doesn't know that variables with static storage duration are initialized to zero. In C++, objects with static storage duration are initialized to their default value, which for native types means zero. But he's not sure about that. But he has a strategy: I don't need to know that, because I always initialize everything. The thing is that when you work as a programmer, sometimes you have to read code that others have written, and understand it.
So there is no excuse to say, well, I don't write code like that, so I don't need to know about it, because you're working together with others, and therefore you need to know these things. So this is guaranteed: this program will print one, two, three. But what happens now? Garbage, garbage, garbage? No? Garbage, garbage plus one, garbage plus two? Yeah, you're onto something, maybe. This happens a lot with programmers who generalize: they take knowledge from one place and generalize it to another place. Larry just heard that statics are initialized to zero, so therefore... one, one, one, maybe? No. Because objects, or variables, with automatic storage duration are not initialized implicitly. Is it garbage? This is also an example of undefined behavior. So absolutely anything can happen. Absolutely anything. So the question here is: why do static variables get a default value, while automatic variables do not? Because there is a reason for it. Sorry? They are supposed to be persistent in the data segment... the runtime linker can set it. Yeah, you are onto something. Any more suggestions? Yeah: the automatic variables are on the stack, and the stack will contain whatever was previously there. The suggestion is that automatic variables are on the stack. In practice, that is what very often happens. But at the same time, if you look in the C standard, the concept of a stack is never mentioned. So this idea that automatic variables go on the stack is something that very often happens in practice, but if you think it always happens, then you will get into other kinds of problems. It is not something that is always true. Any other suggestions? It's expensive to manage the stack. I like that. It is expensive to initialize variables with automatic storage duration, while it is not so expensive to initialize variables with static storage duration.
Because what typically happens when you start a C program is that the operating system loads the program into memory and then passes the program counter to a label called _start, which is usually in the C runtime library. The C runtime library will do some initialization. One of the initializations it will typically do is just a memset: memset to zero over the whole area of static variables. This is what usually happens in practice. It is something that happens once, before the program starts. If variables with automatic storage duration were to get an initial value, that would add extra instructions every time you enter a block, for example every time you enter a function. So I heard several good suggestions about why this is so. And it's not always about C being a difficult language, because there is a reason for these kinds of things. That's true: there are a lot of questions in C and C++ where the answer is, because of optimization or efficiency. Let's go back to our example. This is exactly the same as before. Let's try it on my machine, because the theoretical part is one thing, the practical part is another. What happens when I run it on my machine? This is kind of scary, especially if you tend to be the type that generalizes from one example and concludes it's always like this. Does anyone have a plausible explanation for this one? It seems like the same thing: you use the same stack, you get the same results. In this case, first of all, it's undefined behavior, so anything can happen. But this is what actually executes, and this is a phenomenon that happens. Being able to explain it like you just did now: the reason is that it's the same memory area that is used over and over again. Just like you said, it's a garbage value, plus one, plus two, plus three.
The garbage value, when you compile it without optimization with GCC, very often happens to be zero, because GCC tries to be helpful by zeroing the stack area. I think that's the page pulled from /dev/zero: the allocator pulls a page from /dev/zero by default. I'm sure that depends on the architecture. That's the reason why fresh stack pages tend to be zero. Okay. Larry might have heard something, maybe on Stack Overflow, or read some articles or books, or overheard someone, or, quote, maybe he even read the C standard. And he just remembers the sentence: it's undefined behavior when the value of an object with automatic storage duration is used while it is indeterminate. And this is exact knowledge. It's the correct theoretical answer, but it doesn't explain this phenomenon. There are a lot of programmers out there who think it's okay just to know a lot of these sentences and say, because of that, I'm not doing that. That doesn't help you when you're trying to debug code that others have written. And Larry also thinks that as long as it's undefined behavior, I don't need to care about it. I think that is wrong. I think you need to learn enough about practical implementations and typical execution environments to be able to reason about this even when it's undefined behavior, because that helps you a lot. By understanding this deeply, you don't need to know all the sentences in the standard in order to program C. And in this case, the explanation you gave, that it's probably the same stack area, gives you an insight into how a C program executes. That might help you avoid these kinds of things. Because you won't always get help from a compiler; you're not guaranteed to get help. And the question now is: why do you think the C standard doesn't require that you always get a warning or an error on invalid code? Can I have a suggestion on that?
It is not valid code. Well, actually, it's not really called invalid, it's called ill-formed; it's not well-formed. And the standard says that if the implementation doesn't diagnose it, then anything can happen. So I agree, it's a bit sloppy. Invalid code is actually called ill-formed. So when you have ill-formed code, why don't you get a diagnostic message? Why doesn't the C standard say that to be a compliant C compiler, you have to issue a diagnostic when the programmer is writing crap code? There is a deeper reason for that. You are certainly onto something, it's extra work for the compiler, but there is a deeper reason. We could also exploit architecture-specific things. Yeah, we can, but we're not quite there. Because it's C. Oh, yeah, that's always a good answer, but no, that's not the case. Sorry for leaving you hanging like that. The reason is that being able to diagnose ill-formed code is a very difficult task. Writing a compiler is already very difficult. Writing a compiler that can print out warning messages when you write crap is even more difficult. And C is a systems programming language. It wants to be the first language implemented, after machine code of course, on any CPU out there. So it would be a draconian requirement to impose on a compiler writer to have to warn about crap code. So instead, as it is now, I've heard it said that for a completely new architecture and CPU, it should be possible for an experienced compiler writer to write a compliant C compiler in a few months. But if you add this requirement of having to diagnose crap code, you might need hundreds of developers working for 10 years before you have a compliant compiler. So even if there are some compilers out there that are able to give you warnings about this, you shouldn't expect C compilers to give you any kind of warnings, really.
The C standard is very relaxed about what you need to diagnose. Basically it's saying, if it's not valid code, the implementation can do whatever it wants. That's the short version of what the standard says. There are tools to help you with these things, if you're interested. Coverity and... yes, good point. The comment was that there are tools out there that can help you with this. Coverity was mentioned, lint, Klocwork, and I could mention more, because there are really good tools out there. They will take you some of the way, but they won't catch everything. So that is the reason why the standard doesn't guarantee a diagnostic for ill-formed code: it should remain relatively easy to write a C compiler. So let's look at this one again. There is something you can do in addition to adding -Wall and -Wextra, and that is to always compile with optimization as well. This is a good thing. Because if you compile with optimization, you force the compiler to consider a larger context of what it's doing, and therefore it might be able to see that you're trying to use a uninitialized. If you compile without optimization, it is just scanning through the code and translating it into assembly language, and that's it. It never sees that you're trying to use an uninitialized variable. Another thing is that you will typically really see garbage coming out when you compile with optimization. And the garbage, especially on modern machines, is very often not the same. By the way, I just saw this one earlier today and thought about it. Can anyone give a plausible explanation for why these garbage values are different? They're registers. They're registers in memory. Yeah, they're registers, but why are they different every time we start it? Because in some way, we would think that the initial values would be... Maybe the executable is not loaded sequentially?
It is actually intentional that it's different. And then, the suggestion: I don't remember exactly the word for it, but it's a security mechanism that was implemented in several operating systems around 10 years ago, and it's there to prevent stack smashing and things like that. Stack overflows and hacking. Modern operating systems like Linux and Windows, etc., try to load programs into different places in memory every time they start, so that it's more difficult to hack into those machines. It's just an additional layer of protection. And I think that is the reason why you get different numbers each time, for the same program. It's the same program. The question was why this kind of garbage differs, and it's interesting that it's not even the same when you execute it twice. We can't reason much about these numbers anyway. The first time I saw it being different was around 10 years ago, and just recently I realized it was because of this security mechanism that was gradually implemented in more and more operating systems. So now, having already learned enough, we are able to actually look at this piece of crap. Very bad code. What do you think it will print when I run it on my machine? You've seen my machine execute a few pieces of code already. If I compile it without optimization, what will it print? 42. And if I compile it with optimization, will it print 42? Very good: of course it won't. So this one prints 42 exactly because, we could say, we understand about the stack and execution, etc. This is useful knowledge, but we cannot rely on it. And if you can give a plausible explanation for it, you should feel both good and bad about it. You should feel bad, because you are assuming something about the underlying architecture that you are not really allowed to assume anything about.
You should feel good about it, because then you are able to explain what happens in the real world. And you might be able to more easily troubleshoot C programs that you need to debug, and maybe you even avoid falling into the traps that someone who doesn't know this falls into. A person who doesn't know what typically happens when you execute a program will attempt to do what we human beings are best at: if you don't understand something, you just come up with an explanation. And we believe very strongly in that explanation. And Larry, he thinks there is a pool of named variables in the machine: since we used an a over here, the next time the program asks, give me an a, it just happens to get the one that already holds 42. So he doesn't think it's strange at all that it's now 42, because of this pool of named variables. And this is, you know, exactly like how for 1300, or was it 1400, years humanity thought that the moon and the sun and the stars revolved around Earth. And for more than 1000 years we were developing very fancy mathematics, epicycles, to describe how the stars were not going like this, they were going like this, in circles around our wonderful world. So a whole world can actually believe in an invalid conceptual model of how the world works. And if you have that kind of idea about how C works, you tend to invent things like the pool of named variables, like Larry just did.
So I'm not going to spend much time on this slide, I'll just leave it there. But if you want to reason about C and C++ programs, it is useful to have a decent conceptual model of how the memory is laid out. This is a model that no machine implements directly, but it will give you approximately the right way of thinking: there is something called a text segment, with instructions; there is something called a data segment, with static storage, and this is the part that is often memset to zero, sometimes called the BSS; and then there is an execution stack that grows and shrinks as you call functions, with activation records: every time you call a function, an activation record goes on the execution stack, something happens, and it pops off again. And the rest of the memory is used as a heap and can be dynamically allocated. So you can think about the execution like this, but you should know that as soon as the optimizer kicks in, C doesn't have to treat it like this at all, because in C it's all about observable behavior, as we'll mention later: as long as it behaves as if, it's fine. So yeah, this is undefined behavior, and as I mentioned, if you compile with optimization you typically get different results. Some of you must have experienced that you try to upgrade the compiler and the program doesn't work anymore, or you change the optimization level and it doesn't work anymore. Yes? Of course you have. There are actually large development groups that are still stuck with old compilers, because they don't dare upgrade: they have tried to move to a new compiler, but the program doesn't work, so they stick to the old one, because it works. Visual Studio 6, old system flags... and I don't want to change the optimization level, because everything just crashes. And very often
the reason is that they have undefined behavior. And this is what you need to consider: the standard talks about what kinds of behavior a C program can have. It talks about implementation-defined behavior, unspecified behavior, and undefined behavior. It also talks about locale-specific behavior. An example of implementation-defined behavior is, for example, what should happen if you right-shift a negative integer. It's unspecified which order, the evaluation order, so which one is executed first: you don't know whether it's printing 42 or 24. And it's undefined what will happen if you take the largest integer and add one to it: then you get undefined behavior. I think for the unspecified one, it will always print 2, because printf returns the number of characters. But it has a side effect, and it's the side effects we're looking at: we are not actually printing out the result of the sum. Good. And if you start reading all these long, long lists about undefined behavior, and there is a long list in an annex of the C standard, and there are also long lists of unspecified behavior and implementation-defined behavior, you might think: this is a crap language, let's use something else instead. But I argue that these kinds of defects in the language are one of the reasons why C and C++ are so successful. Because it's easy to write a compiler for them, and it's possible to optimize a lot, and C doesn't impose a particular hardware architecture or instruction set that an implementation needs to follow, etc. So it's very convenient to implement a C compiler early and use it as the first language above assembly on a new architecture. So I think one of the real reasons for the success of C is exactly these defects, and I think C++ has inherited
this success, because if you program C++ correctly, you can end up with a C++ program that is just as efficient as C. So it is possible to use C++ as a systems programming language as well. This doesn't leave any room underneath C and C++ where someone could sneak in a language that is better, so C and C++ will always be kind of the first languages that come after assembly language. That's my assumption, my theory, for why we are still using these languages after 30 years, and why I'm sure that 30 years from now we will still be using C and C++. So let's look at this one now. Now I would like you to discuss again. Take a piece of paper and write down what you think this will print out. And this is not undefined behavior: this is perfectly valid C and C++. But discuss with your partner; I would like to hear some discussions going on. Okay, let's start with the first one. What do you think? 3, 4, 7 is suggested? 4, 3, 7? 3, 4, 7? 4, 3, 7? Any other? The point here is: you are guaranteed to get either 3, 4, 7 or 4, 3, 7. That is a guarantee. You won't get garbage or anything. You are guaranteed to get either of those two results, and you are guaranteed that 7 will come last. What about this one? 3, 4, 7 is suggested. Once again you have a guarantee that it will either be 3, 4, 7 or 4, 3, 7. That is guaranteed by the language. But which one is implementation-dependent. So on my machine, if I use GCC, I get 3, 4, 7 and 4, 3, 7. But if I use other compilers, I get different results. These are all 100% valid interpretations of this source code. In practice, you might say this doesn't have practical value. But it does. During the 20 years I've been programming C and C++, or actually more than 20 years now, I've seen it happen a few times: inside these functions, there are big functions, and there is some logging going on.
Especially in debugging, for instance: it's printing out something, or it's logging to a log file, and suddenly it logs in an order that you didn't expect. Or it crashes: you thought that this one would be evaluated first, but this one is evaluated first, and then it's crashing and everything is just confusing. But if you know that it's unspecified, then it's easier to reason about. So there are four valid interpretations of this source code. And to add to the confusion, if you are experienced in nearly any other programming language, this is new, because nearly every other programming language has a strict evaluation order here. So you cannot use the experience you have from other programming languages to assume how things are evaluated in C and C++. Fortran also has unspecified evaluation order, by the way. So why do you think the evaluation order is mostly unspecified? You let the compiler writer decide what is most efficient. Yes, you use the efficiency card, which is always a good point. If you know assembly language: sometimes it's easier to evaluate the arguments right to left, depending on the calling arrangement. So the person who knows the low-level language is in a position to answer this. Leaving the order open makes it easy to write the compiler, and it also gives optimization opportunities. And to go back to this one: if you look at the assembler code that GCC typically produces, the reason why GCC often does it backwards here is that it can then evaluate the arguments backwards and push them on the stack before calling the function. That used to be a valid argument for doing it that way; now it's more out of habit. Because most implementations of C and C++ now don't use the stack as much to pass arguments to a function. They use either three or six registers, which is quite typical, to pass the arguments, or some other scheme. Okay.
What do you think this one will print? Is it 3, or 4? The point here is, once again, to reiterate the thing about unspecified evaluation order. And anything can happen, since it's undefined behavior. Anything can happen. Trust me, I will show you an example afterwards; there is a very strong argument for this. There is this saying that when the compiler encounters ill-formed code, it is allowed to make nasal demons fly out of your nose. That's a common saying that comes from a Usenet post many years ago. It's difficult to find compilers that actually really pull pranks on you when they find ill-formed code. But there was, I have been told, I haven't seen it myself, a version of GCC that, upon encountering a pragma that wasn't defined in the standard, started NetHack. Or it started to play Towers of Hanoi, just to annoy you. Because anything can happen, and that's allowed by the C standard, so that's what GCC decided to do. But this is crap code, and do you know why? Can you explain why? It's not only because of the evaluation order, although knowing that the evaluation order is unspecified helps you on the way towards explaining why this is undefined behavior. But, as I heard mentioned earlier, there is in the standard a very specific reason why this is undefined behavior. You're setting and getting it. Yeah, that is a pragmatic explanation, which is very good, and I think it demonstrates deep understanding. But at the same time, there is a more specific reason for it. You're reading and writing the variable in between two sequence points. That's exactly the wording from the standard. What was just said is that you are updating and reading a variable twice between two sequence points. So the question now is: what is a sequence point? We will look at that later.
Do you know where the sequence points are? One is the semicolon, and the conditional operator. Thank you for contributing, but it's not the semicolon — and that's an important point we'll come back to. The pragmatic explanation is that the compiler doesn't have to commit to when i is being updated here. You get undefined behavior if you try to read and update a variable twice inside an expression, because you don't have a sequence point — you don't have any rules about sequencing. So you will have an unstable memory state. If you look at the assembler code, it's typically reading a value from memory into a register, and then updating it. Some compilers write it back into memory; others just keep it in a register until you come to the sequence point, where C guarantees that all the side effects will have taken place and can be relied on after that. Until then, the compiler doesn't care. So if you don't understand the rules about sequencing and sequence points, this is the kind of code that you can stumble into all the time. So, exercise: on a piece of paper, decide whether these snippets either print 42, or should be labelled undefined behavior. So that's the first one: does it print 42, or is it undefined behavior? Write it down. Okay, number one. It would be a tragedy if it wasn't 42. What about the next one? Unspecified? It's not only unspecified — it is actually undefined behavior. What about the next one? What is it? Yes, number three is well defined. And the difference between those two is that this one — you can call it a short-circuiting operator, but that doesn't really explain what happens.
The thing is that this one introduces a sequence point. That is the reason why this one works. This one does not introduce a sequence point, and that is the reason why number two is undefined and number three is fine. What about the next one? It will not print anything? It will print 42 — or is it undefined? Who thinks it prints 42? Who is not sure? The thing is, this is guaranteed to print 42, but it's very useful to know why it prints 42 — and it is not because of the semicolon. The common explanation is that the semicolon introduces a sequence point. The real explanation is that this is a full expression, and in the standard it says that after evaluating a full expression there will be a sequence point. That's what it says in the C99 standard. And a full expression is an expression that is not a sub-expression of a larger expression. So when you have an expression that should be evaluated, and it's not part of a larger expression, it's a full expression, and you will have a sequence point. This is guaranteed by the standard, and that is the reason why we can update the variable here and use it here. So it's about the full expression — no, it's not about the semicolons. What about the next one? A classical example of undefined behavior. And that is because this is also a full expression — but notice that within that expression we are updating the variable twice. This one, question mark — 42? The thing is that if you look in the C99 standard — and I have asked a lot of people — I have not been able to find the chapter and verse to combine together that really explains that this is guaranteed to be well defined.
However, in C11, which came out two years ago — there are hardly any C11-compliant compilers out there in the market, so it's difficult to test, but you can test things like this in C++11 — here is the chapter and verse about how to evaluate expressions, and what they have done is add a line: the value computations of the operands of an operator are sequenced before the value computation of the result of the operator. And using that line it is possible to say that this one is guaranteed to be well defined. But of course, if this one is a macro, then you don't have this guarantee that it will be sequenced, so then you have undefined behavior. If it's a macro you'll probably have undefined behavior. But what does foo print? foo is well defined — it's printing 42. So things are happening in the standard to make it better and better when it comes to sequence points. And actually now in C11, and also C++11, since there is support for concurrency, they had to update the memory model. Therefore, they have stopped using sequence points as a concept; they are talking about "sequenced before" and "sequenced after" relations instead. So the wording has changed, and once I get hold of a C11 compiler I will start updating these slides accordingly — currently I have no C11 compiler. Did this change between C99 and C11? Yeah, there are some significant and really cool changes there. But it doesn't seem to attract much attention from the compiler writers, because you don't get C11-compliant compilers these days. But going back to sequence points: conceptually, we tend to have this idea that there are a lot of sequence points in a program. The thing is that there are not so many. And the rules are like this — two simple rules.
Between the previous and next sequence point, an object shall have its stored value modified at most once by the evaluation of an expression. The next rule: furthermore, the prior value shall be read only to determine the value to be stored. That is it. We tend to think there is a lot more sequencing because that's what we are used to from, for example, Java or C#, where the language moves so rigidly while evaluating an expression. In C, the big expression is evaluated by these few simple rules laid down by the standard. And what do you think this freedom gives? The compiler can create a long expression pipeline and feed it to the CPU much faster, because it doesn't need to care about which order it evaluates things in — until there is a sequence point and the state needs to be restored. So, this slide is just left up so people can read it; later, I will publish these slides, of course. This is the condensed version. The one on the left is two years old; this one has more explanations, et cetera. So, if you don't want to read the standard and understand it all when it comes to sequence points, you get a decent conceptual model if you understand these five things. You have a sequence point after evaluating a full expression. You have a sequence point after evaluating the arguments and just before the actual call to a function. You have a sequence point at logical AND and at logical OR — that is the reason why you can guarantee a left-to-right evaluation there. You have a sequence point at the comma operator, and you have a sequence point at the conditional operator. That's where you have sequence points. And it is not at the semicolon. Okay, now we have seen this snippet a few times. There is a reason why I did ++a instead of a++. Because I have had several programmers struggle to believe me when I run it like this, which of course prints 4 4. They find it scary. Initially I was like, what is going on here?
This is so strange — how can it possibly print that? So I just laughed. That was my attitude, until I started thinking about it: understanding when the side effect takes place is not so easy after all. So I realized that some of these programmers — well, I haven't met so many, but a few — have this idea that the variable, when you write it like this, is updated when you leave the block. But at the same time: did you really know what a sequence point is? Did you have a valid understanding of when side effects take place? If you didn't, you shouldn't laugh so loudly at those programmers. So yeah, just to illustrate the point: if you don't understand it completely, then you tend to end up creating your own beliefs — that the variable is being updated after the block, or after the semicolon, or something like that. Strange explanations often come from having an invalid conceptual model. Now, this is perfectly valid C code. There is no unspecified behavior here; there is no undefined behavior. I just felt I needed to have something that was completely pure C. It works exactly as it's supposed to; it's portable, and so on. What do you think these five groups of statements print out? Okay, the first one — we're just going to print out the first one. Thoughts? Are you sure about that? Any other suggestions? Here you must understand something very, very important. What about the next one? Sure? What about this one? I can hear uncertainty in the room here. And the last one. The thing here is, I think you did very well, and I think this is very difficult to reason about.
Whenever I come back to this, I have to think hard about what's really going on, because in order to answer these things you have to understand the promotion and conversion rules for integers. You have to know things like: the short, since it's smaller than the int, will be promoted to int. And I said this was portable — I'll take that back, because you can get problems with the actual sizes: if short is smaller than int, it can be promoted to int, but if it's the same size as int, it will be converted to unsigned instead. So you can get different results depending on the sizes of these types. But the guideline here, of course, is that you should never compare signed with unsigned. And notice this is something you probably do all the time, because I bet a lot of you are indexing into arrays by using ints instead of size_t. Don't do that. size_t is the right type for indexing into arrays; int is not. And this is the type of problem you will get. You will notice it very much if you try to port code from a 32-bit architecture to a 64-bit architecture: it will just overload you with warnings and bugs and errors, because you have been using signed integer values to index into memory. That's not good, especially when you are porting between machines with different word sizes. Isn't a 32-bit long the same size as an int? The standard doesn't say anything about that, so sometimes it could be. The only things the standard says are that a long shall not be smaller than an int, and it gives you a guarantee that a long will not be shorter than 32 bits. Apart from that, it doesn't say. You have to say long long to get a guarantee of at least 64 bits, and on some machines int itself is 64 bits. One good practice is to use stdint.h, which came with the C99 standard; then you have mechanisms for reasoning about exact sizes. Okay, we talked about integers.
Let's continue with integers. What about this one? Looks innocent, doesn't it? It's idiotically written, of course — I just wanted to keep it short. But I'm quite sure that some of you at some point have written a function that takes integers and then does something like this with them. I made exactly the same mistake a while back, trying to do a binary search on a 4 GB array: left plus right, divided by two. You see the problem? There is no guarantee in the C standard about what this computes, because if you are going to do this correctly, you need a lot of if statements and so on, checking whether you are near the boundary of an integer, because certain things can happen there. What happens here, if you put in large integers, is that you get signed integer overflow. And that is undefined behavior. So now anything can happen — and once again, what does it print? Inconceivable! Okay, now it's time for me to — I have been showing a lot of toy examples where I just say that anything can happen, but now I am going to show you a real example that I think is so cool. Because remember, treating this correctly is not free: in this case I think it's something like 25 lines of code you need if you are going to treat overflows correctly. It's actually fairly complex to treat boundary values correctly. Exercise: what do you think might happen if you run this code? Earlier I showed you a piece of code where the interpretation could give four different results. So what do you think? Notice it's only reading from an uninitialized variable. Discuss with the people around you — some of you might have seen this before, and all that. What might happen? Until I saw it myself, I have to admit I didn't believe this could happen. Okay. What do you think?
The suggestion is that this can either print true, or print false, given the garbage value that we get. Can it be both true and false? If it's a multi-processor system, and after the first check the variable is changed by another process, then it can be false again, and both can be printed. I like your answer — because although that's not what's happening here, it illustrates another way this could happen. So, just to test this thing: let's first call bar, and in bar we poke some garbage into the stack area that we think foo will use. So now I'm poking the value 2 into the memory area that b will be using. And then I use my three compilers, and I run it without optimization. Since I'm going to show the assembler code, I find it much easier to read 32-bit assembler, and it doesn't make any real difference — it just makes it easier to read. So when I compile and try to study what the compilers are doing, I tend to use i386, normal 32-bit; it's convenient, and so far I have not seen an example where it makes any difference. So the Intel compiler in this case gave me a true — fair enough, is that what I expected? Clang gave me a false, and GCC gave me both true and false. Thanks to Mark Shroyer, who blogged about this one in June last year; it became very popular and was read by many. I also read it, and I immediately fired up my compilers and started looking at the assembler to figure out what is really happening here, and I'm now going to show it to you.
With optimization, though, you get false false — we'll come to the assembler in a moment. This is the code run through the ICC compiler with no optimization, and I don't expect you to read this quickly, so I will help you by writing it back into C again. This is what you should focus on — I want to show the relationship to what you see here, so you can trust me that this is what happens. It basically takes the random value in memory — without optimization it's using the stack, so ICC is actually reading that stack memory — then it loads it into register eax, compares eax with zero, and if it is zero, it jumps over the "true" print. Then it loads b into eax again — and exactly as was said, if something was messing with the memory area, the value could actually change in between, because these are not atomic operations; in that case it might print nothing, or both. So it loads b from memory into eax again, compares it with zero, and if it's not zero it jumps over the "false" print. This is actually working the way most programmers expect: if the random value is zero, it's false; if it's one, it's true; and if it's anything else, it's still just true. If we turn on optimization, this is what happens: it doesn't care about memory at all. It just uses whatever is in register eax. And do you know how you can change the eax register? Of course, I immediately wanted to do that, to control whether true or false gets printed. How can you change it? One easy way of changing eax: eax is very often used for the return value, so for example, if you printf three characters, eax will often be three afterwards. So do that, then call this function, and things happen. Anyway: it just takes whatever random value it has in eax, compares it to zero, and if it's not zero it ends up
printing true; if it is zero, it's printing false. So once again: there is no chance of getting both — it basically combines the two checks into one. What about clang? With no optimization, what's happening is that it takes the random value and checks the last bit: it ANDs it with one, and if the last bit is one it prints true, and if it's zero it prints false. So this is what we get: even numbers are false, odd numbers are true. With optimization? It doesn't do anything — it just prints false. Simple: it's an undefined value, anything goes, so it's false. It converts the printf into a puts. Yes — it converts the printf into a puts. There used to be a saying that you should never use printf when you only have a constant string, but most compilers now replace a printf that has only the one argument with puts. That's allowed because C is defined in terms of observable behaviour: as long as the behaviour is the same, the compiler can do whatever it likes, so it can replace printf with puts. So don't be surprised if you put your debugger on printf and nothing happens — it might be calling puts instead. Well, that was clang. This is GCC — this is what I fired up immediately when I saw the article, and it took a while, a few minutes. It's hard to read for me, even though I write assembler once in a while, not very often; I find it hard to follow. But eventually I realised, of course, what's happening: it takes the random value and checks it against 0, and if it's 0, it jumps over the "true" print. Then it takes that random value again, loads it into a register, XORs it with 1, and then checks the register against 0 again, and jumps over the "false" print if it is 0. Now, if the garbage value in b is 0 or 1, this algorithm actually works:
with 0 it prints false, and with 1 it prints true. But if it's 2, that's the bit pattern 1,0. It goes in here: the bit pattern 1,0 is not equal to 0, so we print true. Then it goes down there: bit pattern 1,0 into the register again, XOR with 1, so you get 1,1 — three. It compares 3 with 0 — nah, it's not 0 — so it prints false as well. So that's the reason why you get both true and false when you run the GCC version. We have broken the rules of the language, so apparently it can do whatever. And when we optimise, GCC does much the same as clang: it just prints false. So these are the possible outcomes that I have seen — but since it's undefined behavior, anything can happen. I think what is happening here — and I don't know this, but I've talked to a few compiler writers, and they say it might be a reasonable explanation, as far as they understand things, which is further than I do — I think what is happening is that when you run the optimiser, you are building a tree, and you start applying optimisation rules to collapse this tree into smaller and smaller units, until it's very small and you can't make it smaller, and then you generate code. I suspect that the reason it just prints false is that once you have undefined behaviour — an invalid node in there — then when you apply the rules, things fall apart in a way they shouldn't, and it just reduces to false. I'm not sure I believe that the compiler actually understands what you did wrong, because if you turn on the warning flags and so on, you don't get warnings about this. So I think it's just a result of having an invalid node: you build up a huge tree that is going to be optimised, you apply a lot of rules to reduce it into something smaller, and it just collapses into a unity. I might be wrong, but that's what I think might be a possible
explanation. So, going back to the very beginning, where we started: what's really wrong with this code? I guess we all agree it's crap — okay, everyone agrees it's crap code. But at the same time, you can write this one in Java or C#, or a lot of other languages, and it will be perfectly valid code, and it will give you exactly the same result on every machine. So it's not uncommon in other programming languages to actually do this kind of thing — to update the variable twice in an expression like this. So: "it's crap code". What about this statement: "the standard says this is not valid code; it's crap and bad because you update the variable multiple times between two semicolons"? Then this one: "it is undefined behavior, because between two sequence points an object is modified more than once, or is modified and the prior value is read other than to determine the value to be stored — you modify and use the value of a variable twice between sequence points". And here is the last statement I'm going to show: "In C and C++, unlike most other languages, the order in which sub-expressions are evaluated, and the order in which side effects take place, is unspecified, except at function calls, the && and || operators, the ternary operator, and the comma operator; therefore the expression here does not make sense." This is the perfect scholarly answer, which is good — you can do nothing wrong by saying things like that. But I have to say, I like this one better: this one demonstrates a deep understanding of the language. That one probably demonstrates some understanding of the language, but it could also be something learned by heart — my mother could read it out, and it wouldn't necessarily mean that she understands C programming. This one demonstrates understanding; that one demonstrates that you can repeat something from a book. And this one — I feel this one is leading you in the
wrong direction. We are talking about updating a variable between sequence points, not between semicolons; it creates an explanation that you tend to stick to, and that will eventually lead you into trouble, because sequence points are not defined by the semicolons. "The standard says that this is invalid code" — yeah, maybe it does, but at the same time it's a questionable attitude if it implies that you must understand the whole C standard before you are allowed to write C code. Nobody can remember the whole C standard. So I think it's better to try to reason about these things, so you can come up with a good explanation — create yourself a good conceptual model of what it means to program in C and C++. But finally, of course, we all agree it is crap. Do I have five minutes? Yes — well, at least an hour more! Alright, approximately five minutes, and then we can do a Q&A session. It's common to think that these things are not so important, but the problem is that if you have undefined behavior somewhere in your code, then the whole code base basically also has undefined behavior. The thing is that when the run-time environment passes through the undefined behavior, it gets into a corrupt state, and when it's executing the rest of the code, it might do the wrong thing. But it's not only the run-time environment that is influenced by this — the compiler can be as well. When it's compiling a piece of code with undefined behavior, the compiler also has a state that it needs to care about, and that state might become corrupt, so it generates bugs in other places in your code. I haven't seen concrete examples of that happening, but I believe it happens — I really believe it happens — and there are some people who say, although they can't show it to me, that they have seen corrupt states in the compiler that generate corrupt code in other places if you have undefined behavior
here. So you can actually get strange results even if you never execute the invalid code. But who is releasing code with undefined behavior? Well, I can speak from my own experience. I started to work professionally in 1996, so for 17 years I have been working with programming, most of it in C and C++. I have developed banking and insurance applications, and I know that they have undefined behavior. And it's not only me. I have been developing traffic systems, seismic systems, software for supercomputers, and now, for the last nine years, I have been working on telepresence systems. All of these systems have undefined behavior — I know; that's personal experience. For example, on this last one I work together with 200 developers in the same code base, all C and C++, 4 or 5 million lines of code: we have loads of undefined behavior. And if you go back a couple of slides — you might not think somebody would release code like this, but I have actually seen it in real production code, where somebody is trying to read from the start of a buffer. It very often happens with macros, actually: when the macros expand, you don't realize that you actually get undefined behavior. The code was something like i plus read_length_from_current_buffer(), where the macro is also modifying the position i that you are adding to. Yes, that's the thing you see in real code bases — an excellent example. There are people building all the other types of systems as well; I can't say anything about those, but sometimes I think: if I'm involved in releasing programs with undefined behavior, then maybe somebody else is involved in it as well. And you might think that the compiler writers, at least, know how to do this. I told you I have three compilers on my machine — that's not actually true; I think I have five or six, because I have also been compiling the Portable C Compiler, and I have the Tiny C Compiler, and another one which I can't remember the name of.
When I compiled the Portable C Compiler with the latest GCC, version 4.9 — and the latest GCC is really good at finding undefined behavior — I got this one. Do you notice what happens there? We are actually updating a variable twice between two sequence points. You might look at this and say, why would they do that? But at the same time, this is a compiler, an open source compiler — I'm not sure if it's free software, I think it is — that has been out there for many, many years. It has had thousands of eyeballs looking into the code, and it's written by very clever people — and still they are releasing with undefined behavior. And that's version 1.0.0, in 2011. So the final word: stop thinking about C and C++ as high-level languages. They are more like portable assemblers, with all the quirks that come from being a portable assembler. You must have an understanding of what happens under the hood, I believe, because if you don't understand what happens, you will just be a kind of bug-creating machine. But if you do have a useful mental model of what happens under the hood, then I strongly believe you have a chance. Thank you very much. Okay — if you allow, I can take questions now. And I also encourage you, if you feel the need, to sneak out quietly; don't feel bad about that, because then I know that the people in the room want to be here. So I can take questions, if there are any. One of the patterns I have run into is the same variable being defined twice in a scope — the int i outside and the int i inside the block. But that's okay — that is well defined. No, it's okay, but it's an anti-pattern: if you happen to see that, at some point you do not know which one you're actually using. The question is about variable shadowing, and this is something that is actually sometimes useful. It's a good practice to try to
avoid it, but at the same time it's important that the language supports it. And as you know, in C99 you can declare and define a variable inside a block — you don't have to do it at the top of the function. In C89 you could also define it close to its use: whenever you need it, you just have to create a new block so that you can define it there. So yeah, it's a potential pitfall, but it's well defined. Yes — thanks for the talk. You shared a number of insights from the perspective of someone using C, adding more awareness of what they're doing. Clearly, some of the things you shared were examples from systems that are safety-critical systems. What do you think is the way forward? Because with a lot of C code examples like this, it's hard to trust developer judgment alone, and it doesn't look like advances in static analysis catch all of it. Are there new things which would help us avoid some of this? The sad part of the answer is that I think we will always struggle with this. I think we will be programming just as much C and C++ 30 years from now as we do today. Remember that when you are developing in Ruby and Scala and all the other languages, you are typically in an application programming domain; but a large, and probably increasing, part of the software industry is actually working with embedded systems — microwave ovens, cars, jet fighters, telepresence systems, telephone exchanges, mobile phones, controllers, et cetera —
and for these typical embedded systems there is currently no good alternative to C and C++. So your car, with its 250 CPUs and an estimated 200 million lines of code — a lot of that will be in C++ and C, with undefined behavior. So that is the sad part of the answer. The good part — the positive answer — is that fortunately the industry is moving forward, and during the last decade something extremely important has happened in programming. Does anyone want to suggest what I might be thinking of? That's not it — there is one thing I'm thinking of: yes, test-driven development. That you should not write any code before you have a test to check whether you are doing the right thing or not. This is a technique that some programmers use a lot and find very useful. I'm not saying that TDD is the solution to everything, but it's a technique that unfortunately a lot of programmers still don't know about, and to me, now in 2013, that's a bit like having a carpenter that doesn't know how to use a nail gun. And then you see those developers that know test-driven development really doing a quick, fast job. It looks like the TDD people are working slowly, because they are doing two things at once, but they are hitting the target all the time, because they are just trying to satisfy a test that they just wrote. So I believe in going in the direction of learning more about implementation techniques, TDD being one of them. It can be compared to finance: you know that accountants can be put in jail if they don't do double-entry bookkeeping, and there are some people who say a programmer should be put in jail if they don't write tests. I don't know — but it's an interesting argument. I believe that the answer to your question is that we need to be more humble as programmers, and we need to adopt the programming techniques that can reduce the number of bugs in our systems. Is the cure more
than checking, and maybe more tools to do inspection of the code? The question is whether there is room for more tools. I think there will always be room for more tools like Lint and Coverity. Coverity was mentioned, and there is a whole market for that. But the nature of C and C++ is such that it is impossible for a tool to completely understand what you are trying to do as a programmer, so tools will just be moving the boundary of where you can fall off. We need to move more slowly as C and C++ programmers, so that we don't fall off the cliff without the proper protection of tools.

Do I use TDD myself? Well, I certainly give a lot of talks, so I have good company with all those people that are just traveling around talking about how fantastic it is. But yes, I do. TDD is actually my default technique now; even for the very simplest thing, I look at the problem and then I do TDD. But I don't use TDD because of the undefined behavior issues. I use TDD because I find it an extremely valuable tool to avoid analysis paralysis; I've been talking about that for three days now. Usually, as a software developer, when you move into a new domain you don't really understand the requirements. Nobody understands the requirements; even the people that come and tell you what you should make don't understand the requirements. So instead of sitting with the requirements and thinking, thinking, analyzing, it's usually much better to move slowly into the domain. I try to make small steps while discovering more about the domain I'm trying to solve for, and do it incrementally. That seems to have the double effect of also making sure that I'm not writing code that I don't have tests for. So I would say TDD is my default implementation technique, but I don't use it always. Maybe 60 to 70 percent of the time I go in first with TDD, and then I make a decision about whether I should continue or not. Other times I dive in by
what is called faith-driven development, with some magic fluff around my head, and I just write the code. Sometimes I do that, and then I realize: oh, I need to be more serious than this and do test-driven development. So I think your approach is very good.

To the question of how we deal with this in our own code base: we have been working on reducing undefined behavior, with an active approach, for the last eight years, I would say. Compared to where we were eight years ago, we are in a fantastic state now, but at the same time we have become very humble about what we are trying to solve, in order to keep moving in the right direction. And of course we are using all the tools that we have mentioned so far. The thing is, if you create an awareness among all your developers (we are around 150 to 200 working on the same code base) of all the issues that we have been talking about today, and you also make sure that everyone is writing standard-compliant code, then it is much easier to switch compilers. So one of the things we do, for example, is encourage developers to use different compilers, so that different developers discover different things. Some of our developers compile a new version of GCC every night from the bleeding edge of the trunk; others are using Clang; some modules can actually be compiled with ICC as well. And these compilers give different results. So one way of dealing with it is to make sure that you don't start relying on a uniform development environment, because then you might have all these bugs and inconsistencies without ever discovering them, until some day someone changes the build and brings a new compiler with it. Sorry?
The new compiler breaks things for you? Yes, you change the compiler and it bites you, and that is a problem. So also try to make sure that your team of developers is using different development tools, so that you get used to dealing with different behaviors.

What is my favorite test framework? The one I'm kind of famous for, because I published a paper on it, and a lot of people now say, oh yeah, you and that thing. It's called assert. So I include assert.h and then I write a lot of assert statements in my test code. I think that works perfectly, and it's what I recommend. There are a lot of tools out there for doing test-driven development. Google Test is very popular; you have CppUnit, CUnit, and so on and so on. And I have nothing bad to say about those frameworks, because they are doing a great job. But there is one downside of using them, and that is that the tools become so heavyweight. It's not like a small, sharp tool like assert that you can just always carry around when you need it; it's this big, big, massive thing, and everyone moves away because here comes my test framework. I think that is one of the reasons why people are reluctant to adopt test frameworks: they know what happens when somebody says, oh, should we try this TDD thing? So assert.h is a good starting point, and once you get used to doing test-driven development you can consider something bigger.

Have you seen something called CCAN? CCAN?
It's like a CPAN for C programs. It comes with something called a failure mode: basically it LD_PRELOADs malloc, and every time malloc is called it forks, one branch where malloc returns NULL and one branch where it returns a value, and it makes sure that the branch where malloc returned NULL exits cleanly. I love those kinds of tools. There are loads of small tools like that which you can apply, and then you get to see your code from a different perspective. We develop a lot of them in-house as well.

Next question: most modern GPUs have compilers, and the shading language is very much like C. You typically ship the shaders as source code, and there is undefined behavior across different GPUs, with no standard for it, because each manufacturer has developed its own compiler. So, coming from a C background into GPUs, which are ever more prevalent in the market, do I have any thoughts on how these compilers behave across systems that use the same language definition? Well, in the context that I'm in now, as long as it's not about standard C or standard C++, I don't have many thoughts about it. Afterwards, at the dinner or whatever we are doing afterwards, I am willing to talk about C# and Go and Ruby and Scala and all the other languages, and also maybe hear more about how GPUs work. But, leaving GPUs aside, when it comes to CPUs and microcontrollers, it's a very common belief that, oh, we developers already have all the tools we need. The thing is that every single day there comes out a new chip, an architecture that needs something more than assembly language, and those kinds of chips will often get C first as the language to program them. That's why it's so important to learn C
properly, so that your code can also work on those types of chips. That's not a good answer to the GPU question that was raised, but it's the best I have.

Next, a question from the slides, about sequence points, which are a pain point: I mentioned that a function call forms a sequence point. Does that mean that the whole function call completes, with all the arguments evaluated, or does it apply to each of the arguments? The question was about when sequence points kick in for function calls; that was number 5 in the list. It's basically that the whole argument expression is evaluated, and you have a sequence point just before you call the function. Now, the comma separating arguments is different from the comma operator that I mentioned in the rule of thumb: for the parameters, you don't have any guarantee of the order in which they are evaluated. The only thing you know is that all of them will have completed their evaluation, and their side effects will have taken place, before the program counter moves into the subroutine. That's the guarantee, as far as I understand it.

It's kind of funny, because in some situations, maybe like this one, I feel that I know a bit about C and C++. But then I go to certain conferences, like the ACCU conference for example, which I recommend, and you stand there and realize that there are 20 people around you that know 100 times more about this than you do, and you feel very small. So for every piece of new knowledge that I acquire about C and C++, I realize that there is so much more I need to learn, for example about the difficult cases of exactly when sequence points happen. And in the new C11 and C++11 it's not called sequence points anymore; it's called "sequenced before". So the standards now talk about sequencing, which matters as soon as you introduce concurrency.

Thanks everyone, that was fantastic.