Good morning and welcome back to the interrupted session. I understand that there are a lot of queries from various centers, so we might as well go straight to answering them. Yes? My question is: when we write int main, it means that the main function returns an integer value. So my question is, where is the value of main returned? It is a very interesting question. So look at this particular thing. The question is that if we are writing int, where does this return value go? main is returning an integer value. Actually I was hoping to answer this question when we deal with functions, because generically, once we define functions, most functions return values unless the function is declared as void. Just as functions return values to the calling program, similarly the main function returns its value to its caller. In the case of all other functions the calling program is usually our main program or some other function, but in the case of the main function itself the caller is the operating system. When our program starts execution, it is the operating system which has invoked main. So imagine that there is an operating system program at the top which has called this main. Consequently, when main finishes its execution, the return value is sent to the operating system. Of course, we as programmers do not see that, nor do we see the consequences. But imagine a very large piece of software which comprises, let us say, 20 different programs executing almost simultaneously, doing different tasks. Usually there is a separate coordination program. But in case we wish to decipher what has happened during the execution of any one of these programs, we can actually go to the operating system log and find out what values were returned. So when I say return -1, for example, in the main program, the value -1 is returned to the operating system, where it can be trapped through other mechanisms to generate appropriate error messages.
Again, as far as our students are concerned, it is adequate to say that every function which is supposed to return a value, whether integer, float or whatever type, will return it to the calling function. Since in the case of the main program the calling function is the operating system itself, which invokes main, the return value 0 is sent back to the operating system. Let us go over to any other question somebody else may have. Sir, you said that when we return the value 0, or indeed any value, from the main function, it goes to the operating system. So what is the result when we return the value 1? For the various values that can be returned apart from 0, what does the operating system receive for 0, for 1 and for other values? Okay, the question is an extension of the question that was asked earlier, and the answer is very simple: ordinarily the operating system will not do anything on its own. So you can say that, logically, the value that you return back simply disappears into the operating system. However, we can instrument some commands within the operating system to take cognizance of these values. Typically, a negative value means that the program aborted abnormally. In general, positive values other than 0 are not used unless they are specifically required, because if a program terminates successfully the value 0 conveys that to the operating system. If the program does not terminate successfully, then, if you want to trap the reason why the error has occurred at different points in your program, you may use return -1 or return -2 or return -3. These actual numbers are available within the operating system and can then be used to decipher why the program terminated. But this deciphering is not done by the users of the program; it is typically done by the debugging experts who wish to find out why the program terminated wrongly.
To conclude, the return mechanism works exactly as it does whenever a function is called from any other function or from the main program: the main function itself returns values to the operating system. Usually they are all lost, because irrespective of whether the program terminates correctly or incorrectly, you will see a dollar prompt on your command line. If the program has terminated incorrectly, the operating system may generate a message such as memory dump or abnormal end or some such thing; that will happen whenever a nonzero value is returned and an exception has been detected and reported to the operating system. Suffice it to say that we would expect well written programs to always return zero to the operating system. If some abnormal condition takes place within a program, it should be the responsibility of the programmer to check for that condition and provide appropriate error messages within the program itself, rather than depending upon any action by the operating system. Okay, I hope that answers the question; let's go to the next center. In the context that you executed this program on a particular machine and on a particular implementation of the C programming language, the answer that you got is correct; that number is probably 2 to the power 31 minus 1. But please understand that the largest number is implementation dependent and implementation defined, and therefore it could be different for different machines. However, as long as your students understand that there is a limit, that the limit is not a very large one, that the number of digits representable both in an integer and in the mantissa of your floating point number is limited, and that you have to take care of this, I think the message has been conveyed. The exact value is hardly relevant in any computational problem; that there is a limit which must not be crossed is what is important. Yes, Bhandera, please go ahead.
Try it out. I have a program which will calculate the factorial, which I will show shortly, and you can try at what point the factorial gives up. In fact, there will be an assignment which will ask you to calculate the factorial of a large number, where the factorial value could be 900 digits long. That was a question I had asked in the final examination of my first year course. So we will put that up as an assignment which you can try. Can we go over to... I think Sona College had a query. No, Sona College is saying connect us; Nirma and ASC Amritapuri. Let us go to Nirma. He has a query on slide number 14. Good morning, sir. On slide number 14, for the integer representation, a range is given of 2 raised to 31 minus 1 to minus 2 raised to 31 minus 1. So is it a signed representation or the 2's complement? Because from my point of view, in 2's complement the negative end of the range would be minus 2 raised to 31. Yes, thank you for pointing this out. Let me put this slide back again here. A subtle but important point is being made. If you recall, I had shown an example with an 8-bit number and stated that if the first bit is used for the sign, then the range will be 2 to the power 7 minus 1 to minus 2 to the power 7 minus 1. Obviously I was talking about a signed representation where the first bit is used only to represent a plus or minus sign, and if you extend this argument to a 32-bit representation you will get 2 to the power 31 minus 1 to minus 2 to the power 31 minus 1. However, I did indicate that you can use the 2's complement representation, and the person who asked this question is right. Incidentally, I hope you introduced yourself so that all others could know your name; perhaps next time you could remember to do that.
He has correctly pointed out that if we use a 2's complement representation here, then while the largest number will still be plus 2 to the power 31 minus 1, the most negative number, represented as a 1 followed by all 0's, will be interpreted as minus 2 to the power 31. So a very correct answer, and thank you for pointing this out. However, I would like to once again mention that while these technical details that you are so carefully and correctly pointing out are important in order to understand and appreciate the nitty-gritties, at the level of a first course and at the level of general computing these exact values do not matter at all. You will rarely get a numerical computation which just stops short of reaching the maximum values. You will either cross these maximum representable values by a large margin or you will remain inside them. So what is important is to ensure that you never cross these boundaries. Exact values are good to know from an interest point of view and from a technical correctness and completeness point of view, but as far as basic programming concepts are concerned they have absolutely no relevance. So I hope this answers your question. Okay, we will go to one more query. Sir, I want to know whether main is a keyword, and also whether main is a user defined function or not. Okay. main is special in the traditional sense that it names the main function: the function to which the operating system hands over control when your program starts executing. So yes, you are very right, main should never be used for any other purpose than as the name of the main function. Okay, so Symbiosis, please go ahead and ask your question. Good morning sir, my question is that if I want to find the complement of the number 5, then why is it giving -6 as the answer? Okay, thank you very much, I will repeat this question. She wants to find out the complement of the number 5; why is she getting -6 as the answer?
It's a good question, but when you say complement you have to define something else: complement with respect to what? Just as we have the 2's complement and the 10's complement, you have to define a base. Now, I do not know what program you have written to find the complement, or to what base you are finding it, but I would say to you and to all other colleagues: take this as an interesting and important point and examine it by simply writing programs to find the complements to different bases, and confirm it for yourself. So thanks for asking this question, but I am putting the ball back into your court and requesting you to find the answer as well. Yes, COEP, please ask your question. Sir, why is there no concept like signed or unsigned float or double? Oh, of course these concepts are there in the programming language; I did not get your question, can you repeat it please? Sir, we have signed and unsigned integers and characters, but we do not have any signed or unsigned float; we cannot declare something like unsigned float. Okay, okay. First of all, good question. It is only with integers that we talk about signed and unsigned representations, and that is because the C programming language permits 2's complement, signed or unsigned representations specifically for integers. As far as floating point numbers are concerned, there are two components, mantissa and exponent. Just as the language defines that the mantissa will always be assumed to have a decimal point at the beginning, similarly it has an implicit definition for the representation of both the mantissa and the exponent. So there is no question of our being able to define whether either of these two is represented as signed, unsigned or 2's complement.
There is a fixed representation which the C language permits; it is an idiosyncrasy of the C language. However, once again I will tell you that these kinds of representations do not really matter as far as the computations are concerned. All that signed and unsigned would change is the range of numbers, and in floating point the range is decided by the exponent, not by how the mantissa is represented. Secondly, in the case of integers, unsigned int is important not really to represent a very large integer value but to represent things other than numbers inside a byte. So an unsigned int or an unsigned char can be used to represent values between 0 and 255 which are always interpreted as non-negative. I remind you that yesterday, when we discussed the representation of digital images, we had intensity values for pixels which can never be negative. Similarly, consider a digitized audio file: it will contain audio samples, and the audio intensity will never be negative; it will be 0 or something positive. That is where the unsigned representation comes into the picture and becomes useful. However, for normal computational problems the internal representations are, in my opinion, completely irrelevant; they are good to know, and technically you will be correct when explaining them, but they have no actual value in trying to solve real life problems by programming. So let's go to Sona College. Sir, can you explain to me the various types of precision techniques that we use in the C language? Various types of which techniques? Precision. Precision techniques, is it? We will have an example of how to use the C programming language and what techniques to use to represent and handle higher precision. The natural precision available in the C programming language is what we have seen; that is all C provides. Anything better than that is not available natively in C, but we will discuss it through an example in the later sessions.
So let's go over to Government Engineering College, Trishur. Hello sir. Sir, I have a doubt about the environments in which we do programming in C: which environment is best? I will repeat the question, an interesting question: for writing C programs, which environment is best, the Windows environment or what you call the Unix environment? The question assumes that some environment is better for either writing C programs or for executing C programs. There is absolutely nothing of that sort. C programming is done in an environment provided by compilers, not by the operating system. So what would define which environment is good for writing or developing C programs is how good the compiler environment is, whether it provides good development tools such as debuggers and syntax checkers, and whether there are editors which support you in that. As to which environment is good for running C programs: every computing environment is good for running C programs, because of practically all the underlying software that you see, large portions have been written in C. Take the Microsoft operating system itself; take the Unix operating system itself; they have all been written in C. As a matter of fact, the first Unix implementation was done in assembly language on a PDP-7 machine. The moment the C language was defined by the same group, they used the C programming language to rewrite the kernel of Unix. The C programming language in general has been the chosen tool for many decades by most computer scientists for developing what you might call lower level software: operating systems, tools, even compilers get written in C. In fact, this reminds me of a question that was asked yesterday: why is C called a middle level language? Yesterday or the day before, I mentioned that this is an archaic wording and nobody calls it a middle level language anymore. But consider the time when the C programming language was developed.
It was closer to the machine architecture and much easier to translate into machine language than most other higher level languages existing at that time. Please remember that at that time there were no object oriented programming languages, and functional languages were still emerging. We are talking about the days when we had Fortran, COBOL, Pascal, Algol and such programming languages. With respect to these, C was considered closer to the machine. To answer your question finally: there is no such thing as one environment being better for C programming than another. All environments today provide adequate facilities to write and run your C programs. However, C is no longer the programming language used for developing large applications. Most large applications today are developed in some kind of object oriented framework such as C++ or Java, along with other tools and technologies such as database technologies or object oriented systems. C is still used to teach the first course in programming in a majority of our colleges in the country, and that is the reason why we are having this workshop and this discussion. But if you ask me, C programming has not remained as important in the real world as some of the other things like object oriented programming. Again, talking about environments: modern programming is often done through high end tools such as integrated development environments, or IDEs. Let me mention a couple of them. You have Eclipse, an integrated development environment developed by the Rational group of companies, which has since been taken over by IBM. On the other hand you have another integrated development environment called NetBeans, which was developed primarily for Java programming but provides a good program development environment for multiple programming languages, including C.
If you look at C on the PC front, you have Turbo C and similar C compilers which give you a good development environment. These are available in both the Unix operating system and the Microsoft operating system. To conclude your answer: it is wrong to say that an operating system is the right or wrong environment, or a good or bad environment, for developing programs. Programs are incidentally developed in an operating system, but they are developed using tools such as compilers and development environments, and if at all one has to compare, one has to compare those environments. If you take Ubuntu, for example, one of the reasons why people may not prefer Unix or Ubuntu is non-familiarity with the environment, either for themselves or for their colleagues. Otherwise, the Unix environment, or for that matter the Mac operating system environment, which is a Unix derivative, is just as good in terms of the tools it provides for good development. Consider the environment that you will see in your lab. You have gedit, which is a very simple editor; it can actually cross check for matching brackets and so on. So it is a simple development environment, and the GCC compiler is by far the best C compiler that exists in the world today; there is nothing remotely equivalent in terms of its spread and comprehensiveness. Incidentally, the GCC compiler is available both for the Unix environment and for the Microsoft environment. So I hope that answers your question. The correct answer is: it is wrong to compare which environment is better in terms of operating systems; it is perhaps correct to compare which tools are better for developing C programs. As far as running executable C programs is concerned, after they are compiled, absolutely any kind of hardware is good for running C programs. Let us quickly go to Perrier. COEP has one more query; I will take a call on it after talking to Perrier. Perrier, please go ahead and ask your question.
This question is indirectly related to the first question, because you are referring to a Ubuntu compiler and a Linux compiler. I will once again repeat that Ubuntu, Linux, Unix and Microsoft Windows are names of operating systems, and none of these comes ready made with a compiler. So it is wrong to say Ubuntu compiler or Linux compiler, just as it is wrong to say Microsoft Windows compiler. These are operating systems which intrinsically do not contain any compiler whatsoever. So, coming back to your question: the GNU compiler which you are using, the GCC compiler, is exactly the same compiler that we are using on Ubuntu and that would have been used on Linux as well. In short, the compiler used on the many variations of the Unix operating system, such as Unix, Linux, Ubuntu, SUSE or Novell, would generally be the GCC compiler; in a Microsoft operating system the GCC compiler is also available, and other compilers are available too. So I hope that answers your question. Specifically, Linux and Ubuntu and such things do not define a compiler. You have to choose a compiler to use in any one of these environments, and the one which you are using in the lab is exactly the same compiler as the one which you would probably be using in any other Linux version. Okay, let us go to the last institution, COEP Pune; and Trishur has two more questions. Okay, both are good questions. The first question relates to the differences in implementation. What was pointed out is that when you print the size of an integer in Turbo C it shows two bytes, whereas yesterday in the lab we saw four bytes. You remember I mentioned that the actual allocation of memory units to an integer or a float is completely implementation dependent, and I did mention that you can have an integer which is two bytes or an integer which is four bytes.
It is not a question of the back end operating system environment; it is the Turbo C compiler which chooses to implement an integer declaration as a two byte value, and it is a characteristic of the GCC compiler that it chooses to implement an integer in four bytes. This is the implementation dependent difference which is permitted by the C programming language, and programmers have to take care that whenever their program is executed, the execution will be subject to these limitations imposed by the compiler. So, to conclude, the limitation is not of the Ubuntu environment or, for that matter, the Microsoft environment; the limitation is that of the Turbo C compiler, which gives only two bytes for the integer. However, if you define a long integer, you will get a four byte integer. Please remember, this happened because Turbo C was developed as a compiler for small machines. In the very early days, microcomputers used to have 16-bit words, and therefore allocating 16 bits to an integer made a lot of sense. A whole large number of compilers which were meant for microcomputers had this behavior. On the other hand, compilers which came from larger machines, such as the earlier mainframes or the mainstream Unix machines, almost always had an integer implementation of four bytes. So please do not blame the C programming language for this. The C programming language standard itself very specifically says that these limits are dependent on the particular implementation, and when it says particular implementation it is talking about the implementation of a compiler, not of an operating system. In short, this is the difference between the Turbo C and GCC features, and that is perfectly alright; nothing can be done about it. Let's go over to Trishur, who had a query. I was just discussing this specific behavior with my colleague Prof. Uday Gaitonde, who is an expert in numerical analysis. Let me repeat this question.
The query is that she typed in the number 123.456 as input, and when she displayed it back she got something like 123.456001, and the question is: why is this happening? Actually, my colleague Prof. Gaitonde created a small sample program which he used to ask his students to run; this he did, I think, 20 years ago. I will get that program and send it across, and you can run it. This is happening for two reasons. The main reason is that when you input your decimal number, it is not only limited by the precision of the floating point representation, it is also converted to a base which is 2 or 16, whereas the number that you have given is in the decimal base. Just consider the decimal fraction 0.1. It so happens that 0.1, which is 1 by 10 in decimal and has an exact fractional representation there, does not have an exact representation in binary; if you try to write a binary fraction which is exactly equal to 0.1, you will not get one in a finite number of bits. Consequently, a number which appears to be finite and exact in decimal representation can become a different number when represented as a binary or hexadecimal fraction. That is the reason why you are getting this additional 001: the final 001 is the result of first converting 123.456 into an internal binary representation and then converting that binary representation back into a decimal representation. So that is an interesting phenomenon that you have observed. It is a good question; I would like all other colleagues to remember this point, and I will definitely get the example from Professor Gaitonde and include it in one of the downloads on your site, which you can try out tomorrow. Incidentally, before going forward, I forgot to answer the second question asked by the earlier participant.
The second question was: Sir, while I am calculating the square root of 123.456 both in a program and on a calculator, I am getting the same answer but with a difference after a few decimal places; so which one is correct, sir? The answer is again related to the same phenomenon. The exact behavior will be different depending upon the amount of precision available on different machines. For example, the calculator is also a computer, and it is unlikely that it implements decimal arithmetic; it must be using some similar representation, but it may carry a larger number of digits, and therefore you may not notice the difference. These kinds of things can be shown to happen on both calculators and computers. Please wait till the examples are created, and in the subsequent sessions, once you run that exercise on your computers, you will be able to understand why these things happen. To conclude this interaction, let me just mention the question asked by an earlier participant: we said that Dumbo does not remember anything and that he uses memory locations, so how does he remember the locations? First of all, let me explain that the entire model was supposed to be a very simplistic model of the computational behavior of a computer, and therefore to attach exactness to the various descriptions is perhaps unfair. The idea of that entire episode of Dumbo was to convey some basic things about computers to students who do not know anything about them. But to answer your question: the reason why we have the name tags on the drawers is to ensure that Mr. Dumbo can see those name tags and know which drawer is B, which drawer is C, which drawer is sum, and so on. So yes, you are very right.
Dumbo, that is, my computer, cannot remember anything unless it puts it into the memory. When the compiler translates our program into instructions, what it actually does is convert all our variables into location addresses, and in fact the instructions to Dumbo specifically point to a location address: just as we have A, B or sum, the location addresses could be binary values such as 00101110, and so on. So when the high level language program is converted into machine language, the instructions explicitly refer to memory locations, and that is why Dumbo does not have to remember anything. When he looks at an instruction, he knows which location he has to go to to collect which data; that is the purpose of the name tags on the drawers. With this we will go ahead, otherwise we will have very little time to complete the remaining portion here.