In this session we will study the static testing aspects like static analysis, code review, inspections, tools and so on, and today we will start on data coupling, or data flow analysis. Before that, let us do a recap of what we studied in the earlier session. We looked into static testing, and we know that it is testing without execution of the embedded software program: we evaluate the system by analyzing the various aspects of the embedded software, without the need of executing the program. In contrast with dynamic testing, static testing needs no execution, so for the completion of embedded software testing we need both types of techniques; they complement each other. The different types of static testing are static analysis, review, inspection and test of the process, and then we have testing metrics, which are used in the reporting process and in making sure that the testing is complete in all aspects. In static analysis we went through the types of analysis that we do on the initial code: parameter type mismatches, possible array bound violations, and faults that are found by the compilers, which could be general information messages, warnings, or errors. These types of issues reported by the compilers are also analyzed statically. With the help of the code we also measure program complexity; as the complexity increases the fault density increases, so we know that we should limit the complexity to a certain extent, keeping the complexity of the program controlled and within limits. We also highlighted the advantages of static analysis: it can be done as soon as the code is written and compiled, since we already have an understanding of the program and the requirements, and on par with
that, we can check whether each requirement is there or not, and do a preliminary sort of evaluation of the developed code. That is why static analysis is effective as soon as the code is done; it is better to do it in the beginning than at the end, after dynamic testing is completed. We also have representations of the control flow, such as call trees, sequence diagrams and class diagrams, depending on the embedded software system; those are also used for analyzing the program, and will also be part of static analysis. The main aspects are control coupling and data coupling. Data coupling is the dependence of a software component on data not exclusively under the control of that software component; that means the dependency of the component on particular data is not controlled by the same component, the data is shared between different components, and how the data is coupled between these components is what is analyzed. Control coupling is the manner or degree by which one software component influences the execution of another software component; basically we analyze the various parts of the program in terms of how control is passed over the entire life of the embedded software program. These two definitions are from the CAST paper from the FAA, the Federal Aviation Administration, which is one of the mandatory bodies that qualifies or certifies avionics and aerospace software products. We also studied examples in the other session: with the help of control flow analysis we can see if there are any unreachable paths, partially or completely, or any parametric issues due to which the flow is not proper, and we can have a visual inspection of the control flow. We also do a
software complexity measurement, which is a static metric that we will study in detail in the later sessions: McCabe complexity, lines of code, nesting levels, fan-out, comment density and so on. We went through McCabe complexity in a little detail: the cyclomatic complexity is calculated as edges minus nodes plus two, and in case any disconnected parts are there it becomes edges minus nodes plus two times P, where P is the number of connected parts. The nodes are derived from the decisions (diamonds) or the processing blocks within the software program. A software complexity greater than 10 means the complexity is high, and some industries call for rework of the particular module simply because they do not want a complex program, since the chances of error are very high. The complexity number is basically the likely number of independent paths. We also saw examples of the cyclomatic complexity calculation, which are very important. In the first case the complexity is one, because we have two nodes and one edge. In the second one we have four nodes and four edges, so the complexity is two. In the next one the complexity is also two, because there are two disconnected paths made up of four nodes and two edges. And in the last one we have five nodes and four edges, meaning the complexity is one; you can see the independent execution path is only one, from the first node to the last node. Some more examples were reported from a tool such as Understand for C/C++, which you can go through, and this one is from the LDRA Testbed: you can see the various paths drawn with the different nodes and edges, and here the McCabe complexity is 10. In the next one we have a more complex program, where a lot of nodes are connected with different edges.
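The cyclomatic complexity formula discussed above can be checked against the lecture's four graph examples with a few lines of C; this is just a sketch of the V(G) = E - N + 2P calculation, with the graph sizes taken from the examples.

```c
/* Cyclomatic complexity V(G) = E - N + 2*P, where E is the number of
 * edges, N the number of nodes, and P the number of connected parts of
 * the control flow graph (P = 1 for a single connected program). */
int cyclomatic_complexity(int edges, int nodes, int parts)
{
    return edges - nodes + 2 * parts;
}
```

For the lecture's examples: 2 nodes and 1 edge give V(G) = 1; 4 nodes and 4 edges give 2; the disconnected case with 4 nodes, 2 edges and P = 2 gives 2; and 5 nodes with 4 edges give 1.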
The complexity here is 20, and in the next one the complexity is 46, which is very high; this will definitely call for rework, because the program can crash or may have a lot of errors and bugs. This kind of McCabe complexity is to be avoided, and with the help of tools it is easier to analyze such complexity, hence the tools are used. Continuing with static analysis today: we know that in control coupling the program control and the independent execution paths are analyzed, while data coupling basically depends on the data objects used in the software program. When there is complex code, we need to analyze the data in terms of its usage as well as its flow, and ideally we do a visual inspection of the data flow. Let us go through the definitions of the anomalies: a definition with no intervening use, an attempted use of a variable after it is killed, and an attempted use of a variable before it is defined. Especially in embedded programs, how the data is initialized, used and updated over the entire program, including any attempted use of a variable after it is killed or before it is defined, is what is analyzed in data coupling or data flow analysis. Coming to the next slide on static analysis, data coupling and data flow analysis: there are variables which are referenced but undefined; these we should remove from the code, since there is no point in having them. Similarly, variables defined but not used in the scope should either be documented in the code or mentioned separately, and variables redefined with no use in between should be documented, if not removed.
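The anomaly kinds listed above can be seen in a small sketch; the function below is an illustrative example (not from the slides) showing the patterns a data flow analyzer would flag.

```c
/* Illustrative data flow anomalies a static analyzer reports. */
int anomaly_demo(void)
{
    int unused = 7; /* defined but never used in this scope -> flagged  */
    int a;
    a = 1;          /* definition ...                                   */
    a = 2;          /* ... redefined with no intervening use -> flagged */
    return a;       /* only this last definition is ever used           */
}
```

A tool would report the unused variable and the redundant first definition of `a`, even though the function compiles and runs.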
That is to say, if you have a redefinition of a variable without any usage in between, you should document it — here documentation means you should comment it if it is going to be used in a later scope — or, if it is not used, you can close that variable and delete it. Then we have casting. Casting is a very important term in embedded software: with different types of variables we interchange contents, and while doing that we use typecasts in embedded C. While doing that, there is a high chance that we lose information, in terms of some bits being discarded, and also a chance of a mismatch between the intended use and the actual use. So if casting is unavoidable, use an explicit cast; that means it is better to explicitly typecast that particular variable. Suppose we have a variable temp_var and another variable long_var, where temp_var is a 16-bit integer and long_var is a 32-bit integer; then the best thing would be for temp_var to be assigned with an explicit 16-bit typecast of long_var, so that the intent is aligned, but we should be careful that the actual value does not lose any information. It could be vice versa also: if a 16-bit value is assigned to a 32-bit variable there is no loss, but when a 32-bit value is assigned on the right-hand side to a 16-bit variable, the chances are that we lose the data outside the 16 bits. So we need to use casting appropriately; such errors can be analyzed in static analysis, and any mismatch between the data that is used and what is assigned will also be spotted while doing the static analysis of the data flow and data coupling.
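The 16-bit/32-bit example can be sketched as follows; the names temp_var and long_var follow the lecture's example, and an unsigned type is used here so that the truncation behavior is well defined in C.

```c
#include <stdint.h>

/* Explicit narrowing cast: only the low 16 bits of the 32-bit value
 * survive; anything above bit 15 is discarded. */
uint16_t narrow_to_16(uint32_t long_var)
{
    return (uint16_t)long_var;  /* explicit cast makes the intent visible */
}
```

For example, 70000 does not fit in 16 bits (maximum 65535), so narrow_to_16(70000) yields 70000 - 65536 = 4464; a MISRA-style rule checker would warn if this conversion were done without the explicit cast.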
Next comes global variable analysis, where a local overrides a global. We know that we use global variables as parameters or shared objects between the various functions and procedures, and sometimes the same names are used within a local function, causing confusion in terms of usage: whoever has written the function assumes that the global variable will be used or updated within the function, but the local gets the priority, it overrides the global. Such anomalies should be removed. The best way is to avoid redundant variables and to have meaningful variable names, so that it is very clear whether a variable is local or global. These things can definitely be brought out while doing static analysis, flagging that there is an anomaly between a global and a local in terms of usage, assignment, parameter passing and so on. Typically some coding standards say that you start global variable names with a marker, for example a g prefix or a capital letter, so that by seeing the variable itself we know what type of variable it is and where and how it is used; similarly a local variable is identified with a small letter, so there is a clear segregation of both of them and no anomaly in their usage and flow. This is how the data coupling can be analyzed. The next point is that static analysis is basically used in unit testing, because unit testing is the only level that works directly on the implementation of the code; that is why it is advisable to do the data flow analysis during unit testing. While doing the analysis of the code, we need to establish certain project standards for the code, like coding standards, the nomenclature of the variables, the procedure or function size, the number of calls, and the breaks in the case statements — for instance, some DSP toolchains will not allow more than four cases, in which case we need to stringently follow the rules of that particular processor or development environment, so that the program will not have any errors or go for a crash.
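The local-overrides-global anomaly described above can be shown in a few lines; the g_ prefix is one possible naming convention, used here purely as an illustration.

```c
int g_mode = 0;              /* global, marked with a 'g_' prefix         */

void set_mode_buggy(int m)
{
    int g_mode = 0;          /* local shadows the global -> flagged       */
    g_mode = m;              /* updates only the local; global unchanged  */
    (void)g_mode;            /* silences the unused-value warning         */
}

void set_mode_fixed(int m)
{
    g_mode = m;              /* no shadowing: the global is updated       */
}
```

A static analyzer (or `-Wshadow` in GCC/Clang) reports the shadowing in the first function, which is exactly the anomaly the lecture describes.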
Similarly, we know that we should not have gotos; we know it is not a good practice, because it leads to unintended behavior of the program and it is very difficult to return from them. These things are part of the project standards that need to be established and analyzed against, and mostly this will be done in unit testing, where data flow and control flow analysis can equally be done. Another aspect is removing unused items: unreachable code, unreachable functions within the code, and declared variables that are not used. All of these can be identified by inspection or review, or with the help of tools, and avoided in the implementation; such things will be brought out while doing static analysis of the implemented program. The other important aspect of static analysis is addressing development architecture problems. There are control flows and data flows within the program which involve switching statements through pointers or addresses; these are difficult to find out during execution, so there will be a thorough visual inspection of the control flow, so that we are clear about the architecture of the program and how it is designed, and about the various states. This is definitely useful for events, states, state machines and so on: it is better to have an understanding of the entire program, and then, with the help of visual inspection, apply the architectural rules that we have understood for the program, and with the help of the control flow we analyze the design of the entire program. Similarly, we have anomalies in the procedures: procedures that are defined and referenced but not used.
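As a small sketch of the unreachable-code case mentioned above (an illustrative example, not from the slides):

```c
/* Every path returns before the last statement, so the final
 * return is dead code -- exactly what static analysis reports. */
int sign_of(int x)
{
    if (x >= 0)
        return 1;
    else
        return -1;
    return 0;   /* unreachable -> flagged by the tool */
}
```

The compiler may stay silent here, but a static analysis tool lists the last return as unreachable code to be removed.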
Such procedures should be avoided, and the usage of the procedures in terms of parameter values also needs to be analyzed while doing static analysis; that is also an important aspect. Okay, coming to the next one: control and data coupling during integration testing. It is not enough to have control coupling and data coupling analysis during unit testing only; it is also important to have it during integration testing. Why is it important during integration testing? While doing integration testing we address the different modules: we know that we are going to integrate various modules, and the modules may be software to software, that means at the application level, or software to hardware, such as a device driver with the application, in a bottom-up or top-down integration approach. While doing that, it is better to analyze the interface details of the modules, so that the control and data coupling can be analyzed. Okay, so that is the highlight of control and data flow during integration testing. The next one is software-software integration testing: a review of the control and data flow of the program. Basically, software-software integration, at the application or program level, within the modules of the entire program, will definitely bring out the control flow of the program, and equally the data, the objects and the variables that flow within the program; that is the role static analysis plays in the entire embedded software program, and that is how we do control coupling and data coupling in static analysis. So now we come to the tools part: what are the tools that are used, how are they used, and in what way are they useful for static analysis? A static analysis tool is like an automatic reviewer for your code; that means whatever
the human reviewer does, the machine — the tool — will basically take care of, for the intended embedded software program. It basically reads through the source code, of course without execution, and looks for cases where the program would behave in an undesirable manner, for example dereferencing a null pointer, dividing a number by a variable that is zero, or overflowing a memory buffer — you know that a software program definitely has an upper bound and a lower bound on its memory, and if any variable or procedure exceeds them, the tool will identify those anomalies. So the static analysis tool works like an automatic reviewer of the code. Static analysis tools do not depend on sample input; definitely no dynamic or actual input is required. They can infer the software's behavior based on just the source code, because the source code has the definitions and the flow of the variables that are intended to be used within the procedures and functions. With the help of that, the tool effectively considers the various possible values and analyzes the flow of the particular parameters based on their definition and declaration; that is the meaning of the second bullet, they can infer the software's behavior based on just the source code. When a bug is found, the tool reports its location to the software engineer who is running the tool, along with the information needed to diagnose the problem; that means it points out where the issue is, in terms of the static analysis aspects we have seen earlier, like control flow and data flow issues, in an automated way, and it identifies which location has what issue. This is very important because there are stringent rule checkers that are used, which need to identify the various anomalies within the implemented program, especially rule sets like MISRA.
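The divide-by-zero and null-pointer cases mentioned above can be sketched with guarded versions that an analyzer can verify as safe; the function names here are illustrative, not from the slides.

```c
#include <stddef.h>

/* Without the check, an analyzer reports "possible division by zero"
 * on the division line; with it, the dangerous path is provably dead. */
int safe_div(int num, int den, int fallback)
{
    if (den == 0)
        return fallback;
    return num / den;
}

/* Same idea for a possible null pointer dereference. */
int safe_deref(const int *p, int fallback)
{
    if (p == NULL)
        return fallback;
    return *p;
}
```

No sample input was needed to find either defect in the unguarded versions: the tool infers them from the source alone, which is the point made above.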
MISRA C comes from MISRA, the Motor Industry Software Reliability Association. The MISRA C:2004 standard, for instance, has about 120-plus rules that are stringent and are to be followed in the implemented program; there are mandatory rules and guidelines, and these are also understood by the tool. Where there is a violation of the rules against the intended usage of the program and the data, it reports it as an error. Since they do not depend on sample input, static analysis tools can exercise program behavior in corner cases that are not anticipated by testers and human inspectors. Some constructs, such as a pragma or a macro usage, are very difficult to analyze or inspect visually; those things are definitely caught with the help of the static analysis tools, because no sample input from the programmer is required — simply by inferring the program flow or the data flow of the embedded software, the tool will identify such issues, which stay hidden from the human while doing the inspection. Okay. While no tool can find all bugs, of course, modern static analysis tools generate valuable results with minimal false positives; that means they will definitely aid the tester or the testing team in finding the problem areas, the issues that are likely to happen when the program goes to the field or gets executed. Even for projects with millions of lines of code, the tools will definitely give some sort of hint about issues, or report them directly. And where there is a problem which the tool cannot report directly, a single issue could still result in a runtime error: suppose it identifies a variable as improperly defined, this is reported as an issue, say issue n, and this issue n is definitely a clue to a bug the program can have while executing in the field or when there is
dynamic usage of the program. It could be any kind of issue — memory, performance, speed, timing and so on. So even if the tool does not support everything in its entirety, what it can do is aid the tester with results that point positively to errors or issues that can be fixed, so as to avoid a major breakdown at a later stage of the program. That is how the tools can be used. Okay, so what are the tools that are available in the market, or most likely to be used in the embedded software industry? Of course there are hundreds of tools, each one being used and evolving based on usage and feedback from the embedded industry. There are a few examples I have tried to put here, which can be used directly or indirectly, partially or fully, depending on the type of embedded software system — it could be telecom, it could be railway signaling systems, it could be automotive or aerospace and so on. One such tool is Understand for C/C++ from SciTools. This is a good static analysis tool which will identify static analysis aspects like McCabe complexity and lines of code, and it will report dead code, dead objects, uninitialized variables, and improper usage in the call tree — you know all this by now — so as static analysis, all this can be done with the help of Understand for C/C++. It has a lot of features, and I will try to go through a snapshot of Understand for C/C++ for a practical understanding of the tool in the next slide. Similarly we have a tool like Polyspace; it has its own advantages and disadvantages, and they use it in the automotive industry. Similarly, for inspections, reviews and static analysis, we have rule checkers such as QA·C, and then we have the LDRA Testbed. Some of these tools, like LDRA or RTRT, can also be used for unit testing instrumentation; you know that during white-box testing we need to do the instrumentation of
the code with test stubs and test drivers, so meanwhile the same tool can be used for that, and it can also help with static analysis, code inspection, reviews and so on. Of course, as I said, there are the MISRA tools for the guidelines and rule checking. We have IDEs such as Code Composer Studio, MULTI and others, which have an inbuilt feature that can be triggered and used on the code, and which will report an error or a violation of a MISRA rule, saying that such and such rules have been violated; those violations can be reported as static analysis output. That is how these tools can be used. Similarly we have the Logiscope rule checker from Telelogic, also one of the important tools used across the industry, and we have PC-lint, which is also a static analysis tool; it is more of an older-generation tool — lint was traditionally used on Unix-based systems, available inbuilt along with the C compiler toolchain, and there are open-source static analysis tools in the lint family that can be used for static analysis. So these are some of the static analysis tools. Now we will try to understand a snapshot of a static analysis testing tool, here Understand for C/C++. I think I will try to create a sample project and explain the flow of Understand for C/C++ and how it can be used in one of the practical sessions in the later part of the embedded software testing course. This is a sample report; you can see this tool has a lot of features. We can build the project with embedded C, and you can see the source code here: there is an int main, shown in blue letters, because the tool understands, from a C perspective, the declarations and definitions that are part of the source files; it will highlight them, and it has highlighted the variable names here too, so the definitions can be easily understood by the user in terms of analyzing the code and understanding the source code.
And here you can see the invocation of a sample program which I have written: there is a main, there is a delay which is called by main, and delay is calling another function called nothing; this is the call tree. Similarly we can have the complexity in the call tree — maybe in one more slide I will show you to what depth we can use the Understand for C/C++ tool. In another window we can have another type of report which shows the other side, like who nothing is called by; it could be called by multiple callers, but here in this case there is only one flow, one direct call chain from main to delay to nothing. On the left-hand side you can see a snapshot of how the tool can highlight a particular function or group of the program; here it shows it is defined in main.c. It will list out the globals and functions — there are three functions: delay, main and nothing — and it will report metrics for this particular main.c, such as the CountLine metric, that is the executable lines of code; it identifies that there are 49 lines having comments and 0 inactive lines, meaning that if there were inactive lines it would report that number. Likewise we can generate the report; I will try to show you in one of the sessions how we can generate the report with this tool. You can also download this tool for free, for non-commercial use, from www.scitools.com; they will give a 15-day evaluation version where you can create a sample embedded C or C++ project and analyze it. Maybe you can take care of the download part, and I can provide some exercises so that we can go through them and try to understand the tool. So this is the snapshot of the Understand for C/C++ static testing tool.
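The three-function sample described above (main calling delay, delay calling nothing) can be sketched as follows; the trace variable is added here purely to make the call order visible, it is not part of the original sample.

```c
static int call_trace = 0;      /* records the order of calls as digits */

void nothing(void) { call_trace = call_trace * 10 + 3; }

void delay(void)                /* called by main, calls nothing */
{
    call_trace = call_trace * 10 + 2;
    nothing();
}

int run_sample(void)            /* stands in for main in the sample */
{
    call_trace = 1;
    delay();
    return call_trace;          /* 123: main, then delay, then nothing */
}
```

A tool like Understand derives the call tree main → delay → nothing (and the reverse "called by" view) from exactly this kind of source, without running it.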
There is one more tool we will go through: Checkmarx, from checkmarx.com. You can see this also has various analysis outputs, in terms of which vulnerable code line is having an issue here, and it will store the project files here; I think it is taking an example snapshot of a database program. On the right-hand side you can see the flow of the different programs — it is called an attack vector — and how it is ordered. Similarly, it will show results in red if errors or issues are found, and there is an optimal mitigation point, that means the point at which the different programs can be controlled and mitigated within this control flow. Likewise we can use this Checkmarx tool for generating the static analysis report. Next, as I said, in my slide on Understand for C/C++ you saw a simple program having only three functions being invoked; in the next one you can see the call tree complexity — this is not the control flow complexity, and you may not be able to see it clearly here; maybe when I explain the tool you can understand the call tree in detail. Here you can see the main function calling its sub-programs: there are about six to seven next-level functions, which we can call level one; then each of these functions will call level-two individual functions, of which each can have four or five, and likewise level two comes after the first invocation is done; similarly we have a level three, a level four, a level five and finally a level six. It can also report no issues with a green color and issues with different colors in the control flow, so presumably the depth here is level six, though of course it depends on each function and its internal details. This is one good example of an Understand for C/C++ call tree; with the help of this, the user or tester can analyze how each program or function has been architected, and how each function or individual procedure is used or called within the entire embedded system. This is a
very important aspect of static analysis. Okay, the next important aspect of static analysis is WCET, also called worst-case execution time analysis. We know that timing is very important in systems, especially hard real-time embedded systems, so we need to have the timing aspects clearly analyzed in the program, so that we know what the worst case is going to take for a piece of software — it could be any functionality, it could be the entire program, or it could be a device driver, whatever. All of these have to be analyzed for their timing, the time that particular piece of software takes, from two aspects, best as well as worst; both have to be analyzed. This is a very important aspect of embedded software testing, and it needs to be analyzed statically. The worst-case execution time is the worst possible execution time of the code, determined before using it in the system — before we actually use the system in the field or for dynamic testing. The WCET of a piece of code depends both on the program flow — loops and iterations, decisions such as if-else statements, and the various function calls within the program — and of course on architectural factors like caches and pipelines. The embedded industry uses cache memory; you know that embedded hardware will have a cache, which is a temporary storage used for very frequently accessed items, and these factors have to be understood while doing the WCET analysis. Pipelines are an important processor aspect, with stages such as fetch, load, store and execute; these are some of the stages the processor core goes through for executing, so definitely the instruction cycles involved need to be analyzed, in terms of how many instructions are required for a line of code, a group of lines, an entire program or a piece of function. So it is very
important to have worst-case timing analysis in embedded software testing, so that we know how much the worst-case execution of the particular software is going to take. With the help of timing analysis we do that, and we will report a failure if it goes beyond the intended timing; it is then to be fixed by the development team, in terms of optimizing the time or reworking the code, whatever it takes. Okay, so the WCET analysis is also used to optimize programs and to compare algorithms: we definitely need a comparison of the various modules, and if we know how much each module is going to take, then overall we know what is more complex, what is going to take more time, and so on; we can also analyze which programs require optimization, and how the hardware is behaving, where it is taking more time. Suppose you have three or four device drivers, each one taking its own time — one takes 2 milliseconds, another takes 5 milliseconds, another takes 10 microseconds — then we know how much a program that uses those three device drivers is going to take overall; the 2 milliseconds could be one operation, a write operation or an access, whatever it is. All this can be evaluated with the help of the timing of that particular program. This is also a measurement: the tools used for timing analysis are emulators — time-accurate emulators for the program. Emulators support the call sequence and the total time of the calls, the stack usage, how much time it has taken, the call tree or the sequence tree and so on; all this can be done with the help of emulators as well as simulators. Then we have logic analyzers, which can be hooked into the embedded target, and with the help of logic analyzers timing analysis can be done, and we use oscilloscopes to measure the various timing aspects.
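A back-of-the-envelope version of the timing bookkeeping above can be sketched in a few lines of C; the cycle and driver figures used below are assumed numbers matching the lecture's example, not measured values.

```c
/* Simple static WCET estimate for a loop: entry overhead plus the
 * worst-case iteration count times the cycles per iteration. */
long wcet_loop_cycles(long entry_cycles, long max_iterations,
                      long cycles_per_iteration)
{
    return entry_cycles + max_iterations * cycles_per_iteration;
}

/* The worst-case time of a program using several device drivers is
 * bounded by the sum of their individual worst-case times (in us). */
long wcet_drivers_us(const long driver_us[], int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += driver_us[i];
    return total;
}
```

With the lecture's figures of 2 ms, 5 ms and 10 microseconds, the three drivers together bound the program at 2000 + 5000 + 10 = 7010 microseconds; real WCET tools refine such sums with cache and pipeline effects.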
An oscilloscope helps in plotting: we have the time on the horizontal axis and the value on the vertical axis, so that we know what the value is across time, and we do the analysis with the help of the oscilloscope by probing any signals or variables; of course we have the timer readings of the particular procedure, variable or signal, and of course we may have programs inserted into the software specifically for measuring this, which is nothing but software profiling. Profiling is a very important aspect in embedded software systems; with the help of profiling we do the analysis of the timing that the procedure or the intended program is going to take. So that is about worst-case execution time analysis. You can see an example from one of the web references that I have used: both aspects have been put in this diagram, BCET and WCET, where BCET is the best-case execution time and WCET, as you know, is the worst-case execution time. In between, in the middle, we can see that there is a range of possible execution times, and beyond that there are safe zones for the best-case and worst-case execution estimates. Across the horizontal time axis the program can take anything from the best to the worst, so we do the analysis and define a boundary for the best and the worst; mostly the execution falls within this range, an estimate below the BCET is called a safe BCET estimate, and a measure beyond the zone of possible execution times on the other side is called a safe worst-case estimate. Like this we do an analysis of the best-case and worst-case execution and draw the relationship. The goal of the WCET analysis is to generate a safe and tight estimate of the worst-case execution
A related problem is that of finding the best-case execution time of a program. You can see in the example figure how the different timing estimates have been put together; it depicts the average execution time, the best and worst execution times, and the tight and safe estimates. Analytically it may be very difficult to arrive at the exact execution time, because a lot of statistical profiling of the input data would be required, but definitely we should be able to analyze the worst case, and with intuition and the analysis of various worst-case runs we will know the possible execution times and the best-case execution time. For doing the worst-case execution analysis, the important steps that we need to take care of are: program flow analysis, to start with; then low-level analysis, because the individual low-level timings collectively matter for the high-level program flow; and of course the calculation that combines all of this. These are the very important steps in worst-case execution timing analysis, and as I said, logic analyzers, oscilloscopes, simulators and emulators, any of these tools can be used for doing it. You can see another snapshot of WCET analysis, this one from the Bound-T tool (bound-t.com). What it does is determine bounds for the program: the executable produced by the compiler and linker is given to Bound-T, along with various options and user assertions, for example on loop counts, variable values and call counts. The analysis is done by decoding the instructions and analyzing the control flow, the program calls, the loop bounds and the worst-case path.
All this is done by the Bound-T analysis: the inputs are analyzed, and as a result we get a flow graph, a call graph, and worst-case bounds in terms of cycles, the number of cycles each program fragment will take. For example, main has a bound of 9352 cycles, there is a fragment which takes 121 cycles, count is 105, solve is 927, another count is 303 and once is 720; each program fragment is listed with how many cycles it takes, and with the help of that we do a static analysis of the timing. The next type of static analysis is stack analysis, which is also a very important aspect. You know what a stack is: stack memory has to be allocated statically by the programmer, which means the implementation definitely has to define the stack memory, because we use interrupts, and preemption of smaller functions by bigger procedures and the other way around, so the program state and the variable state all have to be saved and recalled later; that is the use of the stack. Embedded systems will definitely have stack memory, and how much of it each embedded software fragment, or the entire program, is going to take has to be understood, because underestimating your stack usage can lead to serious runtime errors. So we definitely need a safe bound on the stack memory, and the allocation is very important; we should make sure that the implementation of the embedded system will not go beyond the stack memory. That is why there will be hard requirements saying that a reserve, say 25 or 50 percent of the stack memory, should be kept: in the worst case the usage may exceed the estimate, so on the safer side we keep a reserve, and then even if the program spills beyond the estimated usage it is still safe, and the embedded system will not crash with a stack error.
So, underestimating stack usage can lead to serious runtime issues: program behavior can become unpredictable when there is a stack problem, and that is very difficult to find. Similarly, overestimating stack usage is also something we cannot afford; we cannot have a 200 percent stack allocation as a buffer, because we would be wasting memory. Stack analysis calculates the stack usage of the embedded application in such a way that the analysis results are valid for all inputs and for each task execution. Stack analysis can be performed directly on the binary executables, exactly as they are executed, so the stack behavior will be the same, and stack analysis not only reduces the development effort but also helps prevent runtime errors due to stack overflow. A very important point is that at no time during the run of the embedded software should there be a stack overflow, and this too is checked during the stack analysis part of static analysis. The analysis can also be done on the map files generated during the build process: the program is compiled and linked, and as a result we have an executable along with a map file. The map file basically gives the complete picture of the program that is going to be executed on the target; it will clearly list the locations in memory, that is, the stack segment, the program text and the data. This needs to be analyzed, and it will definitely show the stack memory usage and the upper boundary of how much stack the program can have. As a practical example, I will try to go through a sample map file, a file with the .map extension; it needs to be analyzed statically, and with the help of it we will know the memory aspects and the stack aspects of the embedded software program.
So with this we will conclude today's session; we will continue in the next session on static analysis, with stack overflow and the guidelines that need to be followed for stack analysis.