Welcome to the next session of the embedded software testing unit 2 series; this is lecture 21. Today we will study more of the white box testing techniques, continue our white box testing discussion in more detail, and try to conclude unit 2 in today's session, if not in the next one. In the previous session we discussed the other white box techniques such as branch condition testing, and before that we studied statement testing, branch testing and data flow testing. In branch condition testing, the source code is examined to find the decisions, and each individual Boolean operand within a decision condition is tested; that is what is called branch condition testing. The next type is branch condition combination testing, which corresponds to what the aerospace industry calls multiple condition coverage: it again works on the source code, recognizes the decisions, and exercises all the possible combinations of values fed into the Boolean operands within each decision, and the test cases are designed in such a way that every combination is achieved. Then we studied modified condition decision testing, where each condition within a decision is shown to independently affect the outcome. The last one was LCSAJ, linear code sequence and jump, where we saw three test effectiveness ratios: TER1 is the number of statements executed versus the total number of statements, basically statement coverage; TER2 is the number of control flow branches executed versus the total number of branches; and TER3 is the number of LCSAJs executed versus the total number of LCSAJs. It was used quite a lot initially, but nowadays the other types of coverage are preferred. We also went through some samples, including an LCSAJ table where each LCSAJ is listed with its start line, finish line and jump line, together with the total number of LCSAJs. Finally there is the DO-178 standard, where the specific coverage criterion is MC/DC; the standard describes the life cycle processes, addressing the life cycle activities such as planning, development and verification (verification, which includes testing, is called an integral process), and under development it has requirements, design, coding and integration. We also studied a few examples with AND gates and OR gates where a truth table can be arrived at, so this type of testing is also called the truth table approach: all the combinations which demonstrate the independence of each condition, along with the outcome, are tabulated and the tests are driven accordingly (a small illustration of this appears a little further below). So in today's session we will be talking about gray box testing. As I said in class, it is a mix of both white box and black box: sometimes black box testing alone is not sufficient to support the coverage or the justification in terms of testing, and white box testing alone is not enough either, so we need to balance between both of them, especially for integration test cases and some of the system test cases, where some white box coverage needs to be balanced in. In this type of gray box testing, the white box tests can be intimately connected to the internals of the code, and they can be more extensive than black box tests alone because of the complexity and other factors.
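To make that recap concrete, here is a minimal sketch, assuming a hypothetical decision (A && (B || C)), of how an MC/DC style test set is picked out of the truth table: each condition is toggled while the others are held fixed, so four vectors are enough instead of all eight combinations. The function and test vectors are illustrative only, not from the lecture material.

#include <stdio.h>
#include <stdbool.h>

/* hypothetical decision under test */
static bool decision(bool a, bool b, bool c)
{
    return a && (b || c);
}

int main(void)
{
    /* One possible MC/DC set: each operand independently affects the
     * outcome while the other operands are held fixed.
     *   A: (T,T,F)=T vs (F,T,F)=F  -> A shown to act independently
     *   B: (T,T,F)=T vs (T,F,F)=F  -> B shown to act independently
     *   C: (T,F,T)=T vs (T,F,F)=F  -> C shown to act independently
     * Only 4 distinct vectors are needed instead of all 8 combinations. */
    struct { bool a, b, c; } v[] = {
        { true,  true,  false },
        { false, true,  false },
        { true,  false, false },
        { true,  false, true  },
    };
    for (unsigned i = 0; i < sizeof v / sizeof v[0]; ++i)
        printf("A=%d B=%d C=%d -> %d\n",
               v[i].a, v[i].b, v[i].c,
               decision(v[i].a, v[i].b, v[i].c));
    return 0;
}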
Tests for which we know only a little about the internals of the components are called gray box tests; as I said, some knowledge of the internals is available. Gray box tests can be very effective when coupled with error guessing. Error guessing is another method where the errors are guessed based on knowledge of the unit under test, and the test design is built around that. These tests are gray box because they cover specific portions of the code, and they are error guessing because they are chosen based on a guess about which errors are likely. In other words, the tester has an idea of what the likely failures could be, and based on that knowledge of the system and of the code, he balances between exercising specific portions of the code and the black box features; test cases designed with this mechanism make up gray box testing. This strategy is especially useful when you are integrating new functionality with a stable base of legacy code. That means we have a base code which works most of the time and does not have many unknown issues, and a new piece of code or functionality is being added; we try to understand how that functionality moves through the code, and with the help of our understanding of the system we apply new test cases. That kind of test design is gray box testing. Next, I want to add some more detail on white box testing; the main coverage methods we have already covered, and to that we will add the different testing methods, the testing philosophy and the test details that need to be studied. So we will study the test driver and the test stub. What is a test driver and what is a test stub? You might have heard about these; practically every embedded software testing mechanism applies test drivers and test stubs, and they are particularly useful where unit testing is done. A test driver is software which executes other software in order to test it. That means the test driver is itself a piece of software, a test software, which executes the embedded software unit under test, providing a framework for setting input parameters, executing the unit and reading the output parameters. I have a diagram on the next slide which will make this clearer. Sometimes, setting some of the input parameters, executing the unit and checking the expected output is not possible with realistic inputs, so what we need is the development of a small driver which drives all these inputs, calls the piece of software under test and collects the result; with the help of the test driver this is done. A test stub is an imitation of a unit, used in place of the real unit to facilitate testing. It is basically the complement of the test driver on the other side: the actual piece of software that the unit under test calls is replaced with a stub, so that we can control what the called unit returns and check what is expected, and later the stub is replaced with the actual piece of software to compare the expected result and conclude whether the tests pass or fail. That is how a test stub is used. Go through the following diagram; it will give you a clear picture of how the test driver and the test stub are used.
Basically the book refers to this diagram. A test bed is a test setup or environment having the driver on one side, the stub on the other side, and the unit under test in the middle. You can see that the driver calls the unit under test, and the units that the unit under test itself calls can be replaced with stubs; you can think of it as a wrapper around the unit. What does the wrapper do? Whatever information we need to feed into the unit is driven from the driver, and wherever an output or response is expected from a lower-level unit, it is provided with the help of the stub; the stub replaces the actual unit that the unit under test would normally call. That is the basis of test stubs and drivers. How are they used for the interfaces between two system parts? Stubs and drivers are basically used to test the interface between two system parts. Suppose we have UUT1 and UUT2, and we want to test UUT1 with inputs such as parameter 1 and parameter 2 and with outputs such as expected output 1 and expected output 2, and UUT1 normally does its work with the help of UUT2. There is an interaction between them, so while testing UUT1 we replace UUT2 with a stub, and similarly, while testing the UUT2 side of the interface with its parameters, we use a driver in place of UUT1. Likewise the interfaces between two system parts can be tested with this mechanism, if both parts are available, available here meaning available for the tester to test independently; if not, we have to club both of them together and drive them at a higher level, and this has consequences for the testing time, because more time is required for that. Why do we do this? To start testing a system part as early as possible with stubs and drivers. Suppose some functionality of the embedded system is completely developed and other functionality is not yet developed; how are we going to test the already implemented functionality? With the help of drivers built from the specification details and with inputs for the parameters, the test drivers are developed and the tests are run; basically this is useful for testing the interfaces. So a stub is called by the system under test and provides the information the missing system part should have given, and a driver calls the system part under test. Standardization and the use of a test bed architecture also matter: if you have a test bed architecture defined at an early stage of the project, it is very good for each software unit under test, because it greatly improves the effective use of stubs and drivers. The test bed provides a standard interface both for the tester to construct and execute test cases and for the stubs; each separate unit to be tested must have a stub, so we need to have a stub as well as a driver to test them interchangeably. Techniques for test automation such as data-driven testing can then be applied effectively, where the data is very important: the combinations of input data that need to be driven, covering the various data types and operations we studied earlier, can be generated and fed through this same interface.
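To make the stub and driver mechanism concrete, here is a minimal sketch in C, assuming a hypothetical unit under test read_and_scale() that normally calls a lower-level hardware routine adc_read(); all names and values are illustrative, not from the lecture material. In the real build adc_read() would come from the actual lower-level unit, and here it is replaced by a stub so the driver can control its return value.

#include <stdio.h>

/* ---- unit under test (UUT) ---- */
int adc_read(int channel);              /* in the real build this comes from the lower unit */

int read_and_scale(int channel, int gain)
{
    int raw = adc_read(channel);        /* call into the lower-level unit        */
    return raw * gain;                  /* behaviour the driver wants to verify  */
}

/* ---- test stub: stands in for the real adc_read() ---- */
static int stub_value;                  /* value the tester wants the stub to return */
int adc_read(int channel)
{
    (void)channel;
    return stub_value;                  /* controlled, repeatable response       */
}

/* ---- test driver: sets inputs, calls the UUT, checks outputs ---- */
int main(void)
{
    stub_value = 100;                                   /* input parameter 1 */
    int out = read_and_scale(0, 3);                     /* input parameter 2 */
    printf("expected 300, got %d : %s\n", out,
           (out == 300) ? "PASS" : "FAIL");
    return (out == 300) ? 0 : 1;
}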
So with such a test bed architecture, the testing of any unit, the reuse of such tests during integration testing, and large-scale automation of lower-level testing all become easier; considering all these aspects, we need to have test stubs and drivers. That is about test stubs and drivers, and about gray box testing alongside the white box testing mechanism. Now we come to the various coverage and testing tools that are used in the industry in general: logic analyzers, software performance analyzers, timing analyzers, VectorCAST, LDRA, RTRT and many more tools like these, which are basically used for coverage, instrumentation, unit testing and so on; it could be one tool or multiple tools depending on the complexity of the embedded software being tested. Let us try to understand the basics of these tools. VectorCAST is a tool from Vector, used quite widely in aerospace: you define the different coverage levels, the tool instruments the source code, you run the instrumented build, and you get a report such as the one shown below. For a database-style application in aerospace this is what was used, and you can see the metrics it generates; with the help of these metrics the conclusion is drawn as to whether the tested unit achieves 100% coverage or not. You can see a database unit tested here with a complexity of 5, and for the DO-178B Level A criterion the statement coverage is 100%: of the roughly 10 statements, decisions or conditions in it, 10 out of 10 have been executed, so the coverage is 100%. Similarly, you can see another piece of software tested with VectorCAST using the same instrumentation mechanism, a manager package with five sub-functions such as place order, clear table, get check total and so on, each of them tested with VectorCAST. The first one has a complexity of 5 and the remaining pieces have complexities of 1, 1, 2 and 3, so a total complexity of 12 has been tested here. The coverage of the first one, place order, is 63%, that is 14 out of 22: coverage needed to be achieved for 22 executable statements or decisions, but only 14 of them were covered, giving 63%. Similarly, the next one shows 100% coverage, another one shows 77% with 7 out of 9, and the last one shows 0%, meaning none of its statements were executed or invoked by the tests we ran on the instrumented build. The overall coverage for the package comes to 71%, and the additional columns are for other pieces of functionality shown in the report. Of course we will study test metrics in detail in a separate session, but to give you a glance at what tools are used commercially, I am presenting the various tools here.
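Since these tools all work by instrumenting the source, here is a minimal sketch of what statement and branch coverage instrumentation conceptually looks like; real tools such as VectorCAST, LDRA or RTRT generate and manage such probes automatically, so the probe macro, probe numbering and the percentage report below are made up purely for illustration.

#include <stdio.h>

static unsigned char probe[4];               /* one flag per probe point      */
#define COVER(n) (probe[(n)] = 1)            /* hypothetical inserted probe   */

int clamp(int x, int lo, int hi)
{
    COVER(0);                                /* function entry executed       */
    if (x < lo) { COVER(1); return lo; }     /* true branch of first decision */
    if (x > hi) { COVER(2); return hi; }     /* true branch of second decision*/
    COVER(3);                                /* fall-through path executed    */
    return x;
}

int main(void)
{
    clamp(5, 0, 10);                         /* exercises probes 0 and 3 only */
    unsigned hit = 0;
    for (unsigned i = 0; i < sizeof probe; ++i)
        hit += probe[i];
    printf("coverage: %u of %u probe points (%.0f%%)\n",
           hit, (unsigned)sizeof probe, 100.0 * hit / sizeof probe);
    return 0;
}

Running only this one test reports 2 of 4 probe points, which is exactly the kind of partial coverage figure (like the 63% and 0% entries above) that tells the tester which test cases are still missing.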
The next one is LDRA, from LDRA; it is also one of the popular tools used in the aerospace industry. Here too, similar to the instrumentation we have just seen, a report can be generated which helps with the testing: we can see how many test cases have been used, which of them passed, what the report contains, and likewise we get a complete coverage picture of the unit under test. The next one is RTRT, which is a good competitor and a popular tool used in different industries, including general applications, finance, embedded automotive, telecom, aerospace and so on; it is widely used and has different variants. It is from IBM; you can see the details on their website, where there is a data sheet describing it. RTRT stands for Rational Test RealTime. The source code, C, C++ or whatever it is, is fed into the RTRT tool, and the steps below are part of configuring and using it: the environment is defined first, a test harness is created, stubs are generated with the help of the test harness, then executables are built for the corresponding test harness, then the tests are actually executed, and after execution of the tests on the target environment we have the results, and the results are used to report the coverage and so on. You can see in the reference that code coverage, performance, memory and trace analysis, all these aspects of white box testing, are handled, and the test results are reported to support or justify the coverage and to analyze it. It uses a cross compiler, such as GNU GCC or another vendor compiler, to compile the generated test harness and build it, and it targets microcontrollers, possibly running an RTOS, or a host machine with simulators or emulators standing in for the target. That is how RTRT is structured for use with the white box testing methods. So that is about the commercial tools used for white box testing. Now let us look at what a logic analyzer is. Basically, a logic analyzer can record memory accesses in real time, and it is a potential tool for measuring test coverage. We place various test hooks in the embedded system, and the test hooks record different data in a dedicated piece of memory; that memory traffic can be recorded with the help of the analyzer, which is what the logic analyzer does. There are logic analyzer probes that are hooked onto the memory, and with the help of the probes it acquires the data in real time, from which coverage results and reports can be produced.
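As a small illustration of such a test hook, here is a minimal sketch, assuming a hypothetical dedicated RAM word that the logic analyzer probes are watching; the address, marker values and function names are made up, and this is meant for a target build, not for running on a host PC.

#include <stdint.h>

#define TRACE_ADDR ((volatile uint32_t *)0x2000FF00u)  /* assumed dedicated RAM word */

static inline void trace_mark(uint32_t marker)
{
    *TRACE_ADDR = marker;           /* bus write the analyzer can trigger on and capture */
}

void control_loop_step(void)
{
    trace_mark(0x0001u);            /* marker: entered the control loop       */
    /* ... normal processing of this code region ... */
    trace_mark(0x00FFu);            /* marker: reached the end of the loop    */
}

Because every marker write appears as an access to one known address, the analyzer can be set to trigger on that address and the captured marker values show, in real time, which code regions were reached.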
The logic analyzer is designed to be used in trigger and capture mode: it can be set to trigger on, and then capture, activity on the memory or interfaces it is hooked up to, and it involves both hardware and software. I may not have a diagram here, but I will show later how it looks. Because the logic analyzer is designed for trigger and capture, it is difficult to convert its trace data into coverage data; the trace data we obtain cannot directly be turned into coverage. What we can do instead is an overall measurement using a sampling method, called statistical sampling, and with that the logic analyzer can be used to get a coverage estimate from the trace data captured through the trigger mechanism. Continuing at the same level, it is particularly difficult for sampling methods to give a good picture of ISR tests, that is, of a piece of software containing an ISR. ISR is nothing but interrupt service routine; you must be aware of this. Basically it is part and parcel of embedded software: an interrupt occurs in the normal flow, and that interrupt has to be handled with whatever control flow, functional flow or action is needed, and that action is part of the interrupt service routine code. A good ISR is fast, meaning the ISR has to be short and sweet; it needs to get in and come out of the routine quickly, so it should only do certain flag settings or a certain minimum of work so that it can return very fast. An ISR should not be big, it should not be complex, and it should not take a long time. If an ISR is infrequent, that is, the frequency of the ISR occurring during the operation of the embedded system is low, then the probability of capturing it during any particular trace is correspondingly low; however, it is easy to set the logic analyzer to trigger on an access to the ISR, because the trigger and capture mechanism makes that simple. The coverage of ISRs and other low-frequency code can therefore be measured by making a separate run of the test suite with the logic analyzer set to trigger on and trace just that code. In other words, since the ISR is very short and may occur only rarely in a system complex enough to involve several ISRs, the best we can do with the logic analyzer is to focus only on the ISR code: trigger on it, capture the data showing what the ISR actually does, analyze that captured data, and determine whether the ISR achieves all of its intended behaviour.
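To illustrate the "short and sweet" ISR just described, here is a minimal sketch in C, assuming a hypothetical UART receive interrupt: the ISR only reads the data register and sets a flag, and the heavy processing is deferred to the main loop. The register address and names are made up, and on a real target the handler would be installed in the vector table in the toolchain-specific way.

#include <stdint.h>

#define UART_DATA_REG (*(volatile uint8_t *)0x4000C000u)  /* assumed hardware register */

static volatile uint8_t rx_ready;        /* flag shared between ISR and main loop */
static volatile uint8_t rx_byte;

void uart_rx_isr(void)                   /* assumed to be registered as the IRQ handler */
{
    rx_byte  = UART_DATA_REG;            /* grab the data                              */
    rx_ready = 1;                        /* minimal action: set a flag, then return    */
}

void main_loop(void)
{
    for (;;) {
        if (rx_ready) {                  /* the longer processing happens here,        */
            rx_ready = 0;                /* outside the interrupt context              */
            /* ... process rx_byte ... */
        }
    }
}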
There is another set of tools called software performance analyzers. Performance could be in terms of memory: memory usage has to be accurate, or within what is intended for a certain portion of it. For example, a requirement could say that 50% of the memory is to be reserved for future upgrades, scalability and so on; so if you have 1 MB of memory, at most 512 KB of that 1 MB should be used. That is the requirement, and how do we test how much the software is actually taking? There are a lot of methods and a lot of tools, and these all come under the performance analyzer category. By using the information from the linker map, these tools can display usage information on a function or module basis rather than as raw memory. For each embedded project a map file is produced; it is generated by the compile-and-link step, and it contains information such as the addresses, the code section, the data and BSS sections, the stack and so on. With the help of that memory map we know how much this build is going to take for that particular project, and from that we should be able to arrive at the memory footprint of the particular unit. There are analyzer tools for this from different vendors, but mostly it is done manually or statistically. Performance testing, and consequently performance tuning, are not only important as part of your functional testing but are also important tools for the maintenance and upgrade phase of the embedded life cycle. Another important aspect of performance, besides memory, is speed, and speed under load as they say; it is very important that the performance of the embedded system is stable and consistent, without degradation, and efficient, with scalability on a modular basis; that is one of the performance requirements they use. It is not enough to have the memory requirement satisfied, and speed does not simply mean fast: under the various conditions in the field we should be able to perform consistently and scalably. The other aspect is timing, which should be accurate and stable; that is also one of the performance measures. So performance testing and performance tuning matter both for functional testing and for maintenance and upgrade, and upgradability matters even more, because an embedded system keeps living and growing through fixes and new requirements over its life cycle.
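As one simple way of keeping an eye on such a memory budget during the build, here is a minimal sketch, assuming a hypothetical 1 MB target of which at most 512 KB may be consumed by these statically allocated buffers, with the rest reserved for future growth; real projects would normally take the actual figures from the linker map file, and all sizes and buffer names here are made up.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define RAM_TOTAL_BYTES   (1024u * 1024u)       /* 1 MB on the assumed target      */
#define RAM_BUDGET_BYTES  (RAM_TOTAL_BYTES / 2) /* 50% reserved for future upgrades */

static uint8_t rx_buffer[64u * 1024u];          /* example static allocations      */
static uint8_t tx_buffer[64u * 1024u];
static uint8_t log_area [128u * 1024u];

/* the build fails if the static buffers grow past the 512 KB budget (C11) */
static_assert(sizeof rx_buffer + sizeof tx_buffer + sizeof log_area
              <= RAM_BUDGET_BYTES,
              "static buffers exceed the 512 KB memory budget");

int main(void)
{
    size_t used = sizeof rx_buffer + sizeof tx_buffer + sizeof log_area;
    printf("static data: %zu of %u budget bytes\n", used, RAM_BUDGET_BYTES);
    return 0;
}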
Performance testing is crucial for embedded system design, and unfortunately it is usually the one type of software characterization test that is most often ignored. As seen in many embedded industries, performance testing and tuning get low priority at the beginning or in the middle of the project, and then the teams struggle at the end because the performance criteria have not been met and a lot of bugs and errors surface due to performance issues. So it is very important to have an understanding of what the performance of the embedded system is, and accordingly we need testing mechanisms, especially for the memory, speed, timing and load of the unit, and we should use performance analyzers on the map files. We will probably touch on a simple map file later to understand what it contains; basically it belongs to the embedded systems course, but I will address it briefly in the future. The next aspect is memory usage: what sort of memory usage does the embedded system have? Basically we use a memory map, and with its help we can analyze the stack and the built-in memory. It could be RAM or ROM, the ROM could be flash, and we may also have a small footprint for faults and similar data, usually stored in EEPROM, electrically erasable programmable read-only memory. There are various types of tests: for EEPROM they use a pattern test, and for flash an integrity test. The integrity test can be done with walking ones, where each memory cell is tested with zeros and ones, that is, whether each cell is capable of being programmed to zero and programmed to one. For the pattern test on a 16-bit memory they usually write 5A5A, 5A5A. Why 5A? Because in binary 5 is 0101 and A is 1010, so the adjacent bit cells alternately hold 0 and 1, and with the complementary pattern the same cells receive the opposite values; in this way a pattern test, or a walking-ones test, exercises every cell in the memory. The same testing is done on memory like flash or RAM. Mostly these tests are built along with the application code, because there are requirements which say that such tests have to be run frequently in the embedded system; so they are implemented in the embedded system itself and executed at a certain frequency while the system is running, and we need to log whether there are any failures of such tests. These are called built-in tests; there are different types of built-in tests depending on the embedded system, which are not in the scope here, and any failures are logged with the help of those tests. That is the memory usage and memory testing part of embedded software testing.
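Here is a minimal sketch of the kind of pattern and walking-ones test just described, written for a 16-bit wide memory; the start address and length in the usage comment are hypothetical, and on a real target such a destructive test would only be run over a RAM region that is not currently in use (or as a built-in test scheduled by the system itself).

#include <stdint.h>

static int ram_pattern_test(volatile uint16_t *base, uint32_t words)
{
    static const uint16_t patterns[] = { 0x5A5A, 0xA5A5, 0x0000, 0xFFFF };

    /* fixed-pattern pass: every cell must hold both 0s and 1s */
    for (uint32_t p = 0; p < sizeof patterns / sizeof patterns[0]; ++p) {
        for (uint32_t i = 0; i < words; ++i) {
            base[i] = patterns[p];
            if (base[i] != patterns[p])
                return -1;                           /* stuck or shorted bit */
        }
    }

    /* walking-ones pass: a single 1 moves through every bit position */
    for (uint32_t i = 0; i < words; ++i) {
        for (uint16_t bit = 0; bit < 16; ++bit) {
            base[i] = (uint16_t)(1u << bit);
            if (base[i] != (uint16_t)(1u << bit))
                return -1;
        }
    }
    return 0;                                        /* all cells passed     */
}

/* usage on a hypothetical free RAM region:
 *   ram_pattern_test((volatile uint16_t *)0x20008000u, 1024);
 * a built-in test would call this periodically and log any failure. */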
The next one is timing analysis. There are stringent timing requirements, and to test and analyze how much time the system is taking, timing analysis has to be conducted. Mostly the timing analysis is done with the help of the time and trace facilities available in the debugger itself; the IDE, that is the integrated development environment, such as MULTI, Lauterbach or CodeWarrior, has built-in time or trace machines, and those are used. They are helpful in checking the timing requirements by running the code, putting breakpoints and measuring how much time a section takes, so we can validate against the requirements with their help. Some embedded systems also provide ports, I/O ports or LEDs you could say, which can be instructed to toggle at a certain point or a certain frequency, and that signal can be put on a scope; scope means oscilloscope, for instance a multi-channel oscilloscope from Agilent, though any oscilloscope can be used to measure it. For such systems the ports have to be available, but it may happen that the ports are not there, because they are additional hardware and the project cannot, or may not be able to, afford them, since they occupy board space, consume current and so on; for intermediate testing before the final build they may be present on the development board or an FPGA-based target, and with their help the timing can be tested: the embedded software toggles the port on certain events, and those events are captured on the oscilloscope. For ISRs we can analyze the timing through the IDE: we can have a counter, and timing registers can be used along with the counters to arrive at how much time the ISR has taken; this is done more or less manually to analyze the ISR timings. Then we have built-in registers on the target such as the watchdog and similar peripherals; watchdog registers, timer registers and the RTC, the real-time clock, can all be used for doing the timing analysis. So with the help of these we can carry out the timing analysis.
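As a small illustration of the port-toggling approach just described, here is a minimal sketch, assuming a hypothetical spare output pin: the pin is raised before the code section and lowered afterwards, and the pulse width measured on the oscilloscope or logic analyzer gives the execution time. The register address and pin bit are made up and depend entirely on the target.

#include <stdint.h>

#define GPIO_OUT   (*(volatile uint32_t *)0x40020014u)  /* assumed port output register */
#define TEST_PIN   (1u << 5)                            /* assumed spare test pin       */

void timed_section(void)
{
    GPIO_OUT |= TEST_PIN;       /* rising edge on the scope: section starts  */
    /* ... code whose execution time is being measured ... */
    GPIO_OUT &= ~TEST_PIN;      /* falling edge on the scope: section ends   */
}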
Having understood the various tools involved and their applicability, we need to relate them to the life cycle; coming to the testing tools, they are categorized over the life cycle, so we will try to understand in brief what this tools life cycle is: how the various tools are used, how they are categorized, and from that what types of tools we need to have in embedded software testing. You can see a diagram here that depicts the various tools related to the testing life cycle. As per the method and principles we have studied, there is a life cycle with planning and control, preparation, specification, execution and completion phases. We have various tools for each phase: for preparation we have CASE tool analyzers and complexity analyzers, for specification we have a test case generator that helps in developing the test cases, and for execution we have test data generators, record and playback tools, load and stress tools and a number of others. Having so many tools does not mean that all of them need to be used; it is basically a categorization that has to be applied and planned, and all of this is part of the planning, the software verification planning we studied in our earlier sessions. Similarly, for completion we have reporting mechanisms, and overall for P and C, planning and control, we have different tools for defect management, test management, configuration management, and scheduling and progress monitoring. This is how the life cycle aspects of testing are considered when categorizing the testing tools, and we will study some of them, like defect management, test management and configuration management, in future classes. So that is about the life cycle; now let us quickly look at each tool category. Under planning and control there is the defect management tool: a defect management system is used to record defects, process them and generate process and status reports. Defects detected during the test process must be collected in an orderly way and organized properly; for a small project a simple file system with a few control procedures is sufficient, whereas more complex projects need at least a database of some sort with the possibility of generating progress reports. Basically this helps the team analyze where they stand; defect management tools such as Bugzilla can be used for managing the defects. That is part of planning and control. The next one is the test management tool, a tool with the ability to link system requirements to test cases; basically how all the tests are managed and how they are based on the requirements is part of this. The tool should have the ability to link the requirements to the test cases, which becomes very useful if the system requirements are changed or might change; the reason we need all these tools is that it is very important to have control of the changes, and with their help it is easy to re-link the test cases to new or changed requirements. All this can be done with a test management tool. Then we have the scheduling and progress monitoring tool; there are a number of tools available for scheduling and progress monitoring, such as Microsoft Project, and there are also tools which can be integrated with the test management and defect management tools, such as TestLink for test management or Bugzilla for defect management. With the help of these tools, scheduling and progress monitoring become very easy and very useful for a test manager, combined with the information from the defect and test management systems. Then come the preparation phase and specification phase tools; on the left-hand side you can see the preparation and specification phases, with CASE tool analyzers, complexity analyzers and test case generators, which we will probably detail in future sessions. CASE tool analyzers are used in the preparation phase, especially where object-oriented or model-based development is used; UML-based tools are used for consistency checks and so on, so they help in the preparation phase, and they can be used to check whether or not the design has omitted anything. This is basically the testability review of the test basis, which is what gets planned as part of preparation.
The next one is the complexity analyzer; we have talked about software complexity, and a complexity analyzer is a tool capable of giving an indication of the complexity of the software. The degree of complexity is an indicator of the chance of errors occurring and also of the number of test cases needed to test the system thoroughly. For this they generally use the McCabe complexity measure, which is one of the important and widely followed complexity measures in the industry; I will explain it in one of the next classes with the formula, where we need to know about edges and nodes, because embedded software can be represented as a graph with multiple edges and nodes, and from that the complexity is arrived at. The next one is the test case generator, which belongs to the specification phase, that is, to producing the test specifications or test cases. There are test case generators built around tools such as MATLAB and similar scripting environments, or something like Excel sheet based tools, or even Python scripts; these can be used to go from inputs such as requirements to test cases, so with such a tool the test cases can be generated consistently for different requirements. That is how test case generators are used in the preparation and specification phases of embedded software system testing. Next we have execution: the execution phase can be supported by a number of tools, and for any embedded system of normal complexity we definitely end up using a minimum of three to four of them. Here is the list of the types of tools used in the execution phase: test data generators, record and playback tools, load and stress test tools, simulators (which we have studied), stubs and drivers (which we have also studied), the debugger in the IDE for code-level analysis, static source code analyzers such as Understand for C/C++ and the like, and tools such as Coverity, which is another code analyzer and error detection tool; then performance analyzers, memory analyzers, and code coverage analyzers based on instrumentation such as VectorCAST, RTRT and LDRA. We also have thread and event analyzers, which we use with an RTOS where the software lives with multiple threads, and then there is the threat detection tool, a specific tool which can identify a threat to the embedded software system, for example erroneous code or a dangerous path that could lead to a crash. That is how the tools of the execution phase are categorized. Having understood these testing tools, we will take up the testing terminology, and also the configuration management tool, in the next class.