We continue with what we said last time about the VLSI historical perspective and future trends in CMOS circuit and system design. This is part 2 of the same lecture which I gave last time. Before I actually start the course, let me tell you what this course is all about, what the course contains, and why we call it advanced. In our first course on VLSI design, we talked of the basics of VLSI design, including circuit performance as well as circuit design based on which layouts can be created. In this course, however, we will go a little higher on the performance side. In particular, we will look into design issues of high-performance digital systems, and into how to design high-speed circuits using the logical effort technique, which was made popular by the efforts of Sutherland, Sproull and Harris. Then we will look into the present requirement of circuit design in VLSI, typically called low-power circuit design; an effort will be made to see power and energy minimization and, in particular, how to control leakage in sub-45-nanometer circuits. Design issues in arithmetic will be another area of interest, because the datapath is one of the major blocks used in almost all processing systems. We will also look into new RAMs and ROMs, and we will talk about content addressable memories, called CAMs. Then we will talk about advanced techniques in VLSI design for performance, and about physical design automation. Continuing with the same strategy, we will talk about design methodologies, architectural simulation, hardware description languages, how we actually put that design as an entry into the compilers, silicon compilation, and we will also spend some time on verification. One of the major worries right now in high-performance circuit design is the delay caused by the interconnects.
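Since the logical effort technique is named above, here is a minimal sketch of its central calculation — a hypothetical helper, not something given in the lecture — following the standard Sutherland/Sproull/Harris formulation: each stage has delay d = g·h + p, and the total path delay is minimized when every stage bears the same effort F^(1/N).

```python
# Minimal logical-effort sketch (Sutherland/Sproull/Harris method).
# Stage delay: d = g*h + p, where g is logical effort, h is electrical effort
# (fan-out), and p is parasitic delay, all in units of a basic inverter delay.

def min_path_delay(efforts_g, parasitics_p, path_electrical_effort):
    """Minimum path delay when stage effort is equalized: f_hat = F**(1/N)."""
    N = len(efforts_g)
    G = 1.0
    for g in efforts_g:
        G *= g                              # path logical effort G
    F = G * path_electrical_effort          # path effort F = G * H (no branching)
    f_hat = F ** (1.0 / N)                  # optimal effort per stage
    return N * f_hat + sum(parasitics_p)    # D = N * F^(1/N) + sum(p)

# Example: a 3-stage path of 2-input NAND gates (g = 4/3, p = 2 each),
# driving a load 10x the input capacitance (H = 10).
d = min_path_delay([4/3, 4/3, 4/3], [2, 2, 2], 10.0)
```

With these illustrative numbers the optimum stage effort works out to about 2.9, so sizing each stage for roughly a fan-out of 3 is near-optimal, which is the familiar rule of thumb the method formalizes.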
So in this part we are looking into what we call wire modeling and interconnect-aware design. Then we will talk about advanced sequential circuit design; in particular we will spend time on synchronous and asynchronous system design and on how to minimize the clocking overhead. One of the major emphases in this course will be how to synthesize a circuit and how to implement that circuit or system on an FPGA board. Since this is the most common area for most of the people who will use this course, we will probably spend more time on FPGA implementations. Then, depending on the speaker, we would also like to introduce to you the fact that most of the systems which you want to put on a chip are generally finite state machines, and this area is very important for all circuit designers and chip implementers. So we will talk about how to model and implement an FSM. Finally — and it is, if not the most important, certainly a major part of the whole game of VLSI — there is the testing of a VLSI chip or VLSI logic, and I think one of the major parts of this course will be spent on VLSI test methodologies for logic and on how we design a circuit or chip for testability. This may seem to be just one area, but I think we will spend sufficient time to bring to you what test methodologies are actually available and what new things will come up in this area.
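The wire-modeling concern mentioned above is usually introduced through the Elmore delay of a distributed RC line — the first-order model behind interconnect-aware design. A small sketch (the function and numbers are illustrative, not from the lecture) showing that a wire's delay approaches half of R·C as the RC ladder is refined:

```python
# Elmore-delay sketch for a wire modeled as n equal RC segments.
# The delay is the sum over nodes of (resistance from driver to the node)
# times (capacitance at that node); it converges to 0.5 * R_total * C_total.

def elmore_delay(total_r, total_c, n):
    """Elmore delay of a wire split into n equal RC segments (seconds)."""
    r, c = total_r / n, total_c / n
    delay = 0.0
    for i in range(1, n + 1):
        delay += (i * r) * c    # upstream resistance to node i, times node cap
    return delay

# Illustrative 1 mm wire with R = 1 kOhm, C = 200 fF:
d = elmore_delay(1e3, 200e-15, 1000)   # approaches 0.5*R*C = 100 ps
```

The quadratic dependence of R and C on wire length is why long interconnects are broken up with repeaters in high-performance designs.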
This is, as you can see, a packed-up slide: essentially the same topics I wrote on the last two slides, but with the instructors for each area listed. For example, the first six areas will be covered by me, A. N. Chandorkar, as I said; then there are areas covered by Professor Dinesh Sharma, Professor Sachin Patkar and Professor Virendra Singh. These are the three other faculty members who will contribute to this course because of their expertise in certain areas of research in which they are working, and those areas will be covered by them. So, coming back to what we were talking of — the historical perspective and trends — let me say again that today the silicon device is the indispensable and most important device for our human society. If you look carefully, everything with which we work or which we use is now controlled by a silicon device, particularly the silicon integrated circuit, or simply the chip. Silicon delivers extremely high-frequency operation with extremely low cost, very low power, small size and high reliability, and because of this most of electronics actually revolves around silicon integrated circuits. If you see today's information technology areas such as the internet, i-mode, cellular phones, GPS navigation, game machines, entertainment robots — just think of them, and you could not have realized them without silicon ICs being available. Therefore the development of integrated circuits nowadays essentially revolves mostly around consumer electronics, and also partly around defense areas. Now, what has happened over the years? There have been structural changes in the electronics industry: from when we started in 1960, by the 1990-2000 period we see that a lot of changes have occurred in the electronics industry itself.
So I will give some examples to show you how electronics has modified itself in the new era. A very interesting slide is shown here; there are two components, one is the PC and the other is what we call the communication area. We see that in the early 90s most of the effort was on PC development, because consumers wanted more PCs to be available in the market, and therefore up to 95 or so a huge percentage of both effort and money was in the PC area. However, from around 98-2000 onwards, you can see that mobile phones, iPods and televisions have all become a very important part of society; most people cannot survive without a mobile these days, including many in villages in India. Therefore much of the effort in the last 8 to 10 years has been in the consumer and communication area rather than the PC. I do not mean to say that the PC market is really very low or that it will not continue, but compared to the PC market the consumer electronics market has now taken over, and because of this market structure the VLSI design area has to move to those areas where there is a lot of consumption and therefore a lot of money. One of the major things that happened over the years, for example — this slide is from Sony — is that initially TV broadcast was analog, way back up to say 1995 or 2000, and this analog broadcast continued for many years; but from around 98 or 2000 onwards one finds that analog broadcasting shifted to digital broadcasting, for a variety of reasons. One is that integrated circuit technology was developed more for digital circuits than for analog circuits, and much of the advantage of digital is that you can compress data very effectively; because of that, the actual transponders can carry a smaller amount of circuitry but serve much larger applications, with larger powers as well.
Now you can see from here that, since the analog-to-digital conversion occurred, much of the research is now more on digital than analog. However, one must add a rider here: since nature is basically analog — you do not talk in digital; you always say something in analog, continuous in time — one area of research will continue in analog design, which may be more important in the sense that it is rather difficult to manage; nevertheless, most of the silicon market will actually revolve around digital. To put it again the same way: the first wave, one can say, was the analog wave — most of the VCRs and TVs in the consumer market were analog based; the first digital wave was essentially based on digital for the PC market; and now of course the second digital wave is essentially the consumer and network market. The problem right now — this slide may be two years old, but it still says the same thing — is that we started with gate-level design in the early 90s, then we went to RTL designs, and now we are even talking about architectural designs. So there are three areas in which we have progressed over the years, but one of the major worries as of now is that people are saying there are not sufficient engineers who can actually work on architectural-level or RTL-level designs. Of course, there are a few old-school people like us who still believe that gate-level or transistor-level design is very relevant, and therefore we keep generating such designs in the form of what we call IPs, and those intellectual property circuits can then be employed directly at the RTL level or even the architectural level. The other area, as I said, apart from the consumer area, is the network or wireless applications area. Digital systems integration is now compulsorily done together with analog and RF blocks in realizing these wireless systems.
For example, there are well-known systems such as Bluetooth and WLAN; at home we have set-top box transceivers, and then we have the IF and baseband parts of radios. All these require analog, RF and digital blocks together, so these are essentially mixed-signal designs, which are becoming popular for wireless, and an effort has to be made by digital designers to properly interface with both analog and RF systems. Just to give an idea of what the network market or broadband area is doing: a broadband network is shown to you here. On your left, there are infrastructure gateways which pass the packet network on to storage; from there, transceivers, through antennas, pass on to access gateways, and the gateways have connections with PCs, mobiles, iPads, cameras, TVs, printers, laptops — name anything. So now you can see that broadband, through internet connections, can connect almost all kinds of equipment from one place, and that is where much of the chip research is done, because for such applications the interface will be separate for each of them, and one has to really work hard to get chips for all such areas. So what are the drivers of broadband growth and their impacts? Here is a table. Why am I telling you all this? Because at the end of the day, as VLSI design engineers, you will probably have to know a good deal about the communication area, because that is where your money is — that is where your bread and butter is. So let us learn what exactly the wireless or broadband people are looking at, and then start looking for designs which actually meet their specifications.
For example, in the broadband area, I am told the demand is for high-speed connections and streaming video and audio — probably you know it better; streaming video and audio is what you are all constantly working with, listening to or watching — and therefore that is the major demand people see, and because of that a lot of effort has been made to make such chips. Then there are areas like home networking, where you have multiple PCs and internet appliances at home and you want to connect them to one or more internet connections, put together through a server or router. Then there are multiple services to be delivered to multiple end points, providing information, communication, entertainment and home control; consumer requirements for ease of use; and the shift from the PC world to the embedded world. And what are the impacts of these drivers and requirements? One expects much more bandwidth consumed per home — this is one impact one sees; then quality of service is needed end to end; one also expects network-capable consumer electronic devices, video and audio distributed in homes, various internet appliances as end points and services through the home network, improved security to protect the consumer, the provider and the content, and seamless interoperability, which is the major work going on right now for networked and embedded devices.
So, here is an interesting slide: the evolving network — home meets lifestyle. Presently you are sitting in a home and you need a lot of home automation: your equipment, from washing machines to microwave ovens to heaters of all kinds, can now be directly controlled from within the home. Then you need connectivity to mobiles, laptops and the like; you have an entertainment part, and a productivity part like a laptop and printer, and all of this can now be networked — home meets life. This is how we actually live now: we have so much equipment in the house that one needs connectivity instead of getting up every now and then. Another evolution which has come over the years in the wireless area is the cable network, and the essential part of a cable multipoint network is that, through a cable modem, one can connect the IP network, cable network or PSTN directly to the home, since cable is normally available from providers to the home these days. So all kinds of standards can be met through these networks, directly to your house. And since cable has the largest bandwidth — instead of using the conventional telephone lines or anything else, cable probably has the largest bandwidth — one would like to continue to have at least local area networks using cables. If you look at the video market, right from digital cameras and camcorders, you have 3G cellular phones, video DMA, network cameras, DVDs, video surveillance and all kinds of applications across a large spectrum; multiple standard-resolution infrastructures are starting to get into place; you need MPEG standards, JPEG standards and what not. So in these applications there is a lot of scope for digital designers — rather, VLSI designers — to work on each individual area and help this equipment to be networked as well as perform far better.
So, to summarize, there are what we call application fields where VLSI will be required. For example, in portable electronics like PCs, PDAs and wireless devices, depending on the application one has to worry about the cost of the chip being made; in particular, we are more worried about packaging and cooling. Please remember it is much easier, or much cheaper, to make a chip, but it is much costlier to package it and keep it cooled, and I think much research in many industries is now shifting to the packaging and cooling area. The other problem with applications like PCs, PDAs, wireless or anything else is the issue of reliability, particularly reliability within the chip itself due to electromigration and latch-up in CMOS. The other area of worry is signal integrity: there are too many interconnect lines running on the chip — these now behave like transmission lines — and the high-frequency signals going through them may lead to switching noise; they may cause voltage drops like power supply droop and ground bounce, and that may corrupt the signal on a neighboring line or cause spurious switching. These are very relevant issues in 2010 and onwards, and therefore much of the chip design work is now going into the area of signal integrity. The other area of interest, as I said, is that the chip is going to have a large number of transistors — typically millions of them, hundreds of millions, and we are going towards a billion now — and therefore even if only 50 percent of the transistors are on, and even if you are working on a very low-power design, the heating which chips are going to have is excessive; in particular, the power density is very high, and much of the effort right now is to see how to remove this heat from the chip so that the chip continues to work.
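The ground-bounce worry mentioned above comes from the inductance of the supply path: the disturbance is roughly V = L·di/dt, multiplied by the number of drivers switching simultaneously. A back-of-envelope sketch — the helper function and all numbers are illustrative assumptions, not figures from the lecture:

```python
# Ground-bounce estimate: V = n * L * di/dt across a shared supply/package
# inductance when n output drivers switch at once. Illustrative numbers only.

def ground_bounce(l_henry, delta_i, delta_t, n_drivers):
    """Peak supply-rail disturbance in volts for simultaneous switching."""
    return n_drivers * l_henry * delta_i / delta_t

# 32 output drivers, each swinging 20 mA in 1 ns, through 2 nH of shared
# bond-wire/package inductance:
v = ground_bounce(2e-9, 20e-3, 1e-9, 32)   # -> 1.28 V of bounce
```

Even these rough numbers show why a rail disturbance of over a volt is plausible on a noisy supply, and why designers add decoupling capacitance and multiple power/ground pins.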
Please remember that at around 175 degrees centigrade all semiconductor junctions effectively cease to behave as junctions, and therefore the chip starts failing in its operation. So one has to keep worrying about thermal design; in particular, differential thermal design is very important, because at one spot there may be large heating due to power dissipation while another may be cooler, and the gradients so created may cause more problems than the heat at a single spot, and people are trying to see how to take the heat out. Another large area of effort right now, as I said, is ultra-low-power applications, particularly all handheld systems like mobiles and PDAs — name one, you will find it. You would not like to charge your battery often, and therefore you are really looking for chips which themselves consume low power, so that the battery lasts longer and you do not have to recharge every now and then. And the last, and perhaps most important, application field right now is space missions. There you have much bigger problems: the problem of power dissipation, of size, of weight, and a very high level of integration of functions to be implemented; therefore space chips are very, very difficult to design. However, one need not worry, because we know the techniques: if someone wants to pay for it, those chips also can be designed. Now, as I already said — but let me quickly repeat — there are different constraints for different fields. For example, portable devices are essentially limited by battery lifetime; telecom and military are decided by, or rather worried about, reliability and performance; and high-volume products, of course, are always decided by unit cost — that is the constraint, because if you make a chip and it is very, very costly, how many buyers will buy it, and then the product may not survive in the market.
So, particularly for off-the-shelf products like memories, you must know why they are comparatively cheap: because we are trying to make them in millions, so that the per-unit cost is minimized. They also require reduced power, and reduced power decreases packaging cost — if you can reduce the power, you can essentially also reduce the packaging cost. Here is a roadmap once again about wireless applications, which I have taken from Philips; it shows the frequency range of operation over the years. We started with AM radio, FM radio, TV and VHS recorders, then TV with ultra-high frequency (UHF) instead of VHF; continuing with this, one can see that now we are talking of LANs, radars, 3G and 4G mobile systems, and other microwave applications. Most of the circuits and systems I have written here are known to you, and one can have a look at them: these are the wireless systems which are coming into the market, and day by day they are demanding more applications and much more performance improvement over what we have been able to give so far. So much of the research in VLSI is now dedicated to communication, and particularly to wireless. As I said, we initially thought we were talking about computers — chips were actually designed mostly for logic, which went into the PC market. However, looking at the two figures here, one is the network area and the other is the server area; there is now a link between the two, and there is a convergence going on between communications and computing.
For example, another area which most of you are aware of is quad-band mobile phones: you want all four bands — 850, 900, 1800 and 1900 MHz — covered in a single handset, and it probably also has to work with Bluetooth. They generally use an ARM processor in most cases — in wireless, the standard has many times come from ARM processors — and to design around these systems, all such quad-band requirements demand a lot of effort and a lot of hardware support. The other area is the smartphone; I do not have to say much about it — you know plenty about smartphones these days; in every advertisement you are continuously told what a smartphone can do. These are the wireless standards within which we are trying to work. The reason I have shown you this graph of standards is the kind of bandwidth one expects versus what one gets. For example, GSM delivers around 10 to 60 kbps with up to a 10-kilometer range, so range is the most important thing in the case of GSM; whereas others like UMTS and WLAN serve limited local or medium-area networks — their required ranges are smaller, but they actually require bandwidths of 1 Mbps and above. So one of the tasks is how to improve the bandwidth and how to improve the range, or the power; and one probably knows from VLSI design basics that both frequency and power cannot be improved simultaneously, but that is exactly the area in which one wants to work: how to improve the bandwidth even with reduced power. This is wireless broadband — please remember, wireless multi-channel multi-point distribution systems are now coming up, and they are the forerunners of VLSI design in communication.
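The bandwidth-versus-power tension described above is captured by the Shannon capacity formula, C = B·log2(1 + SNR): raising the data rate costs either bandwidth or signal power (SNR). A small sketch — the function name and the channel numbers are illustrative assumptions, not figures from the lecture:

```python
# Shannon capacity sketch: C = B * log2(1 + SNR). Data rate can be traded
# between channel bandwidth B and signal-to-noise ratio (i.e. transmit power).
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Upper bound on error-free data rate over an AWGN channel, in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A GSM-like 200 kHz channel at an SNR of 10 dB (linear ratio 10):
c = capacity_bps(200e3, 10.0)   # ~ 692 kbps upper bound
```

Doubling the SNR (more transmit power) gains less than doubling the bandwidth does, which is one reason wideband standards like WLAN achieve Mbps rates that narrowband, long-range GSM cannot.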
For example, you know the new standards which are coming: 802.11 with its variants (a, b and so on), and then 802.16 — each has different bandwidths and different ranges to operate over, and different applications to work on, and you have to work in every such area and design chips for them. There is also Worldwide Interoperability for Microwave Access, popularly known as WiMAX: you are trying to connect your PCs through such a network. You want a microwave access network, and this is again an area where interconnectivity, or interoperability from one area to another, is very important, and the access is through microwave. So now we are trying to see whether low-frequency signals can be carried over very high-frequency signals to other areas and retransmitted back at your original frequency or baseband. The other thing required in VLSI these days is RF modules, which may be part of any wireless system. If you look at the left part, there are more than, say, 9 or 10 parts in the RF modules, so 10 modules may be required to realize a function. Now a new packaging technique has come, called system-in-package, and if you use it, you can see from here that all these parts can be shrunk into a smaller chip packaged in one place; such a module will be much smaller in area, and you also do not need a breadboard or a board to put the parts on, with its associated interconnect problems. So, what is VLSI design in the 21st century? The real issue is that current VLSI is about designing systems on chips. These designs are extremely complex, and one needs to use structured design techniques and sophisticated design tools to handle the complexity of the design. One should accept the fact that any technology details which we learn over the years will be out of date soon.
To give an example: in the 1960s and 70s, when I was a student, I did not know anything about semiconductors in my undergraduate studies, but when I did my master's I learned basic semiconductor theory and basic semiconductor diodes, and then we also learned something about integrated circuits in a smaller way — TTL circuit design was tried way back in the 70s. And we thought that was a great thing; we used to believe in those days that we were ahead of time. However, the technology has shifted from TTL to MOS, as I have already shown you in my first lecture, and we can see from there the current technology trend: the feature size has been going down very steeply, from the 10 micron at which we worked in the 70s, then 5 micron in the early 80s and 3 micron in 85, and now we are working at 0.25 micron and below — we may go to 7-nanometer technologies. So even if you master a design for one technology, when the technology upgrades, or rather improves — by Moore's law, if you like — then one has to redesign many circuits. So an effort is now being made, because the technology moves faster than anyone expects, to develop and use techniques that will transcend technology. But remember, even if you are designing independent of technology, you must know the technology well; you must respect it, because otherwise it is not possible to get the best performance. Around 1977-78, the first VLSI design book appeared in the market, written by Carver Mead and Lynn Conway. Carver Mead is a professor at Caltech — professor of computer science, electrical engineering, electrophysics and what not. He is one of the most respected persons in the VLSI area.
What Mead suggested — and that is what this last line is all about — is that if a computer scientist or computer engineer wants to design a chip, he need not know much about technology; for that purpose, they made a set of rules which they called design rules. Now, design rules were essentially technology constraints for a given technology, because finally, when you go onto silicon, you have to print, and you have to etch or deposit or do other things to make the process happen; and these steps are very much technology dependent, with no retrace paths. So, as long as you follow the constraints of the technology in your designs, or in the layouts you make, one major advantage is that when the masks converted from your design are sent to the technology house, they will most likely reproduce whatever design you are asking for. However, this claim that you do not need much technology — which was perhaps valid until a few years ago — no longer holds: as of now, when you are looking at 45-nanometer and smaller processes, the technology has unfortunately not only improved many things but also introduced many problems. These problems cannot be categorized into some simple design rule; they are very individualistic, process-technology-based rules, or technology constraints, and therefore even a designer now has to understand the limits of the technology in every case — a particular part of the chip may have a different rule. So one now has to sit with a strong technology person, and one also needs good models to translate into new tools; therefore the physics people, technology people, device people, even optics people, and also the VLSI designers — both computer scientists and electrical engineers — should all sit together and essentially decide how a product's requirements will be met.
Now, the areas in which VLSI design software is available — and will need upgrades every now and then — are the following. First, there is software in the area of technology; there are good technology tools available. The first process simulation tool came from Stanford, and over the years TCAD people have developed many new model-based tools which can actually make the technology run on the computer itself before it is realized in the lab. So there are TCAD tools; also included in the vocabulary of TCAD, good device simulators are required — device design tools which pick up data from technology files, design a device, and then find its performance. Then, having made the devices of your choice, and knowing their models, one needs to design a circuit. So you need those models to be put into programs like Spectre or SPICE: these are essentially circuit network solvers, and they can be used to design a circuit; using circuits you can design multiple circuits and bring them together to realize what we call a system, and therefore you need to know which blocks to put where so that the system performance can be attained. So you need system simulators, or system-level tools, which can assess the performance. After technology, device, circuit and system simulation, you can use the tools associated with them — famous ones come from Synopsys, Cadence and Mentor Graphics, all of whom provide many such CAD tools for every technology node. But finally, at the end of the day, since designers have to translate the design onto silicon, you need what we call the masks, and on each mask you need the layout. So there are layout tools; and after you have laid out a circuit, you want to know whether the circuit itself will actually work or not.
So you extract the circuit back from the layout you have already drawn, re-simulate it at the circuit level, and check whether the specification given for the system can be met by this layout. If it works, fine; otherwise you turn around and tweak again — maybe the device, or sometimes the device model, or change the device, redo the circuit analysis and circuit simulation, redo the system simulation, redraw the layouts and re-extract — until the specification you started with is actually met. So you need extraction tools. There is also an effort to reduce the cost of design: so that you do not have to design every circuit from scratch, you can use a library of circuits and systems, have their schematics available, and then have a program which directly converts your schematic to the layout for a given technology. Such tools are also available and will need constant upgrades as the technology upgrades. As I said earlier, one of the major worries in VLSI design is test, because on a wafer there may be thousands of chips — maybe more at times — and in every chip there will be millions of transistors. Even if you look at test at the small-block level, there may be hundreds of blocks on a chip. Now, how do you verify the performance? Can you go checking every block, or do you actually test every chip? Because if you test every chip and every block for its performance, it takes excessive time — the verification time is very high, therefore the manpower required is very high, and therefore the money required is also high. So one has to generate some kind of tools which can say: if I test this much, it gives a confidence level of, say, 90 percent test success.
So, to reduce your verification and testing time — verification and test are not exactly the same, but they go together — one has to find a process in which both can be used to minimize the effort before the chips are actually made. Then there is a last area, particularly when you want to see whether a circuit can be put directly on silicon, or even on FPGA-type boards, or in many other standard-cell-based designs: you have to synthesize. So you need a lot of synthesis tools, which will be able to decide whether your design, with the interconnects and all the blocks put together, will actually work. At IIT Bombay we have a typical device design flow; just to tell you what kind of TCAD tools we use: for example, if I want to design a MOSFET, for a given technology node I first find, from the ITRS, the oxide thickness and the feature sizes as per the technology node I am working at. Then I use a process simulator — Dios — and it requires inputs such as all the implant doses and energies; it requires RTA (rapid thermal anneal) times and temperatures — these are for the individual steps in the process simulator, for oxidation, diffusion and implants. I must tell you, there are at least 450 process steps we go through when we make a chip. So for each of them you have to provide the exact data which the process will use, and based on that the process simulator will tell you what the performance of the device will be.
So, having got the process right, you then take the device made out of this process — maybe a MOSFET, maybe a transistor other than a MOSFET, maybe HEMTs, or diodes, or capacitors — and test it for device performance using a program called a device simulator (DESSIS in our flow). Using this we get the device parameters: for a MOSFET, for example, we get the off-current and the on-current, and we monitor whether these are as per the process you started with, which is as per the ITRS. ITRS stands for the International Technology Roadmap for Semiconductors; it is a body of engineers from across the world, from industry and academia, which decides what comes next — today's ITRS will tell you something about the 2012 roadmap, next year they will say what will happen in 2013, based on their previous experience. So the ITRS is a very interesting body, essentially managed by the Semiconductor Industry Association; they predict what comes next and how we will go about it, and these become our standards. So if you started with an ITRS process node of 0.13 micron, or 90 nanometer, or 45 nanometer, the ITRS has already prescribed the maximum off-current and the on-current for those devices, and if after your device simulation you find those are not met, you change your process parameters once again, go back to DIOS with the changed parameters, redesign the device, simulate it again and check the off- and on-currents. Any other specs can also be tested, but these are the major ones; essentially we are looking at IDsat. Once we get the specs we are looking for, the process parameters are used for statistical characterization, because there are variations in the process parameters.
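The pass/fail check on off-current and drive current described above can be written as a small predicate. The numeric targets below are made-up placeholders for illustration, not real ITRS numbers:

```python
# Checking simulated device results against ITRS-style targets, as in the
# flow above. All numeric limits here are invented for illustration.

def meets_itrs_specs(i_off_na_per_um, id_sat_ua_per_um,
                     i_off_max_na_per_um=10.0, id_sat_min_ua_per_um=900.0):
    """True when leakage is low enough AND drive current is high enough."""
    return (i_off_na_per_um <= i_off_max_na_per_um and
            id_sat_ua_per_um >= id_sat_min_ua_per_um)

# A device whose leakage is too high fails the check, so in the real flow
# we would go back to the process simulator with changed implant doses
# and anneal conditions:
ok_device  = meets_itrs_specs(i_off_na_per_um=5.0,  id_sat_ua_per_um=950.0)
bad_device = meets_itrs_specs(i_off_na_per_um=50.0, id_sat_ua_per_um=950.0)
```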
So one has to do another analysis of whether the process parameters are statistical in nature; otherwise you may have to redo your process-simulation runs, and this repeats once again. So this is how a device is designed. You can see that if you do not have a simulator, you cannot keep tweaking so many parameters; for example, for implant energies, doses and the many other steps there will be at least thousands of combinations you would have to try, and manually it is just not possible to do any such design. So, in a nutshell, the process of VLSI design consists of many different representations and abstractions of the system that is being designed. There are many levels of abstraction and representation for a given design — a chip or a system — each of which you have to look at as an individual hierarchy level. The highest level in the hierarchy is system-level design: one starts by deciding what system you actually require, what the specifications are, and how you will design that system. The next level in the hierarchy is architecture, where we do mostly algorithm-based design. Having decided an architecture with individual blocks, which you can test and design algorithmically, you then look at how to implement those algorithms or architectures using digital systems. So you come down and start looking at digital-system-level design, which is used to create the blocks that go into the architecture; finally, such an architecture with its different blocks creates the system. But each digital system, we know, essentially consists of logic. So we come down again in the hierarchy and look into logic-level design for each block of the digital system we want, and at that stage we only simulate the logical performance of the block — for example, a NAND gate or a NOR gate, or a higher-level arithmetic unit or multiplier.
So you test the performance based on the logic-level design. However, logic levels for a given technology are decided by the technology rules you have. For example, for the technology node you are working on: what speeds are you looking for, what power dissipation are you looking for, what transistor sizes are you looking for? These essentially decide the performance of your logic gate. Therefore the next level of hierarchy is electrical-level design, and this is the most important design for an engineer of my kind, who basically believes that the electrical design is the one which finally decides the performance of a system. However, I do not want to overemphasize that only electrical design is important. If people have already designed many electrical-level blocks, those can be reused at the logic level; logic-level blocks can be reused in larger digital systems, and many such systems can be combined into bigger systems. So my point is not that one only has to work on electrical-level design. If you are a computer scientist or computer engineer, you may prefer to work on the first three hierarchy levels, where the basic blocks are available and you are trying to fit them into a given system performance; whereas if you are more of an electrical person, who thinks of electrical performance as the main thing, then you should work more at the logic and electrical levels. And if you are a process-level person: at the end of the day you need masks to be created out of your design. Someone has to make layouts for each of the masks, designing for the electrical performance — essentially the transistor sizing and the transistor connections through interconnects — at each mask level.
So layout-level design is very important, because many times the failure of a chip occurs due to a bad layout: there are associated parasitics which many a time you are not able to extract, and if you do not do that well, your circuit does not work. Therefore this level of design is very crucial these days, particularly when you are doing analog, RF and digital on the same chip, as the layout essentially decides the success of your system. Having done the layout design, the final part is semiconductor-level design, which essentially concerns the process. There are many other levels — one can go further down on the process side — but since this course is more on the VLSI design side, I will restrict myself to these levels. Now, the most important thing out of all this is that each abstraction or view is itself a design hierarchy of refinements which decompose the design. A person like me will say you actually design from the top — this is called top-down design — but when you implement, you always do bottom-up implementation: first you look at the device process available, based on which you see what layouts you can create; then the layouts are created, and the electrical performance is extracted back. So top-down design with bottom-up implementation is the best hierarchical approach one can try in most VLSI system designs. The basic idea is to decompose a system into architectural blocks of algorithms; each such block can be converted to digital blocks, each digital block decomposes into different logic blocks, and each logic block contains the electrical, that is, transistor-level blocks. For example, a NAND gate consists of 4 transistors: electrically you have two n-channel and two p-channel transistors in static CMOS, whereas at the logic level you have one single gate.
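The electrical-versus-logic distinction for the NAND gate just mentioned can be made concrete. A minimal sketch, modelling the static CMOS NAND through its two switch networks (two p-channel devices in parallel pulling up, two n-channel devices in series pulling down) rather than as a truth table:

```python
# A 2-input static CMOS NAND at the "electrical" level of the hierarchy:
# two PMOS in parallel (pull-up) and two NMOS in series (pull-down),
# modelled here as boolean switch networks for illustration.

def nand2_switch_level(a, b):
    """Evaluate the NAND through its transistor networks, not a truth table."""
    # A PMOS conducts when its gate is 0; parallel network: either one pulls up.
    pull_up   = (a == 0) or (b == 0)
    # An NMOS conducts when its gate is 1; series network: both must conduct.
    pull_down = (a == 1) and (b == 1)
    assert pull_up != pull_down      # static CMOS: exactly one network is on
    return 1 if pull_up else 0

truth_table = [nand2_switch_level(a, b) for a in (0, 1) for b in (0, 1)]
```

The switch-level evaluation reproduces the logic-level view (output low only when both inputs are high), which is exactly the sense in which one gate at the logic level "contains" four transistors at the electrical level.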
So hierarchically you decompose from the top towards the bottom, and you actually design from the lower side, so that you achieve the specs at the top which you started with. This is the typical process of VLSI design; there are other, more direct methods. One interesting feature of this hierarchical design is that you are allowed to enter at any hierarchy level. For example, if logic blocks are already available, take them and start designing the digital, architecture and system levels by implementing with them. You are limited by how many blocks are available, but if you have a large number of blocks from which you can make the requisite digital system, and thereby, based on a given architecture, create a system — well and good, because it will save you a lot of effort, money and time on the lower levels, and therefore it makes economic sense to enter at the higher level. However, it is not true, or not so easy, every time. To illustrate what I have said so far, I may give a figure of the hierarchy. The lowest level of the hierarchy is the device. The next level is the circuit: a CMOS inverter is shown, using one n-channel and one p-channel transistor to create a circuit. At the next level, using such transistors with different connections, I can create a gate: for example, if I put 2 p-channel devices in parallel and 2 n-channel devices in series for 2 inputs, it becomes a NAND gate. So using 4 transistors I can create a NAND or NOR gate.
So the next hierarchy level is the gate; using a number of gates you can create a module. A simple module shown is a 2-to-1 MUX; 4-to-1 or larger MUXes, decoders, encoders, or even higher-level blocks such as multipliers and adders can all be called modules. Based on these modules you do the interconnection, and then you can make a system which can be packaged. This is the typical hierarchy in which VLSI designers work, and as I keep saying, you have the possibility of entering at any level if the lower-level data is available to you. Now, just to show you how people like us were designing integrated circuits in the 70s, and what you are looking at now, two graphs are shown. The lower one shows circuit complexity versus years: one can see that we started at the mask level, then went to transistor-level and then gate-level design, and in this way we kept improving the productivity, that is, the number of transistors per chip; this is what we did for many years on the circuit-complexity side. Looking ahead now: the number of designers required at the mask level is some number, you need a much higher number at the transistor level, higher still at the logic level, and at present, once the behavior of the core itself is known, you need many more engineers at the RTL level creating designs using IPs, then at the architectural and system-description levels. So the attitude of VLSI designers has moved from basic transistor design, which electrical people always believed to be the best possible design, to working from what is available in your library of functions; you can then do RTL, architectural-level or system-level descriptions and design a chip much more efficiently and much more cheaply.
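Climbing one step of the hierarchy just described — from gate to module — can be shown in a few lines: a NAND gate defined once, and a 2-to-1 MUX built only out of that gate. This is an illustration of the hierarchy idea, not any particular library's cells:

```python
# Gate level -> module level: a 2-to-1 MUX composed of four NAND gates.

def nand(a, b):
    """The single gate primitive (logic-level view of 4 transistors)."""
    return 0 if (a == 1 and b == 1) else 1

def mux2to1(a, b, sel):
    """2:1 MUX from NANDs only: output = a when sel == 0, b when sel == 1."""
    not_sel = nand(sel, sel)                      # NAND wired as an inverter
    return nand(nand(a, not_sel), nand(b, sel))   # standard 4-NAND MUX structure

# Exhaustive check over all input combinations:
outputs = [mux2to1(a, b, s) for s in (0, 1) for a in (0, 1) for b in (0, 1)]
```

The same composition pattern repeats upward: modules like this MUX are wired into larger blocks, and those into a packaged system.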
At times, if you are not able to meet the specification exactly, you do one turnaround: take another level of gates and redo all this. But you certainly do not have to go down to transistor-level design every time, because these kinds of blocks are already made available to you through what are called library functions. Now, the design process I have described essentially cannot be done manually, and since it cannot be done manually you have to use computers; the tools which help you are called CAD tools — computer-aided design tools. For example, there are tools called editors; there are simulators; there are libraries where the functions are stored; there are module generators and synthesis tools; and there are place-and-route tools, in which you take different standard-cell blocks, place them automatically and interconnect them in a number of ways to get the best performance — place-and-route algorithms and tools are available. Then there are chip assemblers: a number of such blocks can be put together to make a larger system. And finally there are attempts to put all of it together in one flow, like a computer compiler: one can do silicon compilation, automatically designing a chip for a given performance. You give a behavioral description, and it will give you the final silicon implementation of it.
However, to build all these kinds of tools you need experts in logic design, experts in circuit and electronic-circuit design, and also a lot of physics people, now that we are going down in technology nodes to 65, 45, 32 or 28 nanometer, because many device properties change as you scale down; the ill effects of scaling have to be taken care of, and the device-physics people are probably the only ones who can suggest modifications or ways to handle that. Then for layout you need good artwork people; you also need people in application and system design, and people in architectures. So if you are in any of these areas, VLSI design looks for you: it is not that only circuit designers are most important, or logic designers, or layout designers, or device people — everyone has to contribute to the success of a larger VLSI system. So what are the new design methodologies? The new methodologies are based on system-level abstractions versus device-characteristics abstractions. Logic structures and circuitry change slowly over time, so you need to redo the tradeoffs as they change; unfortunately the tradeoffs change but the choices are very limited, and therefore it becomes very difficult to decide whether you should work with system-level abstractions or device-level abstractions. You should also now look for designs which are technology independent — essentially what we call scalable designs: if I design a chip for 65 nanometer, then when I scale by 0.7 to 45 nanometer, my design should automatically carry over to 45 nanometer. Unfortunately the layout techniques do not change as fast as the scaling, but the minimum feature size is steadily decreasing with time, and along with that, for newer technology nodes the voltage is going down and die sizes are increasing.
These are the unwelcome trends, and since layouts are not improving as fast, scalability is an issue which one has to take into new designs. Now, there are a number of ways design can be approached — two popular ways, the second being the more popular — so let us start with the first one. The first method is design to a fixed specification, which is essentially called custom design. A customer gives the specification: this is the money I have, these are the number of chips I want, and they must perform to this specification — this is the customer requirement. For example, a video-game chip for kids may require some kind of video processor which need not be very fast in all areas, but at least the video should be good enough, because better than a certain frame time of some milliseconds does not help in retaining the picture; those kinds of constraints appear in many such applications. So the user or customer decides what he wants. For example, a space scientist or engineer in ISRO or NASA may say: I want a 120-bit processor which works at 8 gigahertz system speed and does not consume more than so much power. Such specs may be very odd, but he may come out with them — and he may add that he does not mind the size of the chip, because then the specs can probably be met. This, essentially, is why full control of a design occurs in custom design. You can get the best result because you are optimizing everything. But it is the slowest in design time: you have to keep optimizing, and it takes a large amount of time before you get a correct design.
Therefore, in a nutshell, it is the costliest design approach available — but if someone is paying the money for it, why not? For example, some off-the-shelf products like microprocessors or memory chips are sold in millions. For those, a custom design is the obvious choice, because your advertisement can then say that my particular memory has an access time of so much and a density so much larger, which is better than any memory available on the market. You improve the specification to stand out in the market for an off-the-shelf product, and therefore these need full custom design. The advantage of learning custom design is obvious, because even if you are working in the second area, reconfigurable design, you have to use the same custom-design techniques to create what we call semi-custom blocks. Therefore, in a VLSI design course I always insist on teaching custom design to most students: once you learn how to design a block, a chip, or part of a chip, you can reuse it any number of times, in modified form or otherwise; you should know how to get those blocks — unless, of course, you work in an industry where all such blocks are provided to you, which not many industries can do every time. The second design approach is reconfigurable design, which is the most popular, essentially because of economics. It has two very popular variants. One is called semi-custom, based on standard cells. The basic idea is a standard-cell library — or now what we call IP cores — generated by the manufacturer, the vendor. These vendors provide you the schematic and the performance of every block they have designed, fabricated and tested. For example, even for a simple NAND gate they may give 3 layouts, with 3 driving capabilities: 1 milliamp, 2 milliamps, 10 milliamps.
So each will be one IP core, with a different layout strategy and different capacitances. Each such cell — pre-designed, pre-fabricated and pre-tested — has its performance guaranteed by the standard-cell vendor, and that is why it is called a standard cell. Once you have a library of such standard cells in schematic form, the best idea is to use those blocks — call it a MUX, a decoder, an arithmetic unit, or whatever blocks you have — and just connect them by place-and-route techniques, which can be automated. Then you get a system out of it as fast as you are looking for, and you can test it for its performance. Since you are only using already-designed blocks and doing almost fully automated place and route to create the system, this is very fast in design time, and because of that the amount of money used for the design of a system is very small compared to custom design. The other technique, which is not so popular now, possibly because the other approaches have become equally cheap, is the gate array. A gate array is essentially a number of transistors put in a standard layout form, and all that you need is to create one metal mask to give the interconnections to create any desired system. They are of course the fastest in design time: you can create the chip in a matter of hours.
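The idea of one logical cell offered in several pre-characterized drive strengths, with the tool picking the cheapest one that meets the load, can be sketched as a toy library lookup. The cell names and drive numbers below are invented for illustration, not from any real library:

```python
# A toy standard-cell library in the spirit described above: the same
# logical NAND offered in several pre-characterized drive strengths.
# All names and numbers are invented for illustration.

NAND2_CELLS = [                 # (cell name, guaranteed drive in mA),
    ("NAND2_X1",  1.0),         # ordered weakest-first, i.e. smallest area first
    ("NAND2_X2",  2.0),
    ("NAND2_X10", 10.0),
]

def pick_cell(required_drive_ma):
    """Return the smallest library cell whose drive meets the requirement."""
    for name, drive in NAND2_CELLS:
        if drive >= required_drive_ma:
            return name
    raise ValueError("no cell in library strong enough")

chosen = pick_cell(1.5)         # needs more than 1 mA, so the 2 mA cell is picked
```

Real place-and-route tools make essentially this choice per instance, trading area and power against the load each gate must drive.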
However, gate arrays have a bad side: because standard W-by-L transistors are put in a fixed layout, they will not necessarily give the best speed — sometimes the worst — nor the best power or density, because everything is limited by the pre-designed array. The idea is that all the transistors are already prefabricated in a particular structure, and the only thing left to the designer is the metal interconnect mask for the given system performance. Since the wafers are ready all the time, even small volumes — 10, 100 or 200 units — can be served, and you can realize your design in a very short time. Because it gives very fast turnaround and supports very low volumes, it is very popular with small electronic-product companies: it gives them a chip in less time and for less money. However, gate arrays are not the best in performance. The other possibility is to use electrically programmable logic devices, and the most popular among these are FPGAs, which are programmable in the field — that means at the user end you can actually configure the connections. Basically nothing new is fabricated — the chips are already fabricated — and you electrically program them to create a system. So these are the popular design approaches for putting a system on a chip, and that is what people actually look for in a design course. If you look at the available choices for implementing a VLSI: the first and foremost technology, which started all the integrated-circuit progress, was bipolar — essentially current-controlled devices — and the technologies available from it were TTL, Schottky TTL, then emitter-coupled logic (ECL), I²L, and finally BiCMOS.
However, recently heterostructures have come back, and using these you may have dual-junction voltage-controlled devices even in the bipolar area; they may start competing, in some sense, with mainstream silicon technology. The other technology area, more important for us, is MOSFETs, which are essentially dual-junction and mostly voltage-controlled devices. The material we have been working with for the last 40-odd years is silicon, and it will probably remain silicon for many more years, as far as anyone can see. The other alternatives tried, at least partially along with silicon technology, are silicon-germanium, silicon carbide, and very lately people are also looking into gallium nitride. The devices used are essentially MOSFETs — NMOS, PMOS or complementary MOS. We have now modified the transistor from the simple MOSFET to what we call the double-gate MOSFET, or fin-type surround-gate devices, FinFETs, with a number of fins per gate; people are also looking into a similar device in which the channel is a carbon nanotube, and therefore called CNFETs. These are dual-junction devices. However, there is an attempt going on to go beyond MOS, as I said, looking for single-junction voltage-controlled devices similar to JFETs; JFETs are normally tried in III-V materials. We are also looking at the single-electron transistor, which is a quantum device, and another effort in this area is optoelectronic ICs — integrated lasers or quantum-well devices — which may then be integrated into a chip and create different combinations of functions. You can have an optical block working together with a silicon electrical block for an optoelectronic product.
One of the major worries over the years is die-size growth. For example, in the 70s we had around 4 mm² of area when we started, but in 2010 we are talking about 2.5 cm by 2.5 cm. It is roughly doubling every 10 years, or one can say 7 percent growth per year. The question is always asked: since a Pentium Pro or Pentium P5/P6 has millions of transistors, what kept people from having a larger die size in the 70s itself, with a larger number of transistors? The reason one had to worry about area then was that silicon material technology was not very good. We could not produce a wafer that was defect-free over larger areas, and since it was impossible to grow large-area single-crystal silicon wafers with uniform properties, one had to restrict the chip to a small area, of the order of 2 mm by 2 mm, so that out of a 2-inch wafer we might get at least a couple of hundred good chips — and maybe 600 non-working ones. However, as silicon growth technology has improved over the years, we can now go to larger-area chips, and therefore the die area has kept increasing. As I say, we are right now working at around 2.5 cm by 2.5 cm, and I would not be surprised if you eventually see even a 4 cm by 4 cm chip. You can see the slide quotes a die-size growth of 12 percent per year to satisfy Moore's law. If you look at power dissipation over the years, which is another constraint of VLSI design, I have plotted power against years — this is also an Intel slide. You can see that initially, from the 4004 onwards, we consumed a few milliwatts of power, and up to the 386 era power stayed relatively low; we changed the architecture to some extent from the 286 to the 386, and from the Pentium onwards power dissipation again increased enormously.
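The two growth figures quoted above are consistent with each other: compounding roughly 7 percent per year gives about a doubling every 10 years. A quick numeric check:

```python
# Die-size growth arithmetic from the lecture: ~7% per year compounds to
# roughly 2x per decade.

annual_growth = 1.07
growth_per_decade = annual_growth ** 10   # compounds to about 1.97x, i.e. ~2x
```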
And then you can see from here that the power dissipation keeps increasing — this is one slope, this is another slope, and this is the third slope. As the next slide says, if we continue to work like this, the power delivery and dissipation will become extremely prohibitive, and one does not know how things can be managed. Here is a graph which says in numbers what I just said in words: power density, in watts per square centimeter, against years. The P6, which came around 1998, had a power density of the order of tens of watts per square centimeter, which is like a hot plate. In 2006 or 2007, chips became available whose power density is around 200 to 300 watts per square centimeter, which is the kind of power density found in a nuclear reactor. And if you continue to push chips to higher and higher transistor counts, the problem is that the power density will rise to 1000 watts per square centimeter. Of course, I have marked this region in red because we are trying to improve on it; but if we continue on the trend we started around 2004, the temperature may rise as much as in a rocket nozzle, and at those temperatures, take it from me, the silicon will actually evaporate. So the power density is too high to keep junctions at low temperature: this is the major worry, and therefore any designer must take care that heat removal is a major design issue in your designs. Next is the roadmap coming from the ITRS, from the Semiconductor Industry Association, covering the years 1999 to 2014. This is of course an old slide, and some numbers have been modified later, beyond 2008, but it is fine. We started with 180 nanometer, or 0.18 micron, in 1999.
And we believed we would reach 35 nanometer by 2014. But, to your amazement, we are already working in 2012 on a 28 nanometer process, and by 2014 we will probably work at 16 nanometer or even 11 nanometer feature sizes. On transistor count, we were looking at some millions of transistors per square centimeter in 1999, and we have already crossed 700 million transistors; we already have billion-transistor circuits — though the roadmap figure is per square centimeter, so if I normalize by area we are perhaps yet to reach that number. The chip sizes started at 170 mm², around a 1.3 cm by 1.3 cm kind of structure, and we are now looking at areas of the order of 2 cm by 2 cm and above. On pins, we could then put 768 pins on a chip, and now we are looking at 3200 pins — you can imagine what level of complexity the roadmap is asking for. The clock rate was 600 megahertz then; here the prediction looks slightly odd, because we have already crossed 3.2 gigahertz — chips have already reached 4.8 gigahertz clock rates. Wiring levels: the interconnect levels on a chip were 6 to 7 earlier, and we have already crossed 10; 12 to 14 interconnect levels are now available. So one can see that we have already crossed the performance that was predicted back then. If you look at the power supply: from 1.8 volts we have already reached 0.6 volts in some chips — not all microprocessor chips, but some chips are made with a 0.6 volt supply. And the high-performance power has reached 180 watts. Yes, we are trying to reduce it, but I think that is our major worry. One of the reasons we are not going for very high-performance circuits is this wattage; once the thermal problem is solved, one will probably be able to reach both the frequency and the power requirements.
And on the supply side, we are trying to build better batteries which can deliver larger wattage from a smaller area. Looking at a microprocessor unit: earlier there was a small L1 cache in SRAM, say 32 kB, and it has kept doubling; by 2012 there are already megabits of SRAM cache available on the chip itself. So what are the driving forces for the International Technology Roadmap people, which is essentially driven by the Semiconductor Industry Association? These are the driving forces. For example, the cost per function must decrease 25 percent every year — and it does decrease, fine. On packing density, or what we call form factor, the feature sizes reduce by 0.7x every node — of course, earlier a node came every 2 years, now every 3 years — and therefore the devices double per square centimeter at every node, though a node may now come not every year but every 2 or 3 years. On integration level, by Moore's law, bits per chip grow by a factor of 4x every 3 years; we may now be slowing down, so instead of every 3 years it may become every 4 or 5 years. The reason, as I said, is that heat is the major worry; if you can solve the thermal problems, much better electrical performance is all but guaranteed. On the power side, particularly for laptops, cell phones or any handheld system, battery life and heat dissipation are the crux of what worries us most. On speed, microprocessor clock frequency had a 5x growth every 10 years; however, it has been slowing down in the last few years, so it is now only about 3x growth every 10 years. On functionality: digital CMOS, analog mixed-signal and CMOS RF — all three areas are now covered; CMOS logic technology serves all three of them.
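Two of the scaling rules of thumb above can be checked numerically: a 0.7x linear shrink per node roughly doubles device density (area scales as length squared), and 4x bits every 3 years corresponds to a fixed exponential growth rate per year:

```python
# Scaling arithmetic from the roadmap discussion above.

shrink = 0.7
density_gain_per_node = 1.0 / (shrink * shrink)   # area ~ length^2, so ~2.04x

bits_growth_3yr = 4.0
bits_growth_per_year = bits_growth_3yr ** (1.0 / 3.0)  # ~1.59x per year
```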
If you look at memory: SRAM, DRAM and embedded DRAM are coming in, together with E²PROM, flash, ferroelectric RAMs and magnetic RAMs as further possibilities. Actuators and sensors are another area of interest, where electro-optical MEMS, chemical sensors, and electro-biological sensors and actuators have been tried using semiconductor technology to a great extent. So one can see that when I work on improvements, I am working for cost, for form factor, for integration level, and the other driving forces — speed, power, functionality. And therefore the SIA, through the ITRS, describes every year what comes next, and I think people seriously follow what the ITRS or SIA asks for. This is an old slide: we are improving the technology from 45 nanometer to 25 nanometer, and voltages may go to 0.5 volt. We may look for 8.5-gigabit-per-square-centimeter DRAMs with access times of 10 nanoseconds or lower. Then, for the custom, standard-cell, gate-array, single-mask gate array and FPGA styles, these are the clock rates we are looking at; we may see FPGAs at 1 gigahertz sooner or later, so that you do not have to go for a full silicon solution every time. These are the important milestones. So what are the challenges we are facing? The microscopic issues: ultra-high speed, power dissipation, supply-rail drop growing in importance, interconnect noise and crosstalk, reliability in manufacturing, and clock distribution. Those are the microscopic issues; on the macroscopic side, the major design issues are: when to bring your chip to market — time to market is crucial for the success of any design; how many gates you are going to use, what we call design complexity; and at what level of abstraction you can start the design, which is very important to save time as well as money.
How much reuse of IP and portability is possible, whether one can use the system-on-chip approach to get the design done faster and as good as possible on a single chip, and whether the tools used for all of them are interoperable across technologies and levels of hierarchy. So, typically, one of the major worries we are seeing right now, and this is an old slide which I still wanted to show you since I do not have recent data: in 1997 we were working on, let us say, 0.35 micron technology, and at a complexity of 1.3 million transistors the design staff cost was 90 million dollars. By 2002 we were working on a 0.13 micron process with 130 million transistors, so in complexity we improved at least 100 times. Frequency went from a few hundred megahertz to almost around a gigahertz. For a 3-year design, the staff size you now require went from 200 people to 800 people, and because of that the amount of money you will spend is 360 million dollars. So, essentially the cost of making the chip itself is also increasing, because the staff cost is increasing. So, when I say that you do interoperability or system-on-chip design or use of IP, essentially we want to reduce the staff cost, because the design time can then be minimized. This is the standard Moore's law which I already mentioned: transistors on a microprocessor double every 2 years. For the DRAM chip, 64 gigabits of DRAM were already possible in 2010 at 45 nanometers, though not really marketed, and it is growing 4 times every 3 years. This is the equivalent of a page, something written on a single page of a book; this is the equivalent of a book, which is around 4000 kilobits; then a million pages or so is an encyclopedia; and ultimately we are looking for memories equivalent to human DNA. This is what is actually driving us. A typical 100-gigabit DRAM is like a galaxy. The DRAM trend already began to slow down from the 1-gigabit generation; we can wait until the cost becomes reasonable.
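The staff-cost figures above make more sense when normalized per transistor. This little sketch, using only the numbers quoted in the lecture, shows that although the total design cost quadrupled, the design cost per transistor fell by roughly 25x, which is exactly the productivity gain that IP reuse and SOC design are chasing:

```python
# Design staff cost per transistor, from the 1997 vs 2002 slide quoted
# in the lecture. Total cost rises 4x, but cost per transistor collapses.

designs = {
    # year: (transistor_count, staff_cost_in_millions_of_dollars)
    1997: (1.3e6, 90),
    2002: (130e6, 360),
}

cost_per_transistor = {
    year: cost * 1e6 / xtors for year, (xtors, cost) in designs.items()
}
# 1997: ~$69 per transistor; 2002: ~$2.8 per transistor (~25x better)
```

So the worry is not that design is getting less efficient per transistor; it is that chips have grown so much that even a 25x productivity gain leaves the absolute staff cost four times higher.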
You can see this galaxy has about 100 billion stars involved, which is equivalent to saying that a DRAM right now is actually aiming for 100 gigabits, which is like a galaxy of memory. There is another effort: because of the thermal issues and power issues, we are looking for low power designs. This is the hierarchy of design; I will discuss this in more detail when I discuss low power design later, but essentially the levels at which power can be minimized are the system level, architectural level, register transfer level, logic level and physical level. More details will come later. This is where the opportunity for power saving lies: the highest power saving can probably occur at the behavioral and system levels, and not so much at the physical level, even though 99 percent of the effort is really going into the physical and logic levels. What has worked up till now? We have been able to scale the process, that is, the feature size, and we are also able to scale down voltages, not quite by Moore's law, but at least we are scaling down: we started with 5 volts, then 3.3 volts, 2.1 volts, 1.5 volts, 1.2 volts, 1.1 volts, 0.8 volts, 0.6 volts, 0.5 volts. So, we are definitely scaling down voltages. We are definitely going down in feature sizes, and there is no denying that: we have already reached 28 nanometers and 22 nanometers, and 16, 11 and 7 are on the way. On the methodology side, we are now looking into power-aware design instead of just good design; we see to it that the speeds are attained, but at the same time power is minimized. There are trade-offs for low power, and newer tools are required. We have been able to manage many of the architectural designs, and a lot of effort is needed to improve architectures now. We are also looking to reduce power by what we call power-down techniques, such as clock gating and dynamic power management. We are also looking at dynamic voltage scaling based on workload, that is, deciding when to switch parts of the circuit on or off.
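The reason voltage scaling and power-down techniques matter so much is the textbook CMOS dynamic power relation P = alpha * C * V^2 * f (a standard formula, not a specific slide from this lecture). A rough sketch, with assumed values for the activity factor and load capacitance:

```python
# Dynamic (switching) power of a CMOS node. Quadratic in VDD, linear in
# frequency, which is why dynamic voltage-and-frequency scaling pays off
# so well. alpha and c_load below are assumed illustrative values.

def dynamic_power(alpha, c_load, vdd, freq):
    """Switching power: activity factor * capacitance * VDD^2 * frequency."""
    return alpha * c_load * vdd ** 2 * freq

p_full = dynamic_power(0.1, 1e-9, 1.2, 1e9)    # full speed at 1.2 V
p_dvs  = dynamic_power(0.1, 1e-9, 0.6, 0.5e9)  # half voltage, half clock

ratio = p_full / p_dvs   # 8x saving: 4x from V^2, 2x from f
```

This is also why the behavioral and system levels offer the biggest savings: deciding at the system level to run a block at half voltage and half frequency buys a factor the physical level can never recover by sizing alone.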
Then we are looking for power-conscious RTL synthesis, and we are also looking at better cell library design, resizing methods to reduce capacitances, threshold control and better layouts. Some things we have already worked on; some things need improvement. These are the three kinds of circuits which the SIA suggests one must work on. The first kind we call high performance circuits, where the essential performance metric is speed. High speed circuits require a certain on current and a certain on/off current ratio; typically 1500 microamperes per micron is the on current expectation. Voltages will be as low as 0.4 volts, and the gate dielectric equivalent oxide thickness will be around 0.4 nanometers. Then there are the low power circuits, which may have a gate length of around 11 nanometers. There we are looking for a much better on/off ratio; the on current may be somewhat lower, say 900 instead of 1500 microamperes per micron, but the off current is much reduced. Then we are looking at low standby power circuits, like a mobile. When you are not using the mobile, you do not keep it fully off; it is on, but not really working. That is called standby power, and the major work effort for these handheld systems is on LSTP circuits. There also we are looking for low off current with a reasonable on current, for example 700 microamperes per micron on current and 0.01 microamperes per micron off current. So, you are reducing the off current from 10 microamperes per micron to 0.01 microamperes per micron as we go from high performance to low standby circuits. So, this is what we have to achieve by 2016; these numbers are not for today, but possibly can be attained over time. The second approach which is now coming is what we call system on chip, and with this we want to improve the design and achieve a rather fast turnaround on the chip design. So, we went from what we call system on board.
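The on/off current targets above can be compared directly as ratios. Here is a hedged sketch using the per-micron numbers as quoted in the lecture; note that the low-power off current below is my own assumed value, since the lecture quotes only its lower on current:

```python
# Ion/Ioff ratios for the three ITRS circuit classes discussed above.
# Numbers are the rough 2016 targets as quoted in the lecture; the
# low-power Ioff is an assumed placeholder, not a quoted figure.

circuits = {
    # name: (Ion, Ioff), both in microamperes per micron of gate width
    "high-performance": (1500, 10),
    "low-power":        (900, 0.1),    # Ioff here is my assumption
    "low-standby":      (700, 0.01),
}

ratios = {name: ion / ioff for name, (ion, ioff) in circuits.items()}
# high-performance ~150, low-standby ~70,000: the standby class trades a
# little drive current for orders of magnitude less leakage
```

The pattern is the point: moving from high performance to low standby power sacrifices only about half the drive current but cuts leakage by roughly a thousand times.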
For example, on the left you can see that on a board you have a CPU, a DRAM, a DSP processor, a graphics processor, and maybe some ASIC which is doing particular functions. Performance-wise, the system on board, let us say, is the baseline 1x power-performance system, and if I build this 3D graphics engine shown here as a system on chip, that is, on a single chip putting together all those blocks the way I have shown, then one can say I can improve the performance 4 times in speed, reduce the power to one-fifth, and reduce the chip count to one-fourth. So, one can easily guess from here that the new approach for design will now be based on SOC design, because SOC is simultaneously reducing the power, improving the performance, and also reducing the number of chips per square centimeter. We can see that over the years the digital consumer market has grown enormously relative to the PC market, and the forecast is that it will go to 500 million units soon. Sony was the first to make an embedded DRAM, way back in the 90s, and one can see its growth: on the left side is the logic gate count, on the other side is the DRAM capacity, and one can see from here that somewhere around the year 2000 the logic gate count is, say, 2 million gates, but by then they had already put 250 megabits of DRAM on chip. So, we can see that embedded DRAM technology has crossed the logic gate technology in its speed of growth, and therefore future circuits will probably have to take advantage of an embedded DRAM which you can use very effectively in your data flow, and thereby both improve speed and reduce power.
This is typically a Sony handycam; these are the blocks of the early Sony camera. Their old model had the following: it consumed 3.2 watts, it had an MPEG encoder, it had no embedded DRAM, and it had around 1.1 million gates. When they changed over to the embedded version using embedded memories, you now have a video codec, 48 Mb of embedded DRAM, and 1.5 million gates. This is an old slide, as I say, because Sony does not give out the newer slides. So, it is a 0.18 micron DRAM-embedded process, and the power dissipation has reduced from 3.2 watts with three chips to 170 milliwatts with a single chip. So, SOC is something of a panacea for better performance of VLSI systems. Typically, systems on chip have the following blocks: some kind of control unit, a processing engine, a power management unit, a clocking engine, the interfaces and IO, on-chip storage memory, and then test and debug on chip. These are the typical blocks in an SOC. Here is an example, again taken from Sony: this is an analog block which is separate, because in those days analog circuits could not be put on the same digital technology. So, you have the analog chip separated from the digital chip, which has a DSP, a microprocessor, RAM, ROM and a network block, and then you bring the analog onto the system on chip in the same area. So, essentially the analog area has grown compared to digital, but the overall area is smaller than in the earlier model. So, what is an SOC design essentially? Let me read out and tell you exactly what I think of it. An SOC design is defined as a complex IC that integrates the major functional elements of a complete end product or system into a single chip, or what I then call a chipset. In general, an SOC design incorporates at least one programmable processor, on-chip memory, and accelerating function units implemented in hardware.
It also interfaces to peripherals and other real-world devices, and it encompasses both hardware and software components together. Because SOC designs can interface to the real world, they often incorporate analog components, and in future they can also include opto-microelectronic and microelectromechanical systems on the chip together. Typically, if you look at the SOC requirements from the transistor point of view, there are analog requirements and digital requirements. In the case of digital, we are looking for a high IDSAT, that is, on current; we want lower off current, a higher on/off ratio, lower VDD and lower VTs. However, if you look at the analog blocks, you need a lower VT, which is good anyway; then, because you need a larger gm, you need a lower go, that is, a larger ro; you want the ratio of gm to go to be very high for the gain; you want the gm to ID ratio to be very high; mismatch should be a very small percentage; you want a higher voltage to operate, because then gm will be higher since IDSAT would be higher; you want a very small off current, which is the same as digital; and you want practically no noise in the circuit. So, you can see there is a hexagon for managing all this. In the case of digital, you only look at power, speed and area; here there are more than 8 parameters you have to optimize. But when you optimize for better performance of analog or RF, many of the requirements are not the same as those for digital, and therefore there is a conflict, and some kind of trade-off between good analog performance and good digital performance has to be struck, so that both can be implemented together on a system on chip. Please remember why system on chip is so important: the system is more important than the chip; that is why we can say system on chipset. Today's chipset will be tomorrow's chip, and many such chips can become a newer chipset.
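The analog figures of merit listed above (gm/go for the intrinsic gain of a single transistor, gm/ID for transconductance efficiency) can be illustrated numerically; all values below are hypothetical, chosen only to show how the ratios are formed:

```python
# Analog transistor figures of merit mentioned in the lecture.
# All numeric values are assumed, purely for illustration.

gm  = 1e-3    # transconductance, in siemens (1 mS)      -- assumed
go  = 2e-5    # output conductance, in siemens (= 1/ro)  -- assumed
i_d = 100e-6  # drain bias current, in amperes (100 uA)  -- assumed

intrinsic_gain = gm / go   # = gm * ro; the maximum single-stage gain,
                           # which is why analog wants go small
gm_over_id = gm / i_d      # transconductance per ampere of bias; high
                           # values mean gain is cheap in power
```

This is exactly where the conflict with digital scaling shows up: shrinking channel lengths and supply voltages tends to raise go and lower the usable gm/go, so the knobs that help digital hurt the analog hexagon.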
The system must be designed as an entity with trade-offs across boundaries, and the important aspects are therefore hardware, software, analog, digital, chip and package; everything has to be taken care of in a system design. So, you need awareness of design alternatives, good algorithm design procedures, hardware-software co-design with its trade-offs, partitioning and design space exploration, good embedded software, knowledge of the design of hardware and software IPs, and integration of hardware and software IPs, which is also very important. So, when you do a co-design, you have the SOC design in the center, and around it hardware-software co-design, applications, product engineering, EDA tools, fab and packaging; this is how you interface. In a nutshell, this slide gives you everything together: scalable architectures, hardware IPs, software IPs, reconfigurable IPs. If you do all such things on a single chip, a single platform, that would be an ideal system. One can now think of it this way: a semiconductor IP can be hard, soft or firm; it can be analog, digital or RF; a software IP can be source or object code; and if all of that is available, creating a system on chip will be much easier than any other way, and therefore one can make complex SOC circuits or systems. The design issues are die size estimation, power estimation, the physical design methodologies to be used, what substrates you are going to use, which board-level design you are going to use, package electrical modeling, package thermal modeling, chip-package-board electrical modeling, and cost modeling; all these are issues in SOCs. So, one last option I may quickly run through is the design options from packaging.
So, for example, we started with plastic quad flat packages, dual-in-line packages, quad-in-line packages and ball grid arrays and the like, as we kept improving the technology for a single chip. We have multi-chip modules, and then we have boards with discrete components. Now I can show you that a typical SOC has all those blocks shown here; this is a complete system on a chip. One can also have different chips in one package: you can have different modules bonded together, called multi-chip modules, inside a package on a single substrate, and these use ball grid arrays, that is, ball bonding and not wire bonding, to get small resistance and lower inductance. Then there is another possibility, what we call system-in-package. This is a stack: you can have a number of chips in a single vertical package, bonded one over the other, and once you do this kind of system inside a package, the total area of the package becomes smaller. However, the heat transfer and electrical connections will not be very easy in such a stack, yet it is one of the major areas of current interest that you can make a good system inside a package by actually stacking the chips vertically. And finally, the system-on-package is another approach, in which a number of chips from different SOCs can again be mounted together on a single package, and therefore you can create a very large system inside a package, hence system-on-package. So, these are essentially the options we have in the packaging area, which we have to use judiciously to improve your larger VLSI system design. I will skip this; this is typically what I said: if this is your stacking, this is your wire bonding type; you can see this is an example of a single chip, and you can bond a number of such layers, 4 layers, 5 layers, one on top of the other, and you can make a system inside a package.
So, at the end of today's talk, let me say what the future is for VLSI design. The first thing, as the roadmap says, is that we started earlier with a 0.25 micron process in the late 90s, and by now we are expecting to reach towards the 7 nanometer technology node, which will have an oxide thickness of the order of 0.1 nanometer EOT. Currently, we are mostly working at 130 nanometers for some chips and 90 nanometers for others; 65 is still the workhorse. Recently, in the last 3 to 4 years, 45 nanometer chips have become the major chips available in the market. The Intel processors will now be coming on a 28 nanometer process, which is essentially the 30 nanometer node, and other companies have also made some chips using 22 nanometers. And as I say, our ultimate aim is to reach 7 nanometers. God forbid, it should not become 0 nanometers. So, what is going to be the direction for the future? On the digital side, one can see you will have different technologies. Ultra-thin-body silicon-on-insulator CMOS, newer wafers and newer processes of making chips will be found or made available, but they have to be improved much more. We will have to work on other silicon-related materials like strained silicon and silicon-germanium. We can do something with band gap engineering. We also have to work more with FinFETs, vertical devices and vertical transistors. We can also work with double gate devices. So, instead of the simple MOS transistor which we have been using so often, the basic transistor structure will have to change to FinFETs or any of the above, using different technology. The memories may sooner or later become spin-based: magnetic RAMs will have to be worked on, as will magnetic tunnel junction memories and phase change memories. These are non-silicon memories one has to work on. Nano floating gate memories are being worked on, the single electron memory is being worked on, and molecular memory. So, these are the areas in which research will continue on the memory side.
On the device side, we may have even more different devices to use: resonant tunneling diodes, single electron transistors, rapid single flux quantum devices, quantum cellular automata, nanotube devices and molecular devices. So, these are the areas where VLSI design has to concentrate, and the concept of design itself will therefore change. We have to start looking into newer models for newer devices, newer circuit simulations, how to create better architectures, which digital technology to use, and which of the different kinds of memories to use that consume lower power and have faster access. This is a typical photograph of a basic double gate FinFET structure which appeared in 2000, based on an SOI technology. The advantage of this FinFET is that it improves the performance of the MOSFET drastically and avoids many of the short channel effects which the normal MOSFET faces when you scale the device down from, say, a 10 micron channel length to 10 nanometers or even smaller device lengths. So, the problems appearing because of scaling are called short channel effects, and they can be alleviated partly by using double gate or multiple gate structures called FinFETs. What will happen on the system side is that, because we are improving embedded memory designs and embedded memory technology as well, more and more effort will actually go into improvements in memories, and the logic part in the SOC will comparatively reduce over the years, because we are now really looking at three parameters: design productivity, yield and low power. Based on these, one can see that more and more memory-dominated designs will appear compared to logic-dominated designs. At the end, what will the Moore picture look like? On the horizontal axis you have "More than Moore", which is diversification, and on the vertical axis "More Moore", which is miniaturization.
You have circuits of the kind of analog, RF, passive components and high voltage devices, then you have actuators, and then you have biochips. On the other axis, you have technology going from 130 nanometers down to 7 nanometers. A lot of information processing sits in this triangle; this area is the digital content, the system on chip. On this side, you have interaction with people and the environment, because you are looking at biosensors and high voltage devices, and it is mostly the non-digital content of the SOC/SIP on this side as you move out. You have to combine SOC and SIP for higher value systems, and below the 22 nanometer cloud, which we call beyond Moore, we do not know what is on the other side of the cloud, and because of that, what will happen on this side is also not very well known. We hope that the research in VLSI will continue, and I assure all VLSI design engineers who are taking this course and trying to become engineers in the VLSI area that until 2060, or at least for 50 years, you will be productively employed in the VLSI area, come what may. I cannot predict much more than that, nor am I competent to, and maybe I will not even see that day, so you cannot come to me then and say, sir, you said something and it has not happened. So, 2060 is what I keep saying: you are safe, keep working on VLSI, and you will not only be paid well but will also get the satisfaction of actually making things which are interesting and also useful. At the end of this lecture, I would like to thank the many people who helped me in forming and teaching this course. One can always judge a course from the students' side: I have taught this VLSI design course at IIT Bombay for 15 to 18 years now, and for maybe 25 years I have been connected with VLSI design and VLSI technology courses.
So, for any course I teach, whether it is good or bad, you can always tell from the students' response in the class, in the exams and afterwards, and I trust that my students have been, if not extremely favorable, then mostly favorable to my teaching, my method of teaching and my content, and based on this the course has actually been designed and given. My sincere thanks to all the researchers whom I acknowledge below for their help in preparing this talk. I must tell you that of the many PPTs I drew from, many slides I removed and some I kept, because I had to truncate a huge lecture for this course. Most of the slides on silicon technology growth and what will happen in the future are courtesy of one of my good friends, Professor Hiroshi Iwai, who is a most distinguished faculty member at the Tokyo Institute of Technology in Japan. Many slides on scaling, short channel effects and performance are courtesy of Dr. Shekhar Borkar, a director at Intel in Portland; he is one of the outstanding Intel Fellows who worked on microprocessor architecture and its improvements, which helped make Intel the number one microprocessor company in the world, and Shekhar Borkar has a large share in the progress of Intel. Being from Mumbai, he has been a friend of ours for the last 20 to 25 years, and he keeps feeding us interesting data on what is happening, if not immediately, then at least five years ahead. Many of the slides shown here on CMOS design are also courtesy of a faculty member from Purdue University, Professor Mark Lundstrom, and also from the book and website of Professor Jan Rabaey, who teaches digital integrated circuits at UC Berkeley; many of the slides have been taken from the site for his book on Digital Integrated Circuits. The slides pertaining to Japanese electronics are courtesy of a vice president of Sony Corporation, Japan. Thank you very much, all of you, for listening.