to a million fold. That is because of things like a smarter planet and the fact that we will soon have 50 billion devices that collect and generate data. That explosion of data makes new applications possible and poses new technical challenges. And then, at the highest level, how will we interact with these computers in the future? How will we be able to program them? These systems, I believe, will evolve from something that so far has only been programmed into learning systems. Some of that is based on the kind of device architectures we will have; some of it is based on the kind of software we put on these systems, software that we will still have programmed, but that will be learning software. I will now spend roughly the next ten minutes giving some examples of how this, for argument's sake, thousand-fold improvement will happen over the next ten years, and of research projects we work on to deal with it.

When you look at the devices, one of the problems we currently have with computer chips is that they are already quite power hungry. The power consumed by commercial data centers is about two percent of the world's energy, the same as commercial airlines. Don't think that because the cloud is somewhere "in the cloud" it doesn't consume power; it consumes a lot of power. That is one of the challenges we will need to meet. If we really want to grow capability by another factor of a thousand, we will need devices that are a lot less energy hungry, both for building the big servers and for building mobile devices. Our smartphones nowadays, if we are lucky, last a day when we use them, and usually they run out even sooner than that.
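To put that thousand-fold ambition into perspective, here is a rough back-of-envelope sketch. It is purely illustrative: the thousand-fold target and the ten-year horizon come from the talk, while the assumption that the overall energy budget stays flat is added only for the sake of the calculation.

```python
# Purely illustrative back-of-envelope calculation. Only the 1000x target and
# the 10-year horizon come from the talk; the flat energy budget is assumed.

growth_factor = 1000          # targeted capability increase over the decade
years = 10                    # time horizon

# 1000x in 10 years corresponds to roughly doubling capability every year.
annual_rate = growth_factor ** (1 / years)
print(f"required improvement per year: ~{annual_rate:.2f}x")      # ~2.00x

# Assumption (not from the talk): the total energy budget stays flat.
# Then the energy spent per operation has to fall by about the same factor.
energy_per_op_ratio = 1.0 / growth_factor
print(f"energy per operation must drop to ~{energy_per_op_ratio:.3f} of today")
```

That gap between today's energy per operation and what a flat budget would allow is what the work on less power-hungry devices described next is aimed at.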
That is something we are involved in with a project here; you see the STEEPER logo. It is an EU-funded FP7 project, a big consortium of companies, universities and research institutions working to create the next generation of transistors that are a lot less power hungry and perform better than today's. The other picture you see there is an investment IBM has made in Zurich, a public-private partnership: a nanotechnology research center that we have built up together with ETH Zurich, the Swiss Federal Institute of Technology in Zurich, to do joint research on the future of nanotechnology, be it sensors or faster chips.

At the next layer, the systems, I claim that systems in the next ten years will be a thousand-fold more powerful, but that will require a lot of technological innovation that we still need to work on. I will mention two projects we have been involved in. One that just came to completion is the one with the LRZ logo on the right: the Leibniz Supercomputing Centre, the public high-performance computing center in Bavaria that serves the universities there, in particular in the Munich area, where we just inaugurated a hot-water-cooled supercomputer this summer. It is Europe's fastest supercomputer, and the interesting thing about it is that it no longer requires primary energy for cooling. You can use hot water, 50 to 60 degrees warm, heat it up further in the computer, and then use that hot water for district heating. If you get more sophisticated, you can also use the hot water to cool those parts of the data center that are still traditionally air-cooled, by driving the air conditioning with it.

The other project, where you see SKA, is the Square Kilometre Array, a big radio telescope that the astronomy institutes of the major western countries want to build. It is a large physics experiment, just like CERN. The idea is to build this radio telescope in the southern hemisphere to really look back at the Big Bang. It will generate data to the tune of ten times today's internet traffic. Just imagine: ten times the internet's traffic in one radio telescope. That will require processing power at the exascale, and again it has to be low power. That is a project we are involved in in the Netherlands, with ASTRON, the Dutch astronomy institute, to create the technology that will be required to run something like the Square Kilometre Array. What is interesting about this project is that we will create a user platform that allows companies to innovate on it and make use of it as part of the research. The research project itself is called DOME.

When we talk about big data, as I said, I think it is plausible, if you look at all the devices and sensors we have out there, the video cameras and so on, that we are moving towards something that is not only big data but also fast data. Fast data: if you look at the Square Kilometre Array, there is no way you will store all that information. You will need to process it in real time and condense it down to something more manageable and more storable, and you will analyze the data as it comes. That is the "fast" piece.
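As a concrete, deliberately simplified illustration of that "fast data" point, the sketch below condenses a synthetic stream into a handful of running statistics as the samples arrive, so the raw data never has to be stored. The data and the particular statistics are assumptions made for the example; the real SKA/DOME processing is of course far more involved.

```python
# Minimal sketch of on-the-fly reduction: consume a high-rate stream chunk by
# chunk and keep only a compact running summary instead of storing raw samples.
# Synthetic data; an illustration only, not the actual SKA/DOME pipeline.
import random

def sample_stream(n_chunks, chunk_size):
    """Stand-in for a sensor or telescope feed: yields chunks of raw readings."""
    for _ in range(n_chunks):
        yield [random.gauss(0.0, 1.0) for _ in range(chunk_size)]

def reduce_stream(stream):
    """Condense the stream to running statistics; raw samples are discarded."""
    count, total, total_sq, peak = 0, 0.0, 0.0, float("-inf")
    for chunk in stream:
        for x in chunk:
            count += 1
            total += x
            total_sq += x * x
            peak = max(peak, x)
    mean = total / count
    variance = total_sq / count - mean * mean
    return {"samples": count, "mean": mean, "variance": variance, "peak": peak}

# A million synthetic readings are reduced to four numbers.
print(reduce_stream(sample_stream(n_chunks=1_000, chunk_size=1_000)))
```

The point of the sketch is that the memory needed stays constant no matter how long the stream runs, which is the essence of analyzing data as it comes rather than storing it.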
An example of where we are involved in generating big data is Guardian Angels, one of the candidates for the Future and Emerging Technologies (FET) flagship projects that the EU is funding. The idea is to build sensors that are deployed on the body to help with ambient assisted living, aging and so on, and that will generate huge amounts of data. It will again be research into energy efficiency, because those sensors will not be powered by batteries; ideally they will harvest energy from the body, and there are various ways of doing that. An area where we are involved in consuming huge amounts of data is our Irish research lab in Dublin, which we opened there specifically because the city of Dublin was one of the first to really give access to its data and allow novel applications to be built and analytics to be performed against that data. That is something many other cities have since taken up; my home city of Zurich also launched something this summer called Open Data Zurich, to enable an indigenous IT industry that builds apps around open data and then branches out from there to other parts of the world.

The last area, moving from programming computers to building computers that can learn, is a very wide field of research. Learning itself is not really fully understood, at least not the way humans learn, the way our brain functions, and the brain is to some extent the ultimate computer: it runs on 20 watts of power, and imagine what we can do with our brains. Compare that with the Watson computer my colleagues in the US built to play Jeopardy, which was already quite amazing: it ran on 85 kilowatts of power, and all it could do was play Jeopardy. It couldn't really do much more; it couldn't walk, and in fact the questions had to be typed into the computer, so it wasn't doing speech understanding yet. There is still a huge gap that will need to be bridged. What is interesting about Watson is that it was built on open source: it was built on Linux and UIMA, the Unstructured Information Management Architecture, something IBM open-sourced a few years ago, and that enabled a whole group of collaborators and universities to help build out the Watson computer, this DeepQA system.

Where we are moving towards is really learning systems in the future. Watson was already learning, along the lines of: the more it played, the better its statistical machine-learning engines got tuned to know which knowledge sources it could trust more than others for certain domains of questions. That was learning in the statistical way; it was still programmed. Now we are working on better understanding the brain. Again, one of the FET flagship candidates is the Human Brain Project, and its precursor, where we already collaborate with EPFL, the federal technical school in Lausanne, is the Blue Brain Project. The idea behind it is to model in a computer all the electrochemical phenomena that take place in the brain; the Human Brain Project is essentially about scaling this up to the dimensions of the entire brain, whereas so far only a part of the cortex has been modeled. That is on the side of really understanding learning systems. On the side of building systems that can learn, we are involved in a project in the US, a consortium called SyNAPSE. The idea is to use the rudimentary understanding we have of neural networks and build chips in silicon based on neural networks, synapses and neurons, at scale. The roadmap they claim to have is that in ten years' time they will have as many neurons and synapses as we think the human brain has. It will not be a human brain; it will be a computer that has that many synapses and neurons, and the challenge will be how you train that computer, how you make it learn something, and how you make use of it.

Those are some of the examples I wanted to give you of where our research is going, and we can only do this with partners. A lot of the far-out work is, for better or worse, government funded, and that is a requirement to really move the needle forward. As an industry, I think we are really moving into a new era. We started out with just counting things, the tabulating era; then came the computing era, which we are still in and which will go on for some time; but what we are moving towards is what we call the era of cognitive systems: computers that will use a lot less power, will be a lot more intelligent, and will learn rather than be fully programmed. We have an exciting time ahead of us. Just think of what we have now that we didn't have ten years ago, and imagine, with a thousand-fold more performance, what we will have ten years from now. Thank you.