Hello, I'm Satoshi. I'm going to talk about supercomputing infrastructure for IoT and AI, but really about the plumbing. IoT involves tremendous amounts of data pouring in from sensors, on the order of zettabytes. A zettabyte is so big that if you burned it onto Blu-ray discs and stacked them up, the stack would encircle the entire Earth. That's how big it is. IoT also has to be coupled with simulation for predictive capability. Suppose you have smart cities: you not only have sensors, but they need to be coupled with detailed simulations like this. Unless you have these simulation capabilities, you're not really going to drive IoT. And that's where supercomputers come in.

Now, the other hot topic is AI. A lot of the principles of AI have been around since the 1960s, way back, but back then we didn't have the computing ability to drive any of these algorithms. Now we do, because supercomputers are 10 to 100 million times faster than they were in the 1960s. What are supercomputers composed of? Well, they are big machines that weigh 100 tons. They have a million processors, like our TSUBAME 2, but also very fast networks; their networks are so fast and of such high capacity that they could encompass the entire global internet traffic. That's the magnitude of computing we're talking about. But despite all these simulation and big-data analytics capabilities, a supercomputer weighs 100 tons. Compared to a brain, which is only 1.5 kilograms and uses only a few watts, we are very inefficient.

So how do we advance computing to cope with IoT? One way is to pursue the traditional goal: we have been growing supercomputing power by a factor of 1,000 every 10 years over the past 30 years, and we can keep doing that. The other way is to seek alternative solutions, new computing models, and hopefully the two meet. The traditional way is the so-called exascale projects, which are among the premier projects in most countries, including Japan, the US, and China. These projects aim to build computers that have hundreds of millions of processors, and somehow we can harness those technologies for big data and IoT. That would be good. However, it's not trivial, because building these supercomputers involves the latest technologies, not just processors and networks but also software. How do you even program a million processors? That's not trivial at all. And that's where our research comes in.

So how do we leverage these supercomputing technologies for IoT and AI? One way is to leverage them directly. For example, this is our test supercomputer at Tokyo Tech. Supercomputers, and data centers in general, use so much power that they consume a few percent of the electricity in the entire world. So if we build really green, efficient supercomputers, that will do good for society, and this is one of the greenest supercomputers. The second way is to customize supercomputers to be better at AI, not only in hardware but also in algorithms. For example, how do we scale deep neural networks, whose training now takes weeks, down to literally minutes by immense parallelization on supercomputers? That, again, is what we are working on right now. The third way is to go to a new computing model, like neuromorphic computing, where we try to mimic the activity of the human brain more directly in hardware, and also with software techniques. There is a lot of research in this area, and I think it is a very promising avenue toward AI with supercomputing.
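To make the earlier Blu-ray comparison concrete, here is a rough back-of-envelope calculation. It assumes a standard 25 GB single-layer disc and a 1.2 mm disc thickness; the result is order-of-magnitude only.

```python
# Rough check of the "a zettabyte of Blu-rays stacks around the Earth" claim.
# Assumptions: 25 GB per single-layer disc, 1.2 mm thickness, ~40,075 km circumference.
ZETTABYTE_BYTES = 10**21
DISC_CAPACITY_BYTES = 25 * 10**9      # 25 GB single-layer Blu-ray
DISC_THICKNESS_M = 1.2e-3             # 1.2 mm
EARTH_CIRCUMFERENCE_M = 40_075_000    # ~40,075 km

discs = ZETTABYTE_BYTES / DISC_CAPACITY_BYTES
stack_height_m = discs * DISC_THICKNESS_M

print(f"discs needed: {discs:.2e}")
print(f"stack height: {stack_height_m / 1000:.0f} km")
print(f"times around the Earth: {stack_height_m / EARTH_CIRCUMFERENCE_M:.1f}")
```

The stack comes out to roughly 48,000 km, which is indeed a bit more than once around the Earth for a single zettabyte.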
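The second direction mentioned above, shrinking deep-neural-network training from weeks to minutes, rests on data parallelism. Here is a minimal sketch of that idea using plain NumPy and a toy linear model; the worker loop stands in for what MPI or NCCL collectives would do across thousands of nodes, and the model and numbers are purely illustrative, not the actual system described in the talk.

```python
import numpy as np

# Toy data-parallel training: each "worker" computes gradients on its own
# shard of the batch, the gradients are averaged (an all-reduce), and then a
# single synchronized weight update is applied.

rng = np.random.default_rng(0)
n_workers, n_features = 4, 8
w = np.zeros(n_features)                      # shared model weights (linear model)
X = rng.normal(size=(1024, n_features))       # full training batch
y = X @ rng.normal(size=n_features)           # synthetic targets

shards = np.array_split(np.arange(len(X)), n_workers)

for step in range(100):
    grads = []
    for idx in shards:                        # each worker's shard
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))
    g = np.mean(grads, axis=0)                # "all-reduce": average gradients
    w -= 0.05 * g                             # synchronized SGD update

print("final loss:", np.mean((X @ w - y) ** 2))
```

The point of the sketch is that adding workers lets you process a larger batch per step without changing the update rule, which is what makes massive parallelization on a supercomputer pay off.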
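The neuromorphic direction is typically built around spiking neurons rather than conventional arithmetic. Below is a minimal sketch of a single leaky integrate-and-fire neuron, the basic building block of most neuromorphic hardware; the parameters are generic textbook values, not those of any particular chip.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward
# rest, integrates the input current, and emits a spike when it crosses a
# threshold, after which it resets.

dt, tau = 1e-3, 20e-3          # time step (s), membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v = v_rest
spikes = []

rng = np.random.default_rng(1)
current = rng.uniform(0.0, 2.5, size=1000)   # noisy input current over 1 s

for t, i_in in enumerate(current):
    v += dt / tau * (-(v - v_rest) + i_in)   # leak + integrate
    if v >= v_thresh:                        # threshold crossing -> spike
        spikes.append(t * dt)
        v = v_reset                          # reset after firing

print(f"{len(spikes)} spikes in 1 second of simulated input")
```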
But all of these have to be put together into a big machine, and that's what we do. Not only do we do research, we build real machines, real data centers. Our latest supercomputer, TSUBAME 3, has just been awarded the title of greenest supercomputer on the planet, basically, and this allows us to grow our infrastructure much better than in the past.

I'll leave you with some thoughts. As we attain these immense AI capabilities using supercomputing power, what happens if they are self-applied? That is, what if we apply this power to the machine itself? What would happen then? Can we make our machines more efficient? Can we deal with faults? For example, TSUBAME 3 has hundreds of thousands of sensors, reaching up to millions, just like we have in our nervous system. In some sense, if we apply this AI technology to make our machines efficient, to deal with security issues, and so forth, they will become in some ways very self-cognizant. And this may expand to the entire cyberspace as we field these machines everywhere in the cloud; then we will have very self-cognizant machines. That may sound like a scary scenario to some of you who watch science-fiction movies, but it could be an eventuality. How we cope with these problems could be an important question for the future as we progress in IoT and supercomputing. Thank you.
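One concrete form of that "self-applied" idea is to run simple machine learning over the machine's own sensor telemetry to flag faults. Here is a minimal, hypothetical sketch: a rolling z-score anomaly detector over simulated node temperatures. The data, window size, and threshold are invented for illustration and are not taken from TSUBAME 3.

```python
import numpy as np

# Hypothetical sketch: flag anomalous readings in a machine's own sensor
# telemetry (e.g. node temperatures) with a simple rolling z-score test.

rng = np.random.default_rng(2)
temps = 45 + rng.normal(0, 0.5, size=500)    # normal node temperatures (deg C)
temps[300:305] += 8.0                        # inject a fault: sudden hot spot

window, z_limit = 60, 4.0
for t in range(window, len(temps)):
    history = temps[t - window:t]
    z = (temps[t] - history.mean()) / (history.std() + 1e-9)
    if abs(z) > z_limit:
        print(f"t={t}: anomaly, temp={temps[t]:.1f} C (z={z:.1f})")
```

Scaled from one toy signal to hundreds of thousands of sensors, this kind of self-monitoring is what would let the machine notice its own faults and inefficiencies.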