What is a computer? Simple, one of these, right? Or if you're young, you might think one of these. Or this. The truth is, these days, computers are all around us. But if we want to understand the modern computer, we have to understand that it really lives in the cloud. Every time you search Google for an answer, check on news of your friends on Facebook, look up directions on Google Maps, or enjoy cute cat photos and videos online, you are sending data to the cloud. It's processing that data, and then it's sending back recommendations. In fact, every second, we send 230 terabits of data to the cloud. That's equivalent to 5,000 HD movies. But the cloud isn't made up of water vapor. It's made up of vast warehouses full of servers. And it's important to know this, because these warehouses of servers are what make the modern world work. In 2001, cloud computing didn't exist. By 2021, 95% of all computing will take place in the cloud. It's a $150 billion industry. And it's dominated by some household names: Facebook, Amazon, Microsoft, and Google. So it's time to understand what actually happens in a cloud data center. Let's start with its building block, the server rack. Each rack holds 20 to 50 servers, and a single data center has 1,000 to 2,000 racks. They're all connected together to make one giant computer. It costs $1 billion to build one cloud data center. The way they work as one big machine is that Google, for example, might store your history of looking at cat photos and videos on this server over here, and store your location history from Google Maps on that server over there. You might be typing into your phone that you're looking for a cafe where you can meet up with your friends later. And this means that Google has an opportunity to show you an advertisement.
So a third server might pull your cat video history and your location history and process that information through algorithms to decide to show you an advertisement for Musti ja Mirri, a local pet store here in Helsinki, where you may want to pick up dinner for your own cat while you're out. So these warehouse-scale data centers are very good at processing huge amounts of information to make smart recommendations and send them to your own personal portal to the cloud, your smartphone. But the crazy fact is that 80% of the time, the chips inside that data center are idling, waiting for data. And that's a problem, because they use almost as much energy while they're idling as they do when they're working. Data centers today emit as much CO2 as the global airline industry. And by 2025, one-fifth of Earth's power will be used by data centers. So why is it that processors are idling most of the time? And how do we fix it? To understand, let's take a look inside one of these building blocks, a single server. A typical server housed in a server rack has two processor chips in it, each surrounded by memory chips. It turns out that this way of architecting servers isn't too different from how your home computer is built. And since all we're doing is stringing together hundreds of thousands of servers designed on an architecture developed decades ago for a very different purpose, moving data between all of those chips is really tough. So they spend most of their time waiting for data. In the past, when we've wanted to improve our computing capacity, we've done it by growing the computing power on a chip. One of the co-founders of Intel, Gordon Moore, predicted decades ago that the amount of computing power on a chip would double every two years, leading to exponential growth in the computing power available to us over time.
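To see how quickly that doubling compounds, here's a minimal sketch in Python. The 1971 starting point (Intel's 4004, roughly 2,300 transistors) and the strict two-year doubling cadence are idealized assumptions for illustration, not measured chip data:

```python
# Moore's Law as an idealized compounding rule: double every two years.
def transistors(year, base_year=1971, base_count=2_300):
    # base_count ~ Intel 4004 (1971); one doubling per two elapsed years
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):,}")
# 1971 -> 2,300; 2021 -> ~77 billion, 25 doublings later
```

Fifty years of doubling turns a few thousand transistors into tens of billions, which is roughly where flagship chips actually landed.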
That prediction, known as Moore's Law, has turned out to be true. And it made the exponential improvements that we've experienced as a society seem so predictable that they feel commonplace. But the scary truth today is that Moore's Law is running out of steam. The building blocks that we make chips out of, transistors, are getting close to the size of individual atoms. And it's pretty tough to make them any smaller. So to improve our computers, we have to figure out new ways to improve our chips besides making transistors smaller. "More than Moore" refers to solving any problem in computer chips besides that of making transistors smaller. And today, in this era of big data, the biggest problem we need to solve with More than Moore technologies is moving data between chips, and clearing the choke point that exists in that data movement today. Data comes in and out of chips through copper pins and travels across copper traces and copper cables to other chips. The technology that carries data across those pins has been improving over time. But this year, experts reached a consensus that we have hit the physical limit of how much data can be sent across copper. What's more, as data bandwidths increase, there's a fundamental trade-off in how far you can send that data. By 2020, we'll be sending data at such high bandwidths across copper channels that we won't be able to send it more than a few millimeters. So how can we continue to scale our data centers when we've reached the physical limits of copper? Light can be the solution. We already communicate across fiber optic cables to move data long distances when we bring internet to our homes and businesses. Using light removes the trade-off between how much data you can send at a time, bandwidth, and how far you can send it. Copper channels today max out at three meters, whereas a fiber optic channel can transmit data over three kilometers.
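That bandwidth-versus-reach trade-off can be sketched with a toy model. The inverse scaling and the constant below are illustrative assumptions, chosen only so that a 25 Gb/s copper channel comes out at about three meters; real reach depends on equalization, encoding, and cable quality:

```python
# Toy model: copper reach shrinks roughly as bandwidth grows,
# while fiber reach stays effectively flat at data-center distances.
FIBER_REACH_M = 3_000  # fiber: ~3 km at these data rates

def copper_reach_m(gbps: float, k: float = 75.0) -> float:
    # k is a hypothetical constant picked so 25 Gb/s -> ~3 m
    return k / gbps

for rate in (10, 25, 50, 100):
    print(f"{rate:>3} Gb/s: copper ~{copper_reach_m(rate):.1f} m, "
          f"fiber ~{FIBER_REACH_M} m")
```

The shape, not the exact numbers, is the point: push copper bandwidth up and the usable distance collapses toward millimeters, while fiber doesn't care.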
And using light can remove the chokehold that copper has on how much data we can send between two chips, because we can use many colors of light on a single fiber, transmitting one channel of data per color of light. This gives us a new lever to scale up bandwidths in moving data between chips. The key to using this technology inside data center servers is figuring out how to get silicon processor chips to communicate using light instead of electricity. What would using light to move data inside servers mean for our cloud data center with all these idling chips? Well, it turns out that light is the key that unlocks our ability to architect data centers and computers for the era of the cloud. What if, rather than having this decades-old architecture of processors surrounded by memory, we could have vast pools of processors and vast pools of memory, and dynamically allocate our jobs across those pools, making full use of the processing power that we have? This concept is called a disaggregated data center architecture. And it means that you have whole racks full of nothing but processors, and other racks full of nothing but memory. Making bigger pools of these resources makes it possible to allocate jobs across them more dynamically. And the end result is much higher utilization of the processing power that you have. With the copper chokehold of today, that architecture is impossible to build. We just can't move enough data quickly enough across the interconnect network to make this concept work. But with light, this concept can become a reality. In 2015, researchers collaborating at MIT, UC Berkeley, and the University of Colorado Boulder built the world's first processor that communicates using light. The result was published in the preeminent research journal Nature. That same year, I convinced those researchers to start a company with me so we could get this technology out into the world. We called it Ayar Labs.
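The utilization win from pooling can be sketched with a toy placement example. The server sizes and job demands below are hypothetical; the point is that a job can be rejected by every individual server even though the rack as a whole has plenty of free resources, while a pooled layout happily fits it:

```python
# Two hypothetical servers, each with 16 CPUs and 64 GB of memory,
# and three jobs that each need 10 CPUs and 8 GB.
servers = [[16, 64], [16, 64]]
jobs = [(10, 8), (10, 8), (10, 8)]

# Conventional layout: a job must fit on ONE server's leftover CPU and memory.
placed_fixed = 0
for cpu, mem in jobs:
    for s in servers:
        if s[0] >= cpu and s[1] >= mem:
            s[0] -= cpu
            s[1] -= mem
            placed_fixed += 1
            break
# The third job is rejected: no single server has 10 CPUs free,
# even though 12 CPUs are free across the rack.

# Disaggregated layout: CPU and memory are independent pools, so a job
# only needs enough free CPU and enough free memory in total.
cpu_pool, mem_pool = 2 * 16, 2 * 64
placed_pooled = 0
for cpu, mem in jobs:
    if cpu_pool >= cpu and mem_pool >= mem:
        cpu_pool -= cpu
        mem_pool -= mem
        placed_pooled += 1

print(placed_fixed, placed_pooled)  # 2 vs 3 jobs placed
```

Stranded capacity on individual servers is exactly the fragmentation that pooling removes, and the difference grows as the pools get bigger.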
Over the past three years, we've developed the technology to the point that it's now demonstrating 10x improvements in speed and energy efficiency compared to copper. What does this mean for our idling processors? Well, if we can cut the amount of time that processors spend idling in half, we can change that statistic to something much more reasonable, and we'll not only be making computers more efficient, we'll also be making them more powerful. This means not only easier decision-making for getting you to the pet store so you can feed your adorable cat, but also more powerful computers that can help autonomous vehicles make smarter decisions more quickly, keeping their passengers and pedestrians safe. It means we can make more powerful models to predict hurricane paths, and run those models quickly enough that their results are useful before the hurricanes hit. And we can make full use of the extraordinary amounts of genetic and health information that we're collecting today to design cures more quickly. My prediction is that in 10 years, all cloud computer chips will communicate using light. Let's build a computing future that we can be proud of. Thank you.