Good afternoon. Alibaba is one of the largest internet companies in the world, and network infrastructure is critical to the success of the business. In this talk, I will share our journey of building our own network infrastructure.

It started a couple of years back. As Alibaba's e-commerce business became hugely successful, we also attracted a lot of DDoS attacks. Our commercial vendor devices could not handle the level of attack we experienced, so we decided to build our own solution. To do that, we built a high-performance packet processing platform using kernel bypass technology, and DDoS protection was built on top of that platform. With our home-grown solution, we were able to provision terabits per second of DDoS protection capacity with customized protection logic. That was not possible with vendor devices.

As the size of our infrastructure grew, we decided to build our own switch, because vendor switches were expensive and lacked the transparency and customization we needed. We started with white-box switch hardware from the market and a vendor network OS. Over time, we expanded support to more switch hardware. Building our own network switch gave us first-hand experience of switch internals. However, a closed-source network OS made it difficult for us to choose the network hardware we wanted. This got us thinking about what the alternatives were.

In 2016, we started deploying gateway servers with 100-gigabit NICs. This gave us a significant performance boost compared to the 10-gigabit and 40-gigabit NICs. We also expanded the platform to support more applications, such as the load balancer and the virtualization gateway. Leveraging our expertise in kernel bypass, we built a high-performance user-mode networking stack and a virtual switch. Compared to the kernel-mode virtual switch, our user-mode one can give cloud customers more than double the throughput and over a 10x reduction in latency.
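The talk does not describe the protection logic itself, so as an illustrative sketch only (not Alibaba's implementation): "customized protection logic" on a kernel-bypass fast path often includes per-source rate limiting, which can be modeled with a token bucket per source IP. All names here are hypothetical.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allows `rate` packets/sec per source, with bursts up to `burst` packets."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = None  # timestamp of the previous packet, if any

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # forward the packet
        return False      # drop: source exceeded its budget

class DDoSFilter:
    """Per-source-IP rate limiting; one bucket is created lazily per source."""
    def __init__(self, rate, burst):
        self.buckets = defaultdict(lambda: TokenBucket(rate, burst))

    def process(self, src_ip, now=None):
        return self.buckets[src_ip].allow(now)
```

In a real kernel-bypass pipeline this check would run inside the poll-mode receive loop, on batches of raw packets, with the state kept in a pre-allocated hash table rather than a Python dict.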
On the switch side, we announced our joint SONiC project two years ago at this conference. Does anyone still remember that? On the gateway side, we released a 400-gigabit gateway server that can process more than 400 million packets per second. This is an incredibly powerful gateway.

2018 was an exciting year because we brought a lot of customized hardware into our network. Our SmartNIC can provide close to bare-metal performance while freeing the CPU from networking and storage I/O. RDMA offloads the entire transport stack to the hardware, which gives us much better performance and significantly lower latency. Last year, we announced a high-performance block storage product that can deliver over 1 million I/O operations per second from a single disk, and RDMA is the key enabler of this product.

While RDMA offers many amazing performance benefits, its APIs are very different from sockets. To help adoption, we built a communication library that is easy to use while offering the performance advantage of the hardware.

The momentum of our switch development continues. We brought up eight new switch hardware platforms. This enables us to build the entire data center network using our own switch software and hardware. On the gateway side, while we got incredible performance from our gateway servers, we knew we were approaching the end of the server-based solution. To keep up with the traffic growth, we developed a new-generation gateway platform using programmable ASICs. With the new platform, a single device can deliver terabits per second of throughput with a few microseconds of latency.

In summary, Alibaba's diversified workloads and hyperscale services pose enormous challenges to the network infrastructure. At the same time, they also offer a great opportunity for us to develop the best infrastructure technology. We believe building our own network infrastructure is critical to the success of the business.
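The talk does not name the communication library or its API. As a minimal sketch of the general idea under stated assumptions: RDMA verbs require managing queue pairs, posting work requests, and polling completion queues, and a wrapper library can hide all of that behind a socket-like send()/recv(). Every class and method below is hypothetical, and a mock queue pair stands in for real hardware.

```python
from collections import deque

class MockQP:
    """Stand-in for an RDMA queue pair; a real backend would post work
    requests to the NIC and reap events from a completion queue."""
    def __init__(self):
        self.remote = None           # connected peer QP
        self.recv_buffers = deque()  # messages landed in 'remote memory'
        self.completions = deque()

    def post_send(self, buf):
        self.remote.recv_buffers.append(buf)  # models the NIC's remote write
        self.completions.append("SEND_OK")

    def poll_cq(self):
        return self.completions.popleft() if self.completions else None

class Channel:
    """Socket-like facade: send()/recv() hide queue-pair and completion handling."""
    def __init__(self, qp):
        self.qp = qp

    def send(self, data: bytes) -> int:
        self.qp.post_send(data)
        while self.qp.poll_cq() != "SEND_OK":
            pass  # spin until the 'hardware' signals completion
        return len(data)

    def recv(self) -> bytes:
        while not self.qp.recv_buffers:
            pass  # spin until a message arrives
        return self.qp.recv_buffers.popleft()

def connect_pair():
    """Create two connected channels, like a loopback socket pair."""
    a, b = MockQP(), MockQP()
    a.remote, b.remote = b, a
    return Channel(a), Channel(b)
```

The point of such a facade is that application code keeps the familiar blocking send/receive shape, while the library owns the verbs-specific details (memory registration, work-request batching, completion polling) where the performance of the hardware is actually won.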
Our journey started with moving from vendor solutions to building our own solutions on commodity hardware. With the end of Moore's law, we are making another transition, to specialized networking hardware, to keep up with the performance demanded by our business. Thank you.