Good afternoon, everyone. My name is Yuval Bachar and I am an Open19 fellow at the Linux Foundation. I'm here to introduce you to a new project which joined the Linux Foundation in 2021, called Open19. The Open19 project started five years ago at LinkedIn. It was designed to enable LinkedIn to move to its next generation of servers and to operate in a more open and inclusive environment. Our goals were to create a solution that works for any size of data center, small, medium, and large, and to cover compute, storage, and specialty servers within that platform. The platform also targeted reducing the cost of the cages and the racks themselves, increasing the speed of integrating servers into the data center, and building a community of like-minded companies which share the need for this kind of data center but do not currently share the engineering effort.

If you look at the Open19 platform, it has four main building blocks. One is the cage. A cage is the element that sets the form factor of the servers and allows servers to share the same location. Each pair of cages is associated with a power shelf and a networking switch. That combination of power shelf, networking switch, and two cages is called an Open19 zone. A zone is a standalone unit of compute that can be replicated in many places, and in a single rack you can usually put up to two zones.

If you look at the backside of the cage, that's where some of the magic happens. We built a cabling system which snaps onto the back of those cages and creates blind-mate connectivity for the servers. The servers, on both the power side and the networking side, are blind-mate connected into that cabling system at the back, which makes server integration very simple. The cabling system is pre-installed in the racks and in the cages, so when servers are installed you don't need to worry about cabling, miscabling, broken cables, and so on. The whole thing is pre-tested and pre-installed. What you see in the picture here are the two types of cables that we have, one for power and one for data.

Cages are a passive sheet-metal solution. They define the form factor for the servers and enable multiple sources of servers to fit into the same form factor. They also allow the cabling at the back of the system to snap into the cages and create the blind-mate connectivity for the solution. We have two versions of those cages, a 12RU and an 8RU version, and they are divided into blocks of 2RU. Each 2RU block can accommodate one server, two servers, or four servers, depending on the configuration the end user selects.

The cabling system consists of two types of cables, a power cable and a data cable. The power cable initially enables up to 400 W per brick, which is a half-width 1RU unit, and the data cable enables up to 100 Gbps per brick. With that said, the bigger the server, the more feeds it gets: a full-width server gets two power cables and two network cables, which doubles the power and bandwidth per server, and a 2U block gets four power inputs and four data inputs, which again increases the bandwidth and the power available to you. If you look at the rear of the rack, that's how it looks.
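To make the capacity arithmetic above concrete, here is a minimal sketch in Python using only the figures quoted in the talk (400 W and 100 Gbps per half-width 1RU feed, 2RU blocks, two cages per zone, two zones per rack). The assumption that feeds simply scale with brick size, and the per-form-factor feed counts, are my reading of the talk rather than the spec itself.

```python
# Minimal sketch of Open19 v1 capacity arithmetic, based on the figures in the talk.
# Assumption (not from the spec): each half-width 1RU slot contributes one power feed
# and one data feed, and feeds simply add up for wider/taller bricks.

POWER_PER_FEED_W = 400      # v1 power cable: up to 400 W per brick feed
DATA_PER_FEED_GBPS = 100    # v1 data cable: up to 100 Gbps per brick feed

# feeds per brick form factor = (width in half-slots) * (height in RU)
BRICK_FEEDS = {
    "brick (half-width, 1RU)": 1,
    "double-wide brick (full-width, 1RU)": 2,
    "double-high half-width brick (2RU)": 2,
    "double-wide, double-high brick (2RU)": 4,
}

for name, feeds in BRICK_FEEDS.items():
    print(f"{name}: {feeds * POWER_PER_FEED_W} W, {feeds * DATA_PER_FEED_GBPS} Gbps")

# A zone = power shelf + switch + two cages; a rack holds up to two zones.
# With 12RU cages split into 2RU blocks and up to four half-width 1RU bricks per block:
bricks_per_cage = (12 // 2) * 4
print("max small bricks per rack:", bricks_per_cage * 2 * 2)  # 2 cages/zone, 2 zones/rack
```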
The cables snap on one side into the cage and on the other side into the network switch. This is a real example of a data center rack built by LinkedIn. What you see on the right is the backside of the rack, and you can see that there aren't any loose cables. What you see are the blue and white ribbons, which aggregate the cables that create the blind-mate connectivity we discussed before; the cabling system is extremely simple to manage and extremely simple to install. On the front, you can see a completely clean view of the rack, where only the server faceplates appear. There aren't any cables at the front of those racks.

We support four brick form factors in the Open19 platform: a brick, which is a half-width 1U block; a double-wide brick, which gives us a full-width 1U server; a double-wide, double-high brick, which is a full-width 2U server; and a special form factor, a half-width 2U block, which gives us a solution for specialty configurations like GPUs. Each one of those bricks is completely independent. It has its own cooling system and its own EMI and safety certification, and it operates as a standalone unit; it does not require anything from the system. As we said before, the bigger the brick, the higher the power and the network connectivity available to it; the baseline connectivity is 100 Gbps for the smallest brick.

We also support an option called Bring Your Own Switch, which enables multiple different network topologies in the rack itself. Whether you want a single switch or a dual switch, we created a cable which on one side gives you the blind-mate connectivity to the servers and on the other side terminates in QSFP or QSFP-DD connectors, so you can connect traditional off-the-shelf switches and create whatever topology you prefer. A single switch, a dual switch, whatever you want to build, you can build with this configuration.

We have a power shelf, which is the source of power for the whole rack. As we discussed before, every two cages operate with a single power shelf. Those two cages get a direct connection from the power shelf to each and every one of the servers, which enables us to monitor and protect each server independently and avoid creating a shared fate between the servers in the rack. There is a management controller inside the power shelf which connects to the rest of the world over a Gigabit Ethernet connection. It enables us to build a solution which is fully managed remotely, and to control and monitor the servers independently in real time. The power shelves have dual feeds for the shelf and for the power modules, and they are available now.

To summarize the Open19 platform benefits: it goes into any 19-inch rack; it's a fully redundant system with full disaggregation; it has a very short integration time; and it leverages the existing servers in the industry today. It's one of the most power-efficient solutions in the industry, it supports multiple server tiers and multiple server sizes, and it is also one of the most cost-effective data center integration solutions today.

Let's talk now about Open19 version 2. We are introducing a second version of the Open19 platform, which enhances every aspect of the data center requirements of the future, from power to networking to cooling.
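Before moving on to version 2, here is a purely hypothetical sketch of what polling the per-server power monitoring described above could look like. The talk does not define a management API for the power shelf controller, so the HTTP endpoint, the JSON field names, and the 400 W per-feed threshold used in the check are illustrative assumptions only; the sketch just shows the idea of watching each server feed independently over the shelf's Gigabit Ethernet management port.

```python
# Hypothetical sketch: polling per-server power telemetry from an Open19 power shelf
# management controller over its GigE management port. The URL, JSON fields, and the
# 400 W per-feed threshold are illustrative assumptions, not part of the spec.
import json
from urllib.request import urlopen

SHELF_MGMT_URL = "http://10.0.0.10/api/feeds"   # hypothetical management endpoint
FEED_LIMIT_W = 400                               # v1 per-feed power budget from the talk

def check_feeds():
    with urlopen(SHELF_MGMT_URL, timeout=5) as resp:
        feeds = json.load(resp)                  # e.g. [{"slot": "cage1-2A", "watts": 312.5}, ...]
    for feed in feeds:
        status = "OK" if feed["watts"] <= FEED_LIMIT_W else "OVER BUDGET"
        print(f'{feed["slot"]}: {feed["watts"]:.1f} W [{status}]')

if __name__ == "__main__":
    check_feeds()
```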
I'll now talk about the different features of the new Open19 version 2, starting with the cooling system. In version 2 of Open19 we took the step of integrating a full liquid-cooling solution for direct-to-chip cooling. The liquid is delivered to the servers via blind-mate connectivity, the same as the power and the networking, and it will enable cooling of hotspots in next-generation servers where the CPUs or the GPUs reach levels of 300, 400, or 500 watts per component. That blind-mate liquid-cooling connectivity will enable Open19 to step into the future and support the coming generations of servers for the next five years. The distribution of liquid within the rack is very similar to the way we distribute power and networking: it's based on a common element that's shared across the servers, with an external heat-rejection unit which takes the heat off the servers and rejects it into the environment.

Let's talk now about the Open19 version 2 data enhancements. We are doubling the number of connectivity pins for every server, enabling higher speeds and higher capacity per server. The new system will support up to 400 Gbps per server and will let you build much more capable network topologies. We are also introducing an optical version of the interconnect for Open19 servers, which will enable direct optical connectivity to the servers as part of the Open19 offering. We are maintaining exactly the same configuration and exactly the same blind-mate connectivity, just dramatically increasing the networking capacity that each server can accommodate.

On the power side, version 2 delivers 4x the power per server. We are going to offer 1600 watts for every brick and 3200 watts for every RU of Open19 version 2. That is a 4x increase over the original version 1, and it comes from a very strong need to support environments with heavy GPU integration and very high-end CPUs, and to range from very low-power servers to very high-power servers within the same integrated system. The power shelf concept will be maintained, and the server-by-server protection and monitoring will continue to be part of the Open19 solution, but at a much higher power level per server.

Let's talk about Open19 and sustainability. Open19 works in two dimensions of sustainability: one is power efficiency, and the other is fast integration and the elimination of rack-and-roll. On the power efficiency side, because we operate with a centralized power system, we can convert power at a much higher efficiency. We eliminate a very large number of components, the power supplies that normally sit inside each server, centralize the conversion in one place, and operate at efficiencies of 97% and above. This is very rare to see in standard data centers, and it's one of the gems that Open19 offers. The second dimension is integration, and that one is a little more subtle: why is fast integration actually sustainable?
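To put the version 2 enhancements next to the version 1 baseline: the per-RU number for version 1 (800 W) is derived from two 400 W half-width bricks per RU rather than stated directly, but it is consistent with the 4x claim. A trivial Python restatement of the figures:

```python
# v1 vs v2 figures from the talk; v1 power-per-RU is derived (2 x 400 W bricks per RU).
V1 = {"power_per_brick_w": 400,  "power_per_ru_w": 800,  "data_per_server_gbps": 100}
V2 = {"power_per_brick_w": 1600, "power_per_ru_w": 3200, "data_per_server_gbps": 400}

for key in V1:
    print(f"{key:>22}: v1 = {V1[key]:>4}, v2 = {V2[key]:>4}  ({V2[key] // V1[key]}x)")
```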
Fast integration is sustainable because in traditional data centers, which use what's called rack-and-roll, the servers and the racks are packed twice, tested three times, shipped three times, and powered on only after about seven stages of packing, unpacking, shipping, receiving, docking, and undocking. With Open19, because you are no longer required to do rack-and-roll and you can integrate on site very quickly, there is a single cycle of shipping, a single cycle of unpacking, and a single cycle of testing. That is a tremendous saving on every dimension, from the trucking to the testing to the packaging and the waste it creates. This is important because it lets us eliminate multiple aspects of data center waste and be much more efficient in how we operate.

Let's talk about the Open19 project and the Linux Foundation, starting with the Open19 Foundation history. The foundation was established in May 2017 as a community-based, open-source effort for hardware and portions of software. During the first 18 months after the 2017 launch we reached six founding members, 18 corporate members, and over 2,300 individual community members. That community was very strong and very tight, and it drove very fast adoption of the technology. It was a business-friendly, business-oriented open-source environment which let participants retain business value and create differentiated products even while operating in open source.

We currently have 12 member companies since joining the Linux Foundation. Looking at the 2021 membership, we have two premier members, Cisco and Equinix, and nine additional members from different disciplines of consumption, production, and manufacturing of the Open19 platform; you can see the brand names on the slide in front of you.

Let's talk about the new program we're working towards. As an open-source platform, we are creating a certification program for compliance with the Open19 spec. The Open19 Certified program is in progress and in development right now, to create a full compliance model for the open-source technology, enable smooth integration, and remove the need for end users to re-run integration testing again and again. We're aiming to do this based on self-certification by the suppliers; the program is still evolving, so stay tuned for more information about it.

In summary, Open19 is about defining a common form factor, not about defining servers. It's about creating a community collaboration to solve the problems everybody is facing. It's based on a common infrastructure which fits into a standard 19-inch rack. Open19 has been in production since 2018, and V2 is going to go into production in Q1 of 2022. Open19 is a standard in constant development; we continually evolve it, enhance it, and create extensions to it. We would love for you to participate if you're interested. Stay tuned and go to open19.org for more information. With that, I want to thank you very much for taking the time to listen, and I thank the Linux Foundation.