Hi, my name is Michael Datespin and I'm the director of the Mass Open Cloud, or MOC as we usually refer to it. Open Infrastructure Labs grew out of lessons learned by the team running the MOC. The MOC is a unique partnership between academia, industry, and the Commonwealth of Massachusetts. As we spoke to dozens of other open source cloud providers, we realized that everyone was creating their own unique cloud installation, each built on different hardware configurations, unique installation steps, and different services. There was no common set of monitoring and billing capabilities integrated across all the services, and that last point is really worth repeating.

Many of the difficulties that operators face are due to the cross-project nature of running a cloud. This means there is no way to encapsulate best practices into the configuration, no easy way to compare information between clouds, and each of us solves the problems we encounter in a different way, largely by ourselves. It also makes things difficult for software developers, because open source developers have no visibility into how their software is actually deployed, operated, or used. And when operators turn to the community for help, it's really hard for the people who developed the software to reproduce or debug the problem. In some ways, public clouds have it a bit easier: they're highly prescriptive about their hardware and software, they have real visibility into what users are doing, and they can evaluate their software against real usage and then evolve things dynamically using continuous integration and deployment.

Open Infra Labs is the place to bridge the gap between open source development and operations to create open operations. We're starting with the MOC, the NERC, and the Open Cloud Testbed, providing a real platform and engagement with research. We'll deploy a cloud in a box, gain real experience, and, when it's ready, work with other institutions to replicate and federate.
What we learn from academia and other cloud operators will be reflected back into the cloud-in-a-box releases, so that we begin solving our operational problems together. With common code bases, we envision these institutions not only federating, but sharing operational skills and potentially staff. The plan is for the MOC to be the place where telemetry around the cloud is available to both researchers and open source developers, or, as we like to think of it, open telemetry.

Open Infrastructure Labs is also a great place to incubate initiatives that cross multiple open source projects and research areas. Let me tell you a little more about three of these initiatives: Operate First, Project Wenju, and Project Caerus.

Operate First was introduced by Red Hat at the open cloud workshop hosted by the MOC and at the Red Hat Summit. Operate First is an initiative hosted within OpenInfra Labs, and the idea is simple: ops is as important as code. Operate First says we must open source cloud operations. It says we must operate software projects in the open to identify and build in operational considerations, and that we should treat new features the same way. We must accept patches from a community that has an interest in the deployment of code, in monitoring dashboards, in the upgrade process, in reference architectures, and in best-practice documentation. It says we have to be just as scrupulous about putting our operational patches into the upstream as we are with the code we ship as developers. This is called Operate First, and we believe it will make operations at scale the property of an open community.

Project Wenju is about something a little more AI-focused. The rate of AI adoption has lagged the level of interest, and despite a good number of AI pilot projects for evaluation purposes, only a small portion ever get turned into full-scale, revenue-bearing products. Some analysts have pegged that number at less than 20%.
These challenges arise from the advances in hardware technologies, the breakthroughs in machine learning algorithms, and the explosion of digital data, which in combination have made it feasible to incorporate AI into business operations and processes. However, it takes a huge leap to move from a machine learning prototype in a lab setting to an enterprise AI system in production. Project Wenju is an open source collaboration between the open source community and researchers, with open design, open APIs, open communication, and open governance, to create a new approach to AI engineering that meets these challenges.

Project Caerus is looking at a similar sort of problem, but from a different angle. Data and AI applications are growing rapidly thanks to new technologies such as the Internet of Things, 5G, and machine learning. These applications are compute-, I/O-, and memory-intensive, and they create performance and scalability challenges for the underlying compute and storage systems. Project Caerus is an initiative focused on bridging the gap between distributed compute and distributed storage platforms. It aims to create a new open ecosystem that allows compute and storage platforms from different sources to operate in concert to improve application performance, resource utilization, and application developer productivity. Project Caerus is an open collaboration project with open design, open APIs, open communication, and open governance. It welcomes contributions from the broad community in all forms, and it looks forward to a close partnership between the open source community, academia, and industry to impact both the state of the art and the state of the practice for big data and AI infrastructure. Project Caerus is hosted within OpenInfra Labs, and we hope you get involved too.
Here's some information on how to get involved with Open Infrastructure Labs; we'll make sure it's also available in the slides, in the chat, and elsewhere. Now I'd like to open the floor for questions.