So, just so that you know: my boat has a floating IP, and its name is Traceroute.

Good morning. It's been something like ten years since we first had this conversation where we described the need for something open and modular that would do public cloud services. We went from a moment where we talked about it to a moment where we started what would become the OpenStack Foundation a few years later. Eight years ago we started focusing OpenStack on how to build public clouds, and we were quite successful at it, looking at the slide that Jonathan presented earlier today. Very quickly we realized that if we wanted to implement the hybridity of the cloud, enabling people to move workloads from public to private cloud, we also needed to care about private cloud, and I think we've also been quite successful at offering a good solution for private cloud. Along the way we got carried toward the telco use case. These three use cases, which we've been covering quite well as a community, had a very strong focus on delivering VMs.

But what's really going on now? The reality is that we have to take care of new types of workloads: containers, AI, big data, all kinds of analytics, HPC, workloads that need direct access to specific hardware like GPUs and FPGAs, and more. All of this does not run solely on VMs. For example, when I deploy a container platform, I want to be able to deploy it either within a VM or on bare metal. When I deploy a big data application, some of these applications will perform a lot better on bare metal. So what we are dealing with right now is quite new compared to what we had envisioned at the start. Nova is no longer the center of our universe, and thanks to the excellent work that the community has been putting together, work in Ironic, work in Cinder, work in so many projects.
I won't be able to name them all. We are now able to offer the same scalability, the same isolation, the same automation on bare metal as we've been able to deliver on VMs, and that's absolutely phenomenal. It means that we have all the basic functions to deliver the software-defined data center, regardless of the workload that you want to deploy.

A little example here: the Kuryr project, which really is a CNI (Container Network Interface) plug-in that talks directly to Neutron. It enables security groups to be synchronized between the two layers, and it removes the stacking of overlay on overlay. This is phenomenal. The integration with Ansible, and particularly Ansible networking, allows an ML2 plug-in to dynamically configure any top-of-rack switch, depending on how you want to configure your isolation. This is phenomenal.

So we have an OpenStack that is delivering all these primitives. We have Kubernetes, which is clearly the winner of how to orchestrate workloads across a cluster, and many people describe Kubernetes as the equivalent of what Linux did for a single machine, orchestrating multiple processes, turned into operating processes across a cluster of machines. This is what Kubernetes is delivering, and when you combine the two, this is where you get the open infrastructure the Foundation is speaking about. This is really a non-zero-sum game: it is the assembly of the two that delivers the complete value of what people want to achieve.

So today, thanks to the great work done by the community on Rocky,
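As a sketch of what the Ansible networking integration looks like on the OpenStack side, the networking-ansible ML2 mechanism driver is enabled in Neutron's ML2 configuration and given credentials for each top-of-rack switch it should manage. This is a minimal illustration based on the networking-ansible documentation; the switch name, addresses, and credentials are placeholders, and exact option names may differ between releases.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch; switch details are placeholders)
[ml2]
# Add the "ansible" mechanism driver alongside the existing one
mechanism_drivers = openvswitch,ansible

# One section per managed top-of-rack switch; the section suffix is the
# switch name that port binding information refers to
[ansible:tor-switch-1]
ansible_network_os = junos
ansible_host = 192.0.2.10
ansible_user = admin
ansible_ssh_pass = secret
```

With a section like this in place, Neutron can drive VLAN configuration on the physical switch port whenever a bare-metal port is bound, which is what makes the same isolation model work across VMs and bare metal.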
We are announcing Red Hat OpenStack 14. It will be released in the next month, and it includes all the work that is needed to make Kubernetes, or more precisely our distribution of Kubernetes, OpenShift, run as smoothly as possible on top of OpenStack.

It's also the day where we are announcing our first official design for an edge use case, which we call the virtual central office. This is based on work that was done in the OPNFV community, assembling OpenStack, Kubernetes, OpenDaylight, and Ansible to deliver the ability to have data centers that are closer to the end users. It's the first use case that we implement; we are going to implement quite a few more in the next few releases.

Another thing that we are delivering, and will be announcing very shortly, is the ability to provide dynamic migration of workloads off of VMware onto OpenStack, all through a graphical user interface. And there is a lot more coming up in the next few months, all centered around this idea that delivering an open infrastructure is not just about VMs; it's about the entire picture.

To give an example of what can be done using software that we deliver today, I wanted to show this video about the ChRIS project from the children's hospital, which is really helping radiologists make faster interpretations of X-rays and other imagery.

The MOC tries to change the public cloud model of today, which is mostly stood up by a single vendor. It was founded through the collaboration of five research universities in the New England region: MIT, UMass, BU, Northeastern, and Harvard.

My name is Ata Turk.
I'm a research scientist at the Massachusetts Open Cloud, leading the big data and healthcare analytics efforts. With the scale of the MOC, and with the deployment of ChRIS on the MOC, they can now use some of the solutions that they were unable to use before. One example is the image registration that they do for MRI images, which is practically taking multiple MRI images and using those multiple images to cancel out the noise within the image. Normally, when they do the same operation using the hospital's resources, it takes almost a day, but with our facilities we can reduce this to minutes. So we are now at a stage where we need to build a community around ChRIS and an open source infrastructure, and that's where Dan comes in.

My name is Dan McPherson, and I'm a software engineer at Red Hat. We have several folks here, based out of the Boston office, who work on the ChRIS project; they do so with about 20% of their time, and we treat it like we do most of the products at Red Hat. ChRIS on the MOC runs on top of our OpenShift offering, and OpenShift provides the job framework with which we run the image processing software. OpenShift gives us the ability to utilize our resources very efficiently, so any resources that we don't use are given back to OpenStack to serve other infrastructure users. The OpenShift environment gives us this whole model of encapsulating the applications and their dependencies.

And now comes a really exciting part, because Red Hat engineers are working together with us to exploit GPUs on the platform, so that we can actually plumb them all the way through and make them available to the application. And then, how do we exploit the parallelism so we can set up, you know, a dozen or a hundred containers to work in parallel on a problem? The opportunity that we have here with an open cloud, with researchers and with industry, and specifically Red Hat technologies, working with us to meet the goals of this application, is a transformative opportunity.
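The "dozen or a hundred containers working in parallel" pattern described above maps naturally onto a Kubernetes/OpenShift batch Job with a parallelism setting. This is a minimal sketch, not the actual ChRIS manifest; the image name, counts, and GPU request are hypothetical placeholders.

```yaml
# Sketch of a parallel batch Job; image name and counts are placeholders
apiVersion: batch/v1
kind: Job
metadata:
  name: mri-registration
spec:
  parallelism: 100      # run up to 100 pods at once
  completions: 100      # one completion per unit of work
  template:
    spec:
      containers:
      - name: register
        image: example.org/chris/mri-register:latest  # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1   # one GPU per pod, once GPUs are plumbed through
      restartPolicy: Never
```

When the Job finishes, its pods terminate and the capacity they occupied is freed, which is the "unused resources are given back" behavior described in the talk.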
So, that's ChRIS. But right now we are experiencing a bit of a dilemma. When we started this project, we started it with a very innovative principle: doing deliveries every six months. I think it is now the right time for us to start considering accelerating our development cycle. I'm not talking about how quickly people will upgrade; I'm just talking about how quickly we deliver new releases. We've now proven, with fast-forward upgrades, that you don't need to upgrade through each version in order to do an upgrade; we can skip three versions or even more. This is now proven; we now have customers able to do that. So I think it's now time for us to innovate faster and deliver faster, along with the other communities that are already on a streamlined cycle. That's my proposal today, and I hope we'll be able to discuss it further in the next few months.

In any case, the open infrastructure is our shared mission. We have tons of opportunities to collaborate, and I hope this week will be very fruitful. Thank you very much.