Well, hello, everyone, and thanks for coming to our talk. I want to tell you a little bit about what's going on at Hewlett Packard Enterprise and what we're doing with OpenStack and our products there.

If you think about how the summit has gone so far, it has really been an illustration of how OpenStack has gone mainstream. Look at all the different use cases we've seen, and what I call the 100,000-core club, which got clearly articulated in a lot of talks over the last couple of days: people running OpenStack at that kind of scale. We're seeing OpenStack get embedded in more and more ecosystems, because over the last year, year and a half, we've collectively done a tremendous amount as a community to make it more successful in different workloads, ranging from high-performance computing to telcos, where we have whole work streams on NFV.

When we think about the products HPE offers, we offer Helion OpenStack across all kinds of vertical customers. As we try to serve the largest 1,000 global enterprises and move out from there, what we have is customers trying to build sophisticated private clouds to accelerate the innovation their lines of business are creating, so that they can basically get IT out of the way. What I mean is, the line of business is creating software products and services, and what they want, in essence, is no meetings. They want the ability, with hundreds of APIs, to get their work done without asking anybody for permission, without asking anybody: can I do this? Can I get this?
Do I need a meeting to get some servers? What we're trying to do is provide sophisticated data center automation that covers all of those services, and we're applying it to a wide variety of customers and customer types.

The customers we work with tend to fall into a pattern that works pretty well for us. People have somewhere on the order of 5,000 to 15,000 applications, which is truly a colossal estate of code, and they need to figure out what to do with it. They're trying to evolve their computing infrastructure to manage cost, but also because some small fraction of these applications, hundreds, maybe a thousand, are rapidly evolving or need to rapidly evolve, and currently can't, because they're trapped in a virtualization infrastructure that doesn't give their developers the agility they need.

As you sort this out, what you end up seeing is that some number, 10 or 15 percent, don't change: just host them and let them run. The middle part of the applications are under some evolution, and those teams would like a more agile infrastructure so they can do the periodic releases they have to make to that code. That's where they want to use private cloud technology. A subset, 5, 10, 15 percent, makes sense to move to a public cloud for whatever reasons they've got. And then there's a whole set they built five, 10, 15 years ago that are really replaceable by SaaS applications today. Back then there weren't SaaS expense systems or building-management systems the way there are today; those are now very common SaaS applications. So we tell customers: look, this whole part of your estate over here, move it to SaaS.
That way you can focus on the areas where you're actually creating strong value. And for that stuff in the middle, in a private cloud, what you want is a wide set of APIs, instantly accessible, that let people build both virtualized and cloud-native applications. Now, you're doing this for a large company, so you need other things. You need protection: this application can't see that application. You need multi-tenancy. You need to be able to bill back, because your project has a budget of X tens of thousands of dollars of IT and other projects have different budgets, so we need to provide chargeback, and we need to provide security and compliance. A lot of the work the community has done, and we've done specifically, is on making OpenStack more compliance-friendly, and I'll talk a little more about where we're going in that vein. Customers have been very clear with us about what they need to see before they can put this into full production.

So, thinking about the last year: how did we get to where we are, and what's made things more and more successful? A few things have evolved when I think about OpenStack over the last year or so. HA for the control plane, and being able to do live migration of both the control plane and the data plane, is absolutely critical for folks. In the old days, scroll back two years, people would say: I'm going to deploy a new version of OpenStack, and now I have to repave the whole infrastructure. That's just not acceptable except in the most extreme cases. Now you don't have to do that; you can do live migrations and live control plane updates.
I mentioned security earlier. People want to run more and more of their e-commerce and their business-critical financial information on this, so certifications matter: being able to certify PCI, HIPAA, and other kinds of compliance needs to be possible within a product or OpenStack framework. Part of that is logging, metering, and monitoring: you've got to make things visible to the systems that then check compliance.

Then there's workload diversity. Think about the various structures people put in place: everything from a classic VM workload, where the VMs last for many, many months or even years, to people building more elastic VM-type applications, to those putting Kubernetes on top, where the workload itself is very dynamic but the VMs underneath are hidden from the developers. Over the last year we collectively made a lot of progress in these different areas, and we're starting to see the fruits of getting into the large enterprise and its large specialized domains.

Despite that progress, there's still a ways to go. I gave a keynote back in Vancouver where I talked about how hard it was to run OpenStack, and we have made tremendous progress in the last 18 months in making it more operationally effective, making it easier to skill people up as OpenStack operators. We still have a ways to go there; I think that's a continual journey. Until you're running 10,000 physical machines with a handful or fewer of people, I don't think you're there. So how do you keep approximating that, finding where the bottlenecks are and where the time goes in operations, and bringing that further and further down? And pushing to more scale: like I said earlier, we're seeing more and more people in the 100,000-core club.
It's really exciting to see people pushing between 5,000 and 20,000 physical machines across multiple availability zones in multiple data centers.

We're on the verge of having a multi-tenant Ironic. When we talk to customers, there's a very strong desire for bare metal as a service, and one of the challenges we have as a community is that we sometimes associate a strong use case with a single project, but they're not always the same thing. People want bare metal as a service, multi-tenant, with strong network isolation; there's a variety of things people want, and it takes four or five projects working together to make that happen for a customer. I think we're getting there; multiple projects have made a lot of progress here in the last year.

Then there are the business fundamentals: being able to back up your control plane, and VM high availability. Some of those come from other open source projects, and some are things we're doing generally in the community. And on the collection of SDNs, what we see is real maturation in the SDN landscape. More and more companies are adopting it; it's getting out of the leading edge and into the mainstream, and people have made choices there. Those choices then need to be reflected in having OpenStack adapt, evolve, and live within that existing network design.

So what I'd like to talk to you about is our latest product. Today at the summit we're releasing Helion OpenStack 4. Last summit we released 3; the summit before that, 2. It's an enterprise-grade product with all of our latest technology baked in, and it's based on a version of Mitaka.
We're very happy to have a lot of significant improvements, features, and capabilities in this release, and I'll talk a little bit about what some of those are.

The first thing is our third-party ecosystem and our ability to have third-party adaptations or plugins go through the migration and update process: a framework that lets us take the customizations people have made to their existing OpenStack installation, whether it's a 2 or a 3, and carry them forward into 4, so it's a much lighter-weight update process than we had previously.

There's a lot of increased performance, everything from work at the Linux level to network virtualization to DPDK and SR-IOV improvements. We're now much more ready for high-bandwidth VM applications, which you'll see is important in a lot of different workloads, NFV workloads in particular, and I'll talk a little bit about that.

The other big theme is operability. It's an area we've been focused on: understanding how to improve our monitoring, our auto-remediation, and our HA capabilities, so that for all the items that make it challenging to run an OpenStack instance, we've been working very diligently to reduce that complexity and shorten the path. We've also improved something we call Ops Console, our operator dashboard, which lets you take many actions: look at capacity, understand where bottlenecks are, and have a full view into your live deployment and the way the cloud is running.

On the network side, integrations with Nuage DCN, Midokura, NSX, and others are now built into the box. It's very easy to adapt to an existing complex data center, something we were not able to do previously but are very good at now.
We've even added some VM autoscaling, using Heat to look at the load and figure out how to scale VM-type applications.

As we've looked at the overall TCO, we've really tried to understand, for somebody running a thousand-node cluster, what it takes. How do you make it simpler to onboard people, and simpler to take the repeated processes that have to happen, shrink them down, and make them more and more automatic? Our framework has made a lot of progress there, and we expect even more progress in 5 once people are using this: streamlining operations activities, taking those manual courses of action people have to follow, and saying, well, if this happened, do that; computers are really good at that. So we do this, then do that, and tell you what happened, instead of asking you to run a nine-step script.

We also think about this for carrier-grade applications. When you're in discussions with carrier-grade telcos, they want many different pods of deployments, dozens or even up to a hundred or so different regions, and those need to be managed remotely, updated remotely, and installed through a remote installation mechanism. So, as I said, we're focusing a lot of core features on the communication service provider area. The core product is all upstreamed; we've got a lot of capabilities that let people run highly complex VM-type applications with the availability people are looking for, and it really simplifies NFV-type deployments in different scenarios. At HPE we're absolutely committed to open source and to providing all of our stuff as open source, whether it's our lifecycle management, our operations consoles, or the core bits of our system.
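The Heat-based VM autoscaling mentioned a moment ago is typically expressed as a Heat (HOT) template. As a hedged sketch, not taken from the product itself: a scaling group plus a scaling policy, with a telemetry alarm that fires the policy when average CPU crosses a threshold. All names, images, flavors, and thresholds here are illustrative.

```yaml
heat_template_version: 2015-10-15

resources:
  web_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 10
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-16.04   # illustrative image name
          flavor: m1.small      # illustrative flavor

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: web_group }
      scaling_adjustment: 1     # add one server per alarm
      cooldown: 60

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      threshold: 80             # scale up above 80% average CPU
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
```

A matching scale-down policy and low-CPU alarm would normally accompany this; the shape is symmetric.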
When we think about how we're moving forward and making this better and better for people providing this to telcos, there's a tremendous amount of lifecycle automation that needs to happen as you change and modify your network, as you orchestrate VNFs, virtual network functions, and put that kind of management system in place. We also have the OpenNFV partner program, which continues to grow that ecosystem of products and services. So we're really excited to have Helion OpenStack 4 out there. If you go down to the floor you can see more information on it, and we'd love to talk with you about what we can do for a variety of different customers around it.

With that, I'd like to tell you about another product we're launching today: Helion Stackato. Stackato is a cloud-native application framework; you may have heard of it before. It used to run only on top of Helion OpenStack; it now runs on top of many of the leading infrastructures: OpenStack, AWS, vSphere, and Azure, and it's also shipping today. We've re-architected it for customers looking for a multi-cloud, multi-IaaS application platform, which lets people write cloud-native applications running on multiple infrastructures simultaneously. You can spin up clusters in any of these four environments, manage those clusters from a unified command line interface and a unified control panel, and give your application developers access to resources in your own data center and, of course, on AWS and Azure as well.

Let me tell you a little about what's included in Stackato 4. The bottom level uses the open source technologies Terraform and Kubernetes to provide a control plane that auto-updates, auto-remediates, and has very flexible connections down into the lower infrastructure.
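The talk doesn't show how Stackato drives Terraform internally, but as a generic illustration of the kind of thing Terraform does at that bottom layer, here is a sketch of provisioning one cluster node on an OpenStack cloud with Terraform's OpenStack provider. Every name, image, flavor, and network here is a placeholder, not a real Stackato artifact.

```hcl
# Illustrative only: one cluster node provisioned on OpenStack.
# Cluster tooling would template many of these and wire them together.
resource "openstack_compute_instance_v2" "cluster_node" {
  name        = "cluster-node-0"   # placeholder instance name
  image_name  = "ubuntu-16.04"     # placeholder image
  flavor_name = "m1.large"         # placeholder flavor
  key_pair    = "ops-key"          # placeholder keypair

  network {
    name = "private-net"           # placeholder tenant network
  }
}
```

Because Terraform's providers share the same resource model, swapping the provider block is what makes the same approach work across OpenStack, AWS, vSphere, and Azure.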
On top of that, we provide a Cloud Foundry opinionated distro, which is Cloud Foundry certified. We also have what we call Code Engine, which provides a CI/CD environment for application developers who don't want to roll their own: integrations with Gerrit, Jenkins, Git, and a lot of the common application tools. For people who want to take this and maybe optimize it a little, it's a great way to get a CI/CD environment up and going very quickly, for teams that just want to work on the application itself and build 12-factor applications.

Off to the side, for the operators, we have a console that handles the multiple control planes, plus tools and a CLI that make it very production-ready, so operators can work with and manage these clusters in production. And just as you'd expect for cloud native, the platform itself is easily upgraded with no application downtime, because we're using underlying container technologies.

You may be asking what's in there beyond what you get from Cloud Foundry. You can import Docker images, and you can make very good use of a higher-level set of abstractions for your application developers and your IT developers. We think this really accelerates the ability to use cloud-native application frameworks, whether Docker or Cloud Foundry. It has integration with .NET and a lot of the classic dynamic languages. So we think this is an out-of-the-box application experience where you can be productive right off the bat.

And here's the obligatory feature list in very small font. The team really took this whole product to the next level: it's a completely re-architected, Cloud Foundry certified application platform, and we've really tried to get in the full set of capabilities we think an application developer and their operator need in order to trust it.
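The 12-factor applications mentioned above keep configuration in the environment rather than in code, which is what lets the same build run unchanged across dev, staging, and production. A minimal Python sketch of that principle (the variable names and defaults are illustrative, not part of any product):

```python
import os


def load_config(env=os.environ):
    """Read service configuration from the environment (12-factor, factor III).

    Each setting comes from an environment variable, with a development
    default used only when the variable is unset. Nothing is hard-coded
    per deployment, so promoting a build between environments changes
    only the environment, never the code.
    """
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(env.get("PORT", "8080")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }


# With no overrides, the development defaults apply.
dev_config = load_config(env={})

# A production platform injects real values instead.
prod_config = load_config(env={"DATABASE_URL": "postgres://db/prod",
                               "PORT": "9000"})
```

The platform (Cloud Foundry, Kubernetes, or a CI/CD pipeline) is then responsible for injecting those variables at deploy time.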
That means everything from LDAP and single sign-on to security, to giving you information about logging and how many resources an application is using, to being able to build and bind these applications to data stores. One of the things we offer is a wide variety of built-in data stores, things your application developers are very familiar with: MySQL, Postgres, Redis, RabbitMQ, Mongo. When you think about what application developers want access to, these are the kinds of data stores they expect. And it's very easy to build an adapter to other data stores we don't ship, ones outside our system. Informix, I think, is one that showed up: somebody really needed to connect to an existing data store they had that was 15 years old, and it was not that hard to take the framework, build an adapter to it, and then everybody could use that adapter to connect to that legacy data store. So we're very excited about the wealth of features and capabilities developers will be able to use right out of the box, and you'll be able to see that downstairs at our booth as well.

Maybe just to summarize: our portfolio is really based on open technologies. I've just talked about the latest releases of Stackato and Helion OpenStack, the core platforms we build on. On top of that, we have solutions. We have CloudSystem, an integrated hardware and software offering that includes both Stackato and Helion OpenStack: for customers that basically want to buy a couple of hundred nodes of private cloud in a box, land it, turn it on, and have it run, all pre-integrated and set up. And then there's our carrier grade offering, which I've talked a little bit about.
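Coming back to the data store bindings mentioned above: in Cloud Foundry-based platforms, a bound service's credentials typically reach the application through the VCAP_SERVICES environment variable, a JSON map of service label to bound instances. A hedged Python sketch of reading a MySQL binding; the exact labels and credential keys vary by service broker, and the sample payload below is fabricated for illustration.

```python
import json
import os


def get_binding(service_label, vcap_json):
    """Return the credentials dict of the first bound instance of a service.

    Cloud Foundry injects bindings as JSON: each service label maps to a
    list of bound instances, and each instance carries a `credentials`
    object whose keys depend on the service broker.
    """
    services = json.loads(vcap_json)
    instances = services.get(service_label, [])
    if not instances:
        raise KeyError(f"no bound instance of {service_label}")
    return instances[0]["credentials"]


# Payload shaped like a typical MySQL binding (all values are fake).
sample = json.dumps({
    "mysql": [{
        "name": "orders-db",
        "credentials": {
            "hostname": "10.0.0.5",
            "port": 3306,
            "username": "app",
            "password": "secret",
        },
    }],
})

# In a real deployment VCAP_SERVICES is set by the platform; fall back
# to the fake sample so the sketch is runnable on its own.
creds = get_binding("mysql", os.environ.get("VCAP_SERVICES", sample))
```

An adapter for a legacy store like the Informix case would simply register a broker that publishes the same shape of credentials block.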
The carrier grade product is an enhancement and an elaboration, with more features, on top of the underlying platform. So we've got a whole family of things, ranging from really straightforward out-of-the-box private clouds, through sophisticated systems that provide carrier grade, while focusing in the middle on the enterprise developer who wants a private cloud with cloud-native application technology.

At a high level, that's what we're announcing today, and we're excited to talk with you downstairs at our booth, walk you through things, and answer any questions you might have. I know we've got a few minutes here, so I'm happy to take some questions; we've got microphones, I think there's one over here. Anybody have the first question? If not, I'm also happy to take conversations on Twitter, I'm @Interrante, so you can tweet at me and have a conversation, and happy to meet up if you want to grab a cup of coffee or likewise. No questions? Okay. I'll be up here for a few minutes if anybody wants to catch up or talk. Thank you very much, I appreciate your time. Thanks.