All right. Next up, I'd like to welcome my good friend Dave Wright from SolidFire. Come on up, Dave. Thank you. Bear with me for just one second. I saw on Twitter that they were running an OpenStack selfie contest. And I think I just won. So, thank you.

So, it's great to be here in Atlanta. In fact, four years ago, I actually left Rackspace to start SolidFire right here in Atlanta. And I started SolidFire with a very specific mission: to advance the way the world uses the cloud. And today, I couldn't be happier to be back here with a community of companies and individuals who I believe share a similar commitment to advancing the way the world uses the cloud.

But how did we get here? We're here today because the public cloud has raised the bar for how IT services can be delivered as infrastructure, platform, and software as a service. The benchmark that's been set by the public cloud, as we've heard, is about agility and efficiency. It's about speed and time to value, more so than just about price. And in many cases, enterprises are now being forced to respond to this benchmark set by the public cloud. In some cases, they're taking the pragmatic approach of adopting public cloud services directly. And we're seeing this adoption from the smallest startups to the largest enterprises. But in many other cases, they're attempting to replicate the agility and efficiency of the public cloud in their own data centers. In either case, one thing is clear: the siloed model of data center infrastructure is dead, and the shift to a pooled model provisioned and managed through software is well underway. And this shift has brought a sea change to both enterprise and service provider data centers. In fact, the entire architecture of the data center is changing, and the vocabulary we use to talk about the data center is changing with it.
From single tenant to multi-tenant. From physically isolated workloads on dedicated infrastructure to mixed workloads on shared infrastructure. From scale-up to scale-out. From project-based IT that takes months to deliver to self-service that can be delivered in minutes. And all of this is being driven by a shift from manual administration and configuration to automation.

Now, you can get out your buzzword bingo card, because I've got them all on this slide. The fact is, there are a lot of names that people have for this transformation. Whether you want to call it private cloud, the software-defined data center, or IT as a service, at the end of the day, all of these names describe a next-generation data center that's agile, scalable, automated, and predictable. Everything the legacy data center is not.

And coupled with this shift in data center architecture is a fundamental change to the way applications are being designed and deployed. We're moving from a legacy world where applications ran on a computer, stored their data in files, and gathered input from a GUI, to one in which applications span hundreds or thousands of virtual machines, data is stored in scalable databases and cached in scale-out memory tiers, and interaction occurs via an API, a web interface, or a mobile app. The way that developers think about developing, packaging, and deploying their applications is now radically different than it was even just a few years ago. In effect, the entire data center has become a big computer. Developers now think about programming infrastructure in much the same way that they used to program a single server. They spawn virtual machines instead of processes, they create databases instead of files, and they instantiate load balancers and web tiers with a single request. And if the data center is a big computer, OpenStack is the operating system for it.
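The "spawn virtual machines instead of processes" idea above can be made concrete with a small sketch. This doesn't talk to a live cloud; it just builds the JSON request bodies that OpenStack's Nova (compute) and Cinder (block storage) REST APIs accept, the way a developer once assembled arguments for a single server's system calls. The image and flavor identifiers and sizes below are illustrative placeholders, not values from the talk.

```python
# A minimal sketch of "programming the data center" through OpenStack's APIs.
# We only construct the request bodies for two calls:
#   POST /v2/{tenant_id}/servers  (Nova)  -- spawn a VM, not a process
#   POST /v2/{tenant_id}/volumes  (Cinder) -- create a volume, not a file
import json

def server_request(name, image_ref, flavor_ref):
    """Body for Nova's server-create call."""
    return {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}

def volume_request(name, size_gb):
    """Body for Cinder's volume-create call (size in gigabytes)."""
    return {"volume": {"name": name, "size": size_gb}}

# A whole web tier is instantiated with a loop, not a purchase order.
web_tier = [server_request(f"web-{i}", "ubuntu-image-id", "m1.small-id") for i in range(3)]
data_disk = volume_request("app-data", 100)

print(json.dumps(web_tier[0]))
print(json.dumps(data_disk))
```

In a real deployment these bodies would be POSTed to the cloud's API endpoints with an auth token from Keystone; the point here is only the shape of the interaction.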
It provides the fundamental software-based services and APIs that developers and administrators can use to develop, provision, deploy, monitor, and scale their applications. Never before has there been an open platform with the wide scope and deep industry support that's required to succeed at this new data center scale. The opportunity for OpenStack as a key element of the next-generation data center cannot be overstated, and I believe that's why so many of you are here today. But how are we going to get OpenStack deployed? And for those that are in the middle of this data center transition, how do we bridge the gap from the legacy data center to the next generation?

So let's take a step back. Like many of you, I have fond memories of building my first computer. I built it with my father in 1988. It was a 286 with a megabyte of RAM running DOS 4, the latest and greatest. I researched it for months. I read all the component reviews, and I ordered the components from catalogs or at computer shows. And it took a while to get it all working, between a bad power supply and some flaky RAM chips. But to this day, I still enjoy building my own PCs when I can. And many people deploying OpenStack today are like me 26 years ago, building a computer from scratch. They're creating their own next-generation data centers and using OpenStack as the operating system. And these trailblazers, several of whom we've already heard from today, are setting an example for those who follow. And I'd like to bring one of them up on the stage today. So please welcome Subbu Allamaraju and John Brogan from eBay.

Hey, Dave. Hey, Dave. John, good to see you. So welcome, Subbu and John. Let's start with you, Subbu. Tell me, what are the biggest challenges you were facing at eBay that led you to explore OpenStack?

So I think it was just what you were saying before: agility followed by efficiency. Agility for developers and operators. Efficiency for the infrastructure.
So before OpenStack came along, we spent quite a bit of time automating compute provisioning for certain kinds of workloads, which provided great benefits in agility. But our next challenge is to expand that across compute, network, and storage, the entire data center, for all kinds of workloads, thinking more like a public cloud provider, where any user in eBay, any team, can come to us for infrastructure for any kind of workload without thinking about it. Of course, the second part of the challenge is to do it efficiently so that we can sustain it for the long haul.

Excellent. So John, tell me more about how eBay is using OpenStack today.

So the way we're using OpenStack, I take two points of view on that: an external viewpoint and an internal one. Externally, kind of what Subbu said, our goal, our charter, is to provide a public cloud-like experience to our end users, with all the things that we get from cloud. Internally, as an IT organization, we see some benefits there. The first one that comes to mind: as a pre-web, pre-cloud company, we have an enterprise infrastructure, with silos of compute, network, and storage. We've done a lot of automation in those areas, and it's a lot of effort to tie those silos together and also to maintain it. So we see OpenStack as a way to deprecate that work and just get the automation that is inherently built into OpenStack. The second way we're using OpenStack is as a development platform for engineers. And when we say development, we're talking career development. In an IT infrastructure mindset, hey, if I was in storage, it would be pretty rare for me to go outside that domain; I might even get my hands slapped. In deploying and operating OpenStack, it's critical to go beyond what you used to know. I can no longer focus just on storage; I need to be knowledgeable in networking and other domains.
So there's also the part that comes with the open source tools that go into operating a cloud. Things like Graphite, Elasticsearch, Logstash, Chef, Puppet, all these things that, hey, used to be off-limits to me. Now they're part of my job. So we see that as a win for the engineer and also for the organization. The third thing that we see with OpenStack, and we saw it on the screens with a lot of the super users yesterday, is that it is now a gate to get into the infrastructure. You know, three or four years ago we used to be talking to vendors about, hey, we need a programmatic infrastructure, we want RESTful APIs, and sometimes we'd get blank stares, or they'd want to help, but it was just too difficult to pull off. That's changed through the community coming together and users asking for it. Now our lead question is, hey, tell us what your OpenStack story is, and we're actually getting responses. So it's been wonderful for what the community has put together and what users are asking of vendors.

Excellent. So Subbu, tell me about eBay's business. How has it been positively impacted by your adoption of OpenStack?

So before I jump into the business side of it, let me say a couple of things. As an engineer, I can tell you our developers love what our team is doing, because they feel empowered to create the infrastructure they need to try out all kinds of things. By extension, they love all the folks in this room for creating this platform called OpenStack. Oftentimes they Google for OpenStack and figure out how to do things on their own. They create infrastructure. That's the kind of empowerment that's enabling us to create new kinds of things that we didn't think of before. The second part is quite interesting for platform builders within the company. Now, if you're building a tool for automating software deployment, monitoring, or all kinds of config management, there is one clear interface between infrastructure and those kinds of platforms.
So the standardization of infrastructure is helping us standardize those layers above. And coming to the business side of it, as you said before, onboarding workloads used to take a long time, with a lot of preparation, from data center white space to infrastructure and networking to application deployment. Now that part is taken care of; there is standardization. The question now is simply how to get an environment, or a virtualized environment, onto the cloud without starting from the ground up. So the time it takes to onboard workloads is coming down.

So the speed theme again. John, how do you see the use cases for OpenStack evolving in the future, particularly around block storage in eBay's cloud?

So in our journey so far, it goes right back to, hey, we really just want to expose the native APIs and give that public cloud-like feel to our developers. One side of the story is, once they have resources, there's no end to what they'll create. There are no restrictions anymore. They want to build an Elasticsearch cluster? They're free to do so. They can build a Hadoop cluster all on their own. So that's one driver we see for block storage. And the other one we see: eBay and PayPal both have significant data platforms, transactional platforms. And what we see is, just as you started by virtualizing your stateless apps first and then virtualizing your more important apps, it's the same way with cloud: put your stateless apps on OpenStack and get comfortable. Well, I think now is the time for the data platform apps to be hosted on OpenStack. And we see Trove, and the work that's gone into it in the last couple of cycles, as a key enabler for that platform.

Fantastic. All right. Well, thank you, John. All right, Dave, thanks. Thank you, Subbu. Thanks, Dave. Fantastic.
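The database-as-a-service idea John describes can be sketched in the same "program the infrastructure" style. With Trove, a developer asks the cloud for a database much like they ask Nova for a VM. The sketch below only builds the JSON body for Trove's instance-create call; the flavor ID, volume size, and database name are illustrative placeholders, not eBay's values.

```python
# A sketch of requesting a database from OpenStack Trove:
#   POST /v1.0/{tenant_id}/instances
# One call provisions the DB host, its Cinder-backed data volume,
# and an initial schema -- no storage or DBA ticket required.
import json

def trove_instance_request(name, flavor_ref, volume_gb, db_name):
    return {
        "instance": {
            "name": name,                      # database instance name
            "flavorRef": flavor_ref,           # compute flavor for the DB host
            "volume": {"size": volume_gb},     # data volume size in GB
            "databases": [{"name": db_name}],  # schema created at launch
        }
    }

req = trove_instance_request("orders-db", "m1.medium-id", 10, "orders")
print(json.dumps(req, indent=2))
```

As with the earlier sketch, a real client would POST this body with a Keystone token; the shape of the request is the point here.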
So eBay is a great example of a company that has embraced OpenStack to increase their agility and speed, and is now embracing it on an even wider scale as they start moving their data apps onto the OpenStack platform. Now, most people today don't build their own PCs. And many enterprises and service providers aren't going to take the do-it-yourself approach to the next-generation data center. In fact, the growing popularity of converged and hyper-converged infrastructure offerings demonstrates a strong desire to simplify and accelerate the deployment of new infrastructure services. But in many cases, these converged infrastructures come with a cost: the cost of lock-in. Lock-in to a vendor's hardware stack or software stack or both. Exactly the thing that many of you are here at the OpenStack conference to avoid. So the question is, is there a way to combine the simplicity and convenience of a converged infrastructure with the openness of OpenStack?

I believe there is. And today we're unveiling SolidFire Agile Infrastructure, the first best-of-breed converged infrastructure for OpenStack. SolidFire has partnered with the company that led the open PC revolution, Dell, and one of the undisputed leaders in the open software movement, Red Hat, to deliver a pre-validated and thoroughly tested converged infrastructure design based around OpenStack. SolidFire AI for OpenStack is a reference architecture for a scale-out cloud. It's based around commodity hardware and optimized to run a wide range of enterprise workloads. And you can download the reference architecture starting today from the SolidFire website. AI for OpenStack delivers 50% lower cost, 80% less space, power, and cooling, and over two and a half times the storage performance of legacy converged infrastructure stacks. And we've heard today, again, how important that speed and performance is.
And the operational benefits of an OpenStack architecture based on the AI reference design are notable as well. The agility of a true software-defined pool of compute, networking, and storage, combined with automation that allows for deployment from bare metal to a fully operational OpenStack cloud in under an hour. The ability to linearly scale compute, networking, and storage to thousands of cores without downtime, combined with the groundbreaking guaranteed quality of service for storage that SolidFire is known for. The reference architecture starting point is 360 cores, over 60 terabytes of block storage capacity, and 250,000 guaranteed IOPS, in just a 27U footprint. And it's running in our booth today.

So joining me on stage today to talk about the opportunity for OpenStack and Agile Infrastructure to accelerate the transition to the next-generation data center are Mike Warner from Red Hat and Steve Stover from Dell. Come on up. Hey, Dave. Mike.

All right. So, Steve, we heard a little bit from Dell yesterday, but tell me a little bit more about how Dell sees the investment you're making in OpenStack. Why are you spending so much time and resources in this ecosystem?

I think the quick answer is pretty simple: our customers want us to. And another quick answer is, all you have to do is look out here. A lot of interest and a lot of opportunity. But to rewind the clock a little bit and give some more background, Dell has been involved in the OpenStack movement since it was announced in 2010. We've been an active board participant, we've spun up our own complementary open source projects, and we work with great partners like Red Hat to deliver co-engineering and drive innovation in the community. What's also been interesting, as a solution provider in this space, is that Dell has supported some early-stage customers around OpenStack, from our hyperscale customers who are really driving their entire business on OpenStack.
And now we're starting to see that adoption in enterprise IT. So the opportunities for Dell and the ecosystem and our customers are pretty tremendous. And if you think about what's gone on since OpenStack was announced, we've come a long way in a very short amount of time, and we talk about great companies who are driving innovation and deploying clouds on OpenStack. There have been companies on this stage from AT&T and, earlier, eBay and Wells Fargo. These are all the types of customers that Dell has a great relationship with, and we continue to see that opportunity. From that perspective, the opportunity really abounds, and we think that Dell has a unique opportunity to provide differentiated value in delivering solutions, again, with great partners like Red Hat and SolidFire. And so we're continuing to invest very heavily in the things that we do directly as well as in the ecosystem.

Excellent. And Mike, where does Red Hat see the largest opportunity in the year ahead for OpenStack?

Well, you know, we've been talking about opportunity here for the last day and a half, and sometimes you can confuse opportunity with challenges. And I do think there's a challenge for everybody in this room to be driving more towards consumability of OpenStack. I think the Agile Infrastructure investment that we're making together as companies is indicative of the fact that we need to be able to move into a more consumable scenario for our customers and developers and those that are looking to go and deploy OpenStack. Clearly, there are product roadmaps that we see as an opportunity, not just for Red Hat but for the room, as we build an ecosystem of great partnerships.

Excellent. So, Steve, tell me about how SolidFire's effort with Agile Infrastructure aligns with Dell's strategy to make it easier to bring OpenStack to the enterprise. Sure, Dave.
We have a complementary vision and a common vision about what needs to happen in enterprise IT organizations. As I mentioned a little bit before, among the things that Dell is doing directly, we're supplying cloud providers with OpenStack-based solutions, and we have our own solutions that we provide to enterprise IT. From an ecosystem approach, we're very well aligned with the strategy that SolidFire has, and we provide our best-of-breed capabilities from a component perspective. In the case of Agile Infrastructure, that means incorporating Dell networking and server technologies to help our ecosystem partners realize the opportunities that they see, and again, it's a very common vision. Another place where I think we share a common vision is that we really want that hardware layer in the cloud to operate at the speed the enterprise IT user expects. You know, we've talked about concepts at the conference like shadow IT and how public cloud providers are putting pressure on enterprise IT to deliver faster. OpenStack is a great answer for that as a cloud platform, but what's really interesting is that the average IT user, and you've touched on a couple of these topics, can't afford to invest the time or the resources to deploy a team of OpenStack ninjas. We've got a lot of great ones in the audience here, but what they want to do is leverage their best resources to provide strategic value to the business and allow those ninjas to really drive that innovation, and it's a common theme. So one of the things that I think we're really well aligned on is bringing that operational view, to deploy and take the day-to-day operations off of those individuals to help them go drive that innovation.
It is a place where we're very well aligned, and the approach SolidFire has taken with Agile Infrastructure, delivering a reference architecture, an integrated system-based approach, to really drive that time to value, speed the time it takes to stand up the cloud, and make it simple to operate, is really key. And that's something that Dell invests in, and invests in again through the ecosystem of partners, along with Red Hat and SolidFire.

Excellent. And Mike, how do you see the partnership between Red Hat, Dell, and SolidFire around AI bringing this vision of the next-generation data center to reality for your customers?

Well, it's really an ecosystem at work. And, you know, we've talked about, through the lineage of Red Hat, certification, interoperability, making sure that there's customer confidence that things just work, which is really, really key to driving a platform. And that's an example of what we've done here with this reference architecture: the speed and agility of the new data center, using hardware, software, and storage components that the three of us bring together today, is an example of being able to utilize that ecosystem. Again, this is not about the three companies. We're talking about the ecosystem and customers, and taking that and moving it into an open hybrid cloud approach brings that confidence and consumability of OpenStack. I think everybody in this audience wants that, whether you're a technology partner or a consumer of the technology itself. We've seen great advancement over the last 18 months, and we're really excited about the future.

Excellent. Thank you, Steve. Thanks, Dave. Mike. Thanks. Appreciate it.

So we heard from Steve and Mike how we're going to deploy OpenStack, get it up and running quickly, and get a more agile infrastructure.
And beyond the simplicity, efficiency, performance, and economics of SolidFire AI, I think the thing that I'm most excited about is that it is based around an open architecture. The SolidFire AI reference architecture doesn't lock customers in to specific compute, networking, or even storage hardware. Even the software layer can be changed out. SolidFire AI provides a platform that allows customers to fully leverage the OpenStack ecosystem over time, wherever that innovation comes from. The best of converged infrastructure with the best of OpenStack: that's SolidFire AI. So whether you want to do it yourself like eBay or jump-start your OpenStack cloud with Agile Infrastructure, come see AI in the SolidFire booth this week in the Expo Hall. Today at 11, we'll have representatives from SolidFire, Red Hat, and Dell available to answer your questions. And if you can't make it this week or are watching this online, we're bringing the show to you in a seven-city road show starting next month. Finally, on behalf of myself and the whole SolidFire team, I'd like to thank all of you so very much for being part of this wonderful OpenStack community. Thank you very much.

OpenStack has impacted our team building in a number of ways. One is that we now draw on a much broader group of resources than we had before. Developers nowadays are not just folks coding; they're also participants in a broader community. When we're participating in the meetups, when people are looking for work, they know who we are and what we're doing, and so it helps us hire people, it helps us get features developed, it helps us have a community we can go to and ask questions of. If we're involved with the community, we can also attract talent for our own organization. The software development mentality that started in our software development team has spread to everyone who supports and utilizes OpenStack throughout the entire company.
The more people get involved in the community, the more people there are out there who have worked with this product and are potentially available to hire to help us work on it. The way that we work, I've seen my teams and the teams that I'm on become more and more distributed, with more and more remote workers. Because now, if I'm a developer and I can work on OpenStack, I don't just have to go to one or two companies; there are companies all over the place that may want me to work for them. I would say that's one of the huge benefits of OpenStack, and of open-source software in general: you really feel like your career is not tied to the success of a company that may or may not survive. Working with the OpenStack community has actually been a lot of fun. From a technologist's perspective, it's been a huge amount of fun to work with lots of smart people who have some very cool ideas about how they want to develop the software and what direction they want to take it in.