Okay, great, I guess we can get started. So first off, thanks everybody for joining. My name is Eric Sacks. I'm one of the engineering directors at Oracle. I work in the systems division and I'm responsible for our efforts around our Solaris-based OpenStack distribution. Vikas, you wanna introduce yourself? Yeah, my name is Vikas and I work on the Oracle OpenStack for Oracle Linux distribution, also for Oracle. Great. So we've been working on OpenStack for quite some time. I'm trying to remember if this is, what, our fifth or sixth OpenStack summit. I think the first one that we joined was the Portland summit, which was quite some time ago. I think there was a grand total of, what, like three of us. And now in total, we probably have around 50 or 60 Oracle people here. There's actually quite a lot of work happening around OpenStack across Oracle, across our product portfolio: on the systems side, on the application side, on the storage side. So we'd certainly invite you to stop by our booth. It's A2, kind of on the left as you come inside the Expo, and you can see some demos and hear about the various areas across Oracle where we're working on OpenStack. So when thinking about what we wanted to talk about this go-around, one of our real passion areas, as we work with our customers, as you can imagine, many large-scale traditional enterprise customers, is how can we really help accelerate their adoption of cloud? And it's something that we find many of our customers are very excited about. The good news from our perspective is that just about every customer we talk to aspires to move to cloud-based management of their infrastructure.
They clearly understand the benefits of doing that: the agility they see, everything they stand to gain from being able to manage their infrastructure, compute, storage and networking, as a cohesive system, have all that be software-defined, and really reduce the amount of time it takes between actually needing infrastructure and getting it, by leveraging all of the automation that OpenStack has to offer. So many of our customers are very interested in that. And in the recent survey that Talligent put out, their State of OpenStack report, the data that they found is actually pretty consistent with our findings. If you look at this pie chart, they surveyed a pretty wide range of groups across virtualization and cloud about how they're understanding or using OpenStack. 30% of the survey respondents are actually using it to support production workloads. Another 30% were evaluating it. Another 36% were familiar with it, though not yet using it. And only a very tiny slice of that pie, 2%, had never heard of it at all. And this is actually pretty consistent with what we see as well, when we talk with our customer base, which may be a little bit different a sample set than this. If we ask folks in the audience who's heard of OpenStack, nearly every hand in the audience goes up. But when we then ask the next question, who's actually been successful deploying OpenStack, there's quite a different response. Maybe half or fewer of those hands that went up stay up. So we're spending a lot of time thinking about how we can really improve that, how we can accelerate adoption within the enterprise. And one of the most common things that we find, one of the biggest, is that deploying an OpenStack cloud still really isn't easy.
Deploying a single-node OpenStack cloud really isn't that difficult. Things like DevStack make this very easy. Nearly every vendor out there, including Oracle, offers a really quick and easy way to evaluate OpenStack: deploy a single VM or a single node, take it for a spin, see what it's like. But there's a huge difference between that and what it actually takes to roll out a production-scale OpenStack cloud. And what our customers find is that even if they have some familiarity with OpenStack's architecture, even if they understand what Nova, Neutron, Cinder, all of these services do, there's a huge amount of expertise they have to gain in order to successfully deploy and operate a production-scale cloud. One of the things that has certainly made this easier is that many vendors offer cloud installers that simplify the process of doing a multi-node installation. But even these come with their own sets of challenges. In some cases they make the initial job of deploying the cloud easy, but they may not handle everything after that, from upgrades to lifecycle management, and they may not provide the tooling that's necessary for operationally managing the cloud. And the thing we find about clouds is that in many cases they're like snowflakes: the architecture that might be right for one customer might be entirely different for another, depending on their requirements. So there's a certain amount of deployment complexity to overcome there. And then in the context of a large, truly scalable cloud, we get asked, well, what does it take to actually deploy a cloud that's highly available and that can scale from one node to many hundreds or many thousands of nodes?
And then you start realizing, well, this isn't just my three-node OpenStack cloud, my control plane, my compute, my storage. That may be an okay start just to evaluate this, but if I truly want something that's going to be highly available, and that will scale as my load scales, you really need to start looking at OpenStack's undercloud services more like microservices, deploying them out in a scale-out fashion, maybe making use of load balancers. And it gets to be quite a bit more sophisticated. And many of the existing installers out there, of course, don't have this level of sophistication yet. So there's a trade-off that exists: on one hand, you can have a cloud that's relatively simple to deploy, that's easy at the deployment stage, but later on it's hard to scale that up, and you may run into barriers there. On the other hand, if you want to plan for this from the very beginning and deploy a cloud that's going to be solid from an operational perspective, there's a lot of initial deployment complexity to overcome. And along the way, another piece of feedback we frequently hear is that there just isn't the availability of tools necessary for understanding, as I'm trying to configure this and something goes wrong, how do I diagnose what actually went wrong? How do I trace that back? There's a lot of anecdotal expertise that our customers have to come up to speed with, and the documentation doesn't always have all the right answers. So again, looking back at what Talligent found in its State of OpenStack report, some of the things that we're hearing from our customers indeed show up here: lack of deployment tools, lack of tools for enabling folks to effectively operate the cloud, how do you go about defining a security model that makes sense?
So out of the box it meets your needs around compliance, things like that. These I think are very common, but very real, concerns for folks in the enterprise, and to some extent they may contribute to why there's a bit of a drag on adoption for many of our enterprise customers. The other thing we've certainly identified, and even in the previous talk I noticed HP made reference to this as well: when you look at the set of things our enterprise customers run and you look at the workload architectures, many of the workloads that are considered mission-critical were designed some time ago, and they have very specialized needs around the infrastructure they run on. In fact, many of these workloads were actually designed along with the infrastructure that hosts them. So now, as an enterprise customer, when you have many of these applications and you really want to move them all into the cloud, how do you effectively do this when the app and the infrastructure were baked together? Does your cloud really provide the infrastructure those applications need to run effectively? And this is where the talk about apps as cattle versus pets tends to come up. OpenStack certainly seems to do very, very well for the cattle-type applications, where they're designed to be cloud-native from the beginning, they're designed to scale out, and they're very resilient in that if one of the virtual machines dies, or even one of the physical nodes dies, that VM is effectively stateless and so it can just spin back up. But the reality is that many enterprise applications were designed such that they actually trust the underlying infrastructure, and so it's very critical for these workloads that the infrastructure can meet their needs.
And again, this is one of the things they highlighted: their forecast is that OpenStack should be able to handle just about any workload in short order, with the exception of pets. So it's obviously a pretty key focus area for us. Oracle invests quite heavily in the infrastructure that runs many enterprise mission-critical workloads, and so one of our key focus areas is looking at how we can bring the kind of infrastructure that traditional enterprise workloads need into the context of OpenStack, so that customers don't have to choose between one or the other. And there's actually a lot of very interesting work that still needs to be done here. One of the things we're thinking about is how we can allow workloads to specify metadata about what they actually need from the underlying infrastructure, so that quality of service can be specified. Many of the things these workloads were able to take for granted when running on dedicated infrastructure, they actually need to specify when running in the context of a cloud, so that the underlying infrastructure can provide that and make these workloads run the way they were designed to run. So a lot of work to be done there. And just as an example, a few years back, Toby Ford from AT&T was speaking during one of the keynotes at the Atlanta summit, and he mentioned that they had found quite a bit of success with OpenStack. They were able to bring many of their workloads over into the context of an OpenStack cloud, but the reality is that these sorts of pet workloads have to be handled and have to be managed. And certainly for us, this is one of the challenges we've undertaken: what can we build into the underlying cloud infrastructure to better cater to the needs of these workloads?
So around the time I was putting this presentation together, I was thinking about OpenStack, since we have been working on this for some time now, and I really love Gartner's hype cycle. If you haven't seen this before, what it describes, for any generic technology, is the cycle it goes through. Obviously, in the very beginning, the technology is going to solve world hunger. Everybody gets very excited about it. There's lots of investment around it. So the hype, and certainly all the expectations, rise very quickly, but many of those expectations are overblown. And as folks realize this, there's what's called the trough of disillusionment, which I think is a really accurate, if somewhat humorous, term. And then over time, as the underlying technology continues to mature and folks' expectations get set more in line with where the technology is, and they both sort of catch up, you pull out of this trough and there's more and more productivity. So I have a rough idea of where I think OpenStack is on this cycle. How many folks in the audience think we are somewhere on the left side of the peak of inflated expectations for OpenStack? David, do you think we're on the left side of the peak of inflated expectations? Anyone else? How many folks think we're on the right side of the peak of inflated expectations for OpenStack? Okay, well, there's only two sides, and not everybody raised their hands. I actually think, and maybe this is different by sector, that we're coming out of the trough of disillusionment.
I think this may account for what many of our customers are telling us. They're obviously all very familiar with OpenStack, but I think it's taken time for them to get familiar with it and get an understanding of where, today, it works really, really well, but also where more investment is really needed for them to find more success with it in the enterprise. So I think we're coming out of it, but there's definitely more work to do, and focusing on some of those areas we just talked about is going to be key for that. So this is really what we're focused on: what if we could actually invest in building a better cloud infrastructure that's easier to deploy and manage, so we can not only do a really good job of handling cattle and cloud-native workloads, but also effectively meet the diverse workload mix that exists in the enterprise today? That's a lot of what we're focused on. I'll talk a little bit about some of the specifics of what we have and what we've been doing in this area on the Solaris side, and then Vikas will certainly talk about some of those same things on the Linux side as well. As we've been looking at our strategy in Solaris around OpenStack, one of the key things is how we can take advantage of many of the native features and technologies that Solaris has to offer, and certainly what things are most critical to enterprise customers; clearly security and compliance are at the top of that list. So in our OpenStack distribution, some of the work that we've done: all of OpenStack's undercloud services will run with the least amount of privilege necessary for them to get the job done, so nothing runs as root. We also have some features in the OS that effectively allow you to lock down both VMs and the host environment, with a feature called immutability.
The nice thing about this is it basically allows you to treat your undercloud infrastructure as an appliance. And really, this is the way you should be treating it. You really don't want administrators logging into your production cloud environment and making changes in your control plane. Once it's actually set and working, you want to lock it down; it should be an immutable environment. So it's nice that we have the services running with the least amount of privilege necessary, but even if somebody were able to break into the system, they wouldn't be able to make any changes that could compromise the environment, because the environment is effectively read-only at that point. And we have some specific profiles that we've introduced, for the Nova compute node for example, that take advantage of this. All of OpenStack is delivered via the Image Packaging System, so we have all the dependencies expressed, and all of the services run as SMF services, so there's automated service restart and there are dependencies between services. We make heavy use of ZFS within Solaris. The nice thing about this is that it's very easy to roll back if something were to go wrong in the process of an upgrade. It's very easy to snap off a new ZFS-based boot environment, so you can instantly roll back to a functioning environment, and that's all very integrated with the packaging system. And then overall, using our OpenStack distribution, you can provide tenants access to SPARC environments and x86 environments, virtualized via Solaris zones and kernel zones, and also bare metal via Ironic. This slide pretty much graphically captures a lot of what I described across the various Solaris technologies that we're building on top of. And then obviously ZFS is a huge differentiating technology that we build across. We ship the Cinder driver for the ZFS Storage Appliance.
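The boot-environment rollback described above looks roughly like the following on Solaris; this is an illustrative sketch, not a runnable recipe, and the boot environment names are made up:

```shell
# Upgrading via the Image Packaging System snapshots the current ZFS
# boot environment and applies the changes to a new one.
pkg update

# List boot environments; suppose the upgrade created "openstack-2"
# and the previous, known-good one is "openstack-1".
beadm list

# If the upgraded control plane misbehaves, activate the old BE and
# reboot to roll the whole undercloud back instantly.
beadm activate openstack-1
init 6
```

Because the rollback is a ZFS clone activation rather than a package-by-package downgrade, it is effectively instant regardless of how large the upgrade was.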
So that's integrated. It's actually possible, too, if you just have a generic system that's storage-rich with a bunch of disks, to put those disks in a ZFS zpool, run Cinder, and basically hand out LUNs over iSCSI from a generic box. And the nice thing about that is you're still leveraging ZFS under the hood, so that gives you snapshots, compression, encryption; all those features are available to you, provisioned to Nova instances via Cinder. So I'll hand it over to Vikas to talk more about the Linux-based distribution. Is it on? There we go. So from the Linux side, we've spent a lot of time on this. We started long after the Solaris group started, doing the Linux distro for OpenStack. And as we went along, we quickly realized that the standard for all private cloud infrastructure is going to be OpenStack, and that's going to be the way most customers and most people end up managing their data center resources as a cloud. We also decided from day one that we need to support all the hypervisors we at Oracle have on the Linux side of the house, like OVM and KVM, and we wanted to support a heterogeneous infrastructure, like with Docker containers. So to give you a simple overview of what we have today and what we support: you can see there's KVM and Oracle VM Server, and we also support Hyper-V as a tech preview, so you can actually run Hyper-V as well. The one interesting point I want to talk about is on the right, the MySQL Cluster. We do not ship the Galera cluster like everyone else; we actually opted for MySQL Cluster, which we find is a lot more scalable and a lot more manageable than Galera. So for our release, we spent a lot of time getting MySQL Cluster working and correctly integrated into Oracle OpenStack for Oracle Linux.
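For the generic storage-rich box case above, the Cinder side mostly comes down to a volume backend pointed at a local pool. This is a hedged sketch: the driver path is an assumption based on Oracle's Solaris Cinder drivers, and the pool and backend names are made up, so check the distribution's documentation for the real option names:

```ini
; cinder.conf -- hypothetical fragment for handing out iSCSI LUNs
; backed by a local ZFS pool on a storage-rich box.
[DEFAULT]
enabled_backends = zfslocal

[zfslocal]
; Driver name is an assumption, not verified against a release.
volume_driver = cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver
zfs_volume_base = tank/cinder
volume_backend_name = zfslocal
```

Volumes carved out this way still inherit ZFS snapshots, compression, and encryption underneath, which is the point the talk is making.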
So to give you a quick overview of how this actually works: the Oracle Linux distribution for OpenStack is all Docker-based. If you take a look at the picture, we start with the OpenStack Docker images that we ship to you, and soon they will all be available on the Oracle Container Registry, so you can get them directly from Oracle. We do suggest people use a local Docker registry to cache them; I don't think everyone wants to open up all the Nova nodes to the internet. And then you have a very simple tool that we wrote on top of the container technology we use, which is the upstream OpenStack Kolla project; I'll talk about it in a minute. And once you configure it, you just hit deploy, and it actually deploys to all the nodes automatically. It's actually pretty fascinating seeing how it works. So, containerizing OpenStack, what does it really mean? OpenStack, as everyone knows, has many, many, many services. We started with the base services, the DefCore services, and we picked the OpenStack Kolla project to start containerizing them. Each service has one or more containers as well; Nova, for instance, has the API, the scheduler, the conductor. So when you containerize this, you end up using a lot of containers; Nova, I think, ends up being four or five containers. So it ends up being hundreds of containers that you have to ship with this product to actually deploy it. And that presents a brand-new problem, right? You went from 100 or 200 RPMs to 100 or 200 containers, and you have to manage that. So what does it buy you? It looks like you're just moving the ball around: RPMs had issues, and now we've moved to containers, which have this other issue that we have to manage all of them.
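Pointing the deployment at a local caching registry, as suggested above, is typically just a couple of settings in the deployment configuration. A hypothetical Kolla-style fragment follows; the hostname and namespace are made up, and option names can differ between releases:

```yaml
# globals.yml -- hypothetical fragment: pull images from a local
# registry mirror rather than having every Nova node reach out
# to the internet.
docker_registry: "registry.example.internal:5000"
docker_namespace: "oracle"
openstack_release: "mitaka"
```

With the images mirrored once into the local registry, the per-node pulls stay inside the data center and the deploy step works the same way.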
Well, what it ended up buying us is this: since we're breaking up the OpenStack services into these microservices, the Docker containers, we can now start deploying them as atomic units. You can deploy any service on its own, and it doesn't affect any other service. One of the bigger problems you run into if you do this with RPMs, which we had initially, is that when you upgrade Cinder, it upgrades a bunch of packages that require Nova to upgrade at the same time, which requires Horizon to upgrade. You touch one service, and everything else had to upgrade because of the interdependencies between them. With Docker containers, we can sidestep that: since all the dependencies are inside the containers, we can upgrade container by container, or service by service. And we upstream the patches for all of these as well. It's reliable and fast to deploy: since these containers are immutable and we don't actually carry any data in the container, you can repeat the deployment process. You can blow everything away and repeat the deployment, and it will look exactly the same as the previous time. So, Kolla; as I said, I'd mention Kolla. What is it? It's an OpenStack project, in the big tent. We started contributing to Kolla very early on; one of our engineers is actually a core reviewer for the Kolla project. When we started down the road of Dockerizing, we initially started doing it ourselves, and when we heard about the Kolla project, we started working with them directly and contributing upstream. So Kolla provides you with two things today: the Docker containers, and Ansible playbooks to actually deploy those Docker containers. And that helped a lot.
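The upgrade-cascade problem described above can be sketched with a toy dependency model. This is purely illustrative: the package names are hypothetical, not Oracle's actual RPM set, and the point is only that shared packages force transitive upgrades while per-service containers do not:

```python
def upgrade_set(service, forces):
    """Return everything transitively forced to upgrade when `service`
    is upgraded. `forces` maps a component to the components that must
    move with it (shared-dependency fan-out)."""
    seen = set()
    stack = [service]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(forces.get(s, []))
    return seen

# Shared RPMs (hypothetical): upgrading Cinder drags a common library,
# which in turn forces Nova and Horizon to move in lockstep.
rpm_forces = {
    "cinder": ["oslo-common"],
    "oslo-common": ["nova", "horizon"],
}

# Containerized: each service carries its dependencies inside its own
# immutable image, so upgrading one pulls nothing else along.
container_forces = {
    "cinder": [],
}

print(sorted(upgrade_set("cinder", rpm_forces)))        # whole stack moves
print(sorted(upgrade_set("cinder", container_forces)))  # cinder alone
```

The same reachability logic is why a per-container upgrade is also safely repeatable: the forced set for any service is just itself.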
I mean, contributing all that code upstream helps everyone in the long run, because one of our big goals is not to help just us; we want to help the whole OpenStack community. So everything we're doing in Kolla, we contribute upstream; anything we're doing on Cinder or whatever, we contribute upstream. We really want to help the OpenStack community at large. One of the other big focus areas for us is the Oracle products, right? Oracle wants you to actually have a way to use OpenStack with the Oracle products. So early on, we decided that we need to be able to do a building-block structure. We looked around at all the projects, and we looked at Murano. One of the things that's nice about Murano is we can really start doing building blocks. So with the database, we will actually release a tech preview of the Database Murano application very soon. It uses all the standard Oracle templates that we had for Oracle VM in the past, so they're very well tested and very well known; people have actually been deploying these templates for over five years, and we have them tuned very well. And it allows us, once we have the database, to start building the next application on top, right? We can start building the whole Red Stack up. What everyone is asking us is, when is the Red Stack going to be on OpenStack? Well, this is the start of that. We have to start with the database, start at the bottom, and build up. And with Murano and Database 12c, we actually have a demo running in the booth for this. So come by, see it, and we'll show you how we can build the stack up. So, a quick update on the release: Oracle OpenStack for Oracle Linux will release version 3.0 later this year. It's Mitaka-based.
And it's really focused at this time on the Murano and Heat integration for the database. We are adding a couple of services as tech previews, Magnum and Ironic; we had a lot of requests from people who want to play with Kubernetes and containers, so we're going to do a tech preview of Magnum. And it's still going to be released this quarter, so look out for that. Switching back. Okay. Great, so I'll spend the last few minutes talking about some of our work in progress as well. We are also in the process of building out a cloud installer and manager for doing automated multi-node OpenStack installation. This is something that we're very, very excited about. Basically, the idea is that it makes it super easy to take some infrastructure systems and designate and assign them as controller nodes, or for hosting VMs, or to designate certain nodes, or a ZFS Storage Appliance, for example, for cloud storage, and have all the automation take place to implement best practices for those and actually build out a ready-to-go OpenStack cloud. And certainly this is something we're going to continue to invest in as we roll out more and more best practices for doing scale-out and high availability. This has actually come quite far along, and we have some demos around it in our booth in the Expo hall; by the time the next show comes around, we'll probably have a lot more to show as well. And then also, along the lines of Database as a Service integration, we've been engaging quite a bit with the Trove community. We've been working a good bit with the Tesora folks and also with Mirantis. One of our focus areas of discussion is around reference architectures for Database as a Service: if a cloud administrator wants to roll out Trove-based Database as a Service, what's the best-practices architecture for what that looks like, so that it's secure and effectively multi-tenant?
So this is also some very exciting work in progress. This Thursday at 1:15, Thanu and Amrith from Tesora will be talking a little bit about the reference architecture work, so I certainly encourage folks who are interested to join that as well. And for folks who are interested in learning more about our OpenStack offerings on the Solaris side and the Linux side, here are a couple of helpful URLs as well. Our source code is also something that we are very keen to contribute upstream, and we've been doing quite a bit of that lately. All of our source code, whether or not it's completely made it upstream yet, is available on java.net for folks who are interested in taking a look at our Nova driver, our Cinder drivers and whatnot. And I think with that, that might be it. I'd like to open it up for questions. Anybody have any questions they'd like to ask? It looks like there are mics set up on the side of the room. Yes. Well, yeah, so this is something that is being offered on the Linux side; it's something we're also working on for Solaris. MySQL is actually part of Oracle as well, so it seems like a pretty natural point of extension, because so much of OpenStack natively uses MySQL. It's a logical extension to take advantage of MySQL Cluster in that context, because when you want to roll out a cloud architecture that's highly available, the database is actually one of the most critical pieces to get right. And it's really nice to be able to take advantage of MySQL Cluster there, so you can have an active-active solution. So because OpenStack already takes such good advantage of MySQL, MySQL Cluster seemed like a natural extension. Could we in the future do more and add support for other things? That's certainly possible, but that's what we're doing right now. Anything else you want to say on that?
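From a service's point of view, swapping in an active-active MySQL Cluster backend mostly means aiming the database connection at the cluster's SQL nodes (or a VIP in front of them). A hypothetical nova.conf fragment; hostnames and credentials are made up, the retry values are illustrative, and NDB-specific tuning varies by release:

```ini
; nova.conf -- hypothetical fragment: the service talks to a VIP
; fronting the SQL nodes of an active-active MySQL Cluster rather
; than a single MySQL server.
[database]
connection = mysql+pymysql://nova:secret@db-vip.example.internal/nova
; Transient errors are more common with a distributed engine, so
; retry settings matter; these are standard oslo.db knobs.
max_retries = -1
db_max_retries = 20
```

This is also why the hard-coding around InnoDB mentioned in the Q&A below hurts: the SQL surface is the easy part, and it's the engine-specific assumptions buried in the services that take the work.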
Yeah, and one of the things that is on the roadmap is to look at adding Oracle Database as a backend. We are looking at that, but it will be a much longer-term project. There's a lot of hard-coding done in OpenStack for the database. A lot. A lot. I mean, we were actually very surprised to see just how much hard-coding there is around InnoDB. Yeah. Just InnoDB; within MySQL, the InnoDB engine specifically. So what we did on the Nova side, on the compute side, is we actually use libvirt, right? Like Nova normally uses libvirt; that's containerized, and it talks directly to the hypervisor. We don't try to containerize the hypervisor itself; we containerize everything above that, anything that's OpenStack. And libvirt is the last piece that you would call part of the OpenStack stack; libvirt then talks directly to the hypervisor. And it works pretty well. The bigger problem we had was actually getting Oracle VM in there as well, because very little of the community is actually testing that, so it took us a much longer time than the KVM stuff. KVM was working out of the box directly upstream from Kolla, and yeah, it works fine; we don't see any issues. Gentleman at the mic. Can you share some details about your scale numbers? Not just compute scale, but even your networking scale, and what issues you had and how you solved them? Me? Networking scale? With Neutron, north-south is always the problem, right? The network node. So currently we are looking at multiple options there. I would say we haven't solved the problem; I don't know if anyone can really come out and say they've solved the problem. East-west with DVR works fine; we've tested it and that's okay. But north-south, with the network node, is still a problem. So we are looking at a couple of partners to work with to see how we can solve that, maybe with EVPN and Arista and so on.
Yeah, I think what we've found is that most of OpenStack's undercloud architecture lends itself fairly well to being scaled out, provided you have the stateful parts of the architecture appropriately implemented with something like MySQL Cluster. For a lot of the rest of it, you can take advantage of a scale-out architecture. The big trade-off, of course, is that then you have something that's pretty complex to manage, operate, and lifecycle-manage, and that's where that trade-off plays out. Linux, Oracle Linux? Oracle Linux, yeah. They're both awesome. It's something that I really don't believe can be appropriately captured in a single sentence, right? There's a lot of diversity on both sides of the fence around technology and technology integration. The other question is what sorts of other components in the ecosystem are going to be used: vendor and ISV availability, depth of integration with the hardware portfolio. There are many, many dimensions, and it's impossible to capture all of that. I think the best thing to do is to take a step back and think about what it is you're going to be deploying, what applications you're going to be running, what you already have in the environment, and approach it that way, thinking about what might be the appropriate technologies and tools to leverage. It's really hard to say off the cuff, yeah, you should use this tool or that tool. Other questions? All right, well, pet applications that are troublesome in OpenStack?
Yeah, I mean, these would be applications that are already taking advantage of something like clustering for maintaining high availability in the environment; maybe things that today are very performance-sensitive, where you've built out the infrastructure, it requires quality of service, maybe you have deep integration with the hardware you're using to guarantee that quality of service. And then the real question is, in the context of an OpenStack environment, can you provide that application the same environment, and how easy is it to make that software-defined? Okay, so I think we're over time. We can certainly stick around a little while longer for questions, but meanwhile, thank you for joining our session today, and please feel free to visit us in the Oracle booth. Thanks. Thank you. Thank you.