Welcome, everybody. Thank you for joining me today. My name is Markus Flierl. I run Solaris Core Development at Oracle, which includes Solaris itself as well as a lot of our network virtualization and software-defined networking work. And I want to provide an overview of some of the investments Oracle is making in the context of OpenStack.

First of all, it's obviously a huge task to fit everything Oracle is doing around OpenStack into a 40-minute session, so I won't be able to do that. I'll just try to give you an overview of some of the things we're doing. At a very high level, OpenStack is obviously a very critical part of the IT industry right now, and over the last few years we've been investing heavily in it. Initially the investment was mostly focused on the infrastructure parts of OpenStack, but from Oracle's perspective, we're obviously able to do much more than just infrastructure. So a lot of the investment we're currently making, and have already made, is around platform as a service, database as a service, and offerings higher up the stack.

This is just an overview of the different areas we've been investing in. On the hypervisor and operating system side, we have a full distribution of OpenStack available on top of both Oracle Solaris and Oracle Linux, optimized, of course, for our virtualization. We've also been investing heavily in bringing all of our networking products into the OpenStack fold, making sure we have plugins for Oracle Virtual Networking, the Oracle Ethernet products, and so on. And we've invested very heavily in storage: in fact, as of today, a lot of our customers are running our storage through Swift and Cinder, plugged into their OpenStack environments.
And of course, the other big investment is around the application stack: fitting it in with OpenStack and simplifying the deployment and the overall experience from an end-customer perspective. The particular aspect I want to highlight in today's talk is security and compliance, and I'll get to why that's really critical. More and more customers trying to get into this space are very, very concerned, and should be very concerned, about the recent attacks they've been seeing. We've been making investments in this area for years, but we've actually dramatically increased that investment over the last couple of years.

So let me talk a little bit about what we've been doing on the Solaris side. On Solaris specifically, we've brought in OpenStack as a full distribution on top of Oracle Solaris. It's fully included as part of Solaris; there's no separate SKU you have to pay for or pay support for. Same thing on the Oracle Linux side: it's fully integrated and fully available as part of the base offering.

And we didn't just layer OpenStack on top of our operating systems; we've done a lot of deep integration in addition to that. That means we're leveraging both our container-based and our traditional hypervisor-based virtualization and exposing them through OpenStack. Bringing in OpenStack has actually driven a lot of requirements back into Solaris: rather than building a bunch of complexity into OpenStack itself, we've tried, as much as possible, to take the complexity out of OpenStack and push it into the lower levels of the stack.
The nice thing is that we're able to work at all levels of the stack, all the way down to the silicon, in order to accelerate things, simplify things, and make them more secure and more robust. Same thing on the storage side: we have a huge investment in ZFS. In fact, a lot of our storage is actually based on ZFS, and we're leveraging that in the context of OpenStack, exposing ZFS through Cinder, through Swift, and through various new modules that are coming, in order to provide the best possible, most robust storage offering for OpenStack.

The other thing we've done is create a bunch of new constructs to simplify deployment. I'll talk about this in more detail later, but you can take the Oracle Database today, along with various other Oracle applications, download it from us as a complete Glance image, feed it into Glance, and start deploying it in your environment. So again, as much as possible, we want to provide customers not with a pile of spare parts but with a full end-to-end solution.

That solution, of course, is open. We don't want to do anything Oracle-specific when it comes to the APIs, but at the same time there's a lot of innovation we're trying to do underneath. One of the keynote speakers mentioned this morning that embracing OpenStack does not mean giving up on innovation underneath. We want to fit into the OpenStack infrastructure and ecosystem, but at the same time we want to innovate as much as possible below it, to provide additional performance, additional security, and additional value to the customer. And as part of that, the latest release of Solaris, 11.2, which came out last year, is not just an operating system; it's a full-fledged cloud infrastructure.
That means, yes, you have the operating system, but you also have both container-based and hypervisor-based virtualization built in. You have all the network virtualization and software-defined networking built in. And you also have OpenStack fully included. All built into one product, all supported by one team, all engineered as part of one organization.

So what does that mean for a customer? What we've been doing is bringing together the traditional properties of Solaris, meaning enterprise-grade stability, security, and scalability, all the things customers have traditionally relied on, and marrying them with OpenStack's cloud agility and rapid deployment mechanisms. So you're really getting the best of both worlds. OpenStack didn't start off with our enterprise customers; it started off in the web space, in development environments. But what we're seeing now is a lot of enterprise customers trying to move their OpenStack deployments into production environments, and that's where we can come in and provide that kind of deep integration along with enterprise-class capabilities. I'll talk about that in more detail in a second.

So let's start with security. Why do people care about security? I think we've always cared about security, all of you have. But the change we've seen over the last couple of years is a dramatic increase in both the quantity and the quality of the threats. This slide is something I'm borrowing from the Verizon data breach report that came out a couple of months ago, comparing the number of threats over time.
And as you can see, between 2007 and 2013 there was a 16-fold increase in the number of attacks, specifically hacking and malware attacks, that customers are seeing. So there's a huge increase in the threats customers are experiencing. In fact, we had a customer strategy council a couple of weeks ago, and the financial customers especially see security as their top priority; there's almost nothing else on their agenda. There's still immense cost pressure, but at this point security outweighs any of their other concerns and objectives.

And that's, I think, where we bring a lot to the table. When we talk about security, it's not just about providing a feature here and another feature there. We really look at this as a full-stack problem: your environment is only as secure as your entire stack. It all starts down at the silicon and goes up through the hypervisor all the way into the applications, and that's what we're trying to address.

Just to pick out a few examples: one of the things we did many years ago was bring encryption down into the physical hardware, offloading it to the point where, in some of the benchmarks we run, we just turn on encryption even though the benchmark doesn't require it. The performance overhead you get when running in our environments is typically so small that people turn on encryption simply because it doesn't cost them anything. The other big investment we're making is around Software in Silicon. What that means is we have functionality built into the chip at this point that prevents things like buffer overflows and out-of-bounds memory accesses, the typical attacks that are becoming very, very prevalent out in the industry.
We can prevent these attacks with functionality built down into the physical hardware. In other words, if somebody tries to access memory they don't have the key for, the hardware will detect it in real time and stop them from doing it. From there on, it's about secure deployment: doing the installation in a secure fashion and bringing up the boot image in a secure fashion. We've also invested a lot in securing the runtime. One of the things we allow customers to do is lock down the runtime: both the hypervisor and the guest OS can be locked down, which means that even if somebody manages to break into your environment, into a particular VM, there's nothing they can change, because the file system is completely locked down.

In addition to that, we have mechanisms for time-based access control. If a certain administrator needs access to a machine, to do patching or other work, they will get access, but only within a small window. I can specify any time frame; they get access to the machine during that window, and after it closes they're blocked from accessing the machine again.

And then, of course, there's deep integration with the rest of the Oracle stack. The moment you turn on tablespace encryption, the Oracle Database will automatically offload it down through the OS all the way into the physical hardware. It's completely transparent: deep integration from a security perspective. And this is what we're seeing now with, as a typical example, a large service provider in Europe that's using all of the things I've just talked about. In other words, they're locking down the hypervisor, they're locking down all the guests, and they're also taking advantage of the time-based access controls.
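The time-based access control idea above can be sketched in a few lines. This is purely an illustrative model, not the actual Solaris mechanism or its interface; the function name, window format, and sample data are all assumptions made up for the sketch:

```python
from datetime import datetime

def access_allowed(windows, user, now=None):
    """Return True if `user` has a maintenance window covering `now`.

    `windows` maps a user name to a list of (start, end) datetime pairs.
    Outside every granted window the user is denied, mirroring the idea
    that an administrator gets in only during a scheduled time frame.
    """
    now = now or datetime.now()
    return any(start <= now < end for start, end in windows.get(user, []))

# Hypothetical example: "ops" gets a two-hour patching window.
windows = {
    "ops": [(datetime(2014, 11, 5, 22, 0), datetime(2014, 11, 6, 0, 0))],
}

print(access_allowed(windows, "ops", datetime(2014, 11, 5, 23, 0)))  # inside the window
print(access_allowed(windows, "ops", datetime(2014, 11, 6, 1, 0)))   # after the window closes
```

The key design point the talk describes is that denial is the default: any access attempt outside an explicitly granted window is rejected, rather than access being open until revoked.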
So they're providing the most secure environment for their cloud infrastructure. And in this context, by the way, they're using Solaris Zones, the Solaris containers some of you might be familiar with, which means they can achieve super high consolidation ratios, very high efficiency, while maintaining all the security isolation and all the resource isolation. I'll have a slide in a second on the efficiency of our virtualization compared to regular hypervisor virtualization.

So security, obviously, is very critical. But the other thing is compliance. That's another key factor for a lot of customers. Customers spend a tremendous amount of time working with auditors, ensuring compliance, proving that their environment is compliant, especially in the financial, telco, and various other industries. There's a huge amount of investment that goes into just dealing with compliance.

So one of the things we've introduced as part of Solaris 11.2 is the ability to install an environment, install the applications, install all the third-party components you need, and then create what we call a Unified Archive. You can lock that archive down, leveraging the mechanisms I just mentioned, and then deploy it in your large-scale environment. You might feed it into Glance and push it out to hundreds or thousands of physical nodes, and all of the VMs you've deployed started from the same locked-down image. The reason we call it unified is that it gives you a tremendous amount of flexibility: not only can you deploy VMs with it, you can decide at deploy time whether you want to deploy these Unified Archives as a regular VM, as a container, or as a bare-metal physical image.
You have complete choice at the time of deployment. And with that, we also have compliance checking built around it. We've brought OpenSCAP into Solaris, which means that if you want to do a compliance check, you can just run a simple command that performs all the compliance checks and exports the type of report you're seeing here, listing all the checks that pass and all the ones that fail. You click through on a failure and you get remediation instructions, suggestions for how to go back and eliminate the problem. So it's a very, very easy way to make sure your entire environment is compliant.

And with all that, as I mentioned earlier, we want to make sure we can blend into any environment. In other words, you can use Oracle Solaris alongside any of the other hypervisors: VMware, Hyper-V, KVM, and vice versa. If you already have other environments deployed today, if you're running Mirantis OpenStack or another OpenStack, you can easily slide Solaris in as a hypervisor as well as a guest OS, and you just continue running your environment like you have so far with Mirantis. So again, this is very important for us: we don't want to create an Oracle-specific, almost-OpenStack-like thing. We want to be fully compliant with OpenStack, leveraging the same APIs and blending fully into these environments.

And that's what we're seeing now. A lot of customers like the idea of an integrated stack. In other words, if they want to deploy an application from Red Hat, say JBoss or other applications, they just want to have a complete Red Hat stack.
When they're deploying applications or the Oracle Database from Oracle, they want to make sure it's a complete Oracle stack, but they can leverage the same tools for configuration management, Puppet for instance, or other config management tools, and OpenStack as a common abstraction layer that lets them spin up VMs both on the Solaris side and in a Red Hat, KVM, Microsoft, or any other kind of environment. This is a very typical pattern we're seeing with a lot of customers. If they're starting from scratch, they often use our OpenStack; in other cases they start with another OpenStack distribution and just roll in our infrastructure, our OS, and our hypervisors, and it all blends in seamlessly.

A lot of what I've talked about so far has been around the infrastructure part, and that's obviously very critical. But again, as I mentioned, a lot of the investment we're currently making, and have already made, is around database as a service and platform as a service. In fact, we just made an announcement on Friday: we're working very closely with Mirantis, and we've been contributing to Murano. What we announced, and you probably saw this in the keynote this morning, is that the Oracle Database Pluggable Database is now available as part of the Murano application catalog. What this means is that within a few steps, you can start deploying Pluggable Databases into your OpenStack environment.

Let me quickly walk you through this. You start from your Horizon dashboard. You create a new app environment, just like you would do today. On top of that, you configure the Pluggable Databases you want to add, and then you deploy them as part of your environment.
So within three simple steps, you've created a new app environment, you've created some number of Pluggable Databases, and you've pushed them out into your cloud. Then you can start running the database: you go into your SQL environment and start working with it.

Let me quickly show you a demo of this at a high level. This is actually a canned demo; I hope it's coming up. There we go. So this is what it looks like. You go into your environment and create a new one, in this case we call it demo nine. You add new components in, in this case Pluggable Databases. The nice thing about Pluggable Databases is the efficiency you achieve with them. You then start to deploy this new environment, and as you can see here, within a few seconds you're deploying Pluggable Databases. They're super lightweight and super efficient, so you're very efficient in terms of resource consumption, and it's very fast to deploy them. Within a few seconds you go back into your log and see what you've just deployed, and then you go into your SQL environment and start using that Pluggable Database. So it's a very, very quick, lightweight way of deploying your database environments.

Now let me quickly show you what this means from an efficiency perspective. One of the presenters this morning was talking about the efficiency of containers. On the left here, you see the EC2-style scenario: a full Oracle database running in a dedicated VM, and you're getting a pretty good amount of performance out of that. The bar in the middle is where you're deploying on containers.
So that means instead of spinning up a separate VM every time, all you're doing is spinning up a separate container and a new database instance inside it. So you've already reduced the number of VMs. In this case we actually ran this on a fairly large machine, with 252 databases. In the first scenario, you end up creating 252 VMs running 252 databases. In the second scenario, you shrink those 252 VMs down into one single VM, and on top of that you have 252 instances of the Oracle database running. In this case, you see a 3x increase in efficiency.

The bar on the right is where we're using Pluggable Databases. That means you have one single VM running, one single OS instance, and on top of that one instance of the Oracle database. All you're doing is breaking that one database down into separate Pluggable Databases. So now you have 252 Pluggable Databases running on a single instance of the Oracle database, on a single instance of the OS, in this case Solaris.

From a memory-consumption perspective: in the first scenario, you're consuming about 4 gigabytes of DRAM per database instance. In the second, it's roughly half that. In the third, you're down to just a couple hundred megabytes. So a significant improvement. And it's the same hardware, the same hardware resources, the same number of licenses; all you're doing is using more efficient virtualization.

This is something we're putting a tremendous amount of focus on right now. What we've actually done is enable a lot of the underlying resource management and security isolation capabilities of Solaris, and we're exposing those to these Pluggable Databases.
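To put rough totals on the comparison above, here is a quick back-of-the-envelope calculation using the per-instance figures from the talk: about 4 GB per dedicated VM, roughly half that per container, and 0.25 GB as a stand-in for "a couple hundred megabytes" per Pluggable Database. The exact numbers are illustrative assumptions, not measured results:

```python
# Aggregate DRAM footprint for 252 databases under the three deployment
# models described in the talk. Per-instance figures are approximate.
N = 252
per_instance_gb = {
    "dedicated VM per database": 4.0,   # full OS + full DB instance each
    "container per database":    2.0,   # shared kernel, separate DB instances
    "pluggable database (PDB)":  0.25,  # one DB instance, many PDBs
}

for model, gb in per_instance_gb.items():
    print(f"{model:28s} {N * gb:8.0f} GB total")
```

Under these assumptions the totals come out to roughly 1008 GB, 504 GB, and 63 GB, which is why the talk describes the PDB model as an order-of-magnitude improvement over one VM per database on the same hardware.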
In other words, we're allowing you to create these Pluggable Databases, but we're also providing the resource isolation and the security isolation you need in order to deploy this in an enterprise-type environment. In fact, it's not just about Pluggable Databases. There's a huge amount of investment going into the Oracle database in general, and it's about tying it in more closely with the rest of the stack: with Solaris, but also with our hardware. When we do development, the boundaries between who is working on the operating system, who is working on the hypervisor, and who is working on the database are becoming blurry at this point. In fact, some of my engineers are now writing database code; they're collocated with the database team. So it's a very dynamic, very fluid environment in terms of who is working on which code base.

And this is very powerful, because we can now make decisions based on what makes sense from a technology perspective, as opposed to worrying about which organization we work in, where the organizational boundaries lie, and all these other things. And it's not just from a design and development perspective: as we develop the code, as we develop the Oracle database and Solaris, every time we check code in, we make sure we test it in the context of the Oracle database and the rest of the application stack. And vice versa: any development the Oracle database team is doing at this point has to run through some of the new Software in Silicon capabilities we've built into the new hardware. So let me give you some examples.
So here's an example of some of the work we've been doing and the benefits that come out of it. What we've done here is create new interfaces for the Oracle database, specifically for allocating shared memory. The way we've implemented that, again working closely with the Oracle database team, allows us to dynamically resize the database. In other words, if you're running database as a service with dedicated Oracle databases and you're running out of space, you want to add more SGA, more shared memory, we allow you to do that dynamically. Same thing in the other direction: if you want to shrink one of the other databases down, we allow you to do that dynamically too. That took a significant amount of investment, going deep into the virtual memory subsystem and making deep changes there in order to support these kinds of dynamic changes.

Another example is around observability. Again, having all the different parts of the stack, what we've done here is allow DBAs to leverage the DTrace libraries underneath from the native AWR tools. That means that if you have an I/O outlier, DBAs can easily track it down and figure out: is the problem happening in my Oracle database? Is it happening in the OS? Or is it happening somewhere else in my stack? By querying our interfaces, customers get a full-stack view of the entire environment, and they can very quickly narrow down any kind of performance issue.

But we don't stop there. We're obviously doing a lot of integration work between the Oracle database and Solaris, but at the same time we're also doing deep integration with the hardware.
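The dynamic-resizing idea can be sketched as a toy allocator: several databases draw their shared memory from one fixed physical pool, and any one of them can grow or shrink without restarting, as long as the pool has room. This is an illustrative model only; the class and method names are invented and bear no relation to the actual Solaris interfaces the talk describes:

```python
class SharedMemoryPool:
    """Toy model of dynamically resizing per-database shared memory (SGA)
    out of a fixed physical pool. Purely illustrative; all names here
    are made up for the sketch.
    """
    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.sga = {}  # database name -> currently granted GB

    def free_gb(self):
        return self.total_gb - sum(self.sga.values())

    def resize(self, db, new_gb):
        """Grow or shrink one database's SGA without restarting it."""
        delta = new_gb - self.sga.get(db, 0)
        if delta > self.free_gb():
            raise MemoryError(f"only {self.free_gb()} GB free, need {delta}")
        self.sga[db] = new_gb
        return new_gb

pool = SharedMemoryPool(total_gb=64)
pool.resize("sales", 32)
pool.resize("hr", 16)
pool.resize("sales", 24)   # shrink sales, freeing 8 GB for other databases
print(pool.free_gb())      # 24 GB now available to grow any database
```

The hard part in the real system, as the talk notes, is not the bookkeeping shown here but making the underlying virtual memory subsystem support growing and shrinking live shared-memory segments.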
And so this is something we started talking about back at the last OpenWorld. What we've done here is take database-specific functionality out of the database and push it down all the way into the physical hardware. Again, it's a good example of looking at the full stack and understanding where the best place is to solve a problem. Do we want to solve it in the database? Do we want to push it down into the operating system? Or do we want to push it all the way down into the physical hardware? In a lot of cases, we've decided to push things down into the operating system. In other cases, we've said that in order to get significantly better performance, we actually have to push it all the way into the hardware.

It's a similar approach to what Apple has been doing over the last few years. The first iPhone was not built on Apple chips. At some point, though, they realized that if they wanted to get to the next level, if they wanted to provide things like fingerprint scanning and other functionality, they would have to build their own chips. And that's what we're doing here. We're building chips that are highly optimized, of course, for the operating system and the virtualization, but we're going beyond that: we're taking database functionality and pushing it all the way down into the physical hardware, into the silicon. And when you do these kinds of things, you're not just getting small bumps in performance, you're getting huge increases.

The areas we've been focusing on are, first, performance, which is an obvious one. Second, efficiency, meaning memory compression; in this case we started off with decompression.
And then the third area is security, which is probably the most significant given the increase in security threats over the last few years. So let me give you an example. Here's a typical query you might be running: an in-memory scan of a particular range. Say I want to look at all the orders I got between November and December of 2012, and I want to quickly find out what those customers have been doing. With the new hardware, we're now able to offload that entire query into the coprocessor. Rather than the main CPU doing this, we push it through shared memory, through cache, into the coprocessor, and the coprocessor processes it while the main CPU is still available.

And this is what that means. The top bar here is what people do traditionally: you read memory into the CPU, you do the decompression, you write it out again into cache, you then read it in again, do the actual scan, and write the result out again. The way we do it, we just push the whole thing down into the coprocessor; it processes everything in a fraction of the time and comes back with the result. So you see a huge increase in performance, because instead of doing the work on the main CPU, we're doing it in a coprocessor in the silicon itself.

The other thing I mentioned is memory protection. A lot of the recent attacks we've been seeing have centered around things like buffer overflows and out-of-bounds memory accesses. What we've done here is build functionality into the coprocessor that actually checks for these kinds of things.
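The kind of query being offloaded is easy to state in software. Here is a minimal sketch of the same date-range scan in Python; the table and column names are invented for illustration, and in the real system this filter-and-aggregate step is exactly the work that runs on the coprocessor rather than in application code:

```python
from datetime import date

# Invented sample rows; in the talk's example this table lives in the
# database's in-memory store and the scan is offloaded to the coprocessor.
orders = [
    {"customer": "acme",   "placed": date(2012, 11, 3),  "total": 1200},
    {"customer": "acme",   "placed": date(2013, 1, 15),  "total":  300},
    {"customer": "globex", "placed": date(2012, 12, 20), "total":  950},
]

start, end = date(2012, 11, 1), date(2012, 12, 31)

# The range scan: keep only rows whose order date falls in the window,
# then aggregate. This is the predicate-plus-scan the hardware offloads.
in_window = [o for o in orders if start <= o["placed"] <= end]
print(len(in_window), sum(o["total"] for o in in_window))  # 2 2150
```

The point of the hardware path is that this scan never touches the main CPU's pipeline: decompression, predicate evaluation, and the scan itself all happen in the coprocessor, and only the result comes back.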
So in other words, if somebody tries to access memory they're not supposed to access, say you have a stale pointer trying to overwrite memory that somebody else has started using, we have a lock-and-key mechanism that detects the mismatch between the version of the lock and the key and prevents the access from happening, in real time, in the hardware.

This is very powerful as you're doing code development. If you think about the Oracle database, it's obviously a big pile of C code that Oracle has been writing over many, many years, and any time you have a huge software construct like that written in C, you always have to worry about these kinds of issues. By providing this functionality built into the silicon, the Oracle database team, our own internal developers, can write code in a much more efficient way. Development productivity is up significantly because of it, because they can track down in real time any issues with memory, stale pointers, and things like that.

In addition to that, building this into the hardware has allowed us to do it in real time in production: we're planning on turning this functionality on with the Oracle database in production. So in other words, even if somebody manages to break into the environment, modify your code, and try to read out memory, and Heartbleed, I think, is a good example here, where a vulnerability let somebody actually get access to memory, we can detect that in real time and prevent it from happening. So it's very, very powerful functionality that we've built in.

So I want to leave a little bit of time for questions here. But just to summarize: our strategy has been very simple. We've been investing very heavily underneath OpenStack.
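The lock-and-key check can be sketched in software as a toy simulation: every allocation gets a version tag, a pointer carries the tag it was issued with, and each access compares the two. This is only an illustration of the idea described in the talk, not the actual silicon feature or any real interface; all names here are invented:

```python
class TaggedMemory:
    """Toy lock-and-key memory. Every allocation gets a fresh version tag,
    and a pointer carries the tag it was issued with. On access, the two
    are compared; a stale pointer to a region that has since been freed
    and reallocated no longer matches, so the access is refused instead
    of silently reading or clobbering someone else's data.
    """
    def __init__(self):
        self.tags = {}      # address -> current version tag ("lock")
        self.next_tag = 0

    def alloc(self, addr):
        self.next_tag += 1
        self.tags[addr] = self.next_tag
        return (addr, self.next_tag)        # a "pointer": address + key

    def load(self, ptr):
        addr, key = ptr
        if self.tags.get(addr) != key:
            raise PermissionError(f"tag mismatch at {addr:#x}")
        return f"data@{addr:#x}"

mem = TaggedMemory()
p = mem.alloc(0x1000)
mem.load(p)                # OK: key matches the region's current tag
mem.alloc(0x1000)          # region reused by someone else; tag bumped
try:
    mem.load(p)            # stale pointer: caught at the moment of access
except PermissionError as e:
    print("blocked:", e)
```

In the hardware version the comparison costs nothing extra per access, which is what makes it feasible to leave the check on in production rather than only during development.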
We want to expose everything through OpenStack and be compliant at the OpenStack layer, but at the same time we've been able to do a lot of innovation underneath: in the operating system, in the virtualization, and in the hardware, in order to deliver the best possible experience. And specifically around security, we believe that at this point we have the most secure stack, because we're able to secure things all the way from the silicon up into the applications in a very, very coherent and unified way. So with that, I just want to open it up for questions. Yes, please go ahead.

Yeah, so as I mentioned in the beginning, we have a distribution of OpenStack available for Solaris, but we also have a full OpenStack distribution available on Oracle Linux as well. And in fact, a lot of customers want to deploy both in concert. Customers have a huge investment in Linux; customers have a huge investment in Solaris. They can use the same OpenStack distribution on top of both environments. So the core OpenStack functionality is available for both, but some of the functionality I've been talking about, the specific security features and some of the hardware enablement, those are specific to Solaris and in a lot of cases specific to SPARC. The Software in Silicon capabilities I just talked about are unique to the SPARC environment.

Actually, can I just ask people to come up to the microphone, because we are recording this? Thank you.

Good talk. So during your talk, you mentioned that for compliance you have an automatic tool that runs different automatic checks.
So would you please talk a little bit more about this automatic tool? Do you need to install an agent on your servers for the tool to run? Do you charge extra money for this tool?

No, it's actually fully integrated. We've brought OpenSCAP into Solaris. OpenSCAP is an open source tool, and it's fully integrated. What it allows you to do is generate the reports I mentioned. We have a prepopulated report for PCI specifically, but you can also extend it: with all the infrastructure and tooling we have in there, you can go in and customize it for your specific compliance needs.

Cool, thank you.

Yeah, and it's very powerful in that it's all built in, so there's no separate charge, and it's not a separate vendor you have to go to. It's all part of the base product.

Plans to pull that into Linux? OpenSCAP? It is available on Linux as well. It's actually open source, and it's available on both. I'm not sure; I would have to check on that. Any other questions? Okay. All right. Thank you very much. Thanks for joining me today.