Okay, I'm going to get started. We've got a busy session here, with a lot of moving around to do different presentations. My name's Jeff Applewhite. I'm a technical marketing engineer at NetApp. Thank you all for coming today; I'm glad to see the turnout and the interest in use cases. This is exciting for me, because this is where we close the loop. You've heard us talk about our technology and our integrations and our value for the cloud, but now we get to hear about it from our customers. I work with the guys who do development. Day in, day out, I see them in scrum, I hear them in the development process, so it's exciting for me to hear how customers are actually using it. That's what we're going to do today. Briefly, I'll just talk about our value. I came from operations, 10 years, as a NetApp customer at two different hosting companies, hosting ringtones for large cellular providers with 60 million subscribers. So I know what data can do, and I'll tell you, data is foundational: your uptime will never exceed your data's uptime. If you have an application and you have a network but you have no data, you have no application and you have no cloud. If I had to sum it up, that is our primary value add. Data is provided by systems, but systems are operated by people, and NetApp's expertise is helping you run those systems, maintain availability, and keep them up. That, to me, is our core value. I want to move along pretty quickly because, as I said, we've got a lot of speakers today. First, just to reference quickly: BBVA, who spoke earlier today, is a NetApp customer. They're using our Cinder drivers; they are deployed on us. They couldn't go into the details of their deployment during the keynote, but they are an OpenStack customer.
I also want to introduce Russell Sim from the University of Melbourne, who's going to take us through the deep, down-and-dirty details of their deployment. It's not sanitized; there are going to be a lot of real-world examples of things they've encountered and lessons learned along the way, going all the way back to Diablo, to give you some context. We're also going to talk about our internal deployments with what we call customer zero and customer one, which are our internal IT group and our engineering organization; Jeff Whitaker is going to cover that. And then we're going to talk about some potential integrations around SAP. One of the things we heard in the keynotes this morning is the need to have all of our apps enabled for OpenStack, not just the primarily cloud-native applications, so we're going to talk about that with SUSE and SAP. With that, let me turn it over to Russell Sim. Thanks. Okay, how's it going? Today I'm going to talk you through the cloud deployment at the University of Melbourne: how we set up our OpenStack, and how NetApp came to save our asses when our storage went bad. To give you a bit of background, Nectar is a government-funded program, part of which was to provide an e-research platform for all Australian researchers. The Melbourne node was funded under this program; it was the first node deployed, in 2012, as part of the platform. As a result, we learned a lot of lessons early on, and we were tasked with teaching everyone else who joined up. We initially started with one data center only, because Cells, as you would know, still hasn't been fully adopted; it's getting along, but it's taking a little while. Our first storage setup was essentially a whole lot of nodes all mounting virtual machine disk images over NFS. This was mostly done to provide two things: one, we needed it for live migration to work at the time, I believe; and two, it allowed us to provide a high level of assurance to customers.
So if anything went bad, we could recover data. This is especially important in a research environment, where people aren't used to dealing with clouds, so they're not fully prepared for the case where their machine goes down and things aren't as they seemed. This storage was hosted on university shared storage, because under the funding agreement we weren't allowed to buy storage directly ourselves, so it was an in-kind contribution from the university. As a result, we got these lovely I/O wait graphs, where higher is worse. This is one of the pitfalls of using a shared system: you don't really have a good SLA, and you also have no control over how other people use the system. As a result of these huge wait times, we were getting lots of angry customers coming back at us, and we weren't looking our best, so we put a case together to get extra funding to deploy some storage infrastructure of our own. NetApp was the provider that came to the party. They did an initial deployment for us, with each of our data centers getting two FAS controllers with some storage off of that. These controllers exported the same NFS share we were using before, but we also expanded our platform of services to include volume storage through Cinder. This was very early in the Cinder adoption phase for NetApp: we were running very early ONTAP software, and we were also running a very early version of their driver. So there were a few little hiccups along the way, but we got really, really good customer support, including access to the developers for quick turnarounds on fixes, before we moved this system into production. We needed a small outage to migrate all the VMs across, but that was much better than the pain everyone had been tolerating up to that point. So that was last year.
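For reference, this is roughly what wiring a NetApp NFS backend into Cinder looks like in `cinder.conf`. It's a hedged sketch using the option names of the later unified NetApp driver (the very early driver Melbourne ran used older, since-renamed options); the hostname and credentials are placeholders:

```ini
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
# Placeholder management LIF and credentials
netapp_server_hostname = cluster-mgmt.example.edu
netapp_login = admin
netapp_password = secret
# File listing one NFS export per line, e.g. svm-lif:/cinder_vol1
nfs_shares_config = /etc/cinder/nfs_shares.conf
```

The backend is then enabled via `enabled_backends = netapp-nfs` in the `[DEFAULT]` section.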
This year, we received another funding grant, for the data storage that was meant to be provided at the start of the project. It trickled in a bit later, but when it finally arrived we, unsurprisingly, went back and expanded the existing NetApp solution at each of our data centers. Along with expanding the hardware, we also expanded the services we decided to offer. Originally we had our NFS shared storage for the VM images and our iSCSI volumes for Cinder volume storage; we now also started to offer NFS shares. This was because under the new funding arrangement we had to provide services to allow researchers to develop data sets of national importance. These data sets were being worked on; they might have been hosted on the cloud infrastructure, or on HPC infrastructure somewhere else in the university, so we needed a diverse, flexible solution. We ended up just using the standard Vserver configuration that comes with the NetApp FAS solution, and we did some more complicated things with it, including setting up proper backup routines as well as off-site replication to our second data center, so that in the case of any catastrophic failure we would have some assurance that these nationally important data sets weren't lost. Some of these data sets might even have been the primary data source, with no way of recovering them, so we had a lot of reason to be careful. Along with the data sets that were being actively developed, we also had to cater for data sets that were already complete. We needed a platform that would allow researchers to archive large data sets, as well as a tool for providing those data sets to other people. Swift was the solution for that: Swift allows customers to come in and access their data over HTTP.
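The off-site replication described here maps naturally onto ONTAP's SnapMirror. As a command sketch, not runnable outside a real cluster; the Vserver and volume names are invented, and schedules and policies would differ per site:

```shell
# Illustrative clustered Data ONTAP commands, run on the destination cluster.
# Source/destination paths use the <vserver>:<volume> convention.
snapmirror create -source-path melb1-svm:research_data \
    -destination-path melb2-svm:research_data_dr -type DP -schedule daily
snapmirror initialize -destination-path melb2-svm:research_data_dr
# Verify replication state afterwards:
snapmirror show -destination-path melb2-svm:research_data_dr
```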
There's a lot of access control that users can leverage to mediate access. This was the architecture adopted, designed for us by NetApp and Aptira. We basically have three of these E55 E-Series controllers in each of our data centers, set up such that there are two host VMs in front of them iSCSI-mounting the data and then exporting it out through Swift, and we have a production F5 load balancer in front of that for redundancy as well as resiliency. One interesting thing is that instead of the standard way you do Swift, with three replicas, we've got a kind of hybrid model: we're using RAID-DP, which is double parity, so it's similar to RAID 6 but better. That provides us with, in effect, an extra replica at each of our sites, and we also have Swift maintaining one replica at each site, so I guess that gives us basically four copies. That's really quite important because, as I said before, we have to put a lot of pressure on researchers to give their data up. They're very defensive; they're worried about it; they don't want to lose it. Giving these levels of assurance is quite important, especially if a data set can't be recreated at all. So that brings us to our current configuration, and we're much happier these days. I'd like to summarize a few of the experiences we've had. We got heaps of support early on with the Cinder driver; it was just fantastic to see such enthusiasm for us adopting someone else's technology. Our local NetApp providers have given us super-fast spares replacements as well, which was quite surprising and kind of a relief. We even investigated using the Manila driver to do NFS access control, but we didn't have time within the deadlines we'd set for ourselves to actually deploy it.
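As a rough illustration of the hybrid replica model: a two-site Swift ring with one replica per site (replica count 2 instead of the usual 3, with RAID-DP providing the extra protection underneath) could be built along these lines. The IPs, ports, zones, and weights are invented; Melbourne's actual ring parameters weren't given in the talk:

```shell
# Object ring: partition power 16, 2 replicas, 24h min part move interval.
swift-ring-builder object.builder create 16 2 24
# Device syntax: r<region>z<zone>-<ip>:<port>/<device> <weight>
swift-ring-builder object.builder add r1z1-10.1.0.11:6000/d0 100
swift-ring-builder object.builder add r2z1-10.2.0.11:6000/d0 100
swift-ring-builder object.builder rebalance
```

With one Swift region per data center, the ring places one of the two replicas at each site.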
But when we were investigating it as a possibility, we got really quick access to the Manila developers as well, because that was very early in Manila's development; it hadn't even been incubated at that point. So it was fantastic to see them so excited about us considering their software. Since moving over to this NetApp solution, we haven't had any outages, even when we changed to our alternate infrastructure, which included a change of switches; we went from no switch to a couple of switches with HA. So that's been just fantastic. And we've had no customer-impacting load issues since then, partly because we can control how the aggregates and the load are managed on the device itself, which is something we had no control over before. There are advantages to having your own storage. In the future, I think Melbourne University will be looking quite closely at moving towards a Manila-style solution, so that it's a self-service model. We have two operators who are in the room, Marcus and Devendron; those two guys are going to investigate that. We don't have enough operators to support manually provisioning NFS forever, so we're going to go towards a more self-service model. I'd like to quickly thank NetApp, Aptira, and Southern Cross Compute Systems for their support on this uplift program to enhance our data storage. We're going to have a Q&A session for questions at the end. Okay. So my name is Jeff Whitaker; I'm in the Cloud Solutions Group at NetApp. I'm going to talk a little bit about our engineering deployment here, and then I'll pass it on to Colin as we move forward.
So, just to talk a little bit about where we are. NetApp is an engineering company, and as an engineering company we've got an infrastructure with a lot of shared environments. We've got nine different R&D labs throughout the company, covering about a 5,500-person engineering user base. So it's a pretty large scale, and growing as we move forward. We really are customer zero: as Jeff mentioned, there's a customer zero / customer one concept within NetApp, where customer zero is the engineering development side, where we take the first of the new technologies and capabilities that NetApp comes out with and do an implementation with them. That really drives innovation, and one of the big benefits is that a new technology goes through that internal process before it goes out to you guys. On the concept of a global engineering cloud: one thing we've been developing over the last couple of years, as we move towards the ideal solution, is that we've virtualized about 98% of our infrastructure, which gives us a good foundation for building this environment. Today we've moved to a cloud service environment, a self-service cloud. We have multi-hypervisor capabilities, so we do VMware and Hyper-V and Red Hat in those environments. As I said, it's 98% virtualized, on a three-rack converged infrastructure from NetApp called FlexPod, and we can support up to 10,000 VM deployments within that infrastructure. For the 5,500-person engineering environment that's scalable to a certain point, but we definitely have to plan for growth beyond that.
So as we move forward, we're taking the next step of going from a self-service portal in a specific environment to an OpenStack-based environment, where not only is it self-service, but it gives us, as an infrastructure component, more REST-API-based access. The OpenStack piece provides us much more control and many more capabilities when we go beyond this 10,000-VM environment and stack more components onto it. The self-service piece of this was developed internally, so we did all that ourselves, but as you scale and grow beyond that, the OpenStack piece allows us to use the APIs and grow on a much more seamless path. What we're doing here is tying this together: taking the OpenStack piece, converging it, and adding more hypervisors as we go. Our goal is to be agnostic and have a lot of flexibility in which direction we go. Do we need to add KVM to that environment? Whatever direction we take, we have flexibility with OpenStack underneath, and it doesn't actually change the self-service component. From the NetApp side of things, we bring a lot of features into this. As you're building out infrastructure at the size of this deployment, one of the biggest concerns is cost: how much does it cost per user to deploy? So, just with the capabilities within the storage environment, we do thin provisioning and copy offloads, minimizing our impact on the actual cluster, and then deduplication, again minimizing the footprint of the environment; thin-provisioned FlexVols are a path to doing that. From a savings perspective, just going to that first step, we've saved about 66% of our actual footprint, so we're oversubscribing our capacity by a considerable amount.
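To make the 66% figure concrete, here's a small back-of-the-envelope calculation. Only the 66% savings comes from the talk; the 300 TB logical figure is invented purely for illustration:

```python
# Back-of-the-envelope: what "saving 66% of our footprint" implies.
logical_tb = 300.0    # capacity presented to engineering tenants (assumed number)
savings = 0.66        # fraction saved via thin provisioning + dedup (from the talk)

physical_tb = round(logical_tb * (1 - savings), 1)
oversubscription = round(logical_tb / physical_tb, 2)

print(physical_tb)        # 102.0 TB actually consumed
print(oversubscription)   # ~2.94x logical-to-physical ratio
```

In other words, a 66% reduction means roughly three terabytes presented for every terabyte physically consumed, which is where the "oversubscribing by a considerable amount" comes from.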
And as we went to OpenStack, one key thing in taking on a new technology was that we did not want to lose those provisioning pieces; basically, we couldn't go to OpenStack and raise our actual capital costs by doing so. As we did that, we looked at different hypervisor and deployment environments, and RHEL OSP was the choice we went with. We didn't want to build and maintain that ourselves, so we went with a distribution that was already pre-built and had these capabilities built into it. So I'll pass it on to Colin to talk a little bit about the Red Hat RHEL OSP component, and he'll provide some updates on that. Thanks very much. Hi everybody, my name's Colin Devine. I'm a technical business partner development manager at Red Hat; I've been there a number of years. What does that mean to you guys? Well, I focus specifically on a few key partners. I help work their technical issues, joint solutions, joint marketing, go-to-market, everything to make us successful with them. The way I paraphrase it when I introduce myself to people like you is: when I'm with the marketing folks, I'm the technical guy, and when I'm with the technical folks, I'm just the marketing guy. That happens. I like working with these guys at NetApp, though; they know how to rock. Who was at Vegas last week? Anybody? It was a great party; we had lots of fun. Anyway, I just want to talk a little bit about OpenStack, a little bit about Red Hat Enterprise Linux OpenStack Platform, what value we add, and how we work together with NetApp. You probably all get this, so I'll just summarize real quick: we know the workloads are changing.
We know that, as our CEO says, this is the biggest change in our industry in the last 25 years. Since mainframe went to client-server in the late '80s and early '90s, the move from virtualization, or even bare metal, to cloud workloads is going to be a major disruption point in our industry. There are going to be winners and there are going to be losers, and just like all of you here, Red Hat wants to be on the winning side. So we got into this a number of years ago. We see this as a great fit for us and the value we add: taking the craziness that is open source and driving it into the enterprise. Let me just touch on that a little bit, on how Red Hat adds value. As you most likely know, and again I'm preaching to the choir here a little bit, upstream of course releases every six months. Red Hat takes that upstream craziness and recompiles it into something that our users and our partners can use, on either Fedora or CentOS, and that's called RDO, Red Hat's distribution of OpenStack. Don't tell anybody I told you that; it's a code name, okay, RDO. So we take that, we recompile it, and we go out and rock the world with it. About two weeks after upstream, RDO is released; two or three months after that, we test, we QA, we bug-fix, we certify, we work with our partners to make sure their solutions are integrated, and then we release our version: Red Hat Enterprise Linux OpenStack Platform. I don't know who named it, forgive me. We call it RHEL OSP for short, but you're not allowed to know that either. We released version 5 back in August; that was based on Icehouse. For Juno, as we know, we're in that two-to-three-month window, so it should be released in December, maybe early January. That's where we are in the cycle right now.
Again, the value that Red Hat brings is that we take the upstream craziness, you know, the design summit craziness that goes on later this week, and we test it, we QA it, we certify it, we put training on it, we have services focused on it, we partner with our fantastic partners, and we wrap it in a three-year SLA that delivers value to our customers. So the enterprise can get the benefit of quickly moving technology, but in a way that's consumable for them. That's why Red Hat exists: take upstream craziness, make it consumable for the enterprise, while living and breathing the open source model. How do we work with NetApp? We've worked with NetApp in a number of ways; we've been working with them for 15 years. They know open source, they know how to work in that world, they know how to get their drivers upstream, and they've got a number of projects, from the Cinder drivers to the Manila project. If you don't know about the Manila project, check it out; you should be familiar with it. It's going to be pretty cool and awesome. It's still in incubation, but it's rocking out pretty well. We work with them, and we've certified them on clustered Data ONTAP and on Data ONTAP operating in 7-Mode. These guys know how to work in the open source world; for a traditional storage vendor, they know it better than others. I work with a number of partners out there, and I work with partners that don't get it. I work with partners that come to us and say, hey, we want to work with you on Neutron networking. And we say, great. They say, we want to take it and fork it. And we say, what are you doing that for? They say, we want to add our own stuff, and then we want you to support it. That's not how this world works. You work upstream. You add value, you add contributions, you add code, and that gives you authority in the open source world.
And then you work with your partners and your customers to drive that into value they can consume. That's how you win in open source, from the OS, where we have experience, to the cloud, where we're all going now. So that's all I have. Thank you. Okay. So I have the pleasure, at this late hour, of telling you a bit about a proof-of-concept project we had with SAP at the SAP Labs in Walldorf. I work there as a technical marketing engineer for NetApp with many of my colleagues. Our goal in Walldorf is to help our customers use all the NetApp features in an enterprise application environment with SAP. We build technical reports, we do the architecture, and we even build solutions and add-ons in order to optimize our infrastructure for SAP customers. A couple of months ago we realized that both of our companies are pretty heavily interested in cloud: SAP is an OpenStack member, and we are a long-time OpenStack member. And we looked at one of the SAP management tools, a management UI called the Landscape Virtualization Manager, or LVM for short. Note that this is not the LVM you might know from the Linux environment; whenever there's an LVM in my slides, it's the Landscape Virtualization Manager. It's a tool from SAP, an environment to manage SAP: to relocate, to clone, to copy. And it's a perfect instrument for us at NetApp, and for any hardware vendor, to plug in infrastructure-related additional services like backup, like our snapshot and cloning technology, so that you can avoid copying 100 gigabytes or more of data over the wire by using snapshot technology instead; we give that as an added value of our infrastructure. So we asked them, wouldn't it be nice to set up a cloud proof of concept to see how that works together, and SAP asked us, hey, what about OpenStack?
They are interested in OpenStack, and the Landscape Virtualization Manager has a storage API, so they asked us: wouldn't it be nice to investigate whether we could develop an OpenStack adapter, so that SAP calls OpenStack code directly? So those were the challenge and the goals. What we wanted to achieve with it: show how to deeply integrate into a cloud-like environment to demonstrate our added value; utilize the LVM together with another acronym, the SSC, NetApp's Storage Services Connector, basically a little middleware that translates SAP's calls ("do me a snapshot") into our API calls or into OpenStack API calls; and finally, set up an OpenStack landscape and show how that works in that environment. Why is that so special? A lot of the talks before this one dealt with standard web-type applications; here we're talking about SAP. By the way, who knows SAP as an application, or is using SAP as an application? A few hands, at least. So we're talking about large-scale, business-critical enterprise applications. It's very data-centric: the database size starts at maybe 100 gigabytes and goes up to several terabytes. Even the newer form, SAP HANA, is an in-memory database with a persistence layer of several terabytes on the storage. And when we talk about an enterprise-ready application, it's clear we're not talking about local disks but about enterprise-class storage. That's where we can integrate really well and add value. So how did it start with OpenStack? Our goal was to use SAP LVM as an orchestrator on top of an OpenStack cloud environment, and to use Manila for the shared file systems, and not only for shared files: whenever you deal with SAP in a distributed environment, or with HANA (they call it scale-out), you have to have a shared file system, usually NFS. And over the last 20 years, NFS as a data store for SAP has become very common and has many benefits.
Ease of use, ease of maintenance, still highly performant; all the tools like snapshots and cloning can easily be used without any conflicts with open file systems. So the challenge was to set up an OpenStack environment that fits SAP and that uses the Manila driver. We chose SUSE as a partner, which was quite a good match. First of all, those guys sit in the same Partner Port building as we do, close to SAP, basically at SAP headquarters, so communication paths were very short, and we used their cloud infrastructure to set things up easily; after the network challenges had been settled, everything was up and running in less than half a day. But that was a challenge, and many things had to be solved, and I have the challenge of explaining it to you in just four slides. Things like: on what type of disks or aggregates do I place my data and log devices? How do I distribute them? What are the throughput requirements? Do I use 10-gigabit links or 1-gigabit links? How do I design my storage and assign my network in order to get the throughput you really need for running an in-memory database and starting it up in a reasonable amount of time? All of those parts influenced our OpenStack design and our decisions on setting up an SAP system. We have some deviations from the standard, like floating IPs: SAP has its own way of doing virtual IPs and its own mapping. Who of you attended the Manila session, the previous session here? One thing to remember: with Manila you can create a share, but who mounts it from within the operating system? With SAP LVM, LVM takes care of that. So basically, you can use Manila as it is right now, without thinking about who mounts the share, because that is part of the SAP concept here. Looking at it in a little more detail: SAP LVM is basically an SAP-centric UI to control the whole infrastructure.
We can start and stop the SAP systems running on Nova, running on SUSE here. The storage is provisioned via Manila as shared storage, with a data share and a log share, which is the simple case; you can do it differently, but that is the minimum requirement on shares. And we developed a little SSC for OpenStack that helps when SAP calls "clone me a volume", "clone me an SAP database". The SSC translates that into OpenStack Manila calls. Basically, that single clone call is translated into Manila create-snapshot, then create-share-from-snapshot, then allow-access, so that on the fly the snapshot is created, the clone is created, and access is granted. Then the SSC reports the names back: "I've created these share names", and SAP LVM takes care of the rest. It has the information about the shares, it knows on which system it needs to start things, it mounts them, it starts the system up, and it does a lot more, like firewalling, making sure that your copy of production doesn't, say, restart the print job you had started previously on the real production system. It's all taken care of by SAP LVM. And it was a nice way through all of that, because there was just one little obstacle we had to solve, and otherwise it worked out of the box with the Manila pieces we had so far. Of course, as part of this PoC, it's important to note that there's always the little "lab preview" caveat: we are in a lab environment, meaning a lab environment at SAP, with lab code of SAP LVM and also lab code on the Manila side. But it gives you a glimpse of how Manila could help to simplify an SAP deployment and its management, and it is completely transparent in SAP LVM, which just sees another storage adapter there. So, lessons learned: SAP LVM's provisioning features are valid for OpenStack, so they can be used in an OpenStack environment.
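The three-step translation described above can be sketched in a few lines. To be clear, this is not SAP's actual SSC code: the `FakeManila` stub below stands in for a real Manila client (such as python-manilaclient), and all method names, share names, and IDs are illustrative:

```python
# Hypothetical sketch of the SSC translation layer: one SAP LVM "clone" call
# fans out into three Manila operations (snapshot, create-from-snapshot,
# allow-access), and the resulting share name is reported back to SAP LVM.

class FakeManila:
    """Stub that records the Manila calls the SSC would issue for real."""
    def __init__(self):
        self.calls = []

    def create_snapshot(self, share_id):
        self.calls.append(("snapshot", share_id))
        return {"id": f"snap-of-{share_id}"}

    def create_share_from_snapshot(self, snapshot_id, name):
        self.calls.append(("create", snapshot_id, name))
        return {"id": f"{name}-id", "name": name}

    def allow_access(self, share_id, access_type, access_to):
        self.calls.append(("access", share_id, access_type, access_to))


def clone_share(manila, source_share_id, clone_name, client_ip):
    """Translate one SAP LVM clone request into the three Manila steps."""
    snap = manila.create_snapshot(source_share_id)
    clone = manila.create_share_from_snapshot(snap["id"], name=clone_name)
    manila.allow_access(clone["id"], "ip", client_ip)
    return clone["name"]  # the name SAP LVM uses to mount and start the copy


manila = FakeManila()
print(clone_share(manila, "prod-data", "qa-data", "10.0.0.42"))  # -> qa-data
```

Against a real cloud, the three recorded calls would correspond to Manila's snapshot-create, create-from-snapshot, and access-allow operations; SAP LVM then handles mounting and starting the cloned system itself.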
That means we can use those enhanced clone, copy, and refresh capabilities just as they are built into LVM, but with OpenStack storage provided by NetApp clustered Data ONTAP, and the whole NetApp feature set can be used for optimized operations. It's important to note that OpenStack is not equal to OpenStack: you really have to take care about your network setup, how you handle Neutron when you deal with virtual IPs (additional IPs that are bound at runtime within your virtual machine), how to allow that traffic through the Neutron firewall, and so on. There are many things to architect into your OpenStack setup to get it right so that it works with SAP. And there are future projects we may focus on: SAP LVM also has a virtualization API defined, which would be a good place to include Nova, so that SAP LVM could not only clone the storage but also create the instance, clone the storage, and get it all in one shot. So that was a short introduction to what we've done. It's worth mentioning that at 6:40 at the SUSE booth there is a more in-depth technical discussion about this, with a little demo, or live clips, of the project. If you're interested, you're welcome to join that session too. Thank you. Thanks. So obviously the bars are opening soon, and I don't want to keep you from that too long, but we do have a little bit of time for questions, for any of the presenters: for Russell, or Byrne, or Jeff Whitaker. Yes? [Audience question, inaudible.] Well, the thing is that we've already upgraded the API part of our infrastructure to Icehouse, which has a backwards-compatibility layer that allows you to upgrade your compute nodes gradually. So the compute nodes are still running Havana, and there's going to be a rollover period until they're all up to date. In the future there'll be more flexibility around this, but I don't think that we would be trying to skip upgrades.
It's just, I don't know, I feel like there's too much risk in it; but then again, it's not my call. Trying to keep this informal here. Anybody else? Questions? Nothing? Everybody wants to get to the drinks, huh? All right, well, if there are no other questions, thank you for your time and your interest. Appreciate it, and have a great summit.