Good afternoon, everybody, and welcome. Thank you for joining us this evening. My name is Don Bork. I'm the OnCommand Insight product management TME at NetApp, and I'm joined by my colleague, Mr. Kevin Lambright. Kevin, you want to introduce yourself? Sure. Hi, I'm a cloud architect in our Engineering Shared Infrastructure Services group. That's essentially our internal IT organization for our R&D groups.

So let's get this kicked off. The first thing I want to talk about is what we're going to cover in this presentation, a little agenda here. First, we're going to talk about the challenges we've had in our own OpenStack environments, where we've come from, and how we've matured. I'll give you a small introduction to OnCommand Insight and how it's helping us solve some of our problems. I recognize some of our customers in the audience today; some people may already be using OnCommand Insight. Then we're going to show you how we brought some of this visibility up to the business and how we can apply some of these things to cost. You saw a lot in the keynote session this morning around agility, speed, and, again, cost; I'm going to show you how I brought that into our OpenStack environment and what we've done there. And then, lastly, I'm going to go through a quick demo. Kevin's going to recap some of the things he's done in his environment, which is at a much larger scale than what I'm doing, and we'll stop and open it up. It's just a small session here, so if you have any questions, we'll definitely entertain those. So why don't you take it away? All right, thanks, Don.
So let's talk a little bit about the engineering cloud that we've built; quite a bit of it is now consumed by OpenStack as well. Like I said, we're part of what we call the Engineering Shared Infrastructure group. Essentially, we're building software, hardware, and virtualized solutions for our engineering groups, to help them build, develop, and test their products faster. We set out to build what we call our global engineering cloud back in 2013. Prior to that, if somebody wanted a VM, it was your traditional story: you had to file a ticket with IT, you had to provide justification as to why you wanted or needed the VM, and maybe in two weeks you might get that VM. We thought we could do a much better job than that and provide much faster turnaround, so we started building this cloud back in 2013. Initially it was only VMware and Hyper-V, and then OpenStack was added late in 2014. This session is not really focused on the underlying architecture, the use cases, or how we completely automate it with Puppet.
There are sessions later, on Thursday: I have a session on use cases at 11, and one of our team members back there has a joint session with Puppet Labs at 1:30 that goes over the full automation. It's definitely some interesting stuff, but if I went into that, Don would have absolutely no time to talk about his part of the session.

Just a few key stats. This really started back in 2013 at roughly 500 to 1,000 VMs and has grown substantially. Our total capacity today is right around 42,000 VMs, of which roughly 15,000 VMs of capacity are for OpenStack, and at any given time we've got about 5,300 active VMs running with OpenStack. Percentage-wise, that means KVM and OpenStack is roughly 36 percent of our overall hypervisor capacity (not necessarily by VM, but by hypervisor host capacity). A year ago, at the OpenStack Summit session down in Austin, we talked about this and that number stood at about 15 percent, so that shows significant growth year over year. We're also on target to change that number and drive even more scale to OpenStack, to roughly 80 percent, in the beginning of next year.

The foundation of this is our converged infrastructure solution with Cisco: Cisco UCS compute, Nexus top-of-rack switches, and our own NetApp storage systems, with FAS and E-Series arrays. More recently, with the acquisition of SolidFire a year and a half ago, there are now FlexPod solutions based on SolidFire. We are using the community version of OpenStack, the Red Hat community version, RDO. At the lowest level we're actually in transition right now; this slide says Liberty, but we only have a couple of regions that are actually still on Liberty. We're working on a v2 architecture rollout over the course of the next couple of months, and that's going to bring everything up to the Newton release.

One of the reasons we're using the community version: this was a crawl, walk, run exercise. We started with it fairly simply to see what we could do with it, and it's grown significantly; we rely on it more and more in our engineering environment on a daily basis. The entire thing is automated by Puppet, from deployment all the way through every single release. When we do an upgrade, it's a completely non-disruptive upgrade, all orchestrated by Puppet. Like I said, go to the session on Thursday at 1:30 and you can learn all about that.

Why OpenStack? I talked about how we built this cloud back in 2013, and we've been scaling it ever since. It's been quite popular, as you can imagine, with our engineering community, enabling much greater agility for their processes. And, quite frankly, we wanted to avoid vendor lock-in as we were scaling and growing this cloud. Part of it was that we also wanted to reduce our ELA costs with VMware, to be quite frank; that was the beginning part of it. As it's grown over the years, we've seen the capabilities and how we can scale OpenStack, so that's no longer the number one factor. Obviously we still want to avoid vendor lock-in, but we're at the point where, within under a year, the majority of our cloud will be powered by OpenStack. So at that scale, we definitely have some monitoring challenges, starting with just basic monitoring.
Up-down stats, alerts, thresholds: all of that's covered by a commercial tool we use called Zenoss. There are a number of open source tools out there, Nagios and Zabbix and quite a few others; this just happens to be a tool we've been using in our data center for years and years, and it provides the basics. Not an issue. We're not currently using the OpenStack ZenPack; maybe we'd get more insight into the overall environment if we were, but if we're going to throw more money at them and buy more licenses, we really should be looking at some of our own internal tools first.

In terms of logging, out of the box there really is no centralized logging with OpenStack. You can certainly route everything to syslog and then use rsyslog to send it to a central server, but we wanted something with a little more intelligence than that. We use an open source tool called Graylog to do our log consolidation. That's pretty cool and provides us some interesting insights; we're just getting started. We just centralized all of our OpenStack logs there, so we're working on what we can get out of that.

But as you can see, we've already got two different tools that we have to go to, and it really ends up being more than that; there are a number of other monitoring tools we've put in place. And even with all of that, it's not just having to go to a bunch of different tools; we still have gaps. We don't have that end-to-end view of our entire OpenStack environment. In addition, with all these tools there is no correlation engine. So when an engineer files a ticket and says, "Hey, there's a performance problem here," we've got no way to drill down and figure out, oh, that's because this volume over here in the storage stack is running hot, or is 90 percent full, or something like that. At that point it becomes an exercise in digging into the different components.

And then one of the big things is this last bullet here: lack of visibility into VM utilization. Over the course of the last four years we've made it very easy for engineers to go create their own VMs and spin up resources, and that's been fantastic. But we can't really tell how they're being used: whether they've got undersized, under-provisioned VMs, or oversized, wasted capacity; whether they're only using 10 percent CPU utilization, or maybe sitting idle for weeks or months at a time. So certainly, as we mature this, we would like to get that level of insight. So, Don, maybe you can talk more about your environment and how you could help us out here.

So, similar to Kevin's story, we have an OCI development OpenStack environment, a different cloud from Kevin's, and although not at his scale, we adopted it for some of the same reasons. We needed the agility; we can't wait ten days to get a new resource provisioned out to us. But what we found very quickly was that once we opened up this Wild West to our development teams, which was really great, gave us agility, and got things done much quicker, we ran into the same traditional problems of running out of capacity. So we found ourselves having management meetings, talking about when we were going to buy more, how we protect ourselves from running out, and how we ensure that we have performance on these workloads that we're doing for production.
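The utilization gap Kevin describes, oversized or idle VMs nobody can see, is straightforward to screen for once you have per-VM utilization samples. Here's a minimal sketch of that idea; the thresholds and the sample format are my own assumptions for illustration, not anything specific to OCI or OpenStack:

```python
# Flag VMs that look oversized or idle from periodic utilization samples.
# Assumed input: one entry per VM with a list of CPU-utilization percentages
# (e.g. hourly samples over the review window). Thresholds are invented.

def classify_vm(name, cpu_samples, idle_pct=2.0, low_pct=10.0):
    """Return (name, 'idle' | 'underutilized' | 'ok') for one VM."""
    if not cpu_samples:
        return (name, "idle")          # never reported any load
    avg = sum(cpu_samples) / len(cpu_samples)
    if max(cpu_samples) < idle_pct:
        return (name, "idle")          # effectively doing nothing all window
    if avg < low_pct:
        return (name, "underutilized") # provisioned far above its real demand
    return (name, "ok")

def report(vms):
    return [classify_vm(n, s) for n, s in vms.items()]

if __name__ == "__main__":
    vms = {
        "build-01": [85, 90, 70, 95],   # busy CI worker
        "demo-old": [0.5, 0.1, 0.3],    # forgotten demo box
        "qa-14":    [3, 5, 2, 4, 6],    # four vCPUs, barely used
    }
    for name, verdict in report(vms):
        print(name, verdict)
```

In practice the samples would come from whatever collector you already run; the point is simply that "idle for weeks" and "10 percent CPU" become mechanical checks once the data is in one place.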
So we're running our OCI development, all of our software releases, through it. We have hundreds of developers running their software through it, we have QA departments using that OpenStack environment, and we even have some of our sales and product teams using it, and it became very complex very quickly. So what we did was say, hey, we build a software product called OnCommand Insight; why don't we create a data source for OpenStack and monitor our own development environment? So that's what we've done.

We've come out with this new OpenStack data source that gives us end-to-end visibility from the compute and hypervisor all the way down to the spindles. Now, OCI is a licensed product; many of the larger enterprises out there, some in this room, already employ it. And what is my purpose here? I make no money on sales, so let's just put that on the table. My job here is to bring you visibility and awareness that this may already exist in your environment today, and that you can go talk to your colleagues, because one of the important factors here is the visibility it gives you across these silos. We're looking from the compute or hypervisor level, through the guest, all the way down the fabric to the spindles where the data resides. We do all of this over IP; everything is agentless, read-only, out of band, so it's non-obtrusive and very simple to deploy.

We recently came out with this OpenStack data source; I believe we support all the way back to the Kilo release. Our own environment is actually a little ahead of Kevin's: we're running the Neutron release right now... Newton, sorry, my bad. So we're running Newton there. But it's very, very flexible and easy to deploy: point it at an IP address and give it a read-only username and password, as with most of the devices we support.

On the back-end array side, I saw some EMC folks in the crowd; it doesn't have to be NetApp. It could be EMC (I'm not pointing you out, sir, in the back of the room). It could be any storage: IBM, Huawei, Pure, 3PAR, Dell; it does not matter to us. When it comes to your hypervisors: OpenStack, great, but also VMware in the back of the room, as well as Red Hat Enterprise Virtualization; it could be IBM Power LPARs. We can do all of those, as well as your guest operating systems.

Okay. So I mentioned the different vendors, and this is just a handful; my challenge in talking about OnCommand Insight today is all the stuff it can do, which is very difficult to get done in 40 minutes. This is just an example of some vendors we support.

Now, we also opened this up. I've seen a bit of a shift in the industry. I was down in DC about a month ago, the internet capital of the world, right, talking to some very large companies down there. I spoke to about 17 of them, and one of the shifts I'm seeing is that people are actually moving their workloads from the public cloud to their private clouds; they're looking for ways to move those to their OpenStack environments. But once they do that, they need to be able to ensure those SLAs. They need ways to monitor both the public cloud and the private cloud, and they need to be able to do things like baselining, ensuring those SLAs, and tying it to cost: understanding which medium, which platform, is the most cost-effective.
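Baselining an SLA metric, as mentioned here, can be as simple as learning a normal range from history and flagging anything outside it. A minimal sketch under that assumption; the mean-plus-three-sigma rule is a common convention I'm using for illustration, not OCI's actual policy engine:

```python
# Baseline a latency series and flag samples that breach the SLA band.
import statistics

def baseline(samples):
    """Mean and population std-dev of the historical window."""
    return statistics.mean(samples), statistics.pstdev(samples)

def breaches(history, recent, n_sigma=3.0):
    """Return (index, value) pairs in `recent` above mean + n_sigma * stdev."""
    mean, stdev = baseline(history)
    limit = mean + n_sigma * stdev
    return [(i, v) for i, v in enumerate(recent) if v > limit]

if __name__ == "__main__":
    history = [2.1, 2.3, 1.9, 2.0, 2.2, 2.1, 2.0]  # ms, steady state
    recent = [2.2, 2.1, 9.8, 2.3]                  # one obvious spike
    print(breaches(history, recent))
```

The same shape works for comparing a workload's behavior before and after a move from public to private cloud: learn the baseline on one side, evaluate the other side against it.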
There's also the need to identify waste in the environment, and this is something we've seen in a lot of environments. I walked into one environment and there were hundreds of thousands of dollars of lost, I would say, revenue: things like stranded EBS volumes, underutilized VMs, too much compute or too much memory allocated to those resources. OCI has a number of ways to identify those, whether by configuration (maybe a volume sitting out on the fabric that nothing's accessing) or by performance (maybe it hasn't seen more than 10 IOs over the last 30, 60, 90 days). There are a number of ways we do this in the product.

I'm going to talk a little bit about how we deploy, just so you understand the scale and scope of this. OnCommand Insight will support up to 250 storage arrays, up to 20,000 virtual machines, 100,000 Fibre Channel paths, and tens of thousands of ports in a single server. That's huge scale; that's immense scale. Now, everything that we put in that one server, I'll call it the operational client, we can also roll up to a federated data warehouse and reporting console. So if you have multiple geographic locations, you can have multiple servers deployed, or maybe you have a monolithic environment like some of our banking customers do; you can report them all up to a single federated reporting warehouse, so you get consistent insight across your entire ecosystem.

I mentioned the benefits of OpenStack and what Kevin is looking for: he's looking for ways to reduce this waste and get more visibility into his environment. The way we do this with OCI is with a lot of analytics. We use correlation analytics to map the logical and physical constructs of the service path from the guest all the way down to the spindle. In a shared services environment, it's very difficult to understand where that storage is being delivered from, so you need visibility into all the vendors, or at least the big players, and you need to understand where that storage is being served from. With OpenStack, we'll show you the virtual disk, map it down to a LUN or a volume, and go all the way down to the spindles if we support that storage system, and you'll see all of those utilization rates up at the top.

Now, if anybody in here works with storage: when we have a performance problem, the first people we usually point at are the storage guys, right? They're the guys that are guilty until proven innocent. And most of the time they are guilty; I'll say that being a storage guy myself. But you can show and bring all of this visibility to all the teams. OCI is simple software; it's browser-based, you provide URLs, and they can log in themselves. So our developers and Kevin alike can log into OCI. He no longer has to guess how the performance is on his system; he can see instantly whether it's a problem with his application development, or ONTAP Select, or a problem with the infrastructure supporting his application. It rules all of those out. I like this little slide; this is the truth.
The best ticket is one that's never been opened. So I give end users the ability to do their own troubleshooting. One of my colleagues once told me about an application owner who called him up every single day saying, "I'm having a performance problem." Every day he runs over to the array or the virtualization teams, and they're running collections on the stats and doing all kinds of reports to prove it's not a performance problem. Then one day they get this installed.

Now, Kevin is what I call my customer zero. He is my customer; I have to prove the value of this product every single day to Kevin. He's in a different business unit than me, he's responsible for different things, and he has to make his own decisions about which tools he's going to use to manage his environment. So I go to Kevin, as well as our customer one, which is more of our customer-focused team, I would say our engineering team, and that team told me that this individual is now logging into OCI instead of calling up every single day. They actually thought he'd left the company just because his phone calls stopped. They go inside OCI, they see the audit trail of who's logging in, and sure enough, John Smith is in that list, every single day, logging into OCI. So we're enabling that self-service-type portal for people to root-cause their own issues.

I mentioned a little bit about allowing them to go in there. Kevin has a help desk, a ticketing system, and there are going to be those times when you do have performance problems or events. OCI is an open platform, similar to OpenStack, so everything's extensible: we have a fully published REST API and an extensible MySQL database.
So we allow you to integrate us with any business services or solutions in your environment. In this particular example, our customer one is integrating us into ServiceNow, so every time someone opens a ticket, they get a URL directly into OnCommand Insight where they can see all the performance for their assets, end to end, across the entire application stack, from the app all the way down to the spindle. Many times the problem is solved right there. It's very convenient to give people back the power and visibility into their infrastructure.

The next thing we do here is service assurance. I talked about how some of these customers are looking to place those workloads, some of these maturing workloads, on our OpenStack. So what we have done here is provide them some service assurance. We're looking at things like masking, mapping, zoning, the iSCSI connection counts, the security sessions for iSCSI. We monitor those against the service path, and when there's a violation of our defined policies, we'll alert them, whether by email, SNMP, or syslog event; there are a number of different ways, as well as a login or a NOC-type display, which I'll show you in an actual live demo.

We also track all the changes. One of the truths in our infrastructure is that an event or a performance problem is usually the result of a change, whether planned or unplanned. So being able to go back into the tool and see what types of configuration changes have happened in our environment is a useful capability in our troubleshooting.
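An integration like the ServiceNow one mentioned here largely boils down to stamping each ticket with a deep link into the monitoring tool for the affected asset. A sketch of that idea; the host name and the URL path shape are hypothetical placeholders, not OCI's documented link format:

```python
# Build a deep link from a help-desk ticket into the monitoring UI for the
# affected asset. OCI_HOST and the /web/#/assets/... path are illustrative.
from urllib.parse import quote

OCI_HOST = "https://oci.example.com"  # hypothetical server name

def asset_link(asset_type, asset_name, hours=24):
    """URL a ticket can embed so the user lands on the asset's
    performance page with a sensible time range preselected."""
    return (f"{OCI_HOST}/web/#/assets/{quote(asset_type)}/"
            f"{quote(asset_name)}?timeRange={hours}h")

if __name__ == "__main__":
    # e.g. a ticket about a slow VM gets this link attached on creation
    print(asset_link("virtualmachine", "exchange-vm-01"))
```

The ticketing system only needs the asset name and type from the alert payload; everything else is string assembly on its side of the integration.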
I can't tell you the number of times people have told me, "Hey, Don, at 2 o'clock the performance was horrible." And rather than start troubleshooting at 2 o'clock, we jump into OCI and I see the change happened at 11 o'clock, just before lunch, and that saves me a lot of time looking at the wrong logs.

The other aspect here is that OCI is always on, always monitoring, 24/7. There's no need to enable stats collections or logging, say for our EMC gear; we're always out there collecting this stuff, so you're going to get the information. We hold 90 days of operational information, that near-real-time information; our latest release increased this to 90 days. So if you're looking for the real-time points, you'll have 90 days of the performance and capacity information. But our reporting warehouse is the long-term historical and forecasting capability: day-over-day, week-over-week, hourly, monthly, quarterly, year-over-year summary information in our data warehouse, and that's kept forever.

So with OpenStack there were a lot of things we were seeing in the environment. Capacity utilization was one of the big things, and I was seeing a lot of hypervisor contention; as a matter of fact, I'm one of the biggest culprits. How many people here, when they work with application developers and ask, "What do you need for infrastructure?", hear them say, "Give me the slowest thing you have"? No one does that, right? Not a soul. Usually they say, "I don't need the fastest thing, just give me something in the middle," and that's what I find in our environment: everybody's in the middle. Our high-tier, costly stuff is being wasted, nothing's utilizing it, and our lower-tier stuff is not being utilized either. So we look at those trends in OCI. I'm usually the guy getting a call saying, "Why did you need four cores on this box? It's doing 10 percent CPU utilization."
I'm one of those culprits. But this is giving us visibility into that, and we have operational trends; we can look at them over the course of a day, understand them, and identify them to the end users. We have annotations, similar to what you'd see in Amazon, where you can put metadata or tags on the assets. So we can go back and understand what business entity, what line of business, what tenant, what project, what user, even our own little data if we want to put a tag or a note on a particular resource. That gives us a lot of visibility.

Lastly, when it came to moving into a more mature environment, we needed to be able to ensure our SLAs. A lot of our releases had high performance requirements; we needed to make sure that things finished in time, or we'd run the risk of not meeting our release schedules. So, to maintain these SLAs, in our reporting warehouse we're able to put SLAs and SLOs in our reports, not just for OpenStack but for any device we support in our environment, and we measure over time to make sure we're meeting those. This is a soft way of getting around QoS-type policies in the environment; if you're not leveraging those, this was another method we employed.

And then, lastly, there was capacity, or understanding cost and accountability for it. Everything in our data center has a cost to it, and our business executives measure us based on the amount of revenue we can bring in, so we needed to be able to define cost and control it. OCI offers a lot of showback- and chargeback-type capabilities, but most people are a little fearful of deploying that; quite honestly, they find it complicated. Do we charge fully burdened or unburdened?
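One way to sidestep the fully-burdened-versus-unburdened debate is a flat per-gigabyte showback tracked against a budget. A toy sketch of that arithmetic; the tiers, rates, and budget figure are all invented numbers, not anything from OCI:

```python
# Simple per-gigabyte showback: capacity per tier times a tier rate,
# tracked against a budget. Tiers and $/GB rates below are made up.

RATES_PER_GB = {"gold": 0.50, "tier1": 0.25, "tier2": 0.125}  # $/GB/month

def showback(usage_gb, budget):
    """Return (cost_by_tier, total, over_budget) for one tenant."""
    cost = {tier: gb * RATES_PER_GB[tier] for tier, gb in usage_gb.items()}
    total = sum(cost.values())
    return cost, total, total > budget

if __name__ == "__main__":
    usage = {"gold": 2000, "tier1": 8000, "tier2": 20000}  # GB per tier
    cost, total, over = showback(usage, budget=5000.0)
    print(cost, total, "OVER BUDGET" if over else "within budget")
```

The value isn't in the arithmetic; it's that a flat rate per gigabyte is simple enough that teams will actually adopt it and track toward the budget.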
OCI uses a simple cost per gigabyte: take your capacity, define a cost, and then use a report similar to this to track toward a budget. Everybody has a budget for their ecosystem; we're simply tracking our tiers or our capacities against those budgets. Some people don't have a budget, let's put it that way, but most of us have some idea of where we want to be, and we want to track toward that. This is just a simple example, one report out of many.

So I'm going to do a little bit of a live demo here; I'll walk you through some of these recordings so you can actually see how the product works and get a little better understanding. Hopefully this starts up. What we're doing here is looking at OnCommand Insight. This is just our web page; I'm logging in and getting some high-level details about my environment: the vendors in my environment, which happen to be NetApp and EMC, or Dell EMC. I have some information about the tiers, whether it be gold, fast, tier 1, tier 2, extreme, and we get the capacity breakdowns, the amount of raw and usable. I have information about my fabric environment, my Brocade, my Cisco, and QLogic, as well as the firmware versions, which is really important for regulatory reasons. I get a lot of high-level facts about my environment: how much capacity for those tiers, how much of my environment is virtualized, the amount of capacity for my datastores and my VMDKs, the busiest fabric in my environment. And then over on the side here we'll have a top-10 storage pools view in a moment, showing a breakdown of all the aggregates, storage pools, or disk pools, depending on your vendor technology, and the amount of usable and used capacity for each. Everything in this dashboard is interactive.
As you hover over graphs and click on them, they'll drill down into more detail, and you'll see a little legend at the bottom that I can switch. This screen here is a heat map: the larger the font, the more IOPS that device is generating. That was for storage; the same applies here for our virtual machines.

Now, once we start collecting all of this information and bring it into your environment, we can set up policies to monitor that environment. We have a number of dashboards out of the box that we ship, and you can also create as many dashboards as you want; everything's very flexible in here. We have complete ownership of who owns which dashboard, and it's built on a widget library: we have over 21 widgets in our library to choose from. You can see here an example of an OpenStack dashboard that I created. It gives me a breakdown of all the OpenStack data sources in my environment; I think I have about 74 VM instances running currently.

This is an example of the widgets we have, so I can bring in all the metrics I want to see. What you're looking at right now, the checked items, are the only metrics I'm showing; you can see the number of unchecked boxes here, so there's a lot more information I can bring in. These are all user-configurable widgets: you can create them, sort them, and add filters to them. We also allow roll-ups and aggregation functions for these widgets, so you can see your sums, your totals, your max, your averages, your medians, and so forth.

Once you create the widgets, you get something like this box plot, one of those designs where you're looking at your 25th and 75th percentiles, your median, your max, your averages. These are all the OpenStack VM instances. We also have, just below this, a little scatter plot, and I kind of like this design. This one happens to be looking at IOPS and latency, and it helps me prioritize my time: when I'm looking at this, if something's down in the corner, it's really not of importance to me, because it's not generating any IO; those outliers out on the plot are what I'm focusing on.

Next over here we have a stacked graph, and I'm also overlaying a line chart on top of it, so I can compare any metrics together. They could be the same metric, or they could be virtual machines against the storage resource in the back: any metrics you want to plot. This one here I like, and I use it in Kevin's environment and others, and for OpenStack. It's looking at, for example, virtual machines with X amount of capacity allocated to them and a number of processors, but low utilization. It'll rank them all, and when I click on these, again, they're interactive; they'll drill me right into the landing pages, which I'll show you in a moment. The same goes for memory and any other metric you want in OpenStack; you can plot those on here. You just check the metric you want, bring it into the graph, and sort it.

The thing I did here was very easy: this whole dashboard probably took me about 10 minutes to create. It was that simple, and this is something that's new in the latest release. Any administrator in the product can come in and create their own dashboard views. It doesn't have to be OpenStack; I'm doing that for this presentation today, but it could be any storage or virtualization platform you like. And then, lastly, we can also trend that over time, so you can see the ebbs and flows in the performance. Is it a one-time spike? Is it happening over the last 5, 7, 30, 90 days? We can drill all the way down. You can see here.
They're interactive; I can change them and add filters to them as well. Okay, so, a lot of flexibility in the widgets: change them around on the fly, save them, send the URLs off to your teammates, print them as PDFs. There's no limit to what you can do here. We can also put variables on these dashboards. So I create one dashboard here, put a little variable up at the top, and I'm using a text variable where I can just type in the name of a virtual machine, or a partial name, and it will change all the widgets on that one dashboard to show just that particular virtual machine instance. So, a lot of flexibility there. I can also use all the annotated data I've put in here, like data center: maybe I have a data center location in Boston and one in Tokyo, and I can put the data center name in there. I can use an environment name: maybe I want to look at just my production assets, or just my development assets. With this we're providing the consolidated view; I'm looking across all of my environment, so it's giving me one view across all of those suites.

In this particular example, I'm bringing up a virtual machine that has a reported latency issue. I could have gone to the global search window and brought it up there, or I could have come over here to our violations dashboard and looked at the breakdowns of the policies, maybe analyzed the time of day over the last 24 hours. We also have at the bottom here a list of the last thousand violations, so I can simply sort the violations and click on the ones I'm interested in, maybe by criticality, or by an application that's associated with it, or a business entity. On the screen here you can see that I have a policy that's been violated.
I'm clicking the ID; no need to go in and find the event. It brings me exactly to the time of day when it happened and gives me all the details about that virtual machine: the guest operating state, the CPU utilization, the memory utilization. These red dots on the screen are the performance policies that have been violated, and down below you can see the metrics. In a second here I'll scroll down just a little bit more, and you'll see that I can look at all of the metrics across a timeline. I'm actually looking at the last three hours; I'm going to increase this to 24 hours so we can look at that, and again, I can go out to 90 days and look at this performance. You can see on this Exchange server there have been a lot of violations, a lot of data points. We also get all the summary information for this Exchange server, the VMDK information, or the VHD if you're on Azure. All of this information I could plot out onto graphs as well; I just save some space here and keep those things rolled up. I can see that same performance information for my file system utilizations; I can see the compute and the fabric information, so the zoning information for my environment, the masking and mapping entries, as well as the violations that occurred.

Now, on the right-hand side, what I'm showing you is our correlation analytic. This is the neat stuff; this is what gets me up in the morning.
This is what we're doing: we're looking at everything that's logically or physically connected to this VM, and if we find a correlation we're going to put it on the right-hand side and rank it. What I'm seeing here is that the latency is, I believe, ninety-something percent correlated to the latency being seen on this particular volume, because we understand that end-to-end service path. We're going to map it through; you don't need to know only that you have a CPU problem or a latency problem. So, being an L1 administrator, you can simply follow the breadcrumbs on the right-hand side. Here we have all the details for the volume: we can understand the tier that's associated with it, and we can see the performance again. I can also see that we now have a new resource here, which we call a greedy resource, or a bully, in this environment. This is another asset that's sharing the same resources and impacting this particular virtual machine. When I select the checkbox, it brings up its performance on top of the other, and I can see there's a very close correlation to the latency we're seeing on that particular volume. So my next step would be to click on this greedy volume and understand: why is it being a bully? What's going on there? I can see all of the efficiency information: has it been thin provisioned, and what is it associated with? I can see it's associated with a new travel booking application.
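The correlation ranking described above can be sketched as follows: take the victim VM's latency series, compute a correlation against each connected resource's series, and surface the strongest match. This is just a Pearson-correlation illustration with made-up data, not OCI's actual analytic.

```python
import math

# Sketch of correlation-based ranking: compare the VM's latency series
# against each connected resource and rank by Pearson correlation.
# All series and resource names here are fabricated for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

vm_latency = [5, 6, 20, 22, 21, 7, 5]
candidates = {
    "volume_travel_app": [4, 5, 19, 21, 20, 6, 4],    # tracks the VM closely
    "volume_unrelated":  [10, 9, 10, 11, 10, 9, 10],  # flat, unrelated
}
ranked = sorted(candidates,
                key=lambda k: abs(pearson(vm_latency, candidates[k])),
                reverse=True)
print(ranked[0])  # the resource most correlated with the VM's latency
```

In the demo, the top-ranked resource is what shows up with its "ninety-something percent" score on the right-hand side.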
So I started off in an Exchange application, and now I'm looking at a travel booking application. I can see all the performance details, and what I see here that's interesting is a new virtual machine. I can see that the IO demand here is, I can't see the exact number, but let's say ninety-something percent correlated to the IO demand from this virtual machine. So it's correlating. For those storage guys who always get blamed at the beginning: what we thought at first was a storage problem, because of the latency on the Exchange server, we're now tracing back up the stack, and we see that there's a virtual machine on the other end actually causing the problem. So here we are looking at this virtual machine. We see the performance information, and we're going to bring in some additional metrics: CPU, memory, and so forth. I'll scroll down here; I actually found this at VMworld a couple of, maybe five, years ago. What I can see here is CPU utilization at a hundred percent. I'm not seeing any red there, which tells me I didn't set up a performance policy to monitor CPU, but I did see memory being ticked off. So it's a hundred percent on my memory, a hundred percent on my CPU, and I'm also swapping to disk. So I put my little technical practitioner hat on: what I'm seeing is that I'm swapping to disk, which drives additional IO, causing the IOPS and utilization to go up on the aggregate, and therefore causing the contention problem. What I'm showing you here next is the open platform. We have a fully published REST API in our application, so we provide you all the popular methods: PUT, GET, POST, and so on. We have a response test tool here, so we can put in some of the keys to query those REST calls, test them, and, as a developer, actually see what the output looks like when it comes back. So no more of that "try and pray" that the information comes back in the format that you expect.
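As a rough idea of what calling a REST API like this looks like from a script: build an auth header, form the asset URL, and pull a few fields out of the JSON response. The endpoint path, field names, and credentials below are assumptions for illustration only, not OCI's documented schema; check the product's published API reference for the real endpoints.

```python
import base64
import json

# Hedged sketch of a REST client for an API like the one described.
# asset_url() path and the JSON field names are invented for this example.

def auth_header(user, password):
    """Build an HTTP Basic auth header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def asset_url(base, asset_type, asset_id):
    # Path shape is an assumption, not the documented OCI route.
    return f"{base}/rest/v1/assets/{asset_type}/{asset_id}"

def summarize(payload):
    """Pick a couple of fields out of a JSON asset payload (names assumed)."""
    data = json.loads(payload)
    return {"name": data["name"],
            "latency_ms": data["performance"]["latency"]["total"]}

# A live call would look roughly like this (needs the requests package):
#   import requests
#   r = requests.get(asset_url("https://oci.example.com", "virtualMachines", "123"),
#                    headers=auth_header("admin", "secret"))
#   print(summarize(r.text))

sample = '{"name": "exchange-vm-01", "performance": {"latency": {"total": 12.4}}}'
print(summarize(sample))
```

This mirrors the workflow of the response test tool: issue the call, inspect the raw JSON, then write code against the fields you actually see.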
What I have next is our dashboarding, or our reporting, capabilities. We have a lot of out-of-the-box reports. We also have a full suite on an automation store, which is also an open community: you can download all the reports that we have, and our customers share them with other people. We also have a report authoring tool in our suite, so you can create your own drag-and-drop reports. I'm showing you just a simple example of creating a report on the fly. I'm not a report or BI guy; I'm simply dragging the metrics over and looking at the information, and it happens in real time. We have a built-in intelligence solution in our product which helps us with things we do in capacity planning all the time, problems like double counting, which is a constant risk, especially in shared storage environments. It's very much like Excel here, and this is what I can do at my level; there's a lot more you can do with this product, and the professional services catalogs that we have show you all types of examples. Now what I'm doing here is creating my own custom formulas. I'm creating things like utilization for my capacities; this being NetApp, we're a storage company. I can also add some visualizations and put it into some graphs and charts; I just threw it here into a column chart, or a bar chart, to show you. Now, these reports are also interactive. Once I run this report and play it, you'll see that I can actually drill down, and I love this feature, because when I'm working with my management and they want a report, I send them one; I send them the top level of this report, and whatever they want to see, they can drill down to just by clicking on the report. So it's really cool, really flexible, and as you can see, we can keep drilling all the way down.
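The double-counting problem mentioned above is easy to see with a toy example: in a shared storage environment, summing capacity per host counts a shared volume once for every host that mounts it. Deduplicating by volume before summing avoids that. All names and sizes here are made up.

```python
# Sketch of the capacity double-counting pitfall in shared storage.
# Each host lists the volumes it mounts as (volume_id, size_gb) pairs;
# "vol-shared" is mounted by both hosts. Data is fabricated.

host_volumes = {
    "esx-01": [("vol-a", 500), ("vol-shared", 2000)],
    "esx-02": [("vol-b", 750), ("vol-shared", 2000)],
}

# Naive per-host sum counts the shared volume twice.
naive_total = sum(size for vols in host_volumes.values() for _, size in vols)

# Deduplicate by volume ID so each volume is counted exactly once.
seen = {}
for vols in host_volumes.values():
    for vol_id, size in vols:
        seen[vol_id] = size
dedup_total = sum(seen.values())

print(naive_total, dedup_total)  # 5250 vs. 3250: the 2000 GB volume counted twice vs. once
```

A capacity report built on the naive sum would overstate consumed capacity by the size of every shared volume times its extra mounts, which is exactly the kind of error the built-in intelligence is meant to catch.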
I mentioned we have this community. It's on the automation storefront; if you're familiar with some of our other products, like WFA, we also serve our packs out there. There are all types of reports there, so if you have access to NetApp's communities, go take a look; they're all freely available for you to download. It's very popular, probably one of the most popular places for OCI right now. And with that, I'm going to turn it over to Kevin. He's going to talk a little bit more about his development environment, and then we'll wrap things up. Right, Kev? OK, thanks, Don. That's an awful lot; I don't know that we're going to be able to take advantage of all of that, but there are definitely some interesting things there. I really just want to talk here about one thing that we recently introduced, two weeks ago, and why I think OCI might be interesting in this environment. Two weeks ago we rolled out in production a software-defined storage solution for our engineers. This is essentially our ONTAP Select, which is ONTAP as a software-defined product. We rolled that out in our OpenStack environment, by the way. So, a tip: don't ever put something into production two weeks before the OpenStack Summit where you're finalizing your presentations. But we're actually pretty excited about this. It offers on-demand software-defined storage instances for engineers so they can go do development and test against it. Pretty cool stuff. As such, it is a true storage system.
We have put two different environments together: one is a performance environment, backed by SolidFire, and the other is a non-performance environment where you essentially do development, functional tests, and things like that. Because it is a true storage system, people are going to drive load against it, and it is latency sensitive. One thing that resonated with me when we were talking about that was the bully/victim scenario. We could absolutely have people that are over-driving, pushing too much load, even in the performance region; or conversely, in the non-performance area, where they're only supposed to be running functional workloads, they could actually start driving a lot of load. So we certainly would like better insight into what's going on there, and to be able to drill down and figure out who the bully is. And the last thing, which we already talked about: something like this gives us better end-to-end monitoring of the entire stack, and I definitely like that event correlation piece of it. So what's next? Where do we have this? I'd love to say, "Hey, this is rolled out in our entire OpenStack ecosystem and it's doing tremendous things for us," but that's not really the case. Again, like with most things,
it's a crawl, walk, run operation. Right now we have it in our OpenStack dev environment, and we're just playing around with it to see what capabilities it has. The next thing, within the next month or so, is to actually deploy it into our software-defined storage environments, the ones I was just telling you about that we rolled out, and I'm actually more excited about what kind of capabilities and value it can show there. Then maybe over the course of the summer into early fall we'll deploy more widely across that 15,000-VM capacity that we've got in our OpenStack environment. In addition to that, as I said, we use Zenoss; it's not like we're going to turn that off tomorrow. This is something that's well entrenched in our ops organization, something they've been looking at for years. So I'm interested in what kind of integration capability there is with something like Zenoss, or whether there's something we can integrate with our Graylog log consolidation, or even with our CMDB and ticketing system, to provide more automated end-to-end monitoring. Yeah, like Kevin said, he's not going to give up his existing ecosystem; he needs to maintain that, and he needs a smooth transition over to whatever he chooses. So OCI and Zenoss, OCI and Splunk, OCI and whatever product you choose: we provide the open platform to do that. Everything is extensible, everything is wide open. We just want to provide you access to the data; what you do with that data is your choice, and that's what Kevin likes about it. So to wrap up, let's talk a little bit about what we've seen today. We talked about some of the challenges that Kevin's had in his environment; you know, OCI's own development environments are no different.
We did have some of those same challenges with our infrastructure. The analytics that we're leveraging, things like the correlation analytics that I talked about, the bully/victim scenario, or the degraded and greedy resources that we showed you, are the ways we're able to provide visibility into that infrastructure. The integration capabilities of OCI are what we're leveraging and what most of our customers are looking forward to. We've come out with a new SNMP data source to be able to bring in, or poll, SNMP objects, so you can collect any metric from any SNMP device. We have an extensible platform; we can tie into things like ServiceNow, you name it. And then lastly, when it comes to our own environment, we gave our end users, our developers, access to the OCI tool, and they use it as their own self-service portal, so they can understand when they're having problems and when they're not. So that's going to wrap it up for today. I hope this was helpful for the folks in the audience. If you have any questions, we'd be happy to stand around and spend a few more minutes answering them. Yeah, great. Thank you. Thank you.