Welcome, everyone. Hope you're all enjoying the second day of the summit. So I'm going to kick us off. My name is Seth Fox. I'm the VP of Operations for Solinea. I'm here today with Brad Vaughn, who is our VP of Delivery, and Francesco Paola, who is our CEO. Quick special announcement about Francesco: today happens to be his birthday, so when you see him around the trade show, make sure you wish him a happy birthday. That'll go a long way for me, actually. So just a quick bit about Solinea. We're a professional services and software company with a focus on accelerating the adoption of cloud computing, and we work with enterprises and service providers around the world to help them with that. Our focus, however, goes beyond just the technology. When you install a solution like OpenStack, to really take full advantage you need to focus on people and process as well, and we help our customers do that. We have experience with, and understand, how enterprises adopt infrastructure, and we work with them on a regular basis. We've also recently announced a tool called Goldstone, which is preconfigured to understand OpenStack. It's a tool for cloud operators to help them really manage their infrastructure over the long term. So, what we're seeing and hearing from customers, and I think a lot of people are talking about this at this event: agility is a key focus. People need to be able to innovate. They need to be able to compete. And OpenStack is a tool that lets them do that. Our customers are telling us things like: by using OpenStack, our developers can focus on what's needed, and that's the customer, the customer-facing applications. I was actually talking to one customer that found OpenStack a useful recruiting tool. Their developers were leaving because they wanted to go somewhere they could build elastic applications and work on this type of infrastructure.
So not only were they unable to hire new people, some of their best people were leaving to go work in those types of environments. Openness, that's a pretty easy one in this environment. When you see 4,500 people and 1,000 developers working on a project, being able to take that power and bring it into your organization is a great benefit for anybody, no matter what size your organization is. Cost is always a big factor. You want to lower the total cost of ownership. The real way to get there is the harder part, which is operational efficiency. If you're going to take this type of infrastructure into your organization and you're going to give it the standard IT process, that's really not going to get you the cost of ownership that you need. So getting that operational efficiency is really key to making your OpenStack deployments useful long term. So who's using OpenStack? We look around; we're going to talk today about the top-10 auto manufacturer that's going to be the focus of our project planning exercise. But yesterday we heard from Wells Fargo talking about how they're using OpenStack. PayPal's in the process of moving the bulk of their infrastructure onto OpenStack as well, and they're also a big contributor to the project. So financial services is a big player here. Telecommunications: Comcast, AT&T, and Verizon are all using OpenStack in multiple forms inside their organizations, whether it's for their public cloud services or their private infrastructures. From a SaaS perspective, Concur talked in Hong Kong at the last summit, some of you may have been there, about how they're able to enhance their expense product by using the OpenStack platform. HubSpot and Workday are in that space as well. eBay, obviously working with PayPal, is moving the bulk of their infrastructure onto an OpenStack platform. Film and media: DigitalFilmTree was in Hong Kong.
MLB was talking this morning, and the Walt Disney Company was there yesterday talking about how they're getting involved with OpenStack. And then the government: the NSA and many other groups obviously aren't telling exactly what they're doing with OpenStack, but they're all taking advantage of it. So really a wide range of industries are adopting this platform and making it work for their business. And then the other thing that's really critical to making sure it's successful is the ecosystem. You can see it on the expo floor. There are a lot of companies out there building solutions around OpenStack. You have OpenStack at the core; maybe you need training services from Solinea, or you need something to manage your VMs from Scalr, or high-performance storage from SolidFire, right? There are a lot of different solutions out there to really round out your OpenStack solution. So the ecosystem is in full swing to make your solutions work for your business. So with that I'll hand over to Francesco, who's gonna take us through.

Thanks, Seth. So today we're gonna talk about planning your OpenStack project. The way we're gonna do that is to give you a fairly detailed example of some of the work that we've done recently with one of our automotive customers. Before we start there, I wanted to make sure you all understand that when we talk about an OpenStack project, or a cloud project in general, having been doing this for four years now (I was previously with Cloudscaling before starting Solinea), we've seen a lot of organizations dive into cloud without really a purpose. And ultimately, part of the reason those engagements fail is that they haven't really thought it through, or are not really driving cloud based upon the business strategy of the organization. So specifically, you have to justify the investment from a business perspective.
You need to also understand and ensure that you can embrace OpenStack environments within your legacy IT organization. There's no point in building out a POC or a pilot and then letting it run in a closet. Ultimately, you invest in OpenStack, you invest in cloud, you invest in running applications in a distributed fashion, giving you much more agility than you can get today, et cetera. But ultimately you have to take into account not only your legacy IT infrastructure, but also your operational capabilities. You also want to choose the right architecture for the business. We'll see later on, we'll give an example of an OpenStack cloud that supports a Hadoop big data analytics cloud. But ultimately, not all clouds are created equal, right? I think the distribution providers are doing a good job of trying to create architectures that are specific to workloads and that can scale with your needs and your growth. But one size doesn't fit all, and you have to do the analysis and the assessment to make sure that your architecture can scale with the business and support your business drivers. And then finally, something that is overlooked, but that organizations are starting to understand the importance of, is the fact that you have to be ready to transform your operational processes. That includes governance, looking at your organizational skill sets, and looking at your processes to ensure that you don't simply map legacy processes, of procurement or provisioning for example, onto a cloud environment. And that is independent of whether it's OpenStack or something else. So today we're gonna talk about building a car cloud. This is one of the top 10 automotive manufacturers in the world. They had several challenges. One was that, as you know, the product lifecycle for a car is three years, and their IT deployment lifecycle was three years as well.
And so they couldn't really live in that kind of an environment, especially when their competitors were coming out with fairly interesting customer-facing solutions in the car: they were able to take data being generated by cars, as well as by internal and external systems, structured and unstructured, and come up with very insightful information. And so they needed an environment that would facilitate that. Their existing environment also, obviously, didn't adapt to the rapid change in technology. And as they were bringing in more and more of their legacy partners, they were trying to understand: if cloud is supposed to save me money, why am I paying more for infrastructure that you say is virtualizing my environment, when in fact it's not giving me any advantage? They also had massive amounts of data. Massive amounts of data being generated, like I mentioned, by multiple sources. It was sitting in Oracle databases in places like Irvine, California, and in parts of China and Europe as well, and not really being addressed or managed. And they saw their competitors making fairly good, insightful decisions on product prioritization, as well as decisions impacting R&D and quality. And so they needed to be able to harness that power, as well as share information across the business units. One of the things that we looked at in planning this engagement was to make sure that the platform we built and architected enabled business units to share data. So really looking at providing data as a service, a data-as-a-service platform, across the organization. And then finally, bringing OpenStack in, again, like I said, as a catalyst to enable standards across the organization, especially with the software development life cycle. So we talked about the business drivers being a very important aspect of the investment in cloud, and what they saw were the following. It really was around open innovation.
It wasn't just about getting the agility that OpenStack provided across the organization. It was really about helping develop open innovation. Could we build a cloud platform based on OpenStack that supported a wide range of applications across multiple business units, centralizing the management and operational functions to save cost and time and enable agility? There was also the analytics piece, right? Can we gain insights and competitive advantages from the data, assuming the platform could render the information to my data scientists? We talked about data as a service as a goal across the organization. That was actually one of the drivers of how we architected the solution: to make sure that it could be accessed not only within the local market, but also globally by its subsidiaries as well as third parties. We talked about agility. Real time was another important point. So real-time data is being generated by cars all the time. Assuming they could address the cost factor of dealing with the telecommunications providers to transfer that information, they could really harness the power of real-time data centrally to make real-time decisions for consumers in the vehicles themselves, whether it was navigational requirements or lifestyle requirements. And then finally, cost. Ultimately, cost was not the primary driver; cost had to be the third or fourth objective, knowing that it would come if we got the agility that was required, as well as the openness and the ability to share information and data across the organization. So let's talk about the logical architecture. Really, the architecture that we designed was, again, to make what we call the car cloud the strategic enabling platform. So it was more than just building a technology platform that could provide services.
It was really more about enabling internal lines of business across sales and marketing, quality, service, and R&D, allowing them to share the information, as well as allowing external consumers, third parties, and government agencies to share in the usage of that platform, contributing to the information flow that could then be used by the organization to make insightful and intelligent decisions. Again, like I said, whether it was a product decision, a customer service decision, or a competitive positioning requirement. We also had to make sure, obviously, that we were linked to enterprise IT. One of the things we'll see later on is how we structured the cloud group, what we call the cloud competency group. In order to successfully deploy something of this scale in an organization of this size, you can't rely on incubating it within enterprise IT. We really had to externalize it, extract it from the mothership, and incubate it externally. But at the same time, we needed to be linked with the enterprise IT organization from a support perspective, a process perspective, and an integration perspective, for example with their legacy data warehouse systems, which would feed into the car cloud itself. And so one of the things that we had to do was work very closely with the enterprise IT organization. The implementation roadmap: again, one of the things that we found, having made mistakes in the past, was that you can't take a big bang approach, especially with technology as complex as OpenStack, specifically running workloads with fairly high performance and scale requirements. We needed to start small, and we needed to start small in order to prove the concept, both at a technology level and at a business level. So we started with a POC, which Brad is gonna go into in detail.
The POC was really focused on validating the car cloud on OpenStack against a legacy appliance for the same use cases. And in order for us to test that, we had to define metrics and KPIs up front. And that's really, really critical: when you start making investments in these environments, you have to not only define the objectives, as we talked about, but also look at the metrics that you're measuring, because that's gonna be critical in order for you to convince your executives that this solution is the right way to go. Phase one, once we validated the POC, was really more around a pilot. Again, incremental steps to slowly demonstrate the value of the cloud, essentially Hadoop running on OpenStack. Specifically, around a single-rack deployment, we expanded the use cases, which Brad is gonna get into in detail. We again had very specific metrics we needed to measure. And then, using that information, we justified the investment and scaled out from a single-rack deployment to a 10-rack deployment over the next six months. And with that, I'm gonna turn it over to Brad, and he's gonna give you some details about the POC itself and the pilot results.

Thank you, Chesco. So you can see here, this is a picture of the actual POC environment that we were using for this particular customer. A key point when we present this slide, and we've all talked about this environment before, is that there is a continuum we look at as far as adoption of OpenStack. And the key is, we talk about adoption of OpenStack because it isn't a product. You're not buying something that you're going to ship into the environment and install. It is something that affects both up the stack and the operations process, and everything that Chesco was talking about. So there is a step before this where you could do technology evaluations. You don't necessarily need a half rack for that type of environment.
That's where Packstack and all-in-one installs and those sorts of very light evaluation environments come in. The key with this investment is that you have to bring the organization along in their understanding of the technology and their understanding of the value, in steps. And those steps have to be connected: the amount of investment of time and money has to be connected to the quality of the proof. And that is the key to this particular step, because the proof in proof of concept is essential. So what you do to actually prove the value of the environment is really linked to how much you're going to invest in your project at this particular point. So you can see here, as Chesco mentioned, we had a number of different use cases, and we looked through them with the customer and evaluated those workloads. A key thing we look at, obviously, is trying to engage the application owners for those particular environments, because, as you'll see later, there's a lot of testing that needs to be done. Generally, when we're porting applications, particularly if we're looking at existing applications, there's going to be some customization and migration. So whether it's SIs that have worked with those apps, or actual business unit development teams, or IT development teams, we need to engage a lot more people at this particular point in the process. At the technology evaluation phase, that can be a couple of engineers in IT just downloading and testing out OpenStack itself. Once we get to this phase, it's a much more complex environment. In the proof of concept, we still stick with predominantly core OpenStack. We're not layering in a bunch of monitoring tools or other environments around it, because we're trying to keep the cost and complexity low. That way we can execute quickly, and we can get to the next phase quickly.
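On those Packstack-style all-in-one evaluations: as a rough sketch, a single-host throwaway install on CentOS looks like this (the RDO release RPM URL and package names vary by release, so treat the exact lines as illustrative):

```shell
# Throwaway evaluation install only -- not a POC or production layout.
# Assumes a fresh CentOS host with network access to the RDO repos.
sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm
sudo yum install -y openstack-packstack

# One command stands up all core services (Keystone, Nova, Glance, ...)
# on this single host and records the generated settings in an answer file.
sudo packstack --allinone
```

That gets an engineer a working control plane to poke at in an afternoon; the half-rack POC described here starts where this leaves off.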
You can see the notes on the bottom: RDO, straight Hadoop, KVM on CentOS, and Quanta hardware. A very light, very quick solution to get in place for this particular customer. So what we did, as we mentioned before, as far as the use cases: you can see in this particular project plan there's a reasonable amount of work up front. The real variance in executing POCs, what really affects the timeline, is that first segment where you're doing the analysis: which use cases you can use, which of those actually have test cases or test harnesses to validate and prove the concept, and how you go about documenting what's going to be the valuable outcome of the proof of concept. That can really affect the very first part of the project. The next part, the part that often gets a lot of attention, is really the one we find we can execute fairly consistently within a two-to-four-week period. And that is basically just install and config of the environment and basic integration work. And then you can see that the largest consumer of time in the POC is the actual execution of the tests. It is not a product, so we're going to be doing iterative testing around tuning to make sure we get the maximum possible benefit from those environments. But there's also setting up the test cases, and there are a lot of different organizations involved in executing those tests. So you can see that selecting the tests, how many you do, and how complicated they are is really going to affect the cost and timeline for doing this sort of POC. So in the middle: some of the things that people think should be fairly easy, but that trip up the majority of POCs, are on the left of this slide, and that's really around equipment.
You would think nowadays that equipment would be a fairly commodity-based thing to get together, but unfortunately in enterprises it can be challenging to get the stuff in the door through the purchasing processes, implemented, racked and stacked, tested to a basic level, and operational. So you need to be very careful about that when you're doing your project planning; make sure you have that sorted out. The next discussions that happen are really around the software and the data: which distributions or which code we're going to use. In the POC, it's less of a concern around the actual distribution, because you do plan to actually rebuild this environment. It's not something that's going to persist into production. So you can go with very cutting-edge code. You can go with less-than-packaged distributions. It really depends on the requirements of the customer. The distribution's supportability is not the critical factor, unlike what we'll see in later planning phases. What is critical, as we said, is that we really need to test with real applications. So either we're doing tests on applications that exist today, or we're doing side-by-side testing with an alternative deployment method. In this presentation, we will focus on the outcome of one particular side-by-side evaluation. But when you're looking at the application software licensing: who'll install it, who'll customize it, who's got the testing tools, who'll do all the testing, installation, and configuration? And particularly for large data sets, particularly if you're looking in the consumer space, getting access to consumer-based data sets is very challenging. So we have to look at whether we're doing data substitution or cleansing of data for the testing process. Those sorts of things take time. You've just got to go through the process. And finally, we find in most enterprises that security is a separate organization.
It's often an afterthought, so you need to make sure you pull those guys into the conversation early, or it is going to bite you halfway through the process, particularly when you're using data sets, okay? So get it in early; get the pain of having those discussions over with. We have found, quite happily, that we can deal with all privacy and security discussions with those organizations. It really just becomes an education process on what security in an OpenStack world actually means. So in the POC we did one particular test case. We talked about a number of different use cases, and what you'll find when you're doing your POCs is that one of those use cases will shine above all others, okay? And that will be because of the quality of the result, as far as the impact it has for the business and the verifiability, or the validity, of the actual test results. In this case, it was looking at some telematics data and doing a particular MapReduce effort on millions and millions of rows of data being streamed in from the cars. And we had a side-by-side test with a legacy environment that just happened to be on site, being deployed at the same time. So it was picked up as a use case, and we did an alternate deployment on the OpenStack environment. And you can see, as far as the cost of the environment, we had $125K versus $1.2 million worth of legacy hardware. A bit of services: three weeks' worth on our side, while the two weeks for the legacy appliance was actually engineering effort on site for them initially. And the reason their effort was less than ours is that they didn't have a lot of options for changing their environment, okay? They couldn't keep reconfiguring during their testing process, because they didn't actually have the engineering talent to do it on site at the time, okay? That came much later, when we got to the pilot. But what you could see is that they couldn't actually complete the test.
They couldn't get it done, and we're talking two or three days of running. They couldn't get it done, whereas in the OpenStack environment, using pure Hadoop, we did it in 40 minutes on a half rack of equipment, $125K. So that was a pretty impressive result, surprising both for us at the time and also for them. And because it's a very stark, very obvious result, it meant we went from POC to pilot very quickly, okay? The more quantitative you can get with your results, the faster you will go through the adoption process. Strategy is important to lead the engagement, but as far as getting the actual investment dollars at any particular time, quantitative results are king. So we went into pilot, and basically the service that we offered: obviously we did architecture, installation, training, and porting more apps from existing environments to the car cloud. We went from doing it off-site to an in-house data center for the customer. And the reason we did the POC off-site was purely because we wanted to get it done quickly. You know, speed; get it turned around; show value quickly. So we took the path of least resistance. Whereas when you get to the pilot environment, you're now looking at something that is a candidate for production, okay? So it needs to fit into the environment it's going to exist within in production, because you plan to scale off of it, okay? So the architecture has to be production-ready architecture, okay? It needs to be commissioned and go through that process, so there are some things we'll talk about in that space, but it needs to be architected well, and it needs to be in an environment that can go into production, okay? We engaged a few different BUs for a few other use cases. In this particular case, the first step was a full rack.
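As an aside on that winning telematics test: the job was a classic per-key aggregation, the shape MapReduce was built for. The following toy pipeline (a fabricated three-row sample, not the customer's data) shows the same map/shuffle/reduce pattern in miniature:

```shell
# Toy stand-in for the telematics MapReduce: average a metric per vehicle.
# Rows are "vehicle_id,metric,value"; sort is the shuffle, awk the reduce.
cat > /tmp/telematics.csv <<'EOF'
car1,speed,60
car2,speed,80
car1,speed,70
EOF

sort -t, -k1,1 /tmp/telematics.csv \
  | LC_ALL=C awk -F, '{sum[$1] += $3; n[$1]++}
      END {for (k in sum) printf "%s,%.1f\n", k, sum[k] / n[k]}' \
  | sort > /tmp/avg_by_vehicle.csv

cat /tmp/avg_by_vehicle.csv
# car1,65.0
# car2,80.0
```

At POC scale the same shape ran as Hadoop MapReduce over millions of streamed rows; the pipeline above is only to show why the workload parallelizes so cleanly.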
The only thing we really want to say about this is that you still need to keep it standardized and simple, okay? We had two profiles of nodes: basically compute nodes and storage nodes. We stuck with Quanta because the customer was very happy with the POC result, and so we went into the pilot with the same types of equipment. We do need to watch out for the networking. We need to make sure we have suitable networking for both IPMI and all the other connections we need, and 10 gig where we're shipping a lot of data around the place. But that's pretty much all we want to say about the rack. The standardization aspects of it are the most important. People will quickly ask: so do we need to use Quanta? Does it need to be that type of equipment? There are some definite benefits in the Quanta-style equipment, but as long as you've got something standardized; usually an enterprise will go with a vendor that they have a relationship with. Software components: when we get to the pilot stage, we're starting to layer in other types of tools and technology, okay? We're still not really into full production environments, so we're not going with absolutely everything we need, but we're starting to build in things like Scalr, for instance, where we're looking at interfaces for clients; things like Ceph; and then we've got Galera in there for the HA-type capabilities. We've got our Goldstone product, which we launched this week, in this customer for doing some of the monitoring, and Zabbix in there as well. So we're starting to layer in tools and technologies that are more production-ready. The key being, we're still in pilot. We're trying to keep the cost down, so you'll see a lot of these things are still unsupported or evaluation versions, but there are supported versions available, and the path to the supported version during the commissioning of the environment into production would not be a challenge, okay?
So here we're looking at more production-ready, but we're still keeping the cost down. We expanded the testing. Previously we were looking very specifically at the results of those applications that were gonna give us the benefit; here we're starting to look at broader operational scenarios to make the environment more applicable to the entirety of their IT. So you can see we did some VMware testing and some Hyper-V testing in there. We did a lot more testing and validation of the environment using Tempest, for supportability. We did a lot more testing of throughput, looking for performance bottlenecks in the actual OpenStack environment. So a lot more lower-level testing was involved in this, but we also did the application testing on top of that. So we did a lot more non-functional testing as we got into the pilot. And because the environment is mirroring a production-ready environment, non-functional testing makes sense, okay? If you were in a POC environment on a half rack, you're probably not going to get the same results for your non-functionals. Skills are an important part as you move through the adoption phases. You start off with your technology evaluation; it's probably one engineer downloading some stuff and putting it on whatever box they have available, and that's good. A few of those and you've got some nice reference points for comfort with the technology and tools that are involved. As you go through POC and become more production-ready, you need to start with more specialist skills, okay? You also need to start engaging the production teams that are involved in those particular areas. So here we look at compute, storage, and networking, starting to bring in specialist skills in those particular environments and then connecting them together with people who have OpenStack skills, okay? That's part of the transformation of the operational environment: to start connecting those people together.
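On the Tempest validation mentioned a moment ago: with today's Tempest tooling, a basic supportability pass over a deployed cloud looks roughly like the following sketch (older releases drove this through run_tempest.sh and testr instead; the workspace name here is made up):

```shell
# Sketch: smoke-test a deployed OpenStack cloud with Tempest.
pip install tempest

# Create a workspace that holds this cloud's tempest.conf
tempest init car-cloud-validation
cd car-cloud-validation
# ...fill in etc/tempest.conf: auth URL, credentials, image and flavor IDs...

# Run only the smoke-tagged subset across the core services
tempest run --smoke
```

The smoke run is the quick supportability check; the deeper throughput and bottleneck hunting described above layers application-level and load testing on top of it.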
The OpenStack specifics, which often come from companies such as Solinea, get connected with the existing legacy (it's horrible to call some of these things legacy, because they're not that old), the existing environments and the existing skills and technology. So how did we go once we got through the pilot? The legacy appliance company, which shall not be named (come to our booth and have a chat if you want to know), had some time to think about that. We had about four to eight weeks between POC and pilot, so they had some time to hammer away tuning their environment, as did we. We went to Cloudera for the Hadoop environment. We expanded a little bit the types of tools we were using for some of the analytics. You can see the use cases expanded; we weren't just doing one particular type of data analysis. But all these things shared a common pattern. We are talking about a common infrastructure, so there should be patterns to the types of services and analysis tools that are needed to do this type of work. But we expanded the number of use cases to give the customer more comfort about the applicability of the results to their business. So what you can see from these results: a few different types of analysis down the left-hand side, and a bunch of data sizes there to give you an idea of scope. But you can still see we're looking at 3x to 10x performance value for this type of test. One of the questions that often comes up on this slide is why we're doing Hadoop in a virtualized environment. I sort of mentioned it before: the legacy appliance had some real trouble reconfiguring itself for performance. What we were finding when we were doing this sort of analysis is that we really wanted to change the profile of resources available to the solution, as far as the size of the nodes being used and the number of them in any particular analysis.
For us, that could be done in minutes, and then we could rerun tests. For them, it was a matter of really starting to pull things out and re-plug things around the place, which was taking them hours and days to do. So we could run a lot more tests, with a lot more different profiles, in a much shorter period of time. And that was really the value of the virtualized environment. In addition, as they go forward, they have such a large number of teams that wanna do this type of analysis, and it's going to be a shared resource pool, so the environment really needs to be reconfigured for lots of different things all the time. They are a very, very large company, with a lot of research scientists, a lot of scientists in general, so they will need that sort of architectural flexibility to be able to change things around. So that's it. I'll hand back over to Francesco to talk a little bit about the operationalization after the pilot.

Yeah, I won't take too long; I wanna leave time for questions. But one of the key areas of success for any OpenStack deployment is around what I call creating a cloud competency group. And really, that means structuring and incubating a group outside of, as I mentioned, the enterprise IT environment, to make sure that cloud technology, process, and governance changes can be absorbed well into the environment and then spread out across the organization. And so in this particular case, we made sure that we set up a cloud steering committee comprised of business unit executives, IT executives, as well as very specific levels of engineers from within the organization, who could drive the strategy and the flow of applications that could be deployed on the cloud.
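Going back to Brad's point about reshaping node profiles in minutes rather than hours: on the OpenStack side that amounts to defining flavors for each worker profile and resizing instances between runs. A sketch with hypothetical flavor and server names:

```shell
# Two hypothetical Hadoop worker profiles (vCPUs, RAM in MB, disk in GB)
openstack flavor create --vcpus 4 --ram 16384 --disk 200 hadoop.worker.medium
openstack flavor create --vcpus 8 --ram 32768 --disk 400 hadoop.worker.large

# Re-profile an existing worker between test runs, then confirm the move
openstack server resize --flavor hadoop.worker.large worker-07
openstack server resize confirm worker-07
```

The appliance equivalent is physically pulling and re-cabling hardware, which is exactly the hours-to-days gap described above.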
One important aspect of this was also building an application architecture and onboarding team that basically set the standards across the organization for architecting solutions to be optimally deployed within the OpenStack cloud environment, whether it was a big data application or a mobile application or whatnot. Ultimately, having that centralized group that helped guide development teams across the organization to architect applications to run effectively on the environment we built was critical. Obviously, in this particular case, the big data analytics piece was a major component of it, and so they built a pool of cross-LOB data scientists who could ultimately guide the performance requirements as well as the query requirements for the application. And then finally, cloud infrastructure engineering and operations: really, creating an operational team focused on managing and monitoring the cloud, staffing that team initially with third-party resources from outside the organization, and then helping the organization build out that capability by helping them hire and train the individuals who will ultimately manage the cloud infrastructure. And so whether it's a company the size of an automotive manufacturer or a smaller enterprise, you need to look at creating some sort of incubated cloud competency group to really help accelerate the adoption of OpenStack within the environment. Some of the outcomes of the pilot: what we learned is that Hadoop on OpenStack works really well. In fact, you can continually tune it to make it perform significantly better than legacy appliances that tout their capabilities; frankly, we showed you what the results were. The capacity plan for the cloud was really driven by the workloads and the use cases.
Again, we used the analysis that was done at the application level in the business units to really drive the scale-out requirements of the cloud. One of the things we found is that we went with a single solution for block storage, but in the next phase we're going to be re-architecting the storage component to create a tiered storage solution. The reason we're doing that is because we're tailoring the environment to those specific use cases: standard block storage, iSCSI for a little better performance, and then SSD for high-performance requirements as well as for targeting very specific use cases in the mobile arena. We talked about Goldstone, which is really a platform to give you monitoring and management visibility into OpenStack clouds, and this was its first deployment at a customer. And then ultimately, what we talked about before, the cloud competency group, is really the next phase: we're going to be expanding that out to make sure it can be enterprise-wide and not just focused on the incubated group. Let me just go to the final slide, and then we can open it up for questions. So really, when we talk about implementing OpenStack clouds and the success of OpenStack projects, yes, technology is one aspect of it, but ultimately you really need to look at how you reconfigure parts of your organization, how you introduce new skill sets and capabilities, and how you enhance governance and automate some of those policy requirements so that you can facilitate the deployment of the cloud as well as the management of the cloud. So one of the lessons learned over the course of many, many deployments is really: incubate. Isolate, then extend to the broader organization, with the CCG as well as with the cloud itself. We talked about the competency center; focus on the culture. I mean, really, cloud requires a different mindset in terms of deployment of applications as well as management of operations.
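The tiered-storage split mentioned a moment ago (standard block, iSCSI, SSD) can be sketched as a simple selection policy. The talk only names the tiers, not how workloads are routed to them, so the thresholds and tier labels below are hypothetical illustrations of how such a policy might look:

```python
# Illustrative sketch (hypothetical thresholds, not from the talk): route a
# workload to one of the three storage tiers described above, based on rough
# performance requirements.

def pick_tier(iops_needed: int, latency_ms_budget: float) -> str:
    """Map rough performance requirements onto a storage tier."""
    if latency_ms_budget < 1.0 or iops_needed > 20_000:
        return "ssd"             # high-performance and mobile use cases
    if latency_ms_budget < 5.0 or iops_needed > 5_000:
        return "iscsi"           # a little better performance
    return "standard-block"      # default block storage

print(pick_tier(50_000, 0.5))    # a demanding mobile workload
print(pick_tier(8_000, 3.0))     # a mid-range analytics workload
print(pick_tier(500, 20.0))      # a background batch workload
```

In an OpenStack deployment, a policy like this would typically surface as distinct Cinder volume types that applications request, rather than as application-side code.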
So you really need to work with your client and bring in experts. Don't necessarily try to do it yourself; bring in experts to help you reconfigure the organization for adoption of these new technologies and services. You really want to align for business agility as well. In other words, move to Agile, but recognize that Agile is a mindset and not a solution. So it's really also about bringing in the right expertise to enable that transformation. And then finally, governance. I mean, really, you have to look at the governance models and make sure you adapt them to take advantage of what an OpenStack cloud can bring to your organization. And with that, I'm going to open it up to questions. I think we have five, ten minutes. So. Hi, I'm Jonathan from TD Ameritrade. How do you keep the internal IT organizations engaged? You talked about incubating outside of the IT organization, and I presume that's because you don't want to get locked into the legacy mode of thinking while you're trying to create something that's absolutely brand new. At what point do you engage those internal resources and get them to calm down and realize that there is a future for them with the cloud technology? Well, you have to start right at the beginning, right? So when you talk about building an incubation group, you really want to make sure that you have representatives and participants with the right, not so much mindset, the right skill sets and capabilities, and make them part of that group. Because ultimately you're going to want champions to go back out into the organization to make sure they can articulate the benefits of what you're doing. So I would start by incorporating them as early as possible, and then also working with the teams internally to make sure it's not seen as a skunkworks project or as something done in isolation. You want to communicate your successes as well.
So I would say one, bring them in early, and two, communicate the successes and make sure you can articulate how this ultimately is going to fit into the existing enterprise organization. And one final question. I'm sure you've done this at multiple locations, but what would you say, just to get an idea of where things could go bad, is the worst experience or the worst challenge you've faced in trying to implement cloud in an enterprise? So, a few things. One is not including the right groups in the organization. And when I talk about the right groups, having worked with financial services organizations and healthcare organizations before, it's really that you don't want audit, compliance, and legal to come in and shoot something down, because you're not only building a distributed environment, a private cloud behind your firewall; in some cases you may also be bursting out to the public cloud for whatever reason, right? And the public cloud may be secure and may be specifically targeted at the financial services industry. But ultimately, what goes wrong is not necessarily the technology; it's failing to bring the right parties into the decisioning process, especially audit, compliance, legal, and risk management when it comes to regulated industries, and also non-regulated organizations. I would just add that dealing in theory in your testing, your POCs, and your evaluations is an immediate route to failure. If people are trying to test applications that don't exist today, or applications they would like to build in the future, those are not your first choice, and you won't get the value demonstrated easily enough to secure the continued funding to keep moving down the path. Any other questions? All set, thanks very much. Thank you. Appreciate it. Thank you.