Good afternoon, everyone. I think we'll go ahead and get started. It's 2:15. My name is Nate Zeeman. I'm one of the offering managers with IBM Cloud, and I have the pleasure of introducing this hour's session, called Don't Just Take Our Word For It, where we hand the mic over to two companies we've been working with that have chosen OpenStack as part of their solution. They're here to tell you about their journey with OpenStack, the reasons they've chosen it as a technology choice, and where they're going with their use cases. Today we have Materna, whom you may have heard in the keynote this morning about their choice of OpenStack plus BlueBox, and AT&T, who are here talking about their choice of OpenStack for object storage with Swift and OpenPower. We'll be covering two different use cases that are obviously very separate from each other. Materna will go first, then hand over to AT&T, with questions afterwards. So, Armin, do you want to come join us first? Thank you.

So good afternoon, everybody. My name is Armin von Dollinger and I'm working as a senior IT architect and as a consultant at Materna. I'd like to give you a brief outline of OpenStack at Materna, but let me start by introducing Materna. Materna is a system integration and consulting company founded in 1980 and headquartered in Dortmund, Germany. We have 1,700 employees and a turnover of 230 million euros. Materna provides a broad range of services for our customers, services you might have been using yourself. Materna acts completely vendor-independently and has partnerships with all the leading vendors in the market. We have been an IBM business partner since 2006.

Our challenges are many, depending on the role we have in each specific customer situation. We act as a consultant and advisor in many of our business relationships. We advise customers on strategic IT decisions and build and modernize their platforms and runtime environments. We develop some of our customers' environments, sometimes over the course of decades. In these customer situations, our consultants must be experts in how to transform their IT into a modern and agile architecture. For other customers, we act as a software developer, so we have to maintain test and development systems. Our various teams work on many different projects. We have long-term relationships with our customers, which means that different versions and even different technologies for applications have to be supported over time. All these environments have to be operated and supported in parallel. And Materna also acts as a service provider. For some customers, we are the service provider and host: we deploy and manage full business applications, such as applications for mobile service providers or web-based customer business applications. We deliver applications for the German Customs Service, for example, and other public institutions. Many of them are requesting data locality, some kind of German cloud. We also deliver infrastructure automation and service management solutions for many big customers in Germany and Europe. You probably know us from several airports all over the world, because we deliver self-check-in and self-bag-drop systems.

First, let's have a look at the stack provided on-premises. On the left side, you will find internal services which are consumed by ourselves, like mailing, billing, CRM systems, etc. On the right, you will find managed services. This stack represents a highly available data center that provides services for clients.
These services mainly comprise software as a service and business process as a service, like incident management systems and 24x7 monitoring and support services. Infrastructure as a service and platform as a service are not part of our regular portfolio on this managed service stack. These services are still on-premises; they have grown historically over the years and have not yet been moved to an off-premises cloud, but are planned to be moved partially. But there are also client services which can't be moved to an off-premises cloud, because they are bound to the Materna data center by contract or due to software licenses which aren't cloud ready. The right side of the slide shows Materna off-premises cloud services, varying from infrastructure as a service to business process as a service. We are able to offer more services there than on-premises. Materna off-premises cloud services may be consumed in-house by our developers or by our pre-sales folks. The latter may, for example, need a demo environment set up for a few days or a few hours. And of course we will have the Materna off-premises cloud for our clients.

So why go for OpenStack? Of course there's the market momentum OpenStack has developed, which we definitely do not ignore. Typical drivers are scalability: we have to fulfill the scalability requirements of the hosted applications; furthermore, in our development projects the need for a highly scalable environment arises when executing, for example, load tests. Speed is essential in our development scenarios, which have moved more and more to agile methods, following our customers' needs to go to market within tight schedules. Therefore we are following the DevOps spirit of the agile world of continuous integration and continuous development and deployment. Interoperability and portability: using OpenStack allows us to use the same APIs and the same functionality everywhere. Therefore it makes no difference whether a virtual machine is provisioned on OpenStack in your own data center or in any other OpenStack cloud environment. Cost: on the one hand you may reduce licensing costs by using OpenStack's open source virtualization; on the other hand, scalability, speed and portability help here as well. With scalability, there is no more need to pay for peak performance; grow as you need and shrink if you don't need it. Speed in development also reduces project costs. With portability, you may scale out for peak workloads to the most efficient provider. And of course we are vendor independent; it's a major quality of OpenStack.

So at Materna we started with the Grizzly release. It was a pain to implement due to problems with the available hardware, and finally we did succeed with Icehouse. We encountered many problems during the release upgrades on our way to the Liberty release. Liberty was, and still is, the release up and running for internal use, mainly for development. The experience taught us that we had better set up the Materna private cloud for production, which is for our customers, by using a managed OpenStack solution. We must provide a production environment for our customers and don't want to take care of the OpenStack components and how they fit together, nor of release upgrades anymore. Due to the Grizzly release experiences, we were looking for dedicated hardware, separated from the fish tank in terms of performance and data location. And it should be located in Germany, as an alternative to on-premises. So we evaluated IBM Bluemix Private Cloud.
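To make that portability point concrete, here is a minimal sketch using the openstacksdk Python library, assuming two clouds are defined in a local clouds.yaml. The cloud names, image, and flavor below are illustrative placeholders, not Materna's actual configuration.

```python
# Minimal sketch: the same openstacksdk code provisions a server on any
# OpenStack cloud; only the cloud entry selected from clouds.yaml differs.
# The cloud names "on-prem" and "bluemix-frankfurt", the image and the
# flavor are illustrative placeholders.
import openstack

def boot_server(cloud_name, server_name):
    conn = openstack.connect(cloud=cloud_name)         # reads clouds.yaml
    image = conn.compute.find_image("ubuntu-16.04")     # placeholder image name
    flavor = conn.compute.find_flavor("m1.medium")      # placeholder flavor name
    server = conn.compute.create_server(
        name=server_name, image_id=image.id, flavor_id=flavor.id)
    return conn.compute.wait_for_server(server)

# Identical calls against the in-house cloud and the off-premises one:
boot_server("on-prem", "demo-vm")
boot_server("bluemix-frankfurt", "demo-vm")
```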
IBM Bluemix Private Cloud is managed OpenStack, on dedicated hardware, off-premises and located in Frankfurt.

Next is an overview; let's have a look at the concept of the Materna private cloud. We have multiple tenants and multiple projects on the Materna private cloud. A project, or parts of a project, may be ordered via an OpenStack-based service portal, supported by an OpenStack automation and orchestration layer, which uses the OpenStack API to deploy and run services on different OpenStack clouds. To start implementing the concept, or to be more precise, to extend the Materna private cloud and leave the Materna customer data center, we first of all ran a proof of concept concerning OpenStack managed services on IBM Bluemix Private Cloud, formerly known as IBM BlueBox, so excuse me if I still call it IBM BlueBox. The proof of concept ran for about three months. It was successful, of course; otherwise we wouldn't be using Bluemix Private Cloud right now. We could not test a release upgrade during that proof of concept, but there were several minor updates, with no problems and no interruption of services. The POC was mainly supported by Ruben Orduz and Michael Weisbach, so special thanks to them.

The next step was to expand our VMware infrastructure to the Bluemix infrastructure, formerly the SoftLayer data center, also in a POC. The purpose was not only to test the VMware in-house to VMware SoftLayer interconnection; there was also the intent to attach to OpenStack on BlueBox using the SoftLayer infrastructure, so the same scenario as you usually have within your own data center. We were also deploying other systems residing on SoftLayer. So the final step was to cross-link all systems that we had created off-premises on a physical basis within the IBM SoftLayer data center infrastructure. By the way, the systems don't have to reside within the same SoftLayer data center; you can spread them across all SoftLayer data centers where this kind of hardware is available. And as a benefit, data traffic within the SoftLayer data centers is free of charge.

So why would we do this? When you start shifting workload from your own data center to an off-premises cloud, you will have the same requirements for data center operations as you have for your on-premises workload. You have to monitor what's going on, collect performance data, manage incidents, back up databases, etc. You may, for example, set up a backup infrastructure off-premises, especially for the VMware workload. Of course, it may also be used for the Bluemix Private Cloud. Maybe you don't want to keep the backups within BlueBox: it's highly available, but nevertheless it may fail completely. In this case, we had better have our customers' data outside BlueBox, to restore it on other OpenStack instances. We don't want to miss our SLAs, of course. Finally, let me mention security logs. We don't want to store those logs within BlueBox; they have to be stored outside this environment. They must be preserved for auditing and compliance reasons. Why should we send them directly to our data center? Stored on separate hardware within SoftLayer, we can access the logs when needed. That's why we need interconnected systems, or at least those are some of the reasons for it. And if we are driven by our customers to run OpenStack applications and services on-premises, for government or customs or any public institution, we may install Bluemix Private Cloud Local.
But before doing this, we will, you can guess, do a proof of concept, of course. Thank you so far. Michael?

Thanks, Armin. My name is Michael Weisbach. I'm working with IBM, and it's a pleasure to meet you here in Barcelona. It's my duty to give you a brief overview of the solution we use at Materna. It is, as already mentioned, IBM Bluemix Private Cloud. We very recently renamed the product; you may know this product as Bluemix, Bluemix Dedicated, or BlueBox Dedicated. For various reasons, Materna chose the IBM SoftLayer data center site in Frankfurt, but we are able to deploy the solution into any of the SoftLayer data centers, of course. We chose a dedicated controller setup. The BlueBox solution can also be set up in a converged setup, with controller functions and compute functions together in one box. But here, in this case, we went for a dedicated controller, which brings us additional functionalities or capabilities within the BlueBox environment, like Load Balancer as a Service version 2. On the compute node side, we decided on enterprise compute nodes; there is also a standard compute node, which is about the size of the enterprise compute node. Each node comes with its compute cores and 256 gigabytes of memory, a very typical setup, right, plus 2.4 terabytes of internal disk for ephemeral storage, and it is attached to the SoftLayer network via 10 Gb network connectivity. We have also deployed block storage based on Ceph, roughly 24 terabytes in total, to provide a Cinder volume service. And right now this BlueBox, or IBM Bluemix Private Cloud 3.0, is based on the Mitaka release.

Looking forward to what's next: the BlueBox slash Bluemix Private Cloud release 3.1 was released very recently. So we are still on the Mitaka release, but we will get some additional functions. For instance, a very important feature for clients that are cloud service providers, like Materna: we will have federated Keystone with a customer identity provider, so we are able to implement single sign-on federation based on OpenID Connect or SAML. We also got some improvements in the storage area, such as an update to the Ceph version we use in the 3.1 release. And finally, and this is also a very important fact and feature for Materna and any other clients based in Germany, we got several compliance statements: ISO and SOC 2 and SOC 3 Type 2 compliance, which is also important for storing or running sensitive workloads, with more to come. This is our final slide about Materna, and it's a pleasure to hand over to the second use case we would like to introduce to you today; it's a team from AT&T. So please welcome the AT&T team on stage.

Can you hear me? Good afternoon, my name is Jacob Caspi. I'm a principal systems architect at AT&T, responsible for our cloud architecture. With me is Tom Matthews, a Distinguished Engineer from Power Systems at IBM, and, somehow Kiko's name is not here, Kiko Rice from Canonical. I would like to thank IBM and their OpenPower lab team in Austin for helping us do this proof of concept; we definitely pressed them a lot to make it happen. Thanks as well to Cindy from the AT&T architecture team, who helped run all the tests for us and could not be here. So with that, I'll let Tom talk about what OpenPower is. So what is OpenPower?
OpenPower is a hardware and software ecosystem that is completely open in its nature. The stack is completely open, from the chip up, and it's community based. When I say from the chip up, I'm talking about the actual processor chip, the systems that get designed in this environment and for this environment, the I/O, and then the software stacks on top of it. I mentioned it's community based: we have over 225 members that are out there developing various aspects of the stack as we move forward. I hope everybody in the audience has heard of Power before. This is a RISC-based architecture that has had servers built around it for a very long time. As I said, we've fully opened this up in the context of this OpenPower environment, so a customer that buys OpenPower can get a fully open stack for their environment. We have a broad set of partners that are part of OpenPower, again doing development; if you have really good eyes you can see some of the names on the chart. Another very strong focus around OpenPower is really around acceleration. As things have moved forward, getting more and more cost performance out of the chip has been constrained by physics and Moore's law, and so now the trend to get to cost performance is really to go to accelerators that sit outside of the processor; that's another very big focus of OpenPower. These OpenPower boxes are targeted at scale-out environments, so they're very good for cloud, they're very good for HPC and analytics, and they're very good for the scale-out commercial environments that exist. The other thing OpenPower is, is the basis of our proof of concept. We used Power8 servers in this environment for the Swift object deployment that we're going to talk about, and we saw some very significant benefits. Power has some very strong industry leadership characteristics, things like more threads, more cache, more throughput, and those characteristics really came to bear when we went off and did our work: fairly significant performance results within this Swift cluster running on top of OpenPower.

Swift is an object store; it's also a solution at large scale. Swift is an OpenStack project, and Kiko from Canonical has been helping us in terms of developing the OpenStack stack to put on this proof of concept, so I'll let Kiko describe what Swift is and how it can help us.

So, as Jacob introduced me, I'm a Vice President for Product at Canonical. I'm focused on storage and bare metal provisioning, and we've been working with both IBM and AT&T setting up Swift on these servers. Essentially, Swift is an object store with a two-tier architecture. All the requests, from the end user's perspective, go to the Swift proxy, and the name "proxy" is a bit misleading here, because in reality the Swift proxy is what everyone engages with, and in the back there are object storage daemons that essentially store the data the proxy is streaming through. In this case, let me just close off by talking about why Swift is interesting. Swift is great because it provides both its own object storage API and an S3-compatible API as well. It lets you store basically any type of object, of any size, and it handles distributing those in a way that is scalable and performant.
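As a concrete view of that two-tier model from the client side, here is a minimal sketch using the python-swiftclient library. The auth endpoint, credentials, and container and object names are illustrative placeholders, not the setup used in this proof of concept.

```python
# Minimal sketch: every request goes to the Swift proxy endpoint, which in
# turn streams data to and from the object storage daemons in the back tier.
# The auth URL, credentials, and names below are illustrative placeholders.
from swiftclient import client as swift

conn = swift.Connection(
    authurl="http://proxy.example.com:8080/auth/v1.0",  # placeholder proxy/auth endpoint
    user="test:tester",                                  # placeholder account:user
    key="testing")                                       # placeholder key

conn.put_container("demo")                                      # create a container
conn.put_object("demo", "hello.txt", contents=b"hello swift")   # upload an object

# Reads go through the same proxy endpoint; the client never talks to the
# object storage daemons directly.
headers, body = conn.get_object("demo", "hello.txt")
print(body)
```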
Swift handles failure of individual storage nodes, and you can have multiple proxies set up, so in other words nodes in both tiers here can be killed and the service still stays up. There's no requirement for anything really on the hardware other than enough disk space and fast enough networking; in other words, you don't need hardware RAID on the nodes, it can be any class of hard drive, and the algorithm and the code itself handle moving data around as it needs to be rebalanced and to recover from failure. In this case specifically, we're not only looking at Swift, we're looking specifically at a feature that was added in Kilo and stabilized across Liberty and Mitaka, which is erasure coding in Swift. Erasure coding in Swift is interesting because the way it's done is different from how it's done in Ceph. The proxy is the component in Swift which actually does the erasure coding calculation, which means it takes in the object that the user provided, breaks it up into chunks, calculates the parity blocks, and then that's what's sent back into the object storage daemons. This is why I have an issue with calling it a simple proxy: in reality it's doing more than a proxy does; it's actually cutting up data, calculating parity blocks, and then streaming those back into the object storage daemons. It's interesting also because, since the proxy is doing the heavy lifting of calculating the erasure coded blocks, there is typically higher CPU load on the proxy server, and this is why the experiment we did here is interesting: to show what the impact is on Power. So Tom will talk about what the lab configuration was.

Right, so the lab configuration was six object storage servers, that's what we used. The servers themselves were S822LCs, where LC stands for low cost OpenPower servers, with 512 GB of RAM and a dual-port 12 Gb SAS HBA controller, you know, going out to the storage. The storage itself was based upon six Supermicro 490 storage enclosures in the environment. There was a dedicated proxy server, the data network was 100 Gb, the management network was 1 Gb, and the software was Ubuntu-based Swift, of course, on the Mitaka release. Just one note: this is a very powerful server, and that's the server we used in the lab to see how much we could stress the environment; probably in real life we can get away with a lot less RAM and probably fewer cores on the CPU. Absolutely.

So what was the proof of concept? We had two goals, or two items we wanted to test as part of the proof of concept. The first was to make sure that Swift works on the Power and Ubuntu stack the same way it would have worked on an x86 platform. The other was to see how far we could stress the environment, to see the benefits of OpenPower on the environment. And one thing I wanted to mention is that AT&T has a huge amount of data that we store, between our own records and calls and messages and whatever else we store. We are always looking for bigger, better, and more efficient ways to store data. As you may know, because we made that public, we have somewhere between 100 and 200 data centers running Swift, and all of these are generating a huge amount of storage that we need to account for. That's why, although we're currently running on 3x replication, we wanted to make sure that we can run an erasure coding environment very efficiently, to make sure we can take better advantage of the hardware.
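The talk doesn't detail exactly how the erasure coding policy was wired up in this proof of concept, but as a general sketch of how it looks in Swift: an erasure-coded storage policy is defined by the operators in swift.conf and then selected per container at creation time. The policy name "ec42" and the 4 data / 2 parity split below are illustrative assumptions only.

```python
# Sketch: choosing an erasure-coded storage policy for a container.
# Assumes the operators have defined a policy in swift.conf, for example:
#   [storage-policy:1]
#   name = ec42                          (illustrative name)
#   policy_type = erasure_coding
#   ec_type = liberasurecode_rs_vand     (one of the available EC backends)
#   ec_num_data_fragments = 4
#   ec_num_parity_fragments = 2
from swiftclient import client as swift

conn = swift.Connection(
    authurl="http://proxy.example.com:8080/auth/v1.0",  # placeholder endpoint
    user="test:tester", key="testing")                   # placeholder credentials

# The container's storage policy is fixed at creation time via this header.
# Every object written to it afterwards is split by the proxy into data and
# parity fragments and spread across the object storage servers.
conn.put_container("archive", headers={"X-Storage-Policy": "ec42"})
conn.put_object("archive", "sample.dat", contents=b"example payload")
```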
So with that, the initial testing was four data to two parity, and that's only because we had six servers and couldn't do more than that. But we tested all the other functionality that we're looking to test as AT&T: security, life cycle, data hierarchy, in-stream modification of data, encoding and decoding, etc. So this is what we found out with these machines: we started with small loads and went up to much higher loads, of about 2,000 objects, over these six machines. As you can see from this graph, on average we didn't even touch more than 50% of the CPU on these machines, and that's while increasing the workers to as many as 64 workers over these six machines. And the other thing we were testing and looking at was how many successful reads we had, or what the success ratio was. As you can see, it didn't matter how many workers we put on the servers or what the object size was; it remained a fairly constant success ratio of about 74%, meaning the first read succeeded, and if it didn't work, a second read would be done. So the conclusion on this test, which was very promising for us, was that we couldn't stress it high enough to actually overload the system, and that was a very important discovery in this proof of concept. Of course, none of this would make sense unless we had a repeatable solution, and this is one of the things that Canonical can provide. So if you can elaborate more on that.

Sure. So first, benchmarks are generally pretty hard to do, even on a single node, and with any sort of scale-out software, or big software as we call it, you have the challenge of having to benchmark across a complex system, because you've got multiple nodes with different features and different requirements upon them, and you want to measure and aggregate that. The most important thing about a benchmark is being able to reproduce it reliably, and so the work we've done in MAAS and Juju, which are our tooling and automation for doing deployment, makes it so that anyone who wants to deploy across any substrate, be it x86 or Power, can just use those tools to get a Swift cluster up and running, and it's a single option for you to turn on an erasure coding policy for one pool that you have, and then you can start using it. So essentially, what we provide in terms of software is entirely open source; none of it is charged for, there's no gating, you can get all the stuff from us directly, and it allows anyone who's doing an evaluation of any architecture to quickly spin it up. If you're interested, you can contact us and we can talk more about how we did the actual test setup and how the benchmarking was done, so you can reproduce it on your side and get measurements that are accurate to your network and your infrastructure.

Okay, so with that I think we have concluded the presentation. Any questions from the audience? I guess, oh, yeah, go ahead. So the question is how are we using containers? Yeah, so one thing that we didn't really go into: we showed the single-cluster performance numbers, but there are actually three clusters in here, three of those same clusters that we showed right here. And so the container sync is being pushed across those three clusters. You were probably kind of curious why somebody would talk about container sync in the context of a single cluster; there were actually three of these. Well, the container sync is currently in the testing phase. Erasure coding with container sync is a fairly new feature.
So we have not put that into production yet, and this is exactly what we're doing now. Anybody else? Okay, so any questions for the previous presentation? Okay, with that in mind, thank you very much.