Hello everyone, I'm Brad Topol, and this is Steve Martinelli; we also have Phil Estes and Jason McGee. We have a nice session on some hot topics: federated identity and containers. I'm an IBM Distinguished Engineer, I lead all of our OpenStack upstream contributions, and I'll let the other speakers introduce themselves when they speak.

I'm Steve. I'm going to be the Keystone PTL — the project team lead — for the Keystone project during the Mitaka release.

So our first topic is federated identity. When we talk federated identity, the OpenStack component that is responsible for it is Keystone. Keystone is OpenStack's authentication, authorization, and access management service. It also provides audit capabilities and identity and service discovery, supports a lot of token formats, and includes new enhancements for things such as federation, and it's used by all the OpenStack services. So identities are authenticated by Keystone in OpenStack — and where can these identities come from? Well, the operator can store the identities inside Keystone, in its own little SQL back end, but that doesn't have very good password support. The next step is that the identities can come from an enterprise LDAP, and that's very common. That's a nice way to do things, because it gives you integration with the customer's existing place for storing identities, but Keystone still sees the passwords. The most secure way to do things is through a federated identity provider and Keystone's federated identity support: Keystone never sees the passwords, and it also allows you to do single sign-on and those types of capabilities. So Steve's going to take us through this in more detail.

Thanks, Brad. So, taking a step back: federation, if you break it down to the basic terms, is when two different organizations trust each other, and in the Keystone case what we want to see is two different entities trusting each other for identity.
So in this case Keystone can have identity providers, and it trusts that the users have authenticated with those identity providers, so they're now able to do things in Keystone. And single sign-on — we see examples of this all over the place on the web. It's just using your own credentials: in this example I'm using my Google credentials to sign in and access another application. So if you think about it, it's just taking credentials from one place and using an application in another place. Most enterprises already have a single sign-on and federated identity solution; internally you might be using single sign-on and federation to do things such as book travel, assign meeting rooms, and file expense reports.

Now, a little bit of the history of federated identity in OpenStack. Why did we want it? Well, we wanted it for the reasons that Brad mentioned on one of the previous slides. We could store users in the SQL back end or use our LDAP plugin, but those are not as secure as a pure federation approach, because Keystone still sees your passwords. This way, if you create a trust relationship between Keystone and an identity provider, Keystone will never see the password. It's a lot more secure, and it's just where we want to be. We started this in the Icehouse release: in Icehouse we finally introduced an initial set of APIs where you could create identity providers, create protocols, and authenticate against Keystone. In the Juno release we advanced this to include Keystone-to-Keystone support, where you could exchange identities between different Keystone deployments, and we started to include command-line interface support. In Kilo we worked with the Horizon team to include single sign-on support for OpenStack, and in Liberty we finally started to see other projects embrace this — Ansible, Puppet, and Chef, and even the OpenStack client command-line project started to adopt it and take an interest in it.

So what is IBM doing for federation in OpenStack? Well, I've been involved with it ever since the beginning, since the initial talks started to happen. I've contributed to all of the blueprints we listed on the previous slide, and we've collaborated a lot with different folks — Rackspace, Red Hat, CERN, HP — to create the APIs and the functionality in the code. Like I said, we've implemented several of those blueprints together, and I like to think that we're seen as leaders in this space.

So what can you do with OpenStack now? You can use it for things such as social login — you can actually sign in with your Google credentials if you set it up correctly, and going even further, you can use command-line support for this as well. What else can you do with it? You can also use it with an existing enterprise identity provider, something such as Tivoli Federated Identity Manager, which is an IBM product, as well as WebSphere Liberty — if you install the correct extensions, it can act as an identity provider, and again Keystone can interact with those as well. Both of these support single sign-on and access through a command-line interface, so we're pretty happy about that. If there's just one takeaway you can get from this, though, it's that Keystone's support for federated identity is agnostic: it doesn't care what your identity provider is, it just cares whether you have it set up correctly. I think that's the big takeaway — it's pluggable and it's agnostic. And Brad, do you want to talk about the future? Sure. Where are we heading?
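The identity provider, protocol, and mapping APIs described in the history above come together in a JSON mapping document that tells Keystone how to translate attributes from the identity provider's assertion into a local, ephemeral user. Here's a minimal sketch of one; the group ID and the specific remote attribute are placeholder assumptions for illustration, not values from the talk.

```python
import json

# Minimal Keystone federation mapping sketch: take the REMOTE_USER
# attribute asserted by the identity provider, create an ephemeral
# local user with that name, and place it in a pre-created group
# whose role assignments grant access. The group ID is a placeholder.
mapping = {
    "rules": [
        {
            "remote": [
                {"type": "REMOTE_USER"}  # attribute set by the IdP/auth module
            ],
            "local": [
                {"user": {"name": "{0}"}},           # {0} = first remote match
                {"group": {"id": "EXAMPLE_GROUP_ID"}},
            ],
        }
    ]
}

print(json.dumps(mapping, indent=2))
```

An operator would register rules like these with something like `openstack mapping create --rules rules.json my_mapping`, then associate the mapping with an identity provider and a protocol (for example `saml2`).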
So we have some wonderful offerings. Our Cloud Foundry based offering is IBM Bluemix; you heard about the announcement of Blue Box Local, which is an OpenStack-based local cloud; and of course we have the dedicated cloud. Where we're looking to use federation is to provide seamless integration across all of these different offerings. So you'll see a lot of work there behind the scenes — you may not see it yourself, but it'll make things a lot more consumable. And now we're going to hand it off to Phil and Jason to talk about containers.

I think Jason's going to kick it off. Cool. Oh, there you go. How are you guys? It's the tired after-lunch slot — for those of us from the West it's like three in the morning or something. We're going to switch gears here and talk about containers, and what we've been doing in IBM around containers. I'm going to talk about our overall strategy and some of the things we've been delivering for container capability in IBM's cloud portfolio, and Phil's going to come up in a minute and talk about some of the contributions and work we've been doing in the community, in the Docker ecosystem and other places around the container space. How many of you have heard about containers? That's an easy question, right? How many of you have actually run a container yourself? Good.
All right, so this is the hot topic that everyone wants to talk about, and I think for a very good reason: containers are a pretty transformative technology — transformative to how we build apps and transformative to how we run infrastructure. Within IBM, as we've been building out our cloud architecture, we've been looking at how different open technologies fit together and the role they play in providing a robust platform for hosting applications in the cloud. And we've really settled on there being three — or actually four — fundamental compute models that we think need to be made available in a cloud environment. You have bare metal infrastructure; there are use cases where the control and the performance of bare metal make a ton of sense. We have virtual machines, provided from an API perspective by Nova, the OpenStack compute API. We have containers, based on Docker in our strategy. And then we have what we call instant runtimes, which is essentially the capability we get out of Cloud Foundry. Think of instant runtimes as containers with a prescriptive programming model — an opinionated version of containers. Where straight Docker containers give you a lot of flexibility in the kinds of applications you can run, instant runtimes give you a quicker way to get started with your code, as long as you fit within a certain set of rules about how your applications are built. So what we've been doing at IBM is bringing together all of these technologies as we build our cloud, and working across all of these communities — across OpenStack, across the Docker ecosystem, across the Cloud Foundry Foundation, Linux, and other places — to bring these technologies together, to integrate them, and to allow them to work seamlessly together. Whether that means using things like Keystone for doing identity and access across all these capabilities, or the work we're doing with networking around things like Neutron to allow us to have common networking across all of these technologies, we believe the future of cloud is about bringing these open technologies together into a coherent platform.

So let's talk about the container piece in particular. If you looked at IBM's cloud portfolio, the highlight capability around containers is a service we've made available through Bluemix called IBM Containers. IBM Containers is a containers-as-a-service hosting environment — a place for you to deploy, run, and manage all of your container-based applications. It's built on top of Docker: it uses standard Docker images, it exposes a Docker API, and you can use the standard Docker tools, like the Docker CLI, to interact with the service. But it's a cloud service, and one where containers are a first-class citizen. You don't have to deploy clusters or set up Docker hosts; you can simply come to the cloud with a Docker image and run it and manage it without having to worry about the infrastructure it runs on — that's our job. One of the things that I think is really powerful about what we're doing around containers — and something you see a lot of active discussion about in the industry and in the open communities — is the life cycle around a container. How do you build them? How do you deploy them? How do you orchestrate multiple containers working together? How do you scale them? How do you monitor and do problem determination on containers? All of those life cycle problems have to get solved when you're running a container-based workload, and so within IBM Containers we've been trying to bring together all of those pieces as well.
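Because the service exposes a Docker-compatible API, any standard client can talk to it the same way it talks to a local daemon. As a rough sketch, this is the kind of JSON body a standard Docker client POSTs to the Engine API's `/containers/create` endpoint; the image name and port choices here are illustrative placeholders.

```python
import json

# Sketch of a Docker Engine API "create container" request body --
# the same wire format a local `docker run` produces, which is why
# standard Docker tools work against any API-compatible endpoint.
# Image name and ports are placeholders for illustration.
create_request = {
    "Image": "registry.example.com/myteam/web-app:1.0",
    "Cmd": ["python", "app.py"],
    "Env": ["LOG_LEVEL=info"],
    "ExposedPorts": {"8080/tcp": {}},
    "HostConfig": {
        # publish container port 8080 on host port 80
        "PortBindings": {"8080/tcp": [{"HostPort": "80"}]},
    },
}

print(json.dumps(create_request, indent=2))
```

Pointing the stock CLI at a remote endpoint is then just a matter of setting `DOCKER_HOST` (and TLS certificates) appropriately.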
And so here's a representation of what that life cycle might look like, if you think about a container-based application and taking it through all of its steps as you build it, run it, maintain it, and deploy it over time. Throughout that life cycle there are a number of capabilities you need besides just the basic container in which you're running your application. So if you start at the beginning, the first step in building a container-based app is that you need some content to start with — you need some Docker images that provide the foundation on which you're building your application. The obvious place to get those images is Docker Hub, which today is the central public repository for Docker images. IBM has been doing some work to make IBM software content available as images, so we've made things like WebSphere Liberty, our Node.js platform, or our mobile foundation that you might have heard about in the previous talk available as images in places like Docker Hub. The other thing, though, is that your images might be things you've built yourself, so you need a place to store and manage private images that are relevant to your team or your company. And so we've provided a private registry capability in the cloud where you can store and manage all of your images in a non-public location — a location that's private to you and your team. Step two in the life cycle is building your new Docker images, right?
So you take some starting point, some base image, you add your application into it, and you now need to build that image. You can of course do that locally on your laptop using docker build, but we also provide a cloud-hosted service that does the Docker build for you: you write your Dockerfile and then run a remote build, where the build happens in the cloud, the build resources are managed for you in the cloud, and the result — an image — is pushed automatically into your private registry of Docker images. Step three in the pipeline is delivery: how do you take that application you've built in a container and deliver it into a running system — into running production, into a running test environment? There are really two capabilities here which I think are relevant. The first is the build pipeline service: the ability to define a repeatable continuous-delivery pipeline that lets you take that application and move it in a structured way through automated build and testing, from test environments to staging environments to production environments, in a controlled way. So we have a pipeline service that allows you to define a delivery pipeline that works for your team and automate the process of delivering that change through the steps you want it to go through to get to production. And that pipeline understands containers: it understands Docker, it knows how to do builds, it knows how to publish to Docker registries, and it knows how to talk to a Docker host or to our container service and execute those containers on your behalf. The next step in the pipeline is actually deploying the container into a running container service, and of course the heart of that is the service we've built on Bluemix — the container service that gives us a shared pool of Docker hosts on which we can run your container for you. But there are some other capabilities wrapped around that that we've done a bunch of work on.
Networking: we provide isolated private networking for containers, and we actually use Neutron as the control plane for that. So all of our Docker containers running in the cloud get their own IP address and their own private network, and that entire network infrastructure is set up automatically by leveraging Neutron, with OVS and other technologies, to stand up the networking infrastructure you need to run your containers. We've done work around persistent storage for containers. So if you have a need — let's say you have a Docker image that's running a database and you want to mount persistent storage for the database data itself — you can do that in an automated way, share that persistent storage across container instances, or have that data live beyond the life cycle of the container itself. So if you're going to update your database server, you can just throw the container away, replace it with a new one, and reattach the same data to that container that you had before. We've also done work on things like support for routes: if you have a container-based app that's handling HTTP traffic, like a REST API server, we can manage the routing of that traffic on your behalf, do load balancing across a set of containers, and manage fault tolerance of that traffic path, so that you have a very simple way to expose a container-based HTTP endpoint. Once the thing is up and running, you need some capabilities to help keep it running: you need clustering, or container groups, that allow you to scale that application, respond to changes in demand, and recover in the event of failure. We're also taking advantage of things like bare metal performance: our containers run on top of bare metal, not on top of a hypervisor.
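Pulling the scaling, storage, and routing capabilities just described together, you can picture the whole thing as one declarative request to a hosted container service. This is a hypothetical sketch — the field names are illustrative assumptions, not the actual IBM Containers API.

```python
import json

# Hypothetical declarative spec combining the capabilities described:
# a scaled container group, an attached persistent volume, and an
# HTTP route with load balancing across the group's instances.
# Field names are illustrative only, not a real service API.
container_group = {
    "name": "catalog-api",
    "image": "registry.example.com/myteam/catalog-api:2.1",
    "instances": {"desired": 3, "min": 2, "max": 10},   # scale with demand
    "volumes": [
        {"name": "catalog-db-data", "mount_path": "/var/lib/db"}
    ],  # data outlives any single container
    "route": {"hostname": "catalog.example.com", "port": 8080},
}

print(json.dumps(container_group, indent=2))
```

The design point worth noticing is that the volume and the route are declared alongside the container group rather than inside any one container, which is what lets you throw a container away, replace it with a new image version, and reattach the same data and traffic path.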
So you have a more efficient execution path, where those containers can run directly on a bare metal server. And then finally, you want some visibility into your container-based system, so we've done work around integrating logging and monitoring for containers automatically. Any containers you run automatically get log analytics and metrics data, and again, we're doing this within the community: we're leveraging technologies like Elasticsearch and Graphite as the infrastructure for doing that monitoring, and we've integrated it with Docker. We've open-sourced technologies like a crawler that can automatically extract all the log and metric data from a Docker host — from all the containers on that host — and publish that information into a logging system. So as a developer you don't have to deal with how to instrument your container for logging; that can happen automatically, just by running your container on a host that's been instrumented to gather that data for you. So I just wanted to give you a sense of the full breadth of capability you need in any container environment if you really want to run production-grade workloads in it. Have we been doing this alone? The answer is of course not: we've been doing this with partners and with the community. IBM has a business partnership with Docker, the company. Phil's going to talk in a minute about some of the work we've been doing in the Docker ecosystem from an open source perspective, but IBM and Docker also have a business relationship, where we've been working together to deliver containers in an enterprise context. The IBM Containers work
I just described is part of that, but we're also doing work around on-premise support for Docker registries and runtimes, and integration with other parts of our portfolio. Michael Elder spoke before us on some of the work he's been doing around Heat patterns with things like UrbanCode; we have integration of Docker into those platforms, so as a developer, if you're building applications using Docker, we can integrate that with the existing development processes you might have built with tools like UrbanCode Deploy. With the community, there are two important things I wanted to highlight. In the last six months or so, two important foundations have been created to help move the container ecosystem to a more open governance model around the technologies that define containers — trying to make sure that all of us have the right voice in shaping the future of containers. The first is something called the Open Container Initiative. The Open Container Initiative, created underneath the Linux Foundation, has a mission to define a standard specification and reference runtime for a container runtime and image format, right?
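To make that concrete, the OCI runtime specification centers on a `config.json` file that tells a compliant runtime (such as runC) what to execute. Here's a trimmed sketch of one; the field names follow the OCI runtime spec, but this is illustrative rather than exhaustive, and exact fields track the evolving spec.

```python
import json

# Trimmed sketch of an OCI runtime config.json. Together with an
# unpacked root filesystem, this file forms a runtime "bundle" that
# any spec-compliant runtime can execute. Illustrative, not complete.
oci_config = {
    "ociVersion": "1.0.0",
    "root": {"path": "rootfs", "readonly": True},   # unpacked image filesystem
    "process": {
        "terminal": False,
        "user": {"uid": 0, "gid": 0},
        "args": ["/bin/sh", "-c", "echo hello"],    # what the container runs
        "cwd": "/",
    },
    "hostname": "demo",
}

print(json.dumps(oci_config, indent=2))
```

A bundle is then just this file sitting next to the `rootfs` directory, and `runc run <container-id>` executes it — which is the sense in which different runtime implementations can interoperate on the same format.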
So this is actually defining the standard for what it means to build a container image and what a container runtime should do, and it gives us a standard specification so that we can build different implementations of container environments across different platforms and across different providers. That was done with a whole bunch of industry players — you can see them here — and the folks at Docker contributed parts of the Docker project, like runC, as the initial reference implementation. That community is actively working right now to clarify its charter and define the key elements that will make up the basic definition of what a container is and what a container image is. The second foundation is something called the Cloud Native Computing Foundation, and this was formed a month or so later, again with a huge collection of industry players, including other foundations like the Cloud Foundry Foundation. This one is a little bit different: the goal of the CNCF is to provide a place where we can all collaborate on the higher-order problems around containers. How do we do orchestration? How do we do networking? How do we do service discovery, or microservices patterns, with containers? Its goal is not to come up with a single specification, but to be a place where we can innovate together around maturing the idea of containers and filling out that life cycle I described — filling out all the capabilities in that life cycle in a community forum. Google contributed Kubernetes as part of that foundation, the Mesosphere folks are contributing, and IBM is contributing resource management and other technology into that space. So I think over the next year this is going to become a really vibrant place where we'll see the development of a lot of important container management technologies — the things we all need to make that full life cycle I described real.

So that's a little bit of context on what we've been doing around containers. Let me have Phil come up, and he's going to talk about what we've been doing on the open source side. Is this mic on? All right, I think it is. So, Jason's given you a view into what we've been doing with Docker and containers from an enterprise delivery and partnership perspective, but I think it'd be great to also step back — going back to what Steve and Brad discussed, and even all the way back to this morning's keynote. When IBM involves itself in an open source community, we try to do it in an effective way that's not just beneficial to IBM's interests but beneficial to the community as a whole. And Docker is just another instance of that: starting a little over a year ago with a single contributor in the upstream community, just as we've done with Cloud Foundry and with OpenStack, we've grown that over the past year to approximately 25 different IBM contributors across many of the projects. A couple of us have become maintainers in the Docker engine — myself and another IBMer that I work closely with — but we're not just working in the engine.
You can see the list here — I won't read all of them — but we've contributed to things like Compose and Swarm, to the Distribution project, and to some of the other pieces that are now going into these foundations, such as libcontainer, which is now part of the Open Container Initiative. Across the set of releases you can see there, we've grown our overall contributions to be the number three non-Docker-Inc. contributor to the upstream project, which is exciting for us. And in that time we've not only done things that are beneficial overall, but we've also been able to integrate some of the things that enable our container service — integrations into SoftLayer and object storage — and now, in several cases, we're working on enabling some of our other architectures. You've heard of Power and System z; we're enabling these in the Docker engine, in the registry, and even in Docker's own test workflow, to be able to test on other architectures. So you've seen that much of this work has fed into what we've been able to do in our Bluemix container service, but I thought I'd spend just a minute listing some of the very specific things we've done that are beneficial both to us and to the overall community. In the area of security: running a public cloud obviously has some fairly significant security concerns.
We have people at IBM Research looking at the attack surface all across the container ecosystem, and so we've been able to add things like user namespaces, which is something coming in Docker 1.9, and we've been trying to clean up the overall error-handling paths and debugging capabilities, along with several other key enhancements. And of course there are many other things we'd like to see happen in the future that we continue to work on. I just mentioned that multi-architecture support is key for us across all our platforms, but it also enables things like ARM and the Raspberry Pi work happening in the Docker community. There are things around internationalization, which is obviously of interest around the world. And we continue to work in the runC community — for the Open Container Initiative there's a set of work to keep moving the specification forward along with the runC implementation — and we're even looking forward to Docker's REST API using Swagger to formalize it. So these are all areas where we are contributing, or looking forward to contributing in the future. You can see a timeline there of both our business partnership and our open source work with Docker and the rest of the community members. Again, as I mentioned a moment ago, running a public cloud with Docker containers has caused us to look deeply at the area of security, and this has been a great way for us to partner with Docker — both upstream in the open source community, and on calls together with them about things that we're finding and research that we're doing. A few areas where we've worked together: we've submitted pull requests and had things merged related to AppArmor profiles, and there's our work on multi-tenancy — obviously we're running a multi-tenant container service, and those of you who know the Docker engine know it doesn't have multi-tenancy today, so we're working on some of those specifications.
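The hardening work being described here — user namespaces plus per-container resource limits — eventually surfaces as knobs in the Docker Engine API's `HostConfig`. This is a sketch under the assumption of a Docker version that exposes these fields (user-namespace remapping itself is a daemon-level setting, not per container).

```python
import json

# Sketch of per-container hardening settings as they appear in a
# Docker Engine API HostConfig. Exact availability depends on the
# Docker version; user-namespace remapping is configured on the
# daemon (not shown here), these limits are per container.
host_config = {
    "ReadonlyRootfs": True,                              # immutable root filesystem
    "CapDrop": ["ALL"],                                  # drop Linux capabilities
    "Ulimits": [
        {"Name": "nofile", "Soft": 1024, "Hard": 2048}   # cap open files
    ],
    "PidsLimit": 100,                                    # blunt a fork bomb
}

print(json.dumps(host_config, indent=2))
```

On the CLI these correspond to flags such as `--read-only`, `--cap-drop`, and `--ulimit nofile=1024:2048` on `docker run`.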
So Docker is now looking to add authorization and authentication capabilities; I mentioned user namespaces, and limitations on a container that wants to go wild and open millions of files, or tries to bring the system down with fork bombs. These are all areas where we're working with Docker and with the community. If you'd like to hear more on that, one of our IBM security researchers is here this week and will be talking tomorrow about a lot of those lessons of running a production cloud and the work we're doing there.

So, this is a bit of a shift, but I hope that's given you a good flavor of what we're doing with containers, both in our enterprise delivery of them through Bluemix and in the open source work. To jump back to Brad and Steve: we have book authors here. They're not only Keystone experts in the community, they've written a book. I don't know if you received one on the way in, but there are books available — a free copy for you to take — that has already received great feedback; it's just useful information. Then myself and another one of the Docker maintainers have written a book called Open by Design, a research report that's available in e-book form. As you're leaving, at the table directly outside the door there are postcards with the download link for that book if you'd like to receive it; it looks at the strategy we've talked about in the last little bit here — our focus on open source and open governance as the way to transform cloud technology — and it's available as a free download as well. And then Jason was involved in The New Stack's recent publication on the Docker and container ecosystem; you can download that from The New Stack directly, so that's available as well. And in this room, through the remainder of the day, there are still talks available on various topics that may be of interest to you. With that, I think we have completed our material, so thank you for coming.