Everybody, welcome back to our IBM track on microservices. This is part three, where we focus on the application level of microservices. If you were here earlier today, Manuel started out by describing microservices and the best practices around them from a technical point of view, as well as all the cultural and community considerations that you might have as you transition to this new model. After that, my colleagues Sean and Andy took you through the supporting data services that every microservices application needs. That includes not only the data services where you store application data, but also the supporting attached services that provide your logging and log drains, your way to track and monitor your applications, and, in the case of mobile applications with mobile back ends, how you connect to push notification services. So in this talk, I'll focus on the application part of it. And then we'll follow up in the next session with a talk on DevOps: after you've developed an application, how you take it through your pipeline from development through staging to production. OK, so we've all been talking about the same sample application here today. It is a mobile application, an iOS mobile application, that allows you to upload photos to a backing cloud service. The Kitura mobile back end is our mobile back end as a service, written in the Swift programming language. That, in turn, has a bunch of associated services: Object Storage to take the image binary, Cloudant to store the metadata, and push notifications to support the communication back to the mobile device. And there's also a component here for asynchronous, event-driven work, and that's implemented by OpenWhisk, which is a new open source project started by IBM that's heading to the Apache Foundation.
So here I want to focus on those two application components: the mobile back end as a service, the Kitura mobile back end, as well as the OpenWhisk runtime for serverless, event-driven applications. OK. So we've tied together all of these sessions, one through four, around the 12 factors for cloud native applications. These are 12 factors that have been distilled out of the experience over the last 10 years or so of developing applications for the cloud, as opposed to non-virtualized environments. You'll need to consider all of them as you develop well-designed microservices applications. At the application level, you're probably going to focus most on six specific factors. Factor number two, your dependencies. One of the key things about deploying microservices is that as you produce code, you really want to ensure that everything your code needs is known, declared, and provided to it. There shouldn't be any guesswork as you move your code from development to staging to production. And related to that, you should be designing with a mindset that your code, as it exists, should run in any of those environments without needing to be changed. To do that, you design it in such a way that it gets configuration from the environment: database connection parameters, dependent endpoints, things like that. Factor number six, you want to make sure that your applications are stateless, that your logic is completely isolated from any sort of state held within it. And you do that by focusing on the process as the single unit of concurrency, really, to support your application. Port binding: as you're developing your application, you'll probably be working on your local workstation with a local web server that's not production grade, which may be listening on port 8080, 3000, or something like that. You won't want to hard-code that.
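To make that configuration factor concrete, here's a hedged JavaScript sketch of pulling database connection parameters from the environment rather than the code. Cloud Foundry platforms like Bluemix expose bound services as JSON in the `VCAP_SERVICES` environment variable; the `cloudantNoSQLDB` label and the local fallback URL here are assumptions for illustration, not the demo app's actual code:

```javascript
// Factor: configuration comes from the environment, not from the code.
// Cloud Foundry exposes bound services as JSON in VCAP_SERVICES.
function databaseConfig(env) {
  const services = JSON.parse(env.VCAP_SERVICES || '{}');
  // 'cloudantNoSQLDB' is the label Bluemix uses for Cloudant bindings;
  // fall back to a discrete variable (or a local default) for development.
  const cloudant = (services.cloudantNoSQLDB || [])[0];
  if (cloudant) return cloudant.credentials;       // { url, username, ... }
  return { url: env.DB_URL || 'http://localhost:5984' };
}
```

The same code then runs unchanged in development, staging, and production; only the environment differs.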
You'll let that be provided by the environment: your local development environment, your staging environment, or whatever production load balancer will sit in front of the instances of your application. And to achieve concurrency, you should just be able to copy each individual process and distribute it over a fabric of hosted runtimes, and that'll give you your scale-out support. Again, going back to that stateless point, you need to make sure that your code, as written, can be stopped and killed immediately without any adverse side effects. It should not cause database corruption. It should be very clean as to its separation from any external services, so that it can be replaced by another instance of your application. OK, so with that background and the context set for these talks we've been doing, here I'm going to focus on containers and how they help you achieve those six best practices. And there's not just one way to adopt containers with your application. So I'll talk through a couple of the different logical models that you might adopt, depending on your workload type, as well as the particular deployment targets that you might have in a cloud built on open source technologies, like IBM Bluemix. And then we'll talk a little bit about how those different approaches come together in a real-world application. In that BluePic demo app, you'll see that it is deployed with both a traditional web back end as well as an event-driven serverless back end for the offline processing. OK, so let's first look at containers and why they fit so well with these best practices. Just a review here; if you're here, you probably already know quite a bit about containers, both from the summit as well as a developer-focused event like this.
So logically, you can implement microservices with virtual machines, but containers really make a better fit, because to meet those goals of scaling out, you really want to be able to pack as many of them onto your resources as possible, and to leverage as much of the host operating system as you can. So there's a benefit over virtual machines in that you're cutting out a lot of the middle: the duplication of code, the excess attack vectors, anything else that's going to consume resources. So they're definitely attractive from that density point of view. And that best practice for isolating dependencies is achieved with containers by packaging them with everything they need, down to the explicit level of any libraries or extensions that they'll require, including versions, and making sure that what they have packaged with them is not something provided by the underlying operating system. All of that makes them not only more efficient, but faster to run and easier to transfer from one system to the next, so you get that ability to just deploy and replace quickly without modifying things that are already deployed. OK, so again, mapping those best practices to containers. All of these open source container runtimes that we'll talk about provide a way to declare those dependencies. With Docker, for example, you explicitly put them in a Dockerfile, then build and package. For configuration, a container platform runtime can give you all the configuration that you need. Everything that's going to be run across any of the deployment platforms is going to ensure that your application is completely isolated as a process. And they'll also provide a way for you to push your web application, as it were, to somewhere where port 80 is mapped to it. And with some of the more advanced container platforms out there, there are lots of options now.
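A minimal Dockerfile makes that dependency declaration concrete — everything the service needs is pinned explicitly rather than inherited from whatever happens to be on the host. This is an illustrative sketch (the base image tag and file names are assumptions, not the demo app's actual build):

```dockerfile
# Pin the base image explicitly rather than relying on :latest
FROM node:6.9-alpine

WORKDIR /app

# Declare and install dependencies from the package manifest first,
# so the layer is cached when only application code changes
COPY package.json ./
RUN npm install --production

COPY . .

# Document the port the process binds to (the platform maps it)
EXPOSE 8080
CMD ["node", "server.js"]
```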
You'll get a concurrency model that lets you scale and deploy your containers across either a fabric or a single system — scale up, scale out, whatever you need to do. And the container technology will basically enforce that your container can just be treated as a process, isolated and destroyed as needed. So that's great from a packaging, from a system point of view. But one of the key benefits of approaching microservices this way is that you can use whatever language best matches your particular service. If you're a really good iPhone developer, as in the case with our demo app, and you're writing a Swift application, you may want to write in the same language on the server, using Swift in a Linux container. In the case of JavaScript, if you're using a NoSQL database like CouchDB, you're familiar with that JSON format, and you want to get data to your service with a minimum of translation or transformation overhead: if your microservice is written in JavaScript and your database is serving up JSON, you can just push that model to your end user as a chunk of JSON as well. So it's really a good way to support that language independence. As for the container platform, I'm sure you've heard a lot about Kubernetes here. I think with containers 1.0, everybody was excited about building containers with Docker and running them anywhere. But then they realized they needed a whole bunch of other things around that to actually run them anywhere. So container platforms like Kubernetes, Mesos, and several others are a way now to abstract out any of the infrastructure concerns that you'd otherwise need to worry about. Essentially, you're not worried about the OpenStack-level concerns; those are just provided to your microservices to consume. These platforms, like Cloud Foundry, can help you declare what a cluster should look like, how many instances should be running, and ensure that that's exactly what happens in production.
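In Kubernetes, that declaration of what the cluster should look like is literally a manifest the platform reconciles against: you state the desired instance count and it keeps reality matching it. A hedged sketch, with the service name and image made up for illustration (the API group shown is the Deployment API of this era):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: photo-backend            # hypothetical service name
spec:
  replicas: 3                    # desired instance count; the platform
                                 # replaces any instance that fails
  template:
    metadata:
      labels:
        app: photo-backend
    spec:
      containers:
      - name: photo-backend
        image: registry.example.com/photo-backend:1.0
        ports:
        - containerPort: 8080
```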
If something fails, it's replaced. And as you're starting up the applications, there's a full lifecycle where you don't have to explicitly manage starting the applications if you don't want to. One of the key benefits of those platforms is that you can abstract things out and take advantage of all the resources that you have — deploy your applications where they fit best, say in a data center closer to a user, or something like that. They can just leverage the available resources without the per-server focus that you may have had. OK. So let's take a look at some of those open source models. In the IBM case, obviously, we are big proponents of open source. And there are at least four options you have with Bluemix for deploying applications, each taking a slightly different point of view. We have an infrastructure service with OpenStack. We've got a Docker container cloud. We have a Cloud Foundry platform as a service. And there's a new model for functions as a service with OpenWhisk. Each of those, even though they're all backed by containers — on Bluemix or in the open source implementations — takes a slightly different point of view depending on how you, the developer, want to approach your application: how much control you want to have, traded off against how much convenience you need. So in the case of Docker containers, do you want to be able to explicitly version your runtime, or have more information on the exact operating system that's the base for your application? You can do that; you can package all of that yourself. With the platform as a service, you've traded away the requirement — or the power — to package your application with an explicit runtime, and given up a little control over, say, the exact version of JavaScript that your application requires. But you're still working closely with the data services that you depend on.
And the final model, the new serverless one, which IBM offers through the OpenWhisk project — similar to something like Amazon Lambda, if you're familiar with that programming model — allows you to focus even more closely on just the single functions that implement your logic, versus packaging the whole framework that you might get with a web application. So yeah, Docker is a great way to get started, at least on your local workstation in a non-distributed manner, just to understand, if you're new to containers, how to get going with them. You've got a lot of great developer tools now that make those features of the Linux operating system very easy to use. So use it if you want full control over, again, the dependencies that you have. Your programming model for that is: you write your code, you build it, you push it to a registry, you run it locally, and you deploy it when you're done. In the case of the Bluemix container service — those links on the bottom each point to directions on how you, as a developer, can get started with this — you would use a command-line tool or the Bluemix GUI to work with those container runtimes. You'd build your Docker images and push them either to our registry or another registry that you may want to use. You have a private registry within Bluemix that you can exploit for better performance, plus security checking that can test those images for any sort of issues. And you can map virtual IPs on Bluemix to your container endpoints and use the same Cloud Foundry model of service brokerage to bind dependencies — external services that you might be using within that cloud or outside the IBM cloud. And you can group them into single containers or container groups, or build up clusters with Docker Compose right now. Again, we're very much looking at Kubernetes as well, so this is going to evolve.
With Cloud Foundry — this was the first developer option we had on Bluemix; it's been out there for over two years — you essentially already have a nice orchestration technology that's totally abstracted away from you: you write your application code, and it provides a way to just run that on top of a black-box platform as a service. And in the case of Cloud Foundry, it's also consistent. You can use the cf CLI, which is the open source CLI, to push your application. It will then be built with all of its dependencies and run with however many instances you specify that you need. So if you want to provide HA, you do that by manually specifying the size of your cluster within your manifest, which has the information about your deployment. You can edit and version the code. And the session after this will show you how you can push it through a DevOps pipeline and deploy it as a Cloud Foundry application. And you can provision all the services; they'll come in as environment variables, and you'll be able to build on top of those. OpenWhisk. This takes both of those container deployment models a little further. This model has been called serverless because you're thinking less about scaling: you just assume that when your code is run, it scales up and scales down for you. It's also serverless in that you're not really working with any processes yourself. You're just focusing on a simple snippet of code that runs at some point — maybe one time a day. It's not sitting around all the time. So if you have unpredictable workloads and you don't want to manually specify the size of your cluster, it's very good for that. It's also a little more cost-efficient to map your business logic directly to the compute time used. So if you have, again, those very bursty workloads or things that happen infrequently, it can be a better model than reserving a Cloud Foundry instance by gigabyte-hour, for example.
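The manifest mentioned above is just a YAML file next to your code that `cf push` reads. A sketch — the app name and buildpack here are hypothetical, not the demo's real deployment:

```yaml
# manifest.yml — read by `cf push`
applications:
- name: photo-backend        # hypothetical app name
  memory: 256M
  instances: 3               # manual HA: run three instances
  buildpack: swift_buildpack # resolved by the platform if omitted
```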
And you can go into Bluemix right now — again, a very similar developer experience, so you have a choice between all these platforms. Here there's a wsk command-line tool. There's also a GUI that lets you write those pieces of code in JavaScript, in Python, in Java, in Swift, create the actions, associate the events that fire them, and map that declaratively through rules. And again, there's a nice little monitoring console built in already, to see the performance of those applications, as well as the successes, the failures, the outputs, and things like that. Okay, and there's one more thing. Another open source project that came from IBM — it's called Amalgam8. Has anybody heard of it? Okay, it was just announced at DockerCon earlier this year. It's essentially a way to provide service discovery, which is very important for groups of containers that need to find all their dependencies. It'll run in any cloud, within many different frameworks like Kubernetes. You can find it at amalgam8.io and up on GitHub. I won't get too much into it, because it's not used in the demo application that we have, but as you start to build more complex clusters that you may need to move in a multi-cloud manner — from one cloud to the next, from your development systems out into the public cloud, things like that — it's certainly something you'll want to look into. It was built to address some of the shortcomings that already existed in systems like the Netflix OSS projects. So there are interesting architectural solutions to some of the problems that may expose themselves. Okay, so let's tie this all back to the Cloud Foundry application. The Swift programming language was originally built for mobile applications on iOS devices. But recently — actually about 10 months ago — we worked with Apple to open source that language and provide a way for it to be extended to different workloads.
One of the key innovations that came out of that partnership was the development of an open source web framework called Kitura. Kitura allows you to provide a back end for your mobile application that's written in the same language. So here, in the BluePic demo app, if you saw it earlier, you can edit the code that runs in a Swift runtime on the server, within a Bluemix console or on your own — I prefer Sublime for writing code. You can push that code, either push-button through the Bluemix UIs or through the cf tool. And you can see the status of that application once it's pushed. Then I'll show you how you run the mobile app against those services. And for the extension point — this isn't in the upstream right now, but it just shows you how you would approach the programming model here — I picked PHP. I love PHP; it was just the easiest thing for many developers who want to create an iteration of a service quickly and maybe move on to something else later. In my demo here, I'll show how I created a simple file and pushed that to Cloud Foundry, just to provide an admin UI around some of the data generated by the Kitura application. What that aims to show is that I can leverage the other endpoints provided by Kitura, or I can connect to the database that backs that service. And again, same type of use case: push code, view in the dashboard. On the OpenWhisk side, the part that's in the demo app is used to take that uploaded image that was pushed over HTTP. What it does is transform that file to a smaller size and optimize it. And it uses some of the Bluemix Watson services to extract metadata, such as the location and anything identified within the image, as well as the weather information. The reason OpenWhisk makes this very nice is that the mobile back end is working synchronously with the mobile application.
OpenWhisk can handle some of those compute-intensive tasks — transforming data, contacting a bunch of other services, aggregating information — and then eventually call back to Kitura to provide the push notifications and update the UI. And again, with OpenWhisk, what I can do to take load off of that production database is just copy the production database that backs the mobile application, so that I can perform more analytical types of workloads against the copy. Because OpenWhisk can be triggered by events, a new image going into one database can fire the action and have that copy put into another database. Okay, so I hope this works out here. I'm handling all of these microservices on my own workstation using Sublime. You can use any editor you want; again, it's your choice, whatever you want to use to push to the cloud. Kitura essentially provides you a little hook to start an HTTP server for your Swift logic. Here, there's just a main Swift application. What it does is pull in some logging support, and we give it a controller mapping. The service controller essentially implements the REST API behind the mobile back end. And this, in turn, can map routes against any other linkages to logic, to connect to services, or it may provide its own API here, which we'll use later. In the Cloud Foundry world, the way you take that code and package it is you do a cf push. It takes a bit of time, so I'm not going to do it here in a live demo, but I'm using the CLI, the cf CLI, here. What I would do is cf push what's in my directory up to the cloud. It will then be available at an endpoint that I can go ahead and browse on Bluemix. So here I'll just sign in. And so Kitura can provide this little web interface written in the Swift programming language. Okay. Yep. And so that's the web application level of it.
As for monitoring the OpenWhisk jobs: I can see, for example, when these OpenWhisk actions that are written in Swift are pushed up, and I can take a look at the logs for them. So, let me see here. Orchestrator, right. Orchestrator is one of the serverless applications built for this demo app. Essentially what it does is link some backing services that it will call asynchronously, and it handles some processing-type jobs. All OpenWhisk jobs, whatever language they're written in, implement just a main function that takes a context or parameters value. Here, the connectivity to external services is injected by the OpenWhisk platform. It will call out to the weather service asynchronously. It will call out to the Alchemy image tagging service. So that's my consistency: whether I choose a CF app in Swift or serverless, I can do it either way. Okay, so that's the core of the mobile application. There's a bunch of backing services that it has as well, which you can find on GitHub — look for BluePic. Now, if you took that open source project and you want to add a brand-new service — for example, a non-user-facing API, or a way to look into the different types of tags that have been associated, or the different locations my end users have been pushing up information from — I can simply push, for example, a PHP application here that uses Bootstrap and just hits those APIs provided by Kitura, pulls in the data, and lets me view, by hitting those APIs, the locations and vision tags. So it's just a simple way to build on the application without changing the main logic, just adding new features, which is a key benefit of doing this in a microservices versus monolithic way. Okay, just go back to this one. Yep, and in this case, I had written that code in PHP — nice, ugly PHP, all in one file, a single-page application in the PHP world.
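The shape of such an action, in JavaScript, is just a main function that receives its parameters (including any injected service credentials) and returns a result. This is a hedged skeleton, not the actual Orchestrator code — the parameter and field names are illustrative:

```javascript
// OpenWhisk action skeleton: the platform invokes main(params), and
// bound packages inject credentials/endpoints into params.
// Field names below are placeholders, not the demo's real bindings.
function main(params) {
  const imageId = params.imageId || 'unknown';

  // In the real app this step would call out asynchronously to the
  // weather and image-tagging services; here we just assemble the
  // result shape those calls would populate.
  const metadata = {
    imageId: imageId,
    tags: params.tags || [],
    location: params.location || null
  };

  // Returning an object (or a Promise of one) completes the activation.
  return { payload: metadata };
}
```

The same main-function contract holds whether the action is written in JavaScript, Swift, Python, or Java — which is the consistency point made above.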
It just uses curl to hit those endpoints for the tags, stores the results, and I process them. Of course, that doesn't solve my problem of not putting load on the original system. So instead I can connect to another database. What I can also do with OpenWhisk — writing it in yet another language, because I prefer JavaScript to Swift — is write a function, again implementing main, that does an async waterfall (you can also use promises) to link a chain of calls: when a change happens and an image is uploaded to that database, it's going to copy out just the metadata. I don't need the binary data; I just want the tags and location. I'll capture that and push it to another database that I have, which will be my analytics database. The interface for working with OpenWhisk is that you're working with those actions, triggers, and rules. I've got a little script that I can use to uninstall and redeploy them. So I'm getting rid of the rules that I had from when I set this application up before, and I'm removing the trigger that was associated with the database, just to be clean. As I'm iterating on this code and testing it, I'll just be going through deleting those resources and creating my actions. Here's the new one I just added. These were the existing actions backing the mobile back end's asynchronous processes. And I've got my new trigger tied to the database, and it's mapped by a rule. So if I do a wsk activation poll, I'll go ahead and watch as I upload a brand-new image using the mobile front end here — oops. Ooh, sorry about that. Okay. All right, I've got my mobile app back. So now that I have my copy job in there — actually, I'll clear this guy — I will upload a new image; I have a nice little flower here. And so the existing service, the microservice there, is processing that upload. The asynchronous jobs are processing as well.
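The heart of that copy step is just projecting the changed document down to the fields you want before writing it to the analytics database. A sketch — the field names are assumptions based on the description above, not the demo's real schema:

```javascript
// Project an uploaded-image document down to its analytics subset:
// keep the tags and location, drop the binary attachment.
// (Field names are illustrative, not BluePic's actual schema.)
function toAnalyticsDoc(imageDoc) {
  return {
    _id: imageDoc._id,             // keep the same id for traceability
    tags: imageDoc.tags || [],
    location: imageDoc.location || null
    // note: no attachments / binary data are copied
  };
}

// In the real action this runs inside an async waterfall fired by the
// database change trigger, then writes the result to the analytics DB.
```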
It's got some location data, just hard-coded to my particular local environment. And my function now — you can see that my analytics process saw that the data was handled in the original database, and it piped it over to a separate database with just the piece of information I needed: here's the location data, plus any tags it may have gotten from the image recognition service. And if I go back to my cloud databases, I've got my original database here with the binary files. In my analytics database, I should have just a subset of that data for that particular upload. There were no tags for this one, but I've got San Francisco. So I can query against this, and each of my microservices has a totally different connection. Okay. So just wrapping up: I hope that gave you an overview of why you'd use containers to package your applications, to address those best practices with microservices, as well as the different programming models you have. Although they're consistent in your developer workflow — working with an IDE, using a CLI tool, and pushing — you have different deployment models. One is an always-on Kitura web mobile back end, with its own billing and scaling model, and then there are those OpenWhisk actions that are just invoked to handle events and process data, and they can also be fronted by their own web front end. And after this — something I didn't cover — now that I've hacked up some microservices, you'll see how you can properly go ahead and iterate on them. My colleagues would probably like to kick the PHP out of there and write a different version of that. So they can go through a DevOps pipeline, write a new implementation of either the backing services, the mobile back end, or those asynchronous services in Swift, and show you how you can run tests against it and automatically deploy, instead of maybe just using that CLI method. Okay, so any questions?
Okay, I'll be around after if you guys have any questions and then we'll get started in about 15 minutes with the final session where Megan and Michael will show you how to execute a DevOps pipeline. Again, built on open technology and implemented in Bluemix. Thank you.