Welcome, everyone. Let's get started. Let me just connect this one. Yep. This is an infographic from the Multi-Platform Trend Report on the Cloud Foundry website. What's really interesting for me here is the adoption of PaaS among enterprises: 77% PaaS usage and 72% containers. That's striking, because if you compare how many years PaaS has been around with how recently containers arrived, 72% is a big leap. And 46% of enterprises are using serverless. The people who responded to this survey are IT decision makers, and 39% of the respondents are using all three of these abstractions. So what I want to talk to you about today is Cloud Foundry, which originally started out as a PaaS and now has abstractions that can run containers as well as serverless workloads. My question to all of you today is: is it possible to architect an app spanning the abstractions in Cloud Foundry? When I say abstractions, I mean PaaS, CaaS, and FaaS: platform as a service, container as a service, and function as a service. I'd like to start off with a little bit of my own experience. When I started in the IT industry, there was only one way to develop an app, and that was the monolith. Every team member contributes to the same code base. We never even used to call it a monolith in those days; there were just lots of them being built. And I want to make an observation here: when we started building these applications, the focus among developers, if you joined a development team, was design patterns and code complexity, basically because everyone was contributing to the same code base. For design patterns we still have the book from the Gang of Four; I'm sure some of you have read it as well.
Twenty-plus design patterns, some really cool object-oriented techniques to keep your code manageable and to build a shared understanding of the code base across the team. That's where the focus was, and that's what the discussions were about. But soon some new architectural patterns came out, like service-oriented architecture and then microservices, where we started taking that app, identifying the services that make up the app, and splitting it into independently deployable artifacts. That's where the microservices world is heading. And then the focus changed to domain-driven design, DevOps, platforms, and distributed computing. What's really interesting for me, coming from developing monoliths in those years, is that now nobody is talking about design patterns or a shared understanding, because everyone is working on a really small code base, and all they're really interested in is what interfaces these services expose. They don't care about the internal implementation of those services. These are just observations I'd like to make. Another observation is that innovations in cloud computing, driven by cloud vendors like Amazon, Google Cloud, and Microsoft Azure, along with innovations in the Linux kernel, the introduction of cgroups, namespaces, and container technology in general, have given rise to the different abstractions called PaaS, CaaS, and FaaS. PaaS is platform as a service, as you all know. It's really built for running stateless apps. When I say stateless apps, I mean apps which have externalized their state; they don't persist anything to disk. And twelve-factor apps: that's a set of guidelines for making your app deployable on such a platform as a service.
Then you have container as a service, which is ideal for running your commercial off-the-shelf products and stateful apps. An example of a stateful app would be a machine learning workload. Apache Spark, for instance, does most of its compute in memory, but when there is not much memory left it has to spill to disk. That kind of application is well suited to a container as a service. The same goes for databases: many database vendors are now containerizing their products and shipping them that way. Then you have function as a service. Here you just push your function to the platform, and the platform takes care of running it for you. You basically give it a URL, maybe a Git URL, and it takes that URL and builds the image for you. When an event comes into the platform, it spins up a container, or whatever technology it's using, to service that event. The reason I wanted to set this up is to show that we now have the choice of moving services into these different abstractions. So my question today is: how does Cloud Foundry help you split a monolith into microservices, where some of those services are a good fit for a platform as a service, some for a container as a service, and some for event-driven functions? The next question is: what does Cloud Foundry offer for a multi-platform world? How can you use Cloud Foundry to run workloads on these different abstractions? This is what I was showing you earlier. Cloud Foundry has something called the application runtime, CFAR, whose mission statement you could sum up with Onsi's haiku: here is my source code, run it on the cloud for me, I do not care how.
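To make that "push a function, let the platform run it" model concrete, here is a minimal sketch of the kind of stateless handler such a platform invokes per event. The handler and its names are my own illustration, not the demo app's actual code: it takes an already-fetched HTML page, returns the links found on it, and keeps no state of its own.

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def handle_event(html_page: str) -> list[str]:
    """A stateless, FaaS-style handler: event in, result out.

    Hypothetical example; the platform, not the function, owns
    scaling, networking, and the lifecycle of the container.
    """
    parser = LinkExtractor()
    parser.feed(html_page)
    return parser.links
```

Because the handler externalizes all state, the platform is free to spin up a fresh container per event and tear it down again afterwards.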
You just do a cf push, and the application runtime takes care of building the container for you, storing the droplet in a registry within the platform, and handling your DNS, networking, and security. Even load balancing is taken care of for you, so you as a developer don't have to do anything about it. Then you have the Cloud Foundry Container Runtime, previously called Kubo, which stands for Kubernetes on BOSH. It gives you an interface to create highly available Kubernetes clusters managed by BOSH. On these two building blocks you can have all your abstractions running. That's the focus of my talk today: taking a demo app and showing how you can place each service on the abstraction it's well suited for, PaaS, CaaS, or FaaS. That makes it multi-platform. And BOSH, if you're not aware, is a release engineering, deployment management, and lifecycle management tool. It has a component called the CPI, which stands for Cloud Provider Interface, and that makes it cloud agnostic. So you could be running your PaaS, the application runtime, on BOSH, and BOSH might be deploying it onto AWS, for example, or onto VMware vSphere. BOSH abstracts away the complexity of dealing with each cloud provider so that operators don't have to deal with it themselves; it provides a good control plane for the operator to work with the platform. That makes it multi-cloud. So you get multi-cloud plus multi-platform with Cloud Foundry. This is the app we're going to look at today. The app has a few APIs and a UI, as you can see, which are deployed onto a platform as a service. Given a website URL, the app goes and fetches all the links from that website and then makes a prediction about what kind of website it is, from a predefined list of categories. I'll do a quick recorded demo of this.
Since we are in Basel, I thought I'd try basel.com, which is a tourism website. Let's see how it goes. This is a recorded video; we'll get into the details of what exactly is happening behind the scenes and how this app has been deployed onto the different abstractions. So the link has been submitted. In the background it's going and fetching all the links on basel.com. You can see the words; it does a little bit of stemming before it submits them to the machine learning module. The fetching of links happens on the function as a service, and we'll look into that soon. The machine learning prediction is done on the container as a service, and the API and the UI run on the platform as a service. Let me just fast-forward a little. It has predicted a travel website, which is pretty much right. Let's move on. I'd like to introduce you to the team behind this, the WebCat team. There are UX and API designers here. The UI designers are building the front ends. The API developers are using Spring Boot and Node.js, and they need a Postgres database and a Redis cache. They use a platform as a service so they can do self-service provisioning of their applications and their backends. Then we have the data science team, with data scientists doing the machine learning work; they build the models for predicting the category of a website. And you have the data engineers on the other side, who are responsible for ingesting data into the environment and building the data pipelines. So this is how the stack looks. The UI and APIs use Spring Boot and Node.js, with their own Postgres and Redis, running on the Cloud Foundry Application Runtime. The machine learning team uses Spark, running on the Cloud Foundry Container Runtime. And then there are the serverless, event-driven functions.
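The "little bit of stemming" mentioned in the demo can be pictured with a toy preprocessor. This is purely illustrative (the real pipeline would likely use a proper stemmer such as the Porter algorithm): it lowercases, tokenizes, and strips a few common suffixes before the words go to the machine learning module.

```python
def crude_stem(word: str) -> str:
    """Very rough suffix stripping, for illustration only."""
    for suffix in ("ing", "ies", "ed", "es", "s"):
        # Only strip when a reasonable stem would remain.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            if suffix == "ies":
                return word[: -len(suffix)] + "y"
            return word[: -len(suffix)]
    return word


def preprocess(text: str) -> list[str]:
    """Lowercase, tokenize on whitespace, and stem each token."""
    return [crude_stem(token) for token in text.lower().split()]
```

Reducing "visiting" and "visits" toward a common stem is what lets the model treat them as the same feature.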
Those use riff, plus Knative, and all of this runs on BOSH, as I mentioned earlier. Just to set the context: I'm using a vSphere environment with Pivotal Cloud Foundry installed. As you can see, I have a BOSH director there, plus the Pivotal Application Service and the Pivotal Container Service. The application service runs on top of the Cloud Foundry Application Runtime, and the Pivotal Container Service uses the Cloud Foundry Container Runtime. So let's deploy the front end and API onto the application runtime. I'll have to log into my VPN to do that, so just bear with me. Once that's connected, we should be able to push an app. Yes, it looks like it's connected now. OK. The first step: I have this app here, a Spring Boot app, which I'll push onto the platform, onto the Pivotal Application Service running on the Cloud Foundry Application Runtime. I'll show you where it's running: cf target shows me that I'm targeting api.system.pcfplatform.com; that's where the platform is running. Now I'm going to do a cf push. This is the "here is my source code, run it on the cloud for me, I do not care how" happening live. I've just pushed an app with a manifest, which simply describes the name of the app, WebCat, and the jar to deploy. As you can see, once I've pushed the app, buildpack detection and creating the droplet are taken care of by the platform, and even storing it in a registry is done by the platform itself. Now, this is the route for that app; I'm going to hit it. This is the Apps Manager, which shows me the app here. It's just been deployed; I can click on it, and if everything works, I should see the UI. That's the first part, deploying the application.
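For reference, a cf manifest of the kind described, just an app name and the jar to deploy, looks along these lines. The path and memory setting here are illustrative placeholders, not the demo's actual file:

```yaml
---
applications:
- name: webcat
  path: target/webcat-0.0.1-SNAPSHOT.jar
  memory: 1G
```

With this manifest in the working directory, a plain `cf push` picks it up, runs buildpack detection, builds the droplet, and maps a route.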
Now, let's look at the second part, deploying the machine learning workloads. The first step is to create a Kubernetes cluster using PKS, the command line tool. Let me speak briefly about PKS while I log into the environment. PKS is a command line tool for spinning up highly available Kubernetes clusters. It takes 10 to 15 minutes to spin up a cluster, so I've done it in advance. As an operator, you can go in and ask PKS to create a cluster, and the PKS API talks to BOSH and uses the Cloud Foundry Container Runtime to create a highly available Kubernetes cluster for you. Once that command finishes, you have a highly available Kubernetes cluster; that's what I've already done. The next step is to get the credentials for the cluster into my kubeconfig, and then finally I deploy the machine learning workloads. I've done all of this. Here you can see kubectl: there are a few namespaces, and there's one called machine-learning-workloads where I have deployed my pod; that's where the Spark application is running right now. If I look at my default namespace, there is nothing running at the moment except a link collector, which will probably die in a while. On deploying machine learning workloads: since Spark 2.3 there has been native support for Kubernetes, so you can now actually run Spark on Kubernetes. The final part is the function as a service. Before we get into that, I wanted to talk a little about the landscape. You have AWS Lambda, Azure Functions, and Google Cloud Functions, which are functions as a service provided by those vendors. You also have open source function-as-a-service implementations. All of these open source implementations have a build, scale, and eventing module.
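The cluster workflow just described boils down to a few commands, roughly like the sketch below. The cluster name, hostname, and plan are illustrative placeholders, not the demo environment's values:

```shell
# Ask the PKS API, which drives BOSH and the CF Container Runtime,
# to build a highly available cluster; this takes 10-15 minutes.
pks create-cluster ml-cluster --external-hostname ml-cluster.example.com --plan small

# Merge the new cluster's credentials into the local kubeconfig.
pks get-credentials ml-cluster

# Check the namespace where the Spark workloads are deployed.
kubectl get pods --namespace machine-learning-workloads
```

The point of the PKS layer is that the operator never touches the IaaS directly; BOSH and its CPI handle the VMs underneath.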
But they each implement those pieces slightly differently, which has created a lot of fragmentation and lock-in. So a new initiative called Knative, started by Google with industry leaders, provides a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. It has the same three pieces: serving, build, and eventing. Serving is basically scale to zero. Build takes your source code and does source-to-image. Eventing is how you manage subscriptions and delivery of events to these functions. We'll be using riff. It's an open source project from Pivotal; the name stands for "riff is for functions". That's the URL if you're interested; please go have a look. It builds on Knative and is going to be the foundation for our future product, Pivotal Function Service. Now let's deploy the event-driven functions. riff, as I mentioned, uses Knative under the hood: when you do a riff system install with the stable manifest, it installs Knative for you. The first step for me is to create a channel. I create a channel here for the link collector; in fact, I'm creating two channels. Channels are Knative concepts, but since I'm using a Kafka bus here, each one creates an actual Kafka topic under the hood. Once I've created these two channels, I deploy my function code: I do a riff service create for the link collector and tell it which image to use, so that when an event arrives, this container is started up. The last step is connecting the dots: you do a riff service subscribe and say that the link-collector channel is my input, and whatever the link-collector service returns has to go into a different channel. riff hides all the complexity of the YAMLs you would otherwise need to write, and makes it really simple to build this pipeline.
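Put together, the riff steps above look roughly like this. The channel names, the image reference, and the bus flag are reconstructions from the description, not the exact demo commands:

```shell
# Two Knative channels; with the Kafka bus each is backed by a Kafka topic.
riff channel create link-collector --bus kafka
riff channel create link-fanout --bus kafka

# Deploy the function, telling riff which image to start per event.
riff service create link-collector --image registry.example.com/link-collector:v1

# Connect the dots: events on the input channel invoke the function,
# and its return values flow into the output channel.
riff service subscribe link-collector --input link-collector --output link-fanout
```

Each command is generating and applying Knative resources behind the scenes; that is the YAML complexity riff is hiding.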
Now, this is the riff command line. If I do a riff channel list, you can see some of the channels I've created. What I'm going to do now is show you the pipeline I've built, and I'll explain exactly what it does. Initially, when I submit a link, it goes into the link-collector channel, which is picked up by the link-collector service. That service works out that, say, basel.com has 138 links, and puts those 138 individual links into the link-fanout channel. From the link-fanout channel they go into the link-crawler channel, which crawls each of those URLs individually. Finally, the link crawler puts that data into the link-crawled channel. All of these are backed by Kafka topics. Then another function gets notified, picks up all this data, sends it to the machine learning module, and asks: based on the data we have collected about this website, what do you think its category is? So this is the high-level architecture of the app. As a user, you come in and submit a link to the web app, which puts it onto a queue. The queue can be managed as part of the platform as a service, because we have integrations through the service broker and the Open Service Broker API, or it could be something running in your Kubernetes cluster as well. Either way, those messages are picked up by the function as a service, each function starts executing, and finally the data goes into the machine learning app on the other side, where a prediction is made. That response goes back onto the queue, gets picked up by the web app, and is shown to the end user. So you can see how you can place each component on the abstraction that makes sense for it. Normally, you shouldn't see anything running here.
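Independent of Kafka and Knative, the channel topology just described can be simulated with plain in-memory queues. Everything below is a stand-in of my own, the fake collector and crawler in particular; the point is only the shape of the flow: one submitted link fans out, each link is crawled as its own event, and the aggregated words go to the machine learning module.

```python
from queue import SimpleQueue


def link_collector(url):
    """Stand-in for fetching the page and returning the links on it."""
    return [f"{url}/page/{i}" for i in range(3)]


def link_crawler(link):
    """Stand-in for crawling one link and extracting its words."""
    return {"link": link, "words": ["travel", "museum"]}


def run_pipeline(url):
    fanout, crawled = SimpleQueue(), SimpleQueue()
    # Stage 1: the collector fans the site out into individual links.
    for link in link_collector(url):
        fanout.put(link)
    # Stage 2: each link is crawled independently, one event per link.
    while not fanout.empty():
        crawled.put(link_crawler(fanout.get()))
    # Stage 3: aggregate the words for the ML module's prediction.
    corpus = []
    while not crawled.empty():
        corpus.extend(crawled.get()["words"])
    return corpus
```

In the real system each stage is a separate scale-to-zero function and each queue is a Kafka-backed channel, so the stages scale and fail independently.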
When you post a link, that's when something should appear here. Maybe I kicked something off just before the presentation; that's why it's running. With Knative, when there is no activity on a particular queue, the container is terminated. So if I watch these pods now, normally you wouldn't see this link collector here. Anyway, let's kick off a new URL, and then I'll switch to the console view so you can see the event-driven functions starting up in response to the event. You can see that the link collector is already there, so it has started up. You can see the link fanout starting, and then the link crawler initializing and starting up. These are the event-driven functions running in the background. They're going to crawl, and you'll see those responses coming in shortly for all these links. You can see that there are three containers in each pod: there is an istio-proxy, there is a queue-proxy, and there is your actual user container, which holds your business logic. Unfortunately, this is not a recording, so I can't fast-forward; it's going to take some time. There: it has predicted a computers-and-internet-related site, based on the crawled content. OK. So this is the high-level architecture, as I mentioned. Finally, I'd like to summarize what I've been talking about. The initial question was: is it possible to architect an app across these abstractions? With Cloud Foundry, I think it is; I've just shown you a demo of that. The other thing I want to mention is: choose the right abstraction for the job. Most of the time when we go to customers, we see apps like APIs which need an RDBMS or a cache; just use a platform as a service, that's the right abstraction for such workloads. Run workloads which need persistence and a lot of networking on a container as a service.
And event-driven apps, apps with these event-driven patterns, run on a function as a service. With that, I'd like to end this talk. Thank you for your time.