Okay. We'll go ahead and get started. I think we're on time. The next hour and 15 minutes of your life will hopefully be full of excitement. Some laughter, some tears. We're going to have some fun today. I think I've got seven demos queued up. If half of them work, this is going to be a great session. I want to make a lot of this hands-on, kind of showing how things work, so maybe you're inspired to do it yourself. Not during the session, you're going to be paying attention. But right after this, you will probably be doing all of these things, which is awesome. So, I'm Richard. I work at Pivotal. I do marketing stuff now. I also like to build stuff to maintain the illusion of relevance. So, a lot of this is stuff that I've been building and messing around with, and hopefully, it adds value. Because as we're all sitting here at this show and all these shows, there's this question of, hey, all of your stuff will never run in one place realistically. I've got all kinds of packaged software, commercial software, and open source, and Linux, and Windows, and batch, and real-time stuff, and all these sorts of things. So, how do I start to consolidate some of it? How do I get to a place where it's not 19 bespoke platforms, where this stuff is for this workload, this is for this workload? How do I start to pull it together? So, really, the point of this talk is going to be looking at maybe seven-ish workloads, maybe six, that you're not used to looking at in a Cloud Foundry world, and how they work. We'll actually build it, see if it works, and then you can all maybe load test it if you feel like messing around with it. So, to reset, this is the 101 track or whatever it is, for those of you being introduced to Cloud Foundry. 
So, I'll take just two minutes and describe a little bit about what PCF's all about, and even if you're a veteran, I would say this stuff matters, because it's easy to be confused: do Cloud Foundry and Kubernetes clash? How does this compare to that? So, what are we talking about? What is this thing, what does it do, how does it add value? So, the things you should know, if you forget everything else besides the fact that I'm a really good presenter, let's focus on these things. By default, Cloud Foundry is built for multi-tenancy, right? You've got orgs and spaces, it's easy to segment users, really simple to say, hey, this group can consume this much, this group has different roles and permissions. Even isolation segments, which are a little underrated, work naturally in every Cloud Foundry: here's a compute pool that is only for these apps, and it's still all managed in the same place, but I could say, hey, this has GPUs, or this is for PCI workloads, or this is using really high-performance disks. Whatever your reason, you can easily carve off pools of compute within Cloud Foundry; it's a natural thing. So, it's really purpose-built for tenancy and large orgs and lots of teams, which is nice. You know, the other thing, and I will show some of this, is that you can kind of push whatever you want. You can push code, raw source code, with cf push, and you can actually push container images, which we'll mess around with today. So you have a lot of different ways to get source code into production. The other one, again, we'll demonstrate a little bit today: you may think of most platforms, Heroku, Google App Engine, these traditional PaaSes, as being for web stuff. That's true, it's for a lot of web stuff, but with PCF and Cloud Foundry as a whole, you can do TCP routing as well. So we'll talk about that; it's not just web traffic. 
The other one I think we probably don't use enough is that you can have interceptable routes in the platform, meaning I can say, here's my app, and let me go ahead and intercept traffic to it and add an API gateway, let me add some caching, let me add whatever, and I don't have to change my app. So it's really easy; there are extensibility points that say, I wanna intercept traffic and call out to something else. You can do that without touching any of the code of the app itself, which is neat. If you like doing YAML, there's a tiny bit of YAML you write in PCF, and I'll endorse you on LinkedIn for YAML afterwards, if you'd like. But this is the only thing that makes an app a Cloud Foundry app. There's no such thing as a Cloud Foundry app, right? It's a .NET app, a Java app, a Spring app, a Node app, just an app. The only thing that is even remotely CF-like, and it's even optional, is saying, hey, what's the name of this thing, how many instances are there, how much memory should I use? So there's a tiny bit of YAML, which is the right amount of YAML for everyone. The platform automatically injects and reads environment variables. Again, super handy: sometimes when you're pushing to a platform, how do I take stuff out of source code and store it in the environment? PCF and Cloud Foundry handle all of that for you. One of those things, again, that we take for granted is being able to automatically aggregate all the logs that each one of your containers spins out, right? Every app you have that console-write-lines or debugs, whatever, all of those things get sucked into one place. So you're not going to each environment saying, let me read logs here, let me read logs there. That's all pulled to one place, and it's correlated with things like container metrics. You could see, gosh, why is there this spike in latency? What's screwy here? Oh, the logs indicate we're having a bunch of database timeouts. That's cool. 
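That tiny bit of YAML is the app manifest. A minimal sketch along the lines of what he describes later in the demo, with illustrative names and values rather than the exact ones from the talk:

```yaml
# Illustrative manifest.yml: name, instances, memory, an environment
# variable, and a bound service instance (names here are made up).
applications:
- name: city-service
  instances: 4
  memory: 256M
  env:
    HUB_AIRPORT: PHL
  services:
  - seroter-crunchy
```

Because it lives in source control alongside the app, the manifest gives you a real change history for deployment settings, which is the point he makes about preferring it over ad hoc CLI flags.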
Because the point is to be able to pinpoint problems faster. So the platform naturally pulls all that information together; that's nice. It also monitors application health: hey, if an application crashes, which I will do on purpose, and which often happens anyway for me just because I'm not great at this, it'll automatically recover the instance, in every programming language. So it doesn't matter; I'll show you that in a little bit. Again, auto scale or manual scale, scale up or out, right? More capacity: I want bigger, more RAM or disk, or give me more instances. So it's nice to be able to flip both of those on. There's a whole marketplace: I want to stitch in different services, databases, caches, message buses, logging tools, I want to dump it all to Splunk, terrific. All of these things are external things that are really easy to plug in. One of the most important things, and where this differs from even just a straight container orchestrator, is that PCF and Cloud Foundry as a whole manage the VMs and the apps, right? Because the first installation step for something like Kubernetes, which is awesome, is like, go find a cluster. All right, where the heck do I get the cluster from? So there's still that inception problem of actually instantiating environments, and that's what Cloud Foundry does with BOSH, right? It builds the VMs, it runs them, it manages them, and then it also runs all the apps and stuff on top. It runs everywhere, right? It doesn't matter if you're on-prem or off-prem; that's part of the value proposition. All these different layers of HA are built into the platform. So a container crashes, all right, let's go ahead and recover that. A service on a VM may crash, maybe it's a log collector or whatever, and that automatically gets recovered. A VM disappears or collapses, fine, BOSH recreates it. An entire availability zone goes down, not a big deal, I'll go ahead and use this one instead. 
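The scale-up-or-out choice he mentions maps onto one CLI command. A sketch, assuming an app named city-service (the app name is illustrative):

```shell
# Scale out: more instances behind the same route
cf scale city-service -i 6

# Scale up: more memory and disk per instance (this restarts the app)
cf scale city-service -m 512M -k 1G
```

Scaling out is the cloud-native default he describes later, since instances are stateless and the router spreads traffic across them automatically.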
So all that stuff's just built in. You don't have to wire it up, you don't have to build anything in your apps. Those sorts of things are built into the platform. And then finally, if you're using PCF, you have a few different interfaces. You can use the API if you wanna prove your geek cred, that's fine; maybe a GUI if you like to point and click; and the command line interface, of course, to impress your friends. You have lots of different ways to use this sort of stuff, which is great. And PCF as a whole, as a reminder, runs on every cloud; that cloud provider interface is the secret. If you implement like 15 APIs, Cloud Foundry can talk to your cloud. And BOSH does all this magic stuff like embedding the operating system, Windows or Linux. And then sure, Pivotal Network, think of this as the CI source for my platform. I'm just gonna keep pulling that stuff through. A critical vulnerability is not a big deal; I'm just gonna suck that thing through a pipeline. And then of course, Pivotal Application Service, which we're gonna focus on today. This is the traditional Cloud Foundry, the thing that runs apps. We also do Pivotal Container Service, which runs on the same foundation, if you want managed Kubernetes. The alpha Pivotal Function Service is available, and we have people in the room who've been messing around with that. And then of course, that marketplace. So in essence, I usually talk about this as a continuously updated platform for your continuously updated apps, right? And that's the point here. If all I'm doing is updating my apps all the time and my platform itself is kind of atrophying, I'm not updating it very often, I'm kind of missing out on the value of this sort of thing. I need to keep updating both things all the time, right? That's where the value is, for security purposes, functionality purposes, all of that. So that's what we're gonna deal with today, and now we get to go build stuff. 
All right, so what is a standard app? If you were just going up to a complete set of strangers and for some reason talking about apps, you might say, this is a standard app if I'm doing PCF, and this would be kind of a cloud native app. So what do we mean by cloud native, right? What does a cloud native app mean? Then we'll go ahead and deploy one and live the dream. So first off, typically it's custom built. If you're telling me SharePoint is a cloud native app, I will make fun of you and ask you to leave. It's not a cloud native app, it's just commercial software that's big and monolithic, and that's fine. Traditionally it's custom built stuff that you're building in a cloud native way, following things like 12 factor app patterns, right? Getting state out and having it horizontally scalable, quick startup, quick shutdown. Often it's microservices, right? Not always, doesn't have to be. You can make a modular monolith, it's not a big deal, but often it's distributed. Dependencies get declared, and I don't assume that the box has all my services running; these things often come with the app. You're typically aiming for horizontal scale in these environments. Of course you can scale up, but when you're doing cloud native you're taking advantage of the fact that compute is generally infinite. You attach these things to backing services: databases, caches, queues. All of your storage is typically off box. Sometimes you have some scratch space in your container image, and that's fine, but it's ephemeral. You know you can technically lose it. Of course these apps are willing to handle faults. They assume things fail all the time, right? That's modern software. We just accept the fact that things fail, so how does my software handle it? And typically it's continuously delivered. If you're doing these things, if you're building apps to be manageable, building apps to fail, building apps to scale, you're doing cloud native software development. 
So that's what these sorts of platforms have been made for, especially Cloud Foundry. So let's go ahead and mess with one. And with this as well, we'll try some other stuff. All right, so what am I gonna do here? Our first app is a .NET Core app. And what I wanna do is actually just deploy a regular app and then we're gonna kind of explore it a little bit. And then I'm also gonna do a no-downtime update for fun. So I'm using Steeltoe as a dependency. This is the .NET framework that Pivotal created and open-sourced as part of the .NET Foundation. It's kind of how you add microservices logic to your .NET Framework or .NET Core apps. So with this I wanted to get some actuator endpoints, get some health checks, things like that. This is a remarkably complicated service that simply returns a hard-coded set of cities when you ask it for information. I know; the source code will be available later if you'd like to use this in your own architecture. So this thing does not do a lot. So again, nothing in here is Cloud Foundry-esque, right? I can go run this on EKS in Amazon. I can run it in an Azure App Service. Nothing about this is Cloud Foundry. There is that little tiny bit of YAML, right? It says what's the name of my thing, how much memory, how many instances. I'm just defining an environment variable for the Philadelphia airport, plug for my local location, and I'm attaching it to a service. I'm saying, when you start up, attach to the service that exists called seroter-crunchy; it's a Crunchy Postgres database instance, right? That's the only thing. It just kind of gives me a little bit of insight. And I could do all this from the command line too. I'm just doing it in a declarative format that I could throw in source control, so I have a real change history. Doing that all via the CLI is a little risky. Okay. So let's run what I would still contend is the best command in the cloud, which is cf push. 
What cf push does is that magical sequence of events that takes me from source code to a routable thing, right? It does all of the stitching. It takes my app, it pulls in any dependencies. It uses buildpacks to actually assemble the whole blob that needs to run. Then it goes ahead and gets that into Cloud Foundry, stages it, finds a place to store it, and hooks up all the network routing. So load balancing is taken care of, DNS is taken care of, log drains are hooked up, monitoring is turned on. All those things happen when I cf push. So instead of me deploying, opening tickets, doing all that sort of stuff, the whole point is I just wanna quickly have this thing assemble my software and run it for me. All right, so that's gonna go deploy quickly here. Since this environment times out every nine seconds, I'm gonna go ahead and log back in. I think it's a security feature. All right, we'll see if I have my app here. It's starting up. So again, in Pivotal Apps Manager, this is something Pivotal adds, not part of Cloud Foundry proper, you do have this idea of orgs and spaces. So I have an org called pivot-r-seroter. I have a quota. I have different permissions that I can give my users. Then within that I have a space for development, and I could have a space for dev, test, prod, performance tests, random QA folks, whatever I wanna call it. I can set all that up, and each space also has its own members, its own permissions, its own settings and security and quota. So it's a nice, immediate framing around tenancy. All right, so this app should be running. Let's see. It is; I've got four instances up and running, that's delightful. Because I used that Steeltoe dependency, this Apps Manager actually reads the fact that this is a .NET app. So it puts a little icon there. Now you can say, is all you're literally doing the icon? That's not a whole lot there. No, there is more. It does pull in health information specific to that, and you can write to this. 
So that information can also be pulled in. And we do some other things. So let me go ahead and show it. Let me quickly pull it up. Now you can say, oh my gosh, your first demo of the day, seriously, that's not good. Okay, come on, the first one worked. So we got our cities back. That's a low bar, but I have crossed it in front of you. So I've made that request. Now what's nice is, again, because I'm using those Steeltoe libraries, I can also pull in trace statements. So this actually pulls in the last hundred requests to my service. So this is me, just calling that. I can see all the headers, I can see all that stuff. That's all just baked in. I had to do nothing to my app to make this work. I just added a Steeltoe dependency and I was good. We can suck that information out when it runs in PCF. And then of course I can do logs. So this aggregates all the logs. From all four of those instances, that's all coming to this one drain. So again, if I was hooking up Splunk or something else to this, I could be sucking out logs from all of these from one firehose, which is great. And if you remember, I set an environment variable. So if I look at, yep, there's the hub, there's Philadelphia, right? So that automatically gets loaded in. Again, because I'm using Steeltoe, it automatically knows all my endpoints. So hey, by the way, if all of a sudden I kept seeing a bunch of 404s, somebody calling an endpoint that doesn't exist, I can actually interrogate it, and it'll show me here which endpoints my app actually exposes, which is really handy. And then finally, you may notice, because I hooked up to a Postgres instance that was already running in my environment, it automatically gives me things like URLs and credentials. So my app can now read these values from the environment. Again, nothing in my code has credentials, nothing in my code has to know about that. Just by deploying that to PCF, I inherit all that sort of stuff. 
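Those service credentials arrive in the container as JSON in the VCAP_SERVICES environment variable. A minimal sketch of reading them, in Python rather than the demo's .NET for brevity, with a made-up payload standing in for what the platform actually injects (the service label, instance name, and URI below are all illustrative):

```python
import json
import os

# Stand-in for what PCF would inject for a bound Postgres service
# instance; every name and value here is hypothetical.
os.environ.setdefault("VCAP_SERVICES", json.dumps({
    "crunchy-postgresql": [{
        "name": "seroter-crunchy",
        "credentials": {
            "uri": "postgresql://user:secret@10.0.0.5:5432/cities"
        }
    }]
}))

def connection_uri(service_label):
    """Pull the connection string for the first bound instance of a service."""
    services = json.loads(os.environ["VCAP_SERVICES"])
    return services[service_label][0]["credentials"]["uri"]

uri = connection_uri("crunchy-postgresql")
print(uri)  # the URI the app would hand to its database driver
```

In practice a library like Steeltoe's connectors does this parsing for you, which is why the demo code never touches credentials directly.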
And then, just for fun, what's really nice is, again, these things are not opaque containers that I can't access. So if I do cf ssh, and I may have spelled it wrong, core-node-downtime-app, all right? Oh, not node-downtime. I've got Node on the brain; yeah, that's not good for me. So I'm actually SSH'd into a container here, one of the containers running my instance. So I can go in and mess with logs, I can touch a file and create something. I can do all kinds of things here in this container. Now, again, this is more informational; I shouldn't be screwing around here. Any changes should come back through my deployment pipeline and things like that. But this is not hidden, right? These are just Linux containers that I can bounce around in, which is really nice. All right, so I've got my app running. Let's go ahead and change something and see how it works. So let me go ahead and show another version of the app, or another app that runs. And this is also a remarkably complicated app, so I'll explain it to you all after if you need it. I'm literally just calling that endpoint and sleeping. So again, brace yourself. I'm just gonna call this infinitely and see what happens. I mean, I hopefully know what happens, it should run, but let's not get ahead of ourselves here. Okay, so this should start running. All right, fantastic. So it's calling that endpoint every half second. It's spitting out information, that's terrific. So what I wanna do is actually make a change to this service, but without actually causing a restart. Right, because traditionally in Cloud Foundry, when I do another deploy, there's at least a 30 to 60 second window while it's restaging everything where actually nothing's routable. I don't love that. So I could do blue-green deployment and actually swap the network route. There are a lot of strategies I can use to avoid that, but let's say I don't wanna mess with any of that. 
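The SSH step he just did is a single command. A sketch, with the app name assumed from the demo:

```shell
# Open a shell inside one of the app's containers (instance 0 by default)
cf ssh core-no-downtime-app

# Or target a specific instance and run a one-off command
cf ssh core-no-downtime-app -i 2 -c "ls -l /home/vcap/app"
```

As he notes, this is for inspection; real changes should flow back through the deployment pipeline, since anything touched in the container is lost when the instance is recreated.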
So what I can actually do, let me go ahead and add another city here. I think Pittsburgh is P-I-T. You're not gonna correct me if I'm wrong; it's fine. So let's go ahead and say I've got that. And what I wanna do then is issue a fancier, newer command for a no-downtime deploy. And so this one would be cf v3-zdt-push, ZDT for zero-downtime deploy, and then just give it the name of the app: no-downtime-app. So what this does is pretty cool. It goes ahead and stages and deploys, does all the buildpack stuff. But then it puts a new instance online and takes one of my old ones offline, puts the next one online, takes another offline, until it's swapped out all four with the new version. So there's gonna be at least a temporary period where I'm getting results back from both versions. Right, but I'm also doing this in a no-downtime fashion. There's that, again, 30 seconds where maybe both versions are participating in the application, but what's neat is that this is all happening without me having to do anything fancy. So in production, you still may do blue-green, you still may use things like Spinnaker and do really smart deployments and weighted considerations and canaries and all these kinds of fun things, but especially in your dev environment, or maybe even test, this might be a really nice way to do easy no-downtime deployments. So once this thing gets done, that's the magical part: it starts taking down the old ones, but again, only after replacing them with new instances. If we go back to this guy, he's still returning all the regular cities, churning away happily. There we go. Okay, so one of the new ones is starting. Those other ones are going offline. Let's find that window again. Oh, we got a little Pittsburgh action there. So it's starting to come in. There was zero hiccup here, right? This thing has continued to call with no downtime, which is really cool. So eventually all four instances get replaced. 
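The rolling deploy he runs was, at the time of this talk, an experimental v3 command in the CF CLI. A sketch:

```shell
# Rolling zero-downtime deploy: stage the new version, then swap
# instances one at a time so the route is never empty
cf v3-zdt-push no-downtime-app
```

During the swap, both old and new instances sit behind the same route, which is why he warns that callers may briefly see responses from both versions.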
All of them should be returning Pittsburgh in their results. And in the meantime, yeah, I might have to do some defensive programming, because there could be a chance that, again, my old version and new version are running at the same time, but it's a pretty short window. And depending on how you've built your service, that's not catastrophic. I want that last one to go. Oh, I think we're there. So there we go. Took a few seconds. All of those instances refreshed. That's pretty cool that that actually worked. All right, terrific. One for seven so far. You're keeping track, I know it. All right, so what else can PCF run? That's just bread and butter, meat and potatoes, whatever the phrase is where I'm at, culturally. That's just a traditional workload, right? That's gonna work great on Cloud Foundry. It's made for those sorts of things. It scales great. Let's do some quirky stuff. So let's talk about apps packaged as Docker images. Let's live on the wild side together here. So I have something, I've packaged it up in a Docker image, and I wanna go ahead and just deploy that directly. So what should I know about this before I get my feet wet? Well, of course PCF has always used Linux containers. It existed before Docker did, right? So I mean, this is original stuff with cgroups and namespaces and all that. So that's all great. Diego uses the Garden-runC engine to actually construct those containers. runC is an industry standard, and these are OCI-compatible images. So, all standard container-y stuff. You can push things to Cloud Foundry from either public or private registries, credentials or no credentials. So if this is sitting behind the firewall, terrific. If you're using Docker Hub, have at it. All of these are valid choices for pushing. Of course, you should probably specify a tag; otherwise the platform will always just pull the latest. 
So if you have versions you wanna deploy, you can specify a tag as part of the deployment. What's interesting as well, and I didn't know this until somewhat recently, is you can actually control the exposed port when you push from a Docker image. So you can say, hey, I'm gonna push this and expose 6255. Great, PCF will then honor that. So maybe some additional control you can get there. I'll show you that, of course, as well. The most important takeaway here is there's no difference between an app deployed to PCF from a cf push of source code or from Docker. It's treated the same. It all gets logging, environment variables, service brokers, monitoring, scaling. No difference, right? Once it's running in the platform, it's just stuff running on the platform. So nothing degraded or anything like that. And of course your admins do have to enable it. Just take them out to lunch or something. It's fine. They'll turn it on for you without too much trouble. So really, you're just considering two choices, right? Do I deploy code where the platform containerizes it for me? That's buildpacks. We saw that this morning even on stage from Ben, as we talked about cloud native buildpacks going to more platforms. So do we do buildpacks, or do you do it? And there's not even a wrong answer. It just might be wrong based on your team and what you wanna focus on. So if you do the former, and I want the platform to containerize it for me, what happens? You cf push, you do what I just did, right? And so what happens there? Okay, the Cloud Controller triggers a staging event. A container gets created. The root file system actually gets mounted. Diego actually builds everything I need and gets that droplet, which is that blob, if you will. That thing gets scheduled onto those cells and it runs it. All the containerization activities, all that sort of stuff is handled by the platform. That just works for you. And for most, that's great, right? 
Now if you wanna do it yourself, again, this isn't a wrong thing, it's just that you have a few more responsibilities here, right? I've gotta package my source code, great. I have to pick a base image, right? Here's my Node base image, my Java one, my whatever. I've gotta then write a Dockerfile. I don't think I've ever written one from scratch; it's just copied from Stack Overflow. I then have to generate and upload that image somewhere, and then I've gotta go push that Docker image, right? So I've got some pre-steps. Now once it gets into Cloud Foundry, it's pretty trivial. It starts up in seconds because it doesn't have to do anything. It literally mounts that container image as the root file system, streams it out to a cell, starts it up, and you're great, right? So in both cases, everything works. You're just deciding: does my team wanna assemble these images, maybe for good reasons, or do I not really care? Do I just really wanna go from source to a routable thing with as little muss as possible? Then do the former. If I wanna handcraft my container images, or have my Jenkins pipeline do it or something like that, totally cool. You have a choice of both. So let's do one of those. So I'll take an app that I put together, and this is an even simpler .NET app that literally just says, hi, I'm in Docker. Yeah, so I'm .NET Core, it's containerized, it's in PCF. It's amazing stuff. So I have the Dockerfile written. I've taken this from elsewhere for the most part, and so this is just taking the Microsoft base images that they provide for free. I'm running a restore to actually pull everything in. Then I'm just grabbing that base runtime. Yep, I'm saying it should listen on 8080, and here's my DLL, right? Then before I got here this morning, because I'm not living that dangerously, I went ahead and did my push to the Docker Hub. I containerized my thing, got it all running up there, that's great. 
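The Dockerfile he describes follows the standard two-stage .NET Core pattern of that era: build with the SDK image, run on the slimmer runtime image. A sketch, with the project and DLL names assumed rather than taken from the talk:

```dockerfile
# Build stage: restore dependencies and publish (project name illustrative)
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /out

# Runtime stage: smaller base image; listen on 8080 as in the demo
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build /out .
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
ENTRYPOINT ["dotnet", "CoreCfDocker.dll"]
```

The two-stage split keeps the SDK and build artifacts out of the final image, which is part of why the push and startup in the next step are so fast.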
I didn't wanna do the Docker build part live, and I was good to go. So let's prove it's sitting up in the Docker Hub right now. Here's our seroter/core-cf-docker. This is a really complicated app that we're gonna go push to CF now. So let's go here, let's go bigger again. And the command is only slightly different if I'm dealing with Docker. So here my command is pretty basic, it's still cf push. Let's call it core-cf-docker, and this time I'm passing in a Docker image flag, and here's the place to find it; it's gonna default here to the Docker Hub. I'm gonna say I want the latest version of that. We'll make one instance of that thing running, and we'll give it 256 megs of memory. I'm gonna then type it wrong for some reason. Let's see, what don't you like? See, you should just infer instances. When's this machine learning coming, James Bayer? You know I want one instance, come on. So we'll go ahead and push this. This happened super fast because there's no real buildpacky stuff here, right? It's literally just taking it, mounting it, starting it, and I mean this is about a 10 second process, which is pretty crazy. To go from, hey, here's this thing, to, hey, I can now reach this thing and it's a running app, which is just awesome. Now it's making a straight-up liar of me, but this is just about 10 seconds of time. It'll go ahead and start this up, and now it's running. So this is a Docker-based .NET Core app running happily here in PAS. Let's prove it's happy. You don't know me that well yet. Let's see if it's actually there. Oh, we get a little whale up here. Our iconography work, or whatever the heck the word is, is spectacular. So I mean, that's a Docker whale. We've got Steeltoe icons. Don't even let me show you the Boot one, because you're not ready for it. There we go. I mean, this is premium tech. So that's fine, right? That .NET Core app is running perfectly fine in PCF from a Docker image, which is nice to have. 
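The push-from-registry command he types is roughly the following sketch (the image name is a guess at his repo; pin a real tag rather than latest if you have versions to manage):

```shell
# Push from a Docker Hub image instead of source code; no buildpack
# staging happens, so this starts in seconds
cf push core-cf-docker --docker-image seroter/core-cf-docker:latest \
  -i 1 -m 256M
```

For a private registry you would add credentials (for example via `--docker-username` plus a password in the environment), which is the public-or-private choice he mentioned earlier.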
All right, so that's pretty basic. Let's talk about a more complicated scenario, which actually is only mildly more complicated: TCP-routable apps. Let's talk about something that's not just a friendly web application with traditional 8080 sort of traffic. So what should you know about this? What does it mean to deploy one of these kinds of apps to the platform? Well, hey, I can do all kinds of protocols here. Everything's kind of supported. You create your TCP route and you map it to your app. It's pretty straightforward. And then clients can make actual calls to that new route. And admins enable this, of course. You have to set up an IP space, things like that. What's cool is, from here, you can also use container-to-container networking so things can privately reach each other, if you don't want this thing exposed to the public internet. I'll do it public first, and then, because I don't trust you, I'll make it private. I can have this container networking and service discovery that just works for me. And those container-to-container networking policies are great, right? Developers do not have to go open a ticket to say this app should talk to this app. Developers get to run a command that says this app should talk to this app over this protocol and this port. Live life, right? It's a much better approach. And all that service discovery, again, is built in. I'll show you that. I can just use a DNS entry, and it goes ahead and figures out where everything is, which is very handy. So what I'm actually gonna do is deploy Redis to PAS, run Redis as a service instance in there, and then have my app route to it with no problem. That's crazy, you say. Well, just stay tuned. It's right after lunch. This is the best you're getting. All right, so let's roll here. So I'll go to another instance of code here. Which one do I wanna do? Let's do this. All right, that is a really large font. All right, good. So what I wanna do is first make that visible. 
There we go. And let's go ahead and see what domains I have. So this pulls the domains that are available to me as an app dev, right? These are things that I can map to, route to, whatever. So you see I have a TCP apps domain, and I have one internal as well. Mesh apps just showed up there; I didn't even see that last week. That's great. So I have all these different things that I can attach my app to. So I wanna attach this to one of these TCP routes. So let's go ahead and do this. I'm gonna go ahead and push, we'll call it redis-docker, literally just the Redis image that's in the Docker Hub. Right, I don't even have to build anything here. I just wanna literally take the one that has tens of millions of pulls and deploy that to PCF. And I'm gonna go ahead and give it an instance count of one, because I need to specify that apparently, and 256 megs of memory. And what I'm also gonna do here is give it no route. Right, so I don't wanna make this thing routable yet. Again, I don't wanna attach an 8080 sort of thing. Just don't give it a route yet; I'll do it later. And then finally, what's really important is I also have to tell Cloud Foundry that the health check for this thing should only look at the process. Otherwise it's gonna be trying to ping a web endpoint to see if this thing is healthy, and it's gonna say, oh my gosh, I'm not healthy, let me spin up another one, and you're gonna get stuck in this stupid place. So instead I wanna say, look, it's a process. Is the process running? Then it's healthy, and that should be its indicator of what to do. Yeah, I figured it might be you; let's see. Now that looks right. Yes, curses. I was so excited about the Docker part that I left off the image. All right, so this should also be very fast, because again, it's pushing a Docker image. There are like six waters up here. I know I'm drinking someone else's at this point. This is very unsettling for me. I'm sure it's fine. All right, so we've got everything up and running. 
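Pulled together, the push he just did looks roughly like this sketch (flags per the v6-era CLI; `-u` sets the health check type):

```shell
# Run the public Redis image: no route yet, and a process health
# check instead of the default HTTP/port check
cf push redis-docker --docker-image redis:latest -i 1 -m 256M \
  --no-route -u process
```

Without `-u process` (or an equivalent `cf set-health-check redis-docker process`), the platform would probe a web endpoint Redis doesn't serve, decide the app is unhealthy, and keep restarting it, which is the loop he describes.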
If you look at my app though, this route is empty. Right, this is an unreachable Redis server right now, which is the worst kind of Redis server. Let's be honest, it needs to be reachable. So what am I gonna do with this thing? Now, if I go back to Apps Manager, I should see the little redis-docker with the Docker whale on it. That's good too. So this is running, I just can't literally access it. So let's change that. What I wanna do is actually map a route. So I'm gonna do cf map-route, very intuitive. And I'm gonna have this as redis-docker. What am I gonna map it to? I'm gonna map it to tcpapps.pcf1.io. And I'm gonna go random port, because life's more exciting that way. Now what's interesting here is I'm sharing the TCP router, the public one, with everybody. So if I just tried to pick the standard Redis port, it's probably blocked because someone else is using it. So I'm just saying give me a random one. Now the port still exposed on the container is the standard Redis port, but the external router will be listening on its own port and sending TCP traffic through to that port on the actual container. So let's go ahead and see my app and see if that's actually there. So cf app redis-docker — I should now have a route. I do. So this is now routable, right? This is actually routable to anybody. Again, don't mess with me and try to do anything here yet. But you can actually reach this thing over the public internet, temporarily. So let's prove it. Let's go ahead and use the Redis CLI I have on my machine, and we'll point it at tcpapps.pcf1.io and we'll ping it — and what was the port? Gosh darn it, the memory of a goldfish. Where was that port? Thank you. Wonderful. All right, so it's that, and let's just ping it. All right, we got PONG back. That's a good sign. You might say that seems mildly non-legit. So let's actually set a company name. We'll set a key to pivotal. Great. Let's go ahead and get the company name key. Hey, pivotal comes back.
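Roughly, the map-route and redis-cli steps above look like this (the port shown is illustrative — Cloud Foundry picks the random port for you):

```shell
# Map a route on the shared TCP domain; --random-port avoids colliding
# with ports other tenants have already claimed on the TCP router.
cf map-route redis-docker tcpapps.pcf1.io --random-port

# Suppose the platform handed back port 10041 — then from anywhere:
redis-cli -h tcpapps.pcf1.io -p 10041 ping
redis-cli -h tcpapps.pcf1.io -p 10041 set companyname pivotal
redis-cli -h tcpapps.pcf1.io -p 10041 get companyname
```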
So there's a real Redis instance accessible over the internet using the Redis CLI. That's kind of cool. Now I'm still nervous having that sitting there on the public internet. So let's go ahead and delete the route, and then we're gonna do it the more secure way. Are you sure? Yeah, stop making me nervous when you ask me that. Great, so we'll go ahead and delete that route. And now we're gonna do this the smarter way. The smarter way is to actually map it to that private, internal domain where it's only accessible within your org and space — really your space, I believe. So let's go ahead and do that. Let's go ahead and cf map-route, and once again, redis-docker. We're gonna map it to the apps.internal domain, and we'll give it a hostname of redis-docker. Now what's neat here is because this is mapping internally, this is just gonna go over the standard Redis port. I don't have to do any sort of port translation stuff. So it's gonna create and add that route — that was super easy. Let's go prove that worked real quick. Take a look, and its route now shows up as apps.internal. So this is completely not accessible from the public internet; it's only accessible from within the Cloud Foundry network. That makes sense. So then I built an app that actually talks to that redis-docker, and it's using the standard Redis port, right? This code, literally, after you call a GET, adds a product key to Redis, then goes ahead and gets that product key and throws it on the page. Sophisticated stuff, because you deserve it. So it's good stuff. So I'm gonna go ahead and push this app and then we're gonna go ahead and connect the two. Now they're already applauding — I'm not done yet. I don't know what's going on here. cf push, we'll go ahead and push that. Now again, in this example, I am not — let's make sure I'm not a liar — no, I'm not connecting to that Redis instance as a service, right? It's not a backing service. I didn't add it from a marketplace.
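The delete-and-remap just described might look like this (the TCP port is whatever random one was assigned earlier; apps.internal is the platform's default internal domain):

```shell
# Remove the public TCP route...
cf delete-route tcpapps.pcf1.io --port 10041 -f

# ...and map an internal route instead. Internal routes resolve through
# the platform's internal DNS, so clients inside the foundation can hit
# redis-docker.apps.internal:6379 directly -- no port translation needed.
cf map-route redis-docker apps.internal --hostname redis-docker
```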
This is literally just a Redis container running there that I wanna access from my app. So we'll go ahead and push that. And then the last thing I'll have to do — by default, these two can't talk to each other. Just because I've done this, every app isn't just allowed to naturally talk to every other thing at this container level. That's kind of a special permission; you need permission first. So what we'll do is actually create that container-to-container policy that says this can talk to this. And I will load it first, just so I can get that terrible feeling when the app doesn't work. Just to prove that it doesn't work, and then it will work. All right, so we should have that one showing up here in a second. There it is, the Redis reader. Go, build, go. And then we'll define it, which is great. You know what, it's gonna bomb out when I push it, because it's actually gonna try to connect when it first starts. It's gonna get angry. You don't like Cloud Foundry when it's angry. So it's gonna complain. I should have done it with a no-start, but YOLO, we're having fun here. All right, so it's gonna complain. That makes total sense. It's allowed to complain. We can actually see in the logs why it's complaining, because it's gonna say it cannot reach — yeah, it's trying to reach stuff, right? Trying to hit the Redis connection it can't. So again, this is where aggregated logs are great, so I don't have to go bosh ssh-ing into a bunch of instances trying to figure out what went wrong. I can go to one place and actually see everything going on. So this is gonna be angry. Let's just make it less angry and quiet it down. And so what we wanna do now is add that policy that actually lets these things talk to each other. This is pretty easy. Let's first list access, make sure I don't have any policies there right now. Nothing — makes sense. Let's go ahead and cf allow-access. And this is the Node CF Redis reader. It's my source app.
I want it to be allowed to talk to redis-docker, and I wanna have them talk to each other over TCP on the standard port for Redis, 6379. It's gonna allow traffic, and again, this is all great stuff. This would usually be network-based, ticket-based routing sort of stuff — I'm opening a ticket with a network team, I'm waiting three weeks to get acknowledged. Instead, now this just has a policy here. So now if I go back and restart my app, it should be less angry. And then when I start it up, it should just work fine. We'll go ahead and stream the logs just so we can see when it's done. That seemed really fast. All right, let's go ahead and view the app. Hey, value from Redis is PCF. So it put it into Redis, pulled it back. It's talking to Redis, no problem. I just have a random container running. So now, especially with that new feature we announced this week with having multiple ports per container, your testing scenarios could be fun too. I could throw RabbitMQ in here and expose the management port and the messaging port from the same image. This could be a cool way to do some simpler testing on your systems by just deploying all of it to PAS. Even though, realistically, you may not run this here permanently — you'd have to attach a volume service maybe for persistent storage or things like that — this could be a great way to deploy certain types of workloads with non-HTTP traffic. All right, we're killing it on demos right now. It's a lot of cockiness. It's gonna come back to bite me. All right, so we did these. What's next? Background workers. So again, we think of this for web traffic, interactive sort of stuff — I'm triggering it. What does it mean when I have workloads instead that are just quietly sitting in the background making things happen? Not like PCF product management — it's just doing things. It's not all flashy like engineering, but James knows what's up. So what should I know about these?
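Before moving on to workers — that container-to-container piece of the Redis demo, sketched with the network-policy CLI of that era (app names are from the demo; newer cf CLIs spell this `cf add-network-policy`):

```shell
# Check what's currently allowed (expect: nothing).
cf list-access

# Allow the reader app to reach the Redis container directly,
# over TCP on the standard Redis port.
cf allow-access node-cf-redis-reader redis-docker --protocol tcp --port 6379

# Restart the reader so it can actually connect this time.
cf restart node-cf-redis-reader
```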
So by default, of course, everything I push gets a route; everything is addressable. I don't always want that. So instead I might want a non-routable app that's just gonna quietly do its job. It just runs in the background and does stuff. So I might need a different sort of health check — just process, rather than the default route check. What's important, though, is if I have a background job, it can still access everything else. It can still use Spring Cloud Services for configuration. It can still do environment variables, attach to services, all that kind of stuff. So let's quickly show that and give you a sense of how that works. So I've built an app, and this one is core-demo-batch. There we go. Enormous. All right. So what we have here is I'm actually gonna talk to RabbitMQ in this background job and just pull things from the queue and do some work, right? I don't need it to be routable. It's just, when a new message hits RabbitMQ, pull it and do some work. It's a background job. So in this case, I'm pulling environment variables from Cloud Foundry. I'm connecting to Rabbit. And then I'm just listening for, let's say, new loan records or something like that and processing them when they come in, right? Just sitting there constantly, quietly processing information. And if I look at its manifest, it is connected to my RabbitMQ instance and I'm saying here's a process health check, right? Don't restart it just because it's not HTTP-routable — it's not supposed to be. All right. So let's go ahead and push this thing in. We'll send that in. And while that's going, I can go to Apps Manager and I can look at my services, and I can look at the RabbitMQ instance I deployed somewhere. There we go. I can manage that and log into that. Maybe. We know what's gonna have to come from the other one. All right, let me wait for that to start up. Okay, so this one's attached to RabbitMQ. Let me try one more time to see if my credentials are weird. All right, we're gonna do this the fun way.
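The manifest being described might look roughly like this (names are illustrative; `no-route` and `health-check-type` are standard manifest attributes):

```shell
# manifest.yml for a non-routable background worker
cat > manifest.yml <<'EOF'
applications:
- name: core-demo-batch
  memory: 512M
  no-route: true                # nothing should ever route to it
  health-check-type: process    # healthy == the process is running
  services:
  - my-rabbitmq                 # bound RabbitMQ service instance
EOF

cf push
```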
I trust you all, so I can pull up credentials here. That seems fine. All right, so let's go to our background job. Let's grab its password because we wanna talk to RabbitMQ and check it out. I think its username was also remarkably complicated. And so what I wanna do is log in and just prove there's nothing in Rabbit, which maybe is excessive at this point, let's say. Let's go ahead, we'll manage. All right, that's the password. That means the username will be something delightful. There we go. I was gonna name my third kid that but decided against it. Kids are cruel — why would I wanna subject them to that? So here we are. If I look at the exchanges and the queues, it just created the loan queue as it started up, right? It instantiated that, and it's looking for data. That's cool. If we go back to the app and actually stream the logs, let's go ahead and turn this on in real time so we can see it process things as they hit there. And what you can actually do in RabbitMQ is send a message in, just for fun here. So I could send in a little JSON payload. This might be loan ID 100, and we'll just stop there. So it's a complicated message. If we send that in and look — it received loan ID 100. So it's immediately reading that, right? It's connected to the queue permanently. It's just a background job listening for stuff to hit Rabbit. It's really nice to have these sorts of things that just run. So background jobs — again, another nice option if you just wanna get rid of some of these things that might've been sitting as a scheduled task on a server somewhere. Now, I'm not gonna do a demo on this one so I can hit the next one, but if you look at scheduled jobs, this is also exciting. So if I don't wanna have something running permanently in the background, or a web job, I might have something that's just supposed to wake up and do something, right? Maybe it's supposed to flush an FTP share, just blow out old data.
Maybe it's supposed to clear out a database or a cache. Whatever. What's neat is each one of these starts up in its own container instance, right? It runs its thing and then it shuts down, and Cloud Foundry doesn't worry about it — it's supposed to shut down. So you can initiate these things with the command line interface, you can use Apps Manager. Again, all the functionality is still available to these. They're just meant to start up, do some work, and shut back down again. Now, you can also set these to run on a schedule, and the scheduler uses a cron expression: run this every Thursday at 2 p.m., run this every other Monday, run this every two minutes. You can set up that expression, and set up multiple scheduled tasks if you want to. And you can even schedule it to call HTTP endpoints — maybe go call an endpoint, pull down a feed of data, whatever it would be. So it's really easy to use these. If you're looking to get rid of these scheduled cron jobs you just have spread around a bunch of machines, this is a nice replacement for that. Can you auto-scale these? So the question is, how can I auto-scale these? The answer would be: take that background job I showed you — autoscaling also works on RabbitMQ queue depth. So you can easily define an autoscaling policy based on queue depth and spin up five more workers to start processing from the queue. Probably not so much with these scheduled jobs, but for background jobs, absolutely. Good. So the next one I wanted to show you is actually a legacy app. Because there's a misnomer — maybe because, I don't know, whoever does marketing here likes to talk about cloud-native apps a lot, which is a real problem. But in real life, it's not just cloud-native apps, right? I can take existing apps and just run them on the platform, right? They don't need to be completely refactored or redone. So again, remember, you can do code or containers, Linux or Windows — any of those just run.
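As an aside, the task and scheduler flow just covered might be sketched like this (the Scheduler is a separate PCF product with its own CLI plugin; command shapes here follow its v1-era CLI and the app/command names are made up):

```shell
# One-off task: spins up its own container, runs, exits. Cloud Foundry
# treats the exit as normal rather than as a crash.
cf run-task core-demo-batch "node flush-ftp.js" --name flush-ftp

# With the PCF Scheduler plugin: register a job and give it a cron
# expression -- e.g. every Thursday at 2 p.m.
cf create-job core-demo-batch flush-ftp-job "node flush-ftp.js"
cf schedule-job flush-ftp-job "0 14 * * 4"
```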
And again, when they do run, they automatically inherit all that stuff that's happening in PCF, which is great. Now, sometimes you wanna do some refactoring, but what I wanna show you is actually a dangerously old .NET application, and just see it run on a Windows cell within PCF with zero code changes — runs perfectly fine. And I'll use Visual Studio for Mac here just to mix it up. So this is a remarkably tiny — let's make it a little bigger — this is a classic ASP.NET web service. I wrote this; this was my 2005-style .NET web service where all you had to do was add an annotation and then you just got this hellacious SOAP-based service. It was magical. It's also catastrophic to look at today. But this was really easy. This was the first time we were building web services, and that was really neat. So I took this code — .NET Framework 4, it's pulling machine names from the environment, it's writing out to the console, it's pulling environment variables. It's doing a bunch of weird stuff. That's cool. That's what legacy apps are for. So the only thing I've done is add a manifest file. And this thing says, look, give me a couple of instances, use the Hostable Web Core buildpack — which makes a little IIS hostable web core — and deploy this to Windows 2016. That's it. That's all this thing does. So if I go back to another window here, and I'm in that folder, let's go ahead and push that thing. And so this pushes to a .NET Windows environment. I don't have to refactor it to .NET Core if I don't want to. I don't have to make any fundamental changes. I'm taking a classic .NET app and deploying it to a Windows cell. And that should just work. Now, again, if I were doing unholy things like writing to the registry, you still might want to refactor that, of course, because you don't have full access to the registry — you can read from it, but you can't write to it in a Windows container here. So there might be cases.
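That Windows manifest, sketched (app name invented; `hwc_buildpack` and the `windows2016` stack are the PCF-era names for what's described):

```shell
# manifest.yml: classic ASP.NET app on a Windows cell, no code changes.
cat > manifest.yml <<'EOF'
applications:
- name: classic-soap-service
  instances: 2
  stack: windows2016       # Windows Server 2016 cells
  buildpacks:
  - hwc_buildpack          # Hostable Web Core (mini IIS) buildpack
EOF

cf push
```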
That was crazy fast. But that's going to be a .NET app just running on the platform. So here it is. Here's my ASP.NET web service. Let's prove it. It's actually Windows Server 2016, which is kind of crazy. I can hit this service. Now again, it's going to complain. Don't worry, it's going to be good. Here is that glorious, classic .NET page. I mean, look at this. Look at this delicious SOAP that comes with it. This is magical stuff. We've really progressed as an industry. We should all give ourselves a pat on the back. So this is good stuff. So, regular SOAP service. Now, what's also cool is because this is using Windows containers, I can actually SSH into this thing too, which seems like — no, that's crazy, what are you doing here? What witchcraft is this? So I can. So that is a C:\ prompt right there. Now I can try to do ls, because I'm used to using that here. I can do dir. Here's all those Windows folders no one ever uses, like Favorites and Pictures. That's a Windows container, right? That's a Windows box sitting right there. And I can even do crazy stuff like instantiate PowerShell — no big deal — and then start running PowerShell commands in my Windows container managed by Cloud Foundry. Pause for head exploding. That's really neat stuff. And so I can do all my regular stuff here. I could start services, I could do whatever. That's just a running app, which is neat. But what I also wanna prove to you is that all that failure recovery also comes for free. So why would I move that app there? Well, let's purposely crash that endpoint. So I have a function — usually, again, this happens by accident, but let me purposely crash this thing by doing a remarkably bad thing, which is actually aborting the thread that's running IIS. This is catastrophic. Every doc you read says, please never do this. So I did it, which makes sense. So what I wanna do is call that crash endpoint.
I'm gonna call this, and it's gonna collapse into a flaming ball. Great. And so it actually crashed the app. And so what I will see here momentarily is the platform detect that — it's been a couple of seconds, let me log back in again — it'll actually show that one of them crashed and one of them is restarting here. So let's go back to that app, and it still recovers super fast. So fast you didn't even see it. There it is: there's the app crash, but it's immediately restarting that instance. So even with a classic .NET app deployed to the platform, even if a memory leak crashes it, whatever, the platform can still detect and recover those sorts of apps. So that's a really big deal, again, for some of these complicated environments where you're not gonna modernize everything. Some of these things you might literally just be able to move, and start getting log aggregation and failure recovery and all that stuff for free without changing anything. Terrific. All right, so that's all exciting stuff. I won't do a demo of this one, but again, we often get people like, oh, Cloud Foundry is just for stateless stuff. I mean, kind of, but really it also does stateful stuff. So while the containers are ephemeral, you can actually attach disks and attach volumes to a container in Cloud Foundry. So you can do NFS, right? NFS mounts — it's not a big deal, it's really easy. It's something that can show up in your marketplace, or SMB for Windows. So I can actually attach to a shared file store and write stuff to it. That container does not have to be completely ephemeral. The admins add that service, and developers then mount that volume wherever they want in the container. So I can still have stateful things running in Cloud Foundry without any problem. It's not a big deal. And then, of course, these can be read-write volumes or just read-only volumes. You have a lot of choices there.
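The NFS volume-service flow being described might look like this on the CLI (share path and mount point are illustrative; an `nfs` service with an `Existing` plan is what the NFS volume broker typically exposes):

```shell
# Create a service instance pointing at an existing NFS export...
cf create-service nfs Existing my-nfs-volume \
  -c '{"share": "nfs-server.example.com/export/appdata"}'

# ...bind it with a mount point (read-write here; set readonly true
# for a read-only mount)...
cf bind-service my-app my-nfs-volume \
  -c '{"mount": "/var/appdata", "readonly": false}'

# ...and restage so the container comes up with the volume attached.
cf restage my-app
```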
So again, if you have stateful workloads, you have Windows workloads, you have background jobs — all those are cool. The last one I'll quickly cover is streaming pipelines. This is neat stuff. So if you're trying to move away from this world of maybe the centralized ESB or the integration-specialist team, and you want devs to be able to also build connectivity and data-processing pipelines, this could be good for you. So how do I move from batch data processing to real time? PCF supports things like Kafka and RabbitMQ and Azure Service Bus and all that kind of fun stuff. Spring Cloud Stream is a really nice library that actually makes it simple to talk to a broker. I just put a blog post online a couple hours ago on using Azure Event Hubs with Spring Cloud Stream — really nice stuff. Again, these things are probably background jobs that can ingest data multiple ways. And if you're a Pivotal customer, you can use Spring Cloud Data Flow to actually stand up this entire environment, completely managed by the platform, and build and deploy pipelines. So let me quickly show you what those data pipelines look like, as they can be part of your world too. All right, so let's go back to here. Let's go to my services. Let's go to my Data Flow server — check that character out. And what I want to do is build a quick streaming pipeline for you. And this is using Spring Cloud Data Flow for PCF. It's part of your entitlement. If you're a PCF customer, you don't have to pay anything, everything's great. Maybe — I don't know, I'm not in sales. It's probably free. So what I did was build something that's gonna be in this pipeline. I just built a Spring Boot app that — let's see, here's my data-enricher pipeline. And all that this thing is going to do is enrich data, right? So something comes into my pipeline, and if it has a certain ID, I wanna turn it into a friendly name, right? Warehouse ID 400 comes in, switch that to California.
I'm just doing some data enrichment on data coming through. Think of this as data quality exercises, whatever. I just have a simple thing. So this simple thing just uses annotations that Spring Cloud Stream understands. It doesn't know anything about Rabbit or Kafka or any of that; it doesn't care. I deployed that to my Data Flow server. Here it is, demo-enricher. And all I wanna do now is build a pipeline that works. So let me go ahead and create a brand-new stream. And because there's no chance I'm typing the entire stream in front of you, let me grab it from there. And this uses a nice DSL — you can drag and drop, or you can just type everything in and use pipes and filters, kind of Unix style. And so this is great. So this little character talks to a Postgres database, pulls data from the database when it hits it, processes it through this stream, and then writes stuff out to a log, right? Data changes in the database, it's processed through the stream, it's written to the log. A developer does this, right? I don't necessarily need special expertise here. I'm building Spring Boot apps and stitching them together in a pipeline. Really simple stuff. So let's go ahead and create that stream. Let's give it a name — I'll call it the warehouse-processing stream. Now, what's really cool, what happens here is — okay, so what does this have to do with PCF? I'm glad you asked, voice in my head. Let's go ahead and click play. And what I wanna do here is have the apps that make up the pipeline deploy to Cloud Foundry. Now, this is where microservices are cool, because traditionally, if all of a sudden your Informatica environment was overloaded, you would just buy more hardware, make things bigger. Because each of those steps in that pipeline is a unique microservice, I get to go in here and say, you know what, let's have one node talking to that JDBC database, let's have two of that enricher, and then one doing the log, right?
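In the Data Flow shell, a stream like the one being built might be defined like this (the stream name and the JDBC query are illustrative; `jdbc` and `log` are off-the-shelf starter apps, and `demo-enricher` is the custom processor registered earlier):

```shell
# Pipes-and-filters DSL: source | processor | sink
stream create warehouse-processing \
  --definition "jdbc --query='SELECT * FROM orders' | demo-enricher | log"

# Deployment properties set per-app instance counts -- e.g. two
# enricher instances, one each for the JDBC source and log sink.
stream deploy warehouse-processing \
  --properties "deployer.demo-enricher.count=2"
```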
I get to have different instance counts and memory and whatever for each stage of the pipeline, which is really neat. And then each of those automatically gets deployed to PCF. And if I wanna scale it, I can scale it, or whatever. But all those steps in the pipeline get deployed to PCF. It talks to that backend broker — RabbitMQ, Kafka, whatever — to actually do all the work. But I don't really have to do much as a dev here. I don't have to deploy anything. This thing's now gonna start showing up over here. I'm gonna start seeing apps show up. And while that's happening, I'll start this little app that loads stuff into Postgres. And so here they're showing up. Here's all the actual things. Here's two instances of that enricher, right? So Spring Cloud Data Flow automatically deploys all the app instances to either Cloud Foundry or Kubernetes, stitches them all together to the broker, and then you're good to go. You have this managed integration platform where you're not, again, building kind of proprietary ESB sort of stuff. You're just connecting apps, which is actually a really nice way to go. So we can see how this thing's doing. Takes a few seconds — deployed, deployed. Again, they're applauding; I haven't even finished yet. It's very unsettling. All these things are going. I can see some information about them running, because it actually interrogates the runtime — Kubernetes or Cloud Foundry — to see what's happening. Let's see if they're almost ready here. And then once this starts up, let's go ahead into that Postgres app. All right, so let's clean it up. It might already be cleaned up. Okay, let's create a table. Terrific. Let's add a record to that table. Let's add a few records to the table. So these are just adding those warehouse records with IDs like one, two, three, four, five. It's gonna translate those into friendly things. Let's see if our pipeline app is up. I think it is, and the status is just lying. Let's go ahead and look at the logs.
We'll go ahead and stream the logs, and we should see this happening in real time. When I add a record to the database, this pipeline should be pulling records from the database, processing them, and dropping them to the log file — in this case, the log of this app. Let's see. I don't think I've seen it yet. Let's make sure all our apps are running. It still says it's crashed, but I don't believe it. I think it's a liar. Maybe it's not. Okay, so they're all deployed. So this should be processing now. Keep adding records, and there we go. So here's a record. Here's the order, warehouse ID, warehouse location — that friendly thing, California, right? So it went through my little custom enricher and wrote it out to the log. Every new one I add here — let's add one more, just so you don't think I'm making stuff up. Oh, good Lord, that was a lot. So then more stuff comes in. Here's that new one. It showed up — a lot of stuff happens in the log, which is why you shouldn't rely on raw logs for demos. Things are happening, though. So you saw that one go in. You already believe me. We built a lot of trust here together. But this is a really easy way to build a data-processing pipeline, right? And this stuff runs in PCF. These are stream-processing apps. Again, not a traditional 12-factor web app — these are stitched-together apps, which is pretty cool. So hopefully, through all this, you get a good sense: yeah, you've got a ton of stuff, but more and more of this can actually potentially consolidate on Cloud Foundry, right? I can run 12-factor, sure, but I can also run Docker images and TCP apps and background jobs and scheduled jobs and legacy apps, streaming apps, and more, right? So there may be potential for you to start combining some of your things into one platform, simplify your ops, and give your devs a better place to go for everything. Hope this was fun. I'll be hanging out up here for questions, and I appreciate all the attention.