Hello, everybody. I see people are still dropping in, so I'll give it another minute or two while folks trickle in — we're getting about one person every fifteen seconds. Hope you're all having a great day at the summit. Okay, I'm going to get started; we have about 15 people. So hello, everybody. My name is Si and I'm a product manager at VMware. My primary product area is building the open-source Cloud Foundry for Kubernetes project, also called cf-for-k8s. I'm here today to go through the journey of a developer who wants to build a cloud-native app on Kubernetes: the challenges they face with Kubernetes in general, how that compares with cf-for-k8s, and a demo of the experience from pushing an app to actually curling it. We'll also take a look inside the guts of cf-for-k8s, talk about the complementary relationship between cf-for-k8s and Kubernetes, and finally cover where we're going next and how you can get started. A quick note on presenting remotely: I hope you and your families are safe during these uncertain and tough times. Ideally I would love to do this in person and be interactive, but we're doing this remotely and the tool we're using is a bit limited. So I recommend you hold your questions, or add them to the Q&A, and I'll do my best to answer them at the end. If I can't get to all of them, you can ask me in the Slack channel for this track, Cloud App Development, and I may also share a Google Doc so we can answer them asynchronously. All right, first slide. Let's build a Node app on Kubernetes. My goal is to keep it a bit interactive and see the journey from a developer's point of view. So here's the setup: I'm an app developer building apps. I write code, I test, and I debug locally.
This is my inner loop, which I love doing all day — it's how I make my living. So far so good. Now it's time to push to production. Unlike the old days, where you handed the code over to the DevOps team, with the introduction of Docker, DevOps teams now expect you to provide both packaging and deployment instructions via Docker. Okay, so let's address packaging first. Docker, as I mentioned, is the common way to ship an OCI-compliant image; it's the norm now. As an app developer, I have to understand how to use Docker and the constructs that go with a Dockerfile: the base image, the RUN and COPY instructions, and so on. You can even run your Docker image locally. But with the Dockerfile there are more questions than answers for me, and I've put a subset of those questions on the screen — there are many others. Some of these are really DevOps-team concerns, yet they land on my plate. For example, the base image: what base image should I be using? Can I use one I found via Stack Overflow? Should I ask around for the best Node.js base image? Or should I use an all-inclusive uber-image that has all of the app's dependencies — it lets you write fewer instructions — but then what about the origin of those images? Who owns these base images, and how often do they ship CVE patches? These are all trade-offs you have to weigh when writing a Dockerfile. I'm not going to walk through all of these questions, but one thing I do want to highlight is image provenance, which you see in the labels here. It's extremely important to establish image provenance, so you and your DevOps team can answer basic questions about your cluster, like: where did that image come from?
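To make the provenance point concrete, a Dockerfile of the kind being discussed might look like this — a minimal sketch, where the base image tag, label values, and file names are all illustrative assumptions, not anything from the slides:

```dockerfile
# Illustrative sketch only — base image, labels, and paths are assumptions.
FROM node:12-slim

# Provenance labels so the DevOps team can answer
# "where did this image come from, and what code is in it?"
LABEL org.opencontainers.image.source="https://example.com/acme/node-app" \
      org.opencontainers.image.revision="<git-sha-goes-here>"

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 8080
CMD ["node", "app.js"]
```

The `org.opencontainers.image.*` label keys are the standard OCI annotation names; populating them from CI is one common way to keep the image traceable back to a commit.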
What version of the code are you running in the cluster? If you can't answer that, you will have a bad time. Moving on: you build a Docker image, you're happy, and now you can test locally. Fine — but I have to test twice, because I also have to think about how this will work on Kubernetes. One thing I forgot to mention is the registry. Now we have to worry about where to push our images. Hopefully the DevOps team provides one — but how often should the registry be pruned? Those are the kinds of questions that pop up as you take on yet another dependency. Okay, so moving past the Docker image: now your DevOps team wants to use Kubernetes, and we have to start thinking about all the ways to deploy the app — even more new constructs, and new YAML. On the left you see a Deployment definition, which is actually pretty intuitive if you think about it, but it doesn't make sense for an app developer to define it; for a DevOps person it might. You need a Service to go with it. You'll need to test this in-cluster, and if you're consuming other services, you'll need their Deployment and Service configuration to validate that everything works as expected. And if things go well enough, you need to think about what configuration you have to provide, and what configuration your users can set — more YAML, if you will. For example: how will you share config and sensitive info with other services, or provide a way for users to configure them? What about encrypted communication between microservices, or with external networks? And all of that was just day one — all things you have to think about and configure by hand. So those are some of the day-one concerns. What about day two? A quick primer on day-two activities.
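As an aside, to make the day-one YAML concrete: a minimal Deployment-plus-Service pair of the kind shown on that slide might look like the following. This is an illustrative sketch — the names, image reference, and ports are assumptions, not the actual slide content:

```yaml
# Illustrative sketch only — names, image, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: registry.example.com/acme/node-app:1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 8080
```

Even this bare minimum already forces the app developer to decide on labels, selectors, ports, and replica counts by hand — which is exactly the point being made.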
Day-two activities are anything that's required once the app is in production. What I mean by that is, for example, on your left you have observability — knowing how your apps are performing, which helps with debugging when you do hit issues. It's not easy to pull this directly from your Deployment. Yes, there are projects that help with collection and indexing, but you either have to deploy them yourself or share them with the other apps and installations on the cluster. A few other things: CI/CD. You may have a CI/CD pipeline that triggers on new code changes, builds the image, tests it, pushes the image to the registry, and finally updates Kubernetes to roll out the new pods. The blue blocks indicate things you should be doing — I hope you're all employing continuous integration and delivery; reach out to us if you're interested in how we do it, because our complexity is 10x given how many projects we build and ship. But there's a variety of other automation you have to build. For example: patching the base images, and the language dependencies that go with them. What about certs? When they expire, you have to rotate them. What about credentials? If they get compromised, you have to rotate them. That's just to name a few, but these are all DevOps concerns that you're required to own or co-own. Speaking of which: as an app developer, in my opinion, you should be spending the majority of your time on your inner loop — code, debug, and test locally — and the rest monitoring your app in production so you can collect useful feedback from your users and iterate on your apps. And these problems get amplified if there are many dev teams in your organization: you end up with multiple snowflake environments.
As I mentioned before, each team may be using its own base images, and you don't know where they came from. So images go unpatched — you don't know which images need patching, or which apps. What about TLS? If one app wants to talk to another app, you have to worry about encrypted communication using certs, and so on and so forth. The point is, the cost of running apps keeps going up over time as new services are added. One way I've seen people solve this is by adding a DevOps person to each app team, but that's very expensive — and, not to mention, less satisfying for the DevOps person embedded in an app developer team, because it's a different approach with different concerns. I've only shown part of the picture here; there are other concerns I didn't highlight, because the focus was app developers only. So let's contrast this with building an app on CF — Cloud Foundry. Next slide. Let's reset: we're back to our inner loop. We code, we test, everything is great. Now let me try to push an app to Cloud Foundry — to production, in this case. On the left, you see that all you do is cf push. In the previous slide I intentionally showed you the files of the Node app: there's just one file that returns hello, plus a package.json. All cf push does is take your source code, figure out what language that source code is written in, build an OCI-compliant image, ship it to the registry, and deploy it as a Kubernetes workload. In this case, you ask for two instances to be running, and it does exactly that. It also creates a networking route for you so you can actually call the app, which you can see on the top right.
You can also look at the logs directly for that app. These logs are your logs — you don't have to share them with anyone else. And you can create additional routes if you want, with checks in place so you don't conflict with other teams in the same organization. Those checks and balances are built in, which lets you move faster without stepping on other teams' toes. So, to summarize: you didn't have to write a Dockerfile. You didn't have to write Kubernetes YAML. You didn't have to wire up third-party tooling like Fluentd to run your app. You didn't have to worry about base images. All you did was run cf push from your source code, and everything else was taken care of for you. To me, at least as an app developer, that brings joy, because I didn't have to worry about any infrastructure or operational-level capabilities — all I cared about was my code and getting that code to a production environment. And to be clear, that cf push is consistent across your local dev Cloud Foundry instance and your production Cloud Foundry, where your CI will run the same cf push. Awesome. So I'm going to pause here and do a quick demo of cf push. What I have is an existing cluster — for the purposes of this demo I pre-installed cf-for-k8s onto a Kubernetes cluster, in this case GKE — and I'll show you a very simple cf push, with some hints of what's happening behind the scenes. I'll switch to sharing my full screen; hopefully you can all see it. Just to confirm that I already have Cloud Foundry installed on the cluster, I'm going to run cf apps with the CF CLI, which is what I use to connect to Cloud Foundry. As you can see, I already have two apps running. Now let me switch to an app directory.
Let me go to my smoke-test app, which I built earlier, and just do a cf push smoke-test — let's see if it works. As you can see, the first thing that happens is it uploads the code to Cloud Foundry and starts working on the source. It pulls the source code from the blobstore — which is what we use behind the scenes to store the code bits; more on that later — and then it starts detecting the language of the source that was pushed. This is a live demo, so hopefully the demo gods are with me; this is taking a bit longer than I expected. There we go — it recognized the Go app and is now building the image. Hmm, that's not what I expected, so I'm going to exit out of this — there seems to be an error with that one — and push the other app instead, on the right side. This is the app I actually showed earlier: another simple app, "Hello from Node app", which you saw in the earlier slide. I'll push it, call it node-app, and hopefully that just works. You can see it recognizes the Node app: it runs an npm install, creates all the layers it needs, and selects the right Node engine as well. The build was successful for this app, and now it's going to schedule it. As you can see, it created a new OCI-compliant image and pushed it to Docker Hub. All right, so now I can curl the app — node-app is available at that endpoint. I'm going to stop sharing, switch back to my slides, and walk you through how that all worked. There are a few things we're using under the hood. This is mostly of interest if you're a DevOps person, but it's also good for developers to know what's happening behind the scenes.
If you have any specific questions on any of these projects or components — that's what we call them — you can reach out to me via the Q&A or after the talk. There are two major namespaces. One is the app namespace, where apps are actually deployed and monitored in Kubernetes — the node-app you just saw was running in the app namespace. Then there's the system namespace, which contains the Cloud Foundry components that do all of the magic, from cf push to the point where you can curl the app. A few things to note here. We're built on top of Kubernetes-ecosystem projects, especially from the CNCF, such as Istio and Cloud Native Buildpacks — kpack is the implementation of Cloud Native Buildpacks that we use. We did this because we don't want to reinvent the wheel; what cf-for-k8s does is stitch these projects together to provide the desired outcome for app developers and operators. There are some components we built ourselves: the Cloud Controller, which backs the CF API and CLI you just saw, and the logging and metrics components, which collect and curate the logs from the underlying pods so that when you run cf logs, you actually get your app's logs. Extensibility is another big part of the value proposition. Operators have the option to opt out of the in-cluster database and blobstore you see here — Postgres and MinIO — and instead use external services for high availability of the data services. That's one way we're looking at providing extensibility to operators. Control is another: operators can control the type of base images used by applications, and we'll see in a moment how that works. For Node, you saw that we were using the Node engine and npm install; the operators control what base images those applications use.
So you don't have to worry about it — all you give the operators is the source code with the right structure, and we do the rest. There are many more advantages operators and developers get from using Cloud Foundry; I'm going to assume you've at least heard of it. Cloud Foundry is the stable version that runs on VMs; this is a brand-new project focused purely on Kubernetes. I'll move forward — I recommend you take a screenshot of this slide and ask me any questions you have later. A quick time check. So, how does an app get built? This is a very high-level sequence flow, and it would take a whole hour to go through each of these boxes, so again, ask me if you have questions. As I mentioned, your code goes to the blobstore. That's where all of your code is stored, and any subsequent push will be fast because Cloud Foundry knows the difference between the code that exists in the blobstore and what's in your folder — so subsequent pushes are incremental. Once the code is pulled in, it triggers a build. In this case we use kpack, which you saw on the previous slide. kpack is a build service that uses Cloud Native Buildpacks to systematically detect the language: it looks for certain files in the app's folder structure — for Go, a go.mod. Once it detects the language, it builds the OCI-compliant image with the right set of dependencies and executables, including the base image. This is why you didn't have to write a Dockerfile, or all of the RUN and COPY instructions you saw in the first few slides — it builds all of that for you. Once the image is built, it's pushed to the app registry, which you provide at install time if you've gone through the cf-for-k8s install process.
And that's something the operators would do. Once that's done, it sets up the proper networking services. The reason you could curl the app is that it creates the routing — not only the domain name, but any URL paths you may have, like /foo or /bar, and maps those to the correct apps. All of that machinery is handled within Cloud Foundry. Imagine two app-dev teams, one with foo and one with bar: they each get their own direct URLs to their apps, even though they're all deployed in one namespace, and Cloud Foundry figures out from the domain which app a request should go to. After that, it sets up the right permissions and resource policies for the app itself: network policies, perhaps enforcing no-egress, and enforcing that apps can't talk to other apps unless explicitly granted permission. Imagine if you had to do all of that yourself — the pain of establishing RBAC and other policy rules, written natively in Kubernetes. Once all of that is done, it returns the app status you saw, which says the app is running and shows the memory and disk space it's using. Obviously a lot is happening, but from this vantage point these are the same steps you took previously, just automated for you — cf push creates an abstraction and hides all of that complexity behind the scenes on Kubernetes, just like it did before on VMs. Okay, moving forward: the Node app I used is a very simple app. Real apps have more needs — they need databases, they need backing services. So how do you begin setting that up? One key benefit of using Cloud Foundry is that it uses the open-source Open Service Broker API, which allows any independent service provider to offer services to your apps. You may want to connect to, say, an AWS database service for high availability, or S3.
Well, you can do that with Cloud Foundry service bindings. As you can see in the first example, a database service: you create a service by attaching a MySQL plan to your app, and you automatically get a MySQL instance provisioned for you, with the right username and password, which your app can read and connect to when it's deployed. There are many independent service vendors available to integrate with, but you can also create your own services. If you want to connect to another app — say another microservice — you can do that too, with a user-provided service binding. In this example I connected to another app's service, providing the URL, the port number, and any credentials I need. This is the proper way to connect apps together, or to other services your app depends on, and it gives you the option to swap them at any time: if you want to use a Postgres DB instead, you can just switch the service, assuming your app talks a standard protocol like JDBC and isn't tied to a specific database's protocol. Okay, so that's CF services — but what about day-two operations? In the initial slides I talked a lot about the challenges faced by operators, and how in fact all of the day-two concerns fall on app developers too. The great news is that on Cloud Foundry, the day-two operations actually fall to the platform operators — that's their concern. For example, we talked about Dockerfiles and base images, and the concern of how to update them when new CVE fixes are available. With buildpacks, that's the stack. Stack images are images that contain OS dependencies — Linux or Windows; this is the base image you'd otherwise have seen in a Dockerfile. When a new stack release with CVE fixes is available, the operators simply deploy it to Cloud Foundry, and what Cloud Foundry does is simply rebase the app images.
So for your app — the Node app you saw — it simply swaps the base layer, which is a lot faster. This is part of the buildpacks spec; I recommend you check it out. It allows you to rebase without rebuilding the image, which is much, much faster, and then redeploy the running apps. That means no developer intervention is required: you can keep happily coding while the operator deploys this behind the scenes and makes sure apps are correctly patched. The second use case I mentioned is libraries. You may be using a version of Node.js, or a version of Java, that has a CVE — the same flow applies. The operators patch the language buildpack, which in this case rebuilds the image, so it goes through the whole build process. But it's the same idea: they can deploy that and redeploy the apps without your intervention, because you don't package any language dependencies in your code or in your manifest. The outcome is that we've achieved separation of concerns: you focus on the code and your iterations, and DevOps focuses on day two. So, to summarize: cf-for-k8s creates an abstraction layer for app developers, but under the hood it uses Kubernetes-native constructs to achieve the desired outcome for both the apps and the app developers. Which means, from a longevity point of view, cf-for-k8s can swap out different modules. For example, we could swap Istio for some hot new networking project — considering how fast Kubernetes is moving, I'd expect one in less than six months — and the interface remains the same. Operators can also choose not to install a certain component if they already have it on their cluster — say, they already run kpack in their system. In short, CF complements Kubernetes: it increases Kubernetes adoption among app developers, and yet still provides control to operators, both at the CF level and at the Kubernetes level.
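To make the service-binding flow from a moment ago concrete: at run time, Cloud Foundry exposes the bound credentials to the app through the VCAP_SERVICES environment variable, which the app parses instead of hard-coding connection info. VCAP_SERVICES is the real CF mechanism; the service label and credential field names below are illustrative assumptions:

```javascript
// Read bound-service credentials from VCAP_SERVICES.
// VCAP_SERVICES is the real Cloud Foundry mechanism; the 'p-mysql' label
// and credential field names used in the example are illustrative.
function getServiceCredentials(serviceLabel, env = process.env) {
  const services = JSON.parse(env.VCAP_SERVICES || '{}');
  const instances = services[serviceLabel];
  if (!instances || instances.length === 0) return null;
  return instances[0].credentials; // e.g. { hostname, username, password }
}

module.exports = { getServiceCredentials };
```

Because the app only looks up a label, swapping MySQL for Postgres (as described above) is a rebind, not a code change — provided the app speaks a standard protocol like JDBC.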
So where are we headed? Cloud Foundry for Kubernetes is in alpha. We're looking to go 1.0 late summer, and we have multiple alpha releases out, so I highly recommend you all try it out. VMware — my company — is backing this project and is heavily invested in making it enterprise-ready. Here's a quick preview of our roadmap. One thing we're exploring is how to decouple the components even further; you may have seen recent changes where we created native Kubernetes CRDs so that two components talk to each other through them instead of a direct API. I highly recommend checking that out. We'd love for you to take the latest release for a spin and give us feedback, and there are multiple ways to reach us. The links aren't showing correctly on the slide, so I'll paste them in the chat — we'd love to have you get involved, or if you just want to try it out, let us know. cloudfoundry.org also has some tutorials — I believe they use Katacoda — you can try those too; I'll post the links to the repo. You can reach out to us with any questions you have: how we're building this, or any particular interest in kpack, the build service, language buildpacks, and so on. With that, I'm open to questions. Let me see if there are any questions in the queue. Nancy, am I not seeing questions, or is this the question right here? Just want to make sure. Okay, I've got a few. "Are the slides available for download?" Yes — it seems the slides will be available soon. There's also a Slack channel; you can go to the track channel, and we'll paste some links here in the chat if you're interested: the deploy doc you can check out, and a Slack invite link. Ah — it looks like that one is only for staff.
I'll have to find another way — I'll paste the link in the Slack channel, so stick around after this discussion if you have more questions. Next question: "What is the expected flow for pushing code to one environment and promoting it to higher environments?" I'm going to keep going — I've received many questions, sorry about that; I will share the deck, and I'm assuming Nancy will share it as well. So, on promoting between environments: as long as your code is the same, you should be able to just cf push to the different environments. You don't have to package a Docker image, because the code itself is frozen; it's just that production environments may apply additional policies behind the scenes. It should be the same cf push across multiple environments. "What about testing that the app still works after a base-image change?" Usually, what we've seen users do is that when there's a base-image change and the app is rebased, some smoke tests are run against the app to make sure that, at a minimum, it's reachable and routable. I don't think it would be advisable to run all of the unit tests an app may have, because so many apps get updated at once. So that's one approach: a certain amount of smoke testing — maybe one or two smoke tests per app — whenever the base image changes. You can also have your own pipeline pick up the change at the same time, so that if something fails there, it's easy to let your DevOps team know you're seeing problems. "How do you determine the app still works?" As I mentioned, that's something you'd have to look into, but mostly the operator will only switch the base image for a CVE fix.
They're not going to move you from one OS version to another that's completely breaking — they won't do that. For a breaking or major version change, they'll probably notify you, and you'd have to update your apps and make sure they still work. The automatic updates I showed are almost always patches, which are non-breaking; the breaking ones — major OS changes — are a manual process, so that apps have the time they need to update their code. "I'm interested in the integration of a Postgres DB with the app — do apps use the platform's database?" Apps use the open-source Open Service Broker API — too many acronyms — which supports any independent service provider. So if you have a Postgres DB running in-cluster, you can connect it directly to an app if you want to. You just have to figure out which namespace the Postgres DB lives in, and also consider the high-availability cost you'll incur, because you have to make sure it's highly available for your apps. Our recommendation for most users is to use an external service rather than running one in-cluster alongside the platform. "Is there a way to implement extra checks on cf push?" I believe yes. Many users have added checks around cf push: they validate the manifest, and they check for things like an app asking for hundreds of instances or massive amounts of memory or disk space. Those are some of the checks we've seen people employ at cf push time, and it works really well for customers. There are more questions on this page too. "Is this KubeCF?" No — it is not KubeCF. KubeCF is a project that takes the existing BOSH releases...
...that is, the BOSH VM releases, and converts them into Kubernetes-native containers. It uses something called cf-operator, a Kubernetes operator that converts a BOSH release into containers. So KubeCF uses the existing set of BOSH releases to move toward Kubernetes, while cf-for-k8s is fully focused on using the CNCF and Kubernetes ecosystem, building from scratch, if you will. "What are you using in cf push to create the Kube deployments?" I'm assuming you mean the deployment itself — Eirini is responsible for that. When the app image is built by the buildpack service, Eirini receives an event it listens for, which says: I have an app to schedule, here's the image URL, and here are the additional data points — how many instances, how much memory, and so on. It then goes and deploys that in the right namespace, with a very simple deployment spec. I can't show it in the presentation, but if you describe the app's pod, you'll see it's a Kubernetes pod with annotations so that the controllers can watch it. At that point the logging service starts scraping the pod logs and curating them, so when you run cf logs — as we just saw in the slides — it shows you all of the app's events, including errors and access logs. "Is Kubernetes a direct replacement for Diego?" That's an interesting question. We look at Kubernetes as another infrastructure layer — it takes care of a lot of the deployment and scaling aspects — so yes, I suppose you could say that. "Doesn't running CF on Kube add too many abstraction layers?"
"I'm asking because we're trying to carve out an exit strategy from running open-source Cloud Foundry to Kube for our startups." Yeah, that's a good question. It does add abstraction layers, but Cloud Foundry for VMs was a fairly monolithic app, so right now the question is how you should build it and what kind of extensibility you want to get. You could potentially stitch all of these projects together yourself — Istio, kpack, and the rest — but there's a lot of operational stitching you'd have to do on your own, which in my opinion is a lot of work: keeping up with those projects, making sure they work for your apps and app developers, and creating that abstraction yourself is going to be hard. So this is the trade-off: a ready-made abstraction, and the layers it adds, versus working directly with some N number of projects to create that same experience. Those are the trade-offs you have to weigh, in my opinion.
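Going back to the earlier question about how cf push creates the Kube deployments: the hand-off Eirini performs — app name, image URL, instance count, and memory in; a Kubernetes workload spec out — can be pictured as a small translation step. This is a toy sketch of that idea, not Eirini's real code; the field names, namespace, and annotation key are simplified assumptions:

```javascript
// Toy translation of a CF app record into a Kubernetes-style spec object.
// NOT Eirini's real code — field names, namespace, and annotation key are assumptions.
function toDeploymentSpec(app) {
  return {
    apiVersion: 'apps/v1',
    kind: 'Deployment', // shape simplified for illustration
    metadata: {
      name: app.name,
      namespace: 'cf-workloads', // assumed app-namespace name
      annotations: { 'cloudfoundry.org/app_guid': app.guid }, // so controllers can watch it
    },
    spec: {
      replicas: app.instances,
      template: {
        spec: {
          containers: [{
            name: app.name,
            image: app.image, // the OCI image kpack built and pushed
            resources: { limits: { memory: app.memoryMb + 'Mi' } },
          }],
        },
      },
    },
  };
}

module.exports = { toDeploymentSpec };
```

The annotations are the hook described in the answer above: logging and other controllers watch for them and start scraping the pod's logs once it's scheduled.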
I believe I answered the question on the Postgres DB. "Is it open source?" I'm not sure exactly what's meant by open source here, but I can point you to the cf-for-k8s repository, which is open source, and you can see how we install Postgres there. As I mentioned, you'd use a service binding to point the app at the in-cluster Postgres database, if you choose to install it there. I'm going to keep going. "Does it have the capability to establish connectivity between apps under the Istio service mesh, and generate the Istio connectivity automatically?" Great question. Right now, as far as apps are concerned, we default to deny-all and rely on the Open Service Broker to provide egress. We haven't really explored how much of Istio's capability we want to expose to apps in general. We use Istio to create a service mesh for all of the components within CF, so communication between them is encrypted via the mesh — but making that available to apps, which I think is what the question is getting at, is something we haven't explored yet. "My team uses BOSH and CF with around 40 instances for basic core; why does this have a minimum of five nodes for development?" We're still in the alpha stage of cf-for-k8s, and we're still working through the scalability questions enterprises have, so that's TBD at the moment. Five nodes is just the minimum requirement to run today, and that will change as we start thinking about scaling — say, 100 apps, 10,000 apps: what would that look like, what would the control-plane instances look like? We're still working toward that, so I don't have the
answer to that at this moment. So — I have one minute left and no more questions in the queue; I think I answered most of them, and that was the last of everything. If I did not get to yours, please ask me in the chat channel that Nancy pasted. Cool, nothing else. Thank you all — I really appreciate you joining me today. That wraps up the session for me. I wish it could have been interactive, but hopefully in the future we'll all get to see each other. Nancy, do you want to take over? I'm not sure how to transition here, so I'll just wait until this closes out.