Welcome, everybody. First of all, I want to thank you all for coming at such a late hour. My name is Marco Hennig. I work with Iwola, a consulting company in Germany delivering services all around automation and the cloud, and today I will be talking about the topic of PaaS versus CaaS, which has gained a lot of interest in the last years and obviously still leaves a few questions unanswered. In this regard, I'm going to briefly talk about the historical context that brought us here, the evolution of software delivery, and why this became a problem, or rather, what the problem is with the current platform- and container-oriented approaches. In the end I will look more broadly into how we utilize the more application-oriented PaaS or the container-centric CaaS, look into a few decision drivers on a technical as well as organizational level, and cover use cases for the individual scenarios.

So let's begin. Modern software development is subject to continuous change. Development cycles continue to shorten more and more, while at the same time requirements for features as well as reliability tend to increase. So we have a few opposing goals here, which may not be realized by a one-size-fits-all solution; instead, that would often be the opposite of what companies really want, which is flexibility and agility to cater to the market's needs. However, in order to get here, we had to jump through a few hoops, with the first being, of course, bare metal and virtual machines, where you had to care for the whole stack yourself, bringing with it overhead as well as slow development times on multiple levels.

Then the thing called cloud appeared on the horizon, and with it came the as-a-service movement. Resources formerly located inside your own data center were now outsourced and could be accessed on demand. It all began, of course, at the infrastructure layer, which offered virtualization of someone else's hardware when you need it: you could deploy new virtual machines, terminate them when you're done with them, and only pay for what you really use. However, this concept still had a flaw: it was based on virtual machines, which still needed a full operating system to be started with them, keeping the boot procedures too slow for on-demand operations. And so the hypervisor was replaced by the container engine, and multiple standalone applications could be run in parallel, each including all the necessary dependencies it needed in order to run, like the code, the runtime, and settings, while sharing a single OS kernel. So unlike VMs, containers were much more streamlined, allowing you to deploy full microservice stacks on your local machine, therefore dramatically speeding up the deployment cycles.

But these arising microservice architectures were much more diverse and complex; they needed to be managed on multiple levels, which is why Platform as a Service was introduced. There you still got all the data center infrastructure delivered, but on top of that also the host OS, the middleware and runtime, as well as backend services like database management systems. So it was, and still is, a method for delivering the capability to build and deploy your applications, focused completely on automation.
You only had to develop the application and leave the rest to the platform provider, even the container's creation. However, the developer often wants more flexibility to create his own application with his choice of tools, without being bound to the platform-specific technologies. And although these PaaS offerings relied on container isolation, you were not able to control the process of creating the container yourself. Now Container as a Service came along, falling somewhere between the IaaS and PaaS layers, but it is different in that it does not impose a complete workflow onto you. Instead, it grants autonomy to the developer by removing a superset of tools: it still has all the pieces, the container engine, the cluster scheduler, the load balancer, and so on, but instead of being opinionated about the way your container is created, you may create your own images, and you have more freedom to create your own workflows.

But why care? I mean, is it IaaS, is it PaaS, is it CaaS? In the end, it's still all the same thing under the hood: a lot of servers in a big room, whether it's your own or someone else's basement. So what's the problem here? I think it's our goals. It's what we want to achieve and how we want to do it. Not only is every company different, with its own processes, its strengths and weaknesses; each product we want to sell is also different in nature, and the way our software is built calls for one of these solutions to be utilized. PaaS and CaaS have proven themselves to be fitted for most use cases we encounter at the moment, but we have to ask ourselves: when do we use which? Or can we deploy our software to either one of these?
And then there are multiple competitors for the throne, of course, which allow us to run our code in our own or a public data center. So, Cloud Foundry. It's kind of why we are here: this cloud-agnostic platform, which is supported and continuously developed by multiple vendors, the full-scale solution which takes our artifact, no matter if it's a container or code, and utilizes the built-in tool sets to deploy, operate, and scale our microservice architectures at different abstraction levels. The first is the Application Runtime, which was primarily designed to support the fast delivery of cloud-native apps: the source code supplied by the developer is automatically converted into a fully functional container and run by the platform, using built-in mechanisms like buildpacks, image provisioning, and many more. The concept surrounding these platform-built containers offers a very high level of abstraction; therefore, you trade platform awareness for higher developer productivity. But in order to also meet the requirements of the developer who wants increased freedom and flexibility to build his own containers, his own infrastructure and workflow, a novelty was introduced last year here in Basel: the Container Runtime, the extension of Cloud Foundry in its original form by a more Kubernetes-based architecture for the management of container workloads. It utilizes the orchestrator in order to automate the full deployment lifecycle, from development to container operation. However, contrary to the concept of platform-built containers, the developers create and maintain the containers themselves. So we have a very low-level abstraction with high customization flexibility, but with more responsibility on the developer's end. But what's the advantage here?
I mean, you could just deploy Kubernetes by yourself and you're good to go? Not quite, because you would be missing out on a few features that way. You would still have to create your own virtual machine layer and manage the day-one and day-two operations all by yourself. So instead of the pain and suffering of manual labor and the associated consequences, it runs on top of BOSH, the open-source lifecycle management tool, to create a more uniform way to instantiate, deploy, and manage your high-availability clusters. So CFCR gives users the customization and variability of Kubernetes with the deployment and management power of BOSH, therefore improving the experience for both the developer as well as the operator. This means Cloud Foundry now gives you the best of both worlds: on the one hand, we have a great Application Runtime for fast onboarding of cloud-native apps, and on the other hand, a great Container Runtime when you need to deploy more generic, low-level containers. Both share many similar features, like containerization and namespacing, but their overall approaches for the deployment of your applications differ greatly. Therefore, no platform will necessarily be the best fit for every situation, and it depends on your individual needs.
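As a rough illustration of what BOSH brings to the table here: a deployment is described declaratively in a manifest, and BOSH makes the cluster match it. The following is a heavily abbreviated, hypothetical sketch, not a working deployment; real manifests also need jobs, an update block, and cloud-config details such as VM types and networks.

```yaml
# Hypothetical, heavily abbreviated BOSH deployment manifest sketch.
name: my-cfcr-cluster        # hypothetical deployment name
releases:
- name: kubo                 # the release that packages Kubernetes for BOSH
  version: latest
stemcells:
- alias: default
  os: ubuntu-xenial          # base VM image; BOSH rolls OS updates for you
  version: latest
instance_groups:
- name: master
  instances: 1               # BOSH recreates this VM if it fails
  stemcell: default
  jobs: []                   # master jobs omitted for brevity
- name: worker
  instances: 3               # scaled by editing this number and redeploying
  stemcell: default
  jobs: []                   # worker jobs omitted for brevity
```

BOSH then creates the VMs, monitors them, and resurrects or upgrades them as the manifest changes.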
Your app may be fitted to be placed on one of them, or even both. So there's still this one question in the room: when to use which? And I will go through a few distinctive characteristics now. One of the most distinguishing ones is the architectural style of your app. Every enterprise has a multitude of workloads they need to run and take care of. A big trend in the spectrum is the cloud-native workload, which is destined to be run on PaaS, embracing all the features the platform provides by utilizing the methodology of the twelve factors. These are applications which are usually agnostic regarding their environment, meaning, for example, they are able to support nodes without a fixed identity: if a node goes away, it may be detected, recreated automatically, and integrated back into the service without any hassle, without the need to get the same IP, for example. They are distributed, therefore allowing for a high level of availability, and they are highly portable, meaning they don't rely on rigid hardware components like GPU clusters, for example.

But every company also has workloads which don't fit this term cloud-native: the legacy apps, or applications which are classified somewhere on the spectrum between the extremes. These are not able to fully embrace the twelve-factor methodology, and therefore the PaaS capabilities, to the fullest extent. Such a legacy app may be less resilient to change, since it's constructed more uniquely, without an overall general framework; dependencies or configuration options may be inherent in the code base, and it may also encompass dependencies on hardware like SSDs or GPUs, therefore rendering portability as well as horizontal scalability infeasible for these workloads. In the past you were only able to run them directly on the IaaS layer, but now they may be leveled up to CFCR. However, despite this classification, it may be noted that cloud-native applications don't need to be newly created from scratch, but may also be achieved by gradually transforming an existing monolithic application.

This transformation model, for example, is based on multiple stages, which you may go through one by one, depending where you stand with your application. So you may decompose a complete monolithic app into individual parts, for example, which you may transform concurrently by themselves, or you may decide to keep the current state you have and integrate new features, for example by adding them as microservices. However, depending on your current stage, it might be a difficult process to embrace all the cloud-native features, and depending on your PaaS provider, it may be enough to only get it to a cloud-ready state. But no matter what you do, the first and foremost thing you have to care for is the application state, which is probably the most important of these twelve factors. Our apps don't live alone; usually you have some kind of data, which may be static but may also change over the course of the app's lifecycle. Depending on the type of your app, you may encounter one or a multitude of data types, each to be handled differently. For example, there is data which is utilized in order to run the app itself and is only available inside the container, comparable to what you have in RAM today: basically every kind of information you may throw away without the risk of any data loss. Then we have the information which may be utilized even beyond the container's lifecycle, the data which divides the cattle from a beloved pet.
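The cattle side of that metaphor is exactly what a container orchestrator's replica mechanisms assume. As a hypothetical sketch, a Kubernetes Deployment treats every pod as disposable; all names and the image are made up for illustration:

```yaml
# Hypothetical Kubernetes Deployment: the "cattle" case.
# Pods have no fixed identity; if one dies, the controller
# simply creates a fresh, interchangeable replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-web          # hypothetical name
spec:
  replicas: 3                  # any of the three can be replaced at any time
  selector:
    matchLabels:
      app: stateless-web
  template:
    metadata:
      labels:
        app: stateless-web
    spec:
      containers:
      - name: web
        image: example/web:1.0 # hypothetical image
        ports:
        - containerPort: 8080
```

Pet-like data, on the other hand, needs more careful treatment.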
Here we have a further division. On the one hand, there are data sources which are directly attached to your container; this is the data we store on a hard disk or in the SAN, for example, today. On the other hand, we have data which may reside inside external entities, like databases, for example. No matter which kind of data your app will utilize, you need to consider the implications for the long-term operations: the ease of implementation when using a database, for example, or requirements for performance, like the spatial distribution between your app and the storage area, latency requirements, as well as financial aspects like cost-efficient storage for seldomly used data. In this regard, both the App Runtime as well as the Container Runtime support multiple options for handling data access. However, the core differentiator here is the handling of stateful data. Although you are able to attach distributed file systems like NFS inside the App Runtime, single-attach devices such as persistent hard disks, block devices, still remain a problem due to limited scheduling guarantees. In this regard, the Container Runtime differentiates itself by its ability to attach persistent storage to a container via StatefulSets, and you may even define storage classes like HDD or SSD for performance optimizations.

After this, you may look into how your application is built up. Every app at its core is based on code, and you may, and should, utilize a runtime or framework which is supported by the platform of your choice. In the App Runtime, the artifact pushed is the code itself. During the deployment procedure, buildpacks are then responsible for transforming it into a droplet, which the platform will utilize in order to run the application. However polyglot the platform is rendered to be, language support will depend on the buildpacks provided by it. Indeed, these exist for all major languages; however, very special beta versions or custom use cases may not be supported directly.
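Coming back to storage for a moment, the StatefulSet approach mentioned above can be sketched roughly like this; all names, sizes, and the storage class are hypothetical:

```yaml
# Hypothetical Kubernetes StatefulSet: pods get a stable identity
# and each one gets its own persistent volume from a storage class.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                         # hypothetical name
spec:
  serviceName: db                  # headless service providing stable identities
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example/db:1.0      # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db   # where the persistent disk appears
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]  # single-attach block device
      storageClassName: ssd           # hypothetical class, e.g. SSD-backed
      resources:
        requests:
          storage: 10Gi
```

Storage, though, is only one differentiator; language and runtime support is another, and buildpacks have their own edge cases there.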
One example is the integration of certificates into your Java environment, therefore requiring you to extend the buildpack or build a new one from scratch. There you have to consider that this will require a level of future maintenance, such as pulling updates or updating the source code of your buildpack. You have to consider these long-term options, especially if the buildpack won't be utilized at very large scale, and therefore evaluate whether containerization might be more feasible for you. Here the operating system as well as the runtime support are defined directly by the developer-built container; therefore, you're more free to integrate components as needed. The trade-off is that more responsibility lands on the developer's end regarding future updates as well as the configuration effort for constructing the image, and you may have a hard time with standardization across multiple instances, which you may counteract with new processes you establish. So the idea of buildpacks is: it just works. And this holds for the majority of use cases, however not for those special edge cases where you want to use very unique aspects or rely on strictly versioned components which are not supported by your provider.

Now that you have your app up and running, you may need to utilize external services too, like databases or caches. In this regard, the Open Service Broker API has been included in both runtimes. It allows for the controlled management of the service's lifecycle, including the deployment, management, and utilization of external services, enabling you to focus on your code instead of the integration of these resources. This is done simply by calling the service broker through a marketplace or service catalog, depending which platform you're on: you simply look up what the catalog has to offer, then you can provision the instance and easily bind to the service, accessing the credentials through your environment. For services which are not available in the marketplace, like already existing databases, you may utilize the concept of user-provided services in the App Runtime, or in the Container Runtime you may integrate the configuration data via secrets or config maps, which are injected into the container. So in the end, it doesn't really matter which platform you utilize here; there's always a way to offer the service availability, as long as your company supports it.

So now that you are set up, you need to configure your application. With the App Runtime, the layers of configuration are pretty small, like shown here exemplarily for the load balancer: you basically just set up your manifest file and you're good to go. You have a few more nuts and bolts to care for with the Container Runtime: you're able to configure your settings on multiple levels, and even inside these levels you have a further division, for services for example, where you can configure different options like ClusterIP, NodePort, or LoadBalancer, not even taking into account the surrounding concepts like ReplicaSets, proxies, and StatefulSets. So again, you will have much more plumbing to care for, which gives you more flexibility, but at the cost of higher maintenance and a steeper learning curve.

So now that you know how the code should look and how it's built up, you may look into the difference of actually developing and deploying it. With the Application Runtime, it's pretty simple.
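To picture that configuration gap: on the Container Runtime side, even exposing a service is its own Kubernetes object, while on the App Runtime side the same intent is essentially one route entry in the manifest. A minimal sketch, with all names and values hypothetical:

```yaml
# Container Runtime side: a hypothetical Kubernetes Service picking
# one of the exposure options mentioned above.
apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical name
spec:
  type: LoadBalancer       # could also be ClusterIP or NodePort
  selector:
    app: web               # pods this Service routes traffic to
  ports:
  - port: 80               # port exposed by the Service
    targetPort: 8080       # port the container listens on
---
# App Runtime side: the equivalent routing intent in a hypothetical
# Cloud Foundry application manifest.
applications:
- name: web
  memory: 256M
  routes:
  - route: web.example.com # hypothetical route
```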
You just push your code via the cf CLI, it gets uploaded, and the platform cares for the rest, like detecting the required runtime, setting up routes, monitoring, and service bindings. Inside the Container Runtime, on the other hand, you will again have a few more switches you can configure. First of all, you need to create your image and define all the individual components the application may consist of, like ReplicaSets, proxies, StatefulSets for your storage needs, and the exposure of endpoints. You may create a manifest, or use helping tools like Helm, for example; then it has to be pushed to the container registry and run via kubectl, the CLI of the Container Runtime, before you will have to expose your endpoint manually. So you have a lot more interactive steps on this side which are needed in order to get up and running, but you will have more control, and these steps may be taken out of your hands by implementing a CI/CD pipeline, for example.

When you are up and running, you may want to test your code, too. First things first: your local environment will not look exactly the same as your productive environment. When you use buildpacks, small changes might be encountered, for example in library versions. However, this is only a problem if you really depend on these versions. A common way of testing with production conditions would be to set up a special staging area in which you may test your code, isolated from the customers, and if you're happy with it, you may switch the route to your application and put it into production: a simple blue-green deployment. The downside would be that the push may take a little longer than locally, and the staging result is kind of a black box where you have a little less control and insight; therefore the troubleshooting might take longer. With a container, again, you will have more control, because you may install Docker locally on your own machine and be certain that it will run exactly as in production. So you can tweak what you want and have quicker feedback loops, but it's also more work, since you have to care for many steps by yourself.

And then you will have your day-two operations. For example, you might hit a CVE which has to be mitigated. While utilizing a buildpack, the developer won't have to care for anything, since he only provides the application code; the operator, on the other hand, will upload a new release and redeploy everything automatically. Afterwards, every container is updated, secure, and ready to go: a convenient way of separating concerns between ops and devs. In the case of containers, of course, while the operator is basically doing nothing, or not much, the developer cares for every level of this stack, and therefore, of course, the patching too. Since this is usually not the case today, you might have to establish new processes in order for the developers to recognize CVEs in the first place. Then it's up to them to patch the images, pull the updates, push them back into the registry, and restage the applications before the vulnerability is mitigated. So: new processes again, and more responsibility on the developer's side.

And then there might be the situation when you buy a service from a provider, but you're unhappy with the general situation, so you want to port your app to another provider. But you're somehow tied to the infrastructure or specifics of your current provider, which is a place you really don't want to be in. However, with both buildpacks as well as images, this is a pretty easy thing to do. With buildpacks, you have Heroku, you have Pivotal, and many other vendors to choose from; you may require minimal code changes, but in general, there's nothing in your app that's specific to buildpacks. With containers, it's even easier, since they work anywhere a container engine may run, be it Kubernetes, OpenShift, or others; it's not that different from writing an app that works on your local machine. So in both cases you have a big ecosystem of platforms you can port your app to.

And you will, of course, encounter the point of onboarding your teams. I've seen many people having a hard time adapting to change or making themselves familiar with new technologies. So in order to introduce new and complex technologies, you have to make sure that everybody is happy and everybody is on board, or the project is just bound to fail. Getting started with Kubernetes, on the one hand?
It's not that easy. It's not like you can read a five-minute quick-start guide and you're up and running; there are just too many things you have to know. For basic onboarding, you have to learn how to deploy, explore, scale, and update your app, but there are many more components and concepts inside the Kubernetes world you will have to familiarize yourself with: Pods, StatefulSets, ReplicaSets, and so on and so on. Getting started with Cloud Foundry, on the other hand, is easy. Your devs already know how to develop their Spring Boot apps, their Java apps, PHP; they already know what their artifacts are and may already have pipelines in place. They just have to do the same thing in the cloud, which isn't much more than learning how to define your manifest files. So if you mainly develop server-based applications in Java or Node.js or so, the Application Runtime gets you to the cloud more simply and quickly.

So when is which one best fitted? The App Runtime favors opinionated simplicity over complex flexibility. It offers fantastic support for twelve-factor apps and microservices; storage needs can be provided by external services, but stateful apps may have a hard time. So it's ideal for new applications, cloud-native apps like web apps and microservices, APIs, apps that run fine on buildpacks, especially when you work with short lifecycles and frequent releases. With the Container Runtime, you're much more flexible, but you encounter a higher complexity, maybe leading to decreased productivity in the end. It offers better support for stateful apps, as well as the integration of specific customization requirements: for example, if you run two containers tied to each other in one data center or on one node, giving you shorter distances and higher performance when these are factors you need to care for, or if you want to utilize specific hardware like SSDs or GPUs. So it's ideal for third-party apps which are already packaged, data services, large legacy applications, and cases where a high customization is required, certain services must run together, or specific hardware requirements are in place.

So you have to consider multiple aspects, with the application state being the most important, of course. But even then, you will have to look at specific needs: does the platform of your choice support all the technical aspects your product wants, like hardware dependencies, UDP, or so on? And finally, do you have the manpower? Otherwise, you need to level up your ops team.

In summary, the question of PaaS versus CaaS can't be answered directly and once and for all. The bottom line is that it doesn't have to be an either-or decision; it can, and has to, be an "and", in order to accommodate every use case you will encounter, which you however have to evaluate individually. Don't hesitate to transform your applications the cloud-native way. And with that, I would like to end this talk and leave you all to the final coffee break. Thanks very much, and have a safe trip home.