Okay, so let's get started. My name is Ning Kwong. I'm the program manager for Cloud Foundry running on Azure, on the Azure engineering team. And my name is Thomas. I'm the engineering manager from the Enterprise Open Source team at Microsoft. So today we're going to go through the process of running applications in a hybrid environment with Azure and Azure Stack, mainly for .NET applications. I'll go through the background and the concepts, and Thomas will later demo the integrated CI/CD process on Azure and Azure Stack. Okay, so ten years ago is when Azure first started being built. At that time, the majority of our customers' software was running on physical servers. It's amazing how the technology has evolved in the past ten years. Now a lot of our customers are already running their applications in the cloud, and they release at a much faster rate, in days and weeks. However, the public cloud is still not the only destination. As a matter of fact, based on our surveys, the majority, around 90% of our customers, still think they need a hybrid cloud as their strategy. Based on our early customer interactions, we identified these three scenarios as the basic hybrid scenarios. The first one is the edge and disconnected solution. This is when you don't have a stable connection to the Internet: you need to run a workload on-premises and prepare the data before you connect to the Internet. The second is cloud applications that have to meet various regulations. This is when you need an application that runs across the globe in different regions, while in some regions, due to data sovereignty regulations, it cannot run in the public cloud; it has to run on-premises. The third is the cloud application model on-premises. This is actually quite typical for a lot of customers who want to modernize their applications. They take advantage of microservice frameworks and containers; however, they want to stay on-premises.
So you still need a private data center. There are different ways to build a hybrid solution. A very typical way is that customers have their private data center, they choose one of the public clouds, and then they need to ramp up on and manage two clouds. For Microsoft, when Azure Stack was designed, the approach we wanted was to create a consistent cloud. Basically, we want to duplicate the majority of Azure's cloud services on-premises. Azure Stack is an extension of Azure, or a duplication of Azure, just in your private data center. The basic principle is consistency. The goal is that the developers' and the operators' experience will be the same, and the majority of the services, for example the SQL services, blob storage services, and monitoring services, will be available both on Azure and on-premises. And third, they can work together: you can run your application on one cloud and still access the resources on the other cloud, given proper connectivity. How many people have heard of Azure Stack? Cool, so the majority of you. Azure Stack is already GA. Cloud Foundry running on Azure Stack is also GA. And Pivotal Cloud Foundry on Azure Stack went GA five months ago, in May. So you're welcome to try that, especially for the hybrid scenarios. Combining Cloud Foundry running on Azure and Azure Stack, you have a consistent PaaS, which is Cloud Foundry, the multi-cloud solution, running on a consistent hybrid cloud. Basically, if you want to run Cloud Foundry, you need to utilize the cloud provider's tools to deploy BOSH and Cloud Foundry, and you can use the same tools on both clouds. We have Azure Resource Manager, also called ARM, and the SDKs and scripting to utilize the resources. And for DevOps tools, we have Azure Pipelines. You must have heard that Visual Studio Team Services was just renamed to Azure DevOps Services, and Azure Pipelines is one part of it, which was originally the VSTS build and release service.
And you can also use open-source CI/CD tools like Concourse or Jenkins. To take a closer look, here is a diagram outlining the interface between Cloud Foundry and the underlying cloud providers. It starts with BOSH, which will create the infrastructure. On Azure, you usually use Terraform or an ARM template to create BOSH, and you use exactly the same template to build it on Azure Stack. Then BOSH talks to the Azure CPI, and the CPI talks to the ARM API of the underlying cloud provider. With the same CPI, you can talk to both Azure and Azure Stack. Also, with Azure Stack you have a similar concept for HA: you use availability sets. You can scale using the local services on Azure Stack. For example, you can use Azure Stack blob storage to replace the Cloud Controller blobstore, and you can use the MySQL service for the CCDB too. The authentication is a little bit tricky. If you want to use just the private cloud, you can use Active Directory Federation Services, ADFS. This is supported on Azure Stack with Cloud Foundry too. If you want it integrated, where you access Azure while you are on-premises with Azure Stack, you can use AAD, which is the Azure Active Directory service. Both are supported with Cloud Foundry and Pivotal Cloud Foundry. So this is the operator's experience, basically the same as on Azure. For developers: usually, developers only use the cf CLI to interface with Cloud Foundry and their applications. However, if you want to access the services of a cloud provider, for example the data services or file shares, you need to write code specific to that cloud provider. And here, between Azure and Azure Stack, you can use the same Cloud Foundry tools. For example, you can use the Open Service Broker for Azure, which we call OSBA, and that works across Cloud Foundry, Kubernetes, and OpenShift. That means you can use it for PAS and PKS. And then the service broker will connect to services on Azure.
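As a rough illustration of how the same CPI targets both clouds, the BOSH director's CPI configuration differs mainly in the environment name and the endpoints. The property names below follow the bosh-azure-cpi-release convention, but the domain and credential values are placeholders, not taken from the talk:

```yaml
# Sketch of bosh-azure-cpi properties targeting public Azure (placeholder values):
azure:
  environment: AzureCloud
  subscription_id: <subscription-id>
  tenant_id: <tenant-id>
  client_id: <service-principal-client-id>
  client_secret: <service-principal-secret>
---
# The same CPI pointed at an Azure Stack stamp instead:
azure:
  environment: AzureStack
  azure_stack:
    domain: local.azurestack.external   # stamp-specific domain
    authentication: AzureAD             # or ADFS for the disconnected case
```

Everything above the endpoint and credential level, such as the BOSH and Cloud Foundry manifests themselves, stays the same on both clouds.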
You can do that from Azure Stack, or you can utilize OSBA to connect to services on Azure Stack too. So both are okay, and you don't need to change your code. If you want to connect to a file share or file services, you can use the SMB volume service, and this also works across Azure and Azure Stack. So before we go through the demo, there are two scenarios. They are implementations of the use cases we just mentioned. One is the hybrid CI/CD scenario. This is what Thomas will demo. The goal is to have one pipeline and one single source code base, and then you can run the application on both Azure and Azure Stack. You can first test on-premises and then push to the public cloud, or you can go to the public cloud first, test there, and then push on-premises; both are supported. The second scenario is actually also common today; some of our PCF on Azure customers have already implemented it. You can use the geo-distributed scenario, where a global traffic manager routes traffic to your application in different locations. Before, you could route to different regions in the public cloud. Now, with Azure Stack support, you can route either to the on-premises platform or to the public cloud. And the developers still use the same application: you don't need to change the code when you push on-premises and to the public cloud. This hybrid solution works across different languages; it's totally language-independent. But we do want to use .NET to showcase this process, for two reasons. First, .NET has a great ecosystem working with Azure, and now with the Azure Stack support, you can use the same .NET SDK for your program, working across Azure and Azure Stack. And second, thanks to the partnership between the community, Pivotal, and Microsoft, .NET now gets a first-class seat on Cloud Foundry, with a lot of great features released. So as background, there are two implementations of .NET.
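To make the OSBA flow concrete, this is roughly what provisioning a backing service looks like from the developer's side. These commands need a live Cloud Foundry with OSBA registered, and the service name `azure-sqldb`, the plan `basic`, and the app name `eshop` are assumptions that depend on how the broker is configured:

```shell
# Provision a SQL database through the Open Service Broker for Azure (OSBA);
# the same commands work whether the broker points at Azure or Azure Stack.
cf create-service azure-sqldb basic eshop-db

# Bind it to the app; credentials arrive via VCAP_SERVICES, so the
# application code doesn't change between the two clouds.
cf bind-service eshop eshop-db
cf restage eshop
```

The point of the abstraction is that only the broker's configuration knows which cloud is behind the service; the developer workflow is identical.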
The first one is .NET Core. This is the open-source, cross-platform implementation that works on Linux, Windows, and Mac. On Cloud Foundry, it runs on the Linux stack. So underneath, it uses the .NET Core buildpack and utilizes the same runtime as other Linux-based applications like Java and Go. Basically the experience is the same: .NET Core gives the same experience as the other languages. The second one is the legacy .NET Framework, which is more mature. If you have existing apps, you can continue to use it; it continues to be supported and updated. It does require the Windows stack, so it requires additional work. For example, BOSH was updated to be able to deploy Windows as a stemcell for Cloud Foundry. And the Cloud Foundry runtime was also extended to be able to create a Windows container on the Windows stemcell and host .NET Framework applications. Beyond that, additional .NET- and Windows-specific features are also supported, so .NET Framework developers will feel at home. For example, there are things you're familiar with, like the registry in your container, and the event logs are forwarded to syslog. So you can use those familiar features with .NET applications running on Cloud Foundry. Despite the underlying differences, and it was really a lot of hard work to make the Windows container work, what is presented to the developers is a consistent experience. So you can see here, this lists the majority of the experience for the developers. For logging, you can still leverage the Loggregator, and you can still use isolation segments for security. You can still SSH into the container, whether it's a Linux or a Windows container. Both can utilize the Open Service Broker and the SMB volume service to access services and file storage. And you can still use Steeltoe as the microservice framework; it supports both .NET Core and .NET Framework. So it's a great platform for .NET developers.
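As a sketch of the difference between the two flavors, pushing them to Cloud Foundry differs mainly in the stack and buildpack you select. The app names are placeholders, and the exact Windows stack name (for example `windows2016` versus `windows`) depends on your platform version:

```shell
# .NET Core runs on the Linux stack with the .NET Core buildpack:
cf push eshop-core -b dotnet_core_buildpack

# .NET Framework apps run in a Windows container on a Windows stemcell,
# using the HWC (Hosted Web Core) buildpack:
cf push eshop-framework -s windows2016 -b hwc_buildpack
```

Everything after the push, such as logging, SSH, and service bindings, is meant to look the same for both.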
Before you implement a hybrid application, there are several areas you need to consider. The first is the platform capability: what are your requirements for latency and scalability? The second is app placement limitations. Some applications are required to stay in a certain region; some are very flexible and can be on either the public or the private cloud. So you need to arrange that. The third is data storage and processing: where should you store the data? Ideally, you can store the data in a public cloud database service, but keeping it just on the private cloud is also supported. Then there is service access. Azure has most of the services already released and matured, so it's a common practice, if you need complicated, advanced services like artificial intelligence or IoT, to connect to Azure, while some services can also stay on-premises. You also need to create separate authentication and security policies for the different locations. And ideally you have one monitoring place, so you can monitor both areas in one spot; Azure monitoring just got beta support on Azure Stack too. To start with, there are several steps to writing a hybrid application between Azure and Azure Stack. Basically, you can maintain a single source code base, but you need to create separate authentication. You need to create an access account in Azure; it's called a service principal, and you have the same concept on both Azure and Azure Stack. You can assign role-based access control to the service principal, and you will use that to run your application. If you need to access local cloud resources, for example if you create a storage account, you will utilize the SDK. The SDK is also shared between Azure and Azure Stack; the only things you need to remember are that the version is different and the endpoint is different. So we will leverage the profile, which I'll mention.
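Creating the access account described here can be sketched with the Azure CLI. The principal name, scope IDs, and the Azure Stack endpoint below are placeholders, and on Azure Stack you would first point the CLI at your stamp's endpoints:

```shell
# Create a service principal for the application to run under:
az ad sp create-for-rbac --name "eshop-hybrid-sp"

# Grant it role-based access control scoped to a resource group
# (the IDs shown are placeholders):
az role assignment create \
  --assignee <appId-from-previous-step> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>

# For Azure Stack, register and select the stamp's cloud first, e.g.:
# az cloud register -n AzureStack \
#   --endpoint-resource-manager "https://management.local.azurestack.external"
# az cloud set -n AzureStack
```

The same two steps, create the principal and assign it a role, apply on both clouds; only the endpoint the CLI targets changes.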
And you can use OSBA and the SMB volume service to access services and storage. One of the important things here is that you need to dynamically determine the platform and then load the profile. The Azure API profile is an easy way to help you load the correct API versions for your hybrid application. One such profile is the hybrid profile. With the hybrid profile, you specify which version you want to use, and it will automatically load the versions compatible between Azure and Azure Stack, so you don't have to figure out the compatibility between the Azure and Azure Stack APIs yourself. Once you have the application ready, you can develop a CI/CD pipeline. I'll use Azure Pipelines as an example here. If you want to utilize Azure Pipelines, you just need to create build and release definitions for whatever kind of build and release you want. Then, to run the tasks, we use a building block called an agent, which is similar to the BOSH agent: it's the software that will execute the jobs for the build and release. And there are separate agents for Azure and Azure Stack. For Azure, you can utilize the hosted agent, which is a managed agent with automatic scaling and updates that runs on Azure. For Azure Stack, you can have a private agent that runs on Azure Stack. And with those different agents, they will execute the same pipeline against the Azure and Azure Stack platforms. So that is the simple process for building a hybrid application between Azure and Azure Stack. Now I will hand over to Thomas, who will build the .NET application running in the hybrid scenario with Azure and Azure Stack. Okay, thanks, Ning. So like Ning mentioned, the hybrid cloud scenario is very important, and it can be used in different situations. For example, you may have workloads that you only want to put on-premises, and also some workloads you want to offload to the public cloud.
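The "determine the platform, then load the profile" step can be sketched in a few lines. Everything here is illustrative: the endpoint URLs and the `2018-03-01-hybrid` profile name are assumptions standing in for real Azure API profile names, and a real app would read the cloud name from configuration rather than hard-coding it:

```python
# Sketch: pick the ARM endpoint and API profile at startup so the same
# application code can run against Azure or Azure Stack.
AZURE_PROFILES = {
    # Public Azure can track the latest API versions.
    "AzureCloud": {
        "resource_manager": "https://management.azure.com/",
        "profile": "latest",
    },
    # Azure Stack endpoints are stamp-specific; a hybrid profile pins
    # API versions known to exist on both clouds.
    "AzureStack": {
        "resource_manager": "https://management.local.azurestack.external/",
        "profile": "2018-03-01-hybrid",
    },
}

def load_platform_profile(cloud: str) -> dict:
    """Return the ARM endpoint and API profile for the target cloud."""
    try:
        return AZURE_PROFILES[cloud]
    except KeyError:
        raise ValueError(f"unknown cloud: {cloud}")

print(load_platform_profile("AzureStack")["profile"])  # 2018-03-01-hybrid
```

The idea is that all cloud-specific knowledge is confined to this one lookup, so the rest of the code base stays identical across the two platforms.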
And also, you may want to use the public cloud as a backup site for your company's business, or you may want to burst your workload to the public cloud in some hot season. So yeah, there are different scenarios where your company might consider hybrid cloud. Okay, so for this demo, I will use public Azure and Azure Stack as the hybrid cloud. For sure, you don't have to use Azure Stack and public Azure; you can also use other on-premises options as well as other public clouds, for example VMware for on-premises and GCP for the public cloud. Okay, so first I want to introduce the context for this demo. This is the CI/CD pipeline for a .NET Core application. The .NET Core application is called eShop. eShop is actually a very famous .NET sample application developed by Microsoft. And I will use a Concourse pipeline. Here, the Concourse pipeline is built on top of Cloud Foundry, and I have built two Cloud Foundry deployments, one on public Azure and one on Azure Stack: one for the public cloud and one for on-premises. This Concourse pipeline includes the following jobs. First, it runs some unit tests on the code. After the unit tests pass, it deploys the application on both the public cloud and Azure Stack. And finally, it triggers load tests for the application on the different platforms: this job is for the load test on public Azure, and this one is for Azure Stack. Once all of that is done, that means your code is ready; then this job promotes the new version on public Azure, and of course you also have another job to promote your release on Azure Stack. Okay. So here, I'll just trigger a new pipeline run. Because it will take some time, I will introduce Azure Stack and public Azure. In case someone is not familiar with Azure Stack: Azure Stack is the Microsoft on-premises solution. It's quite consistent, with a similar management experience to public Azure.
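The job sequence described here could be sketched in Concourse pipeline YAML roughly like this; the resource and job names are invented for illustration, not taken from the actual demo pipeline:

```yaml
jobs:
- name: unit-test
  plan:
  - get: eshop-source
    trigger: true
  - task: run-unit-tests
    file: eshop-source/ci/unit-test.yml

- name: deploy-azure          # a sibling deploy-azure-stack job mirrors this
  plan:
  - get: eshop-source
    passed: [unit-test]
    trigger: true
  - put: cf-azure             # a cf resource targeting the public Azure foundation
    params:
      manifest: eshop-source/manifest.yml

- name: load-test-azure
  plan:
  - get: eshop-source
    passed: [deploy-azure]
    trigger: true
  - task: load-test
    file: eshop-source/ci/load-test.yml
```

The `passed:` constraints are what chain the jobs together so that deployment only runs after the unit tests succeed, and promotion only runs after the load tests succeed.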
So you can see this is the Azure Stack portal. The URL is actually decided when you install and deploy your Azure Stack. Okay. You can see the portal is very similar to public Azure. The only difference is that because Azure Stack just went GA, it has fewer services; some services might not be available on Azure Stack yet. But moving forward, Microsoft will enable more and more services on Azure Stack. Okay. So here, this is the Cloud Foundry deployment, a very typical Cloud Foundry deployment. It consists of different compute resources, including the VMs, the network, and also the storage. Here, I use managed disks for public Azure and standard disks for Azure Stack. Okay. So this is the public Azure portal. You can see the resources and the management experience are very similar. And this application, the .NET application, is using what we call OSBA, the Open Service Broker for Azure, on both Azure and Azure Stack to create a SQL database. Let's check the job. You can see it's now deploying the application to both Azure Stack and Azure. Because it's a .NET Core application, it needs to install all the dependencies, which takes some time. Okay. This is the cf command line, which an IT operator typically uses to manage a Cloud Foundry cluster. You can actually use the cf command line to manage both the Cloud Foundry cluster on Azure Stack and the one on Azure. And I have already enabled the service broker on both platforms. So we have already enabled the Cosmos DB, MySQL, and PostgreSQL services on both Azure and Azure Stack. Okay. It seems the deployment on public Azure is already done, as you can see. So now you can actually use this URL to access the application. Yeah, it's already deployed on public Azure, and once this gets done on Azure Stack as well, I will promote the new version on Azure.
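Managing the two foundations from one cf command line is just a matter of switching API endpoints. The endpoint URLs below are placeholders, and the broker-provided service names are assumptions that depend on how OSBA is configured:

```shell
# Target and log in to the Cloud Foundry deployment on public Azure:
cf api https://api.azure.example.com
cf login -u operator

# Switch to the Cloud Foundry deployment on Azure Stack:
cf api https://api.azurestack.example.com --skip-ssl-validation
cf login -u operator

# Enable the broker-provided services mentioned in the demo:
cf enable-service-access azure-cosmosdb
cf enable-service-access azure-mysql
cf enable-service-access azure-postgresql
```

From the operator's point of view, the two foundations then behave identically; only the API endpoint being targeted differs.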
The load test is also done, and now I can promote the new version to public Azure. Yes, you can see the application deployed very successfully. And we also use a traffic manager in front of Azure and Azure Stack; it does automatic load balancing. For my demo, it's configured using IP ranges: if the IP falls within a certain range, then it routes to your Azure Stack deployment. And my configuration is that if it falls into the Europe region, then it routes to public Azure. So okay, that's a very typical setup for the burst workload scenario. For example, your company is using on-premises infrastructure, for example Azure Stack, to host an application, and in some season you want to burst your workload to the public cloud; then you can use this approach to also push the application to public Azure for those hot seasons. I think the pipeline should be done already. Yeah. You can see the job finished successfully, and finally I can also promote the new version on Azure Stack as well. By doing this, both platforms will now have the latest version. Okay, so here are the details for this job. You can see it is actually using the cf command line to log in, take the latest release, and then push it to the Azure Stack environment. And finally it configures the router to use the latest version. Okay. They are actually the same application, just on different platforms. That's all for the demo. This demo process is also available on GitHub, so if you want to try the CI/CD process on Azure and Azure Stack, and the traffic manager configuration, you can follow our instructions on GitHub too. Yeah. So Cloud Foundry on Azure is the organization name, and the source code of the application is there. And we also have the Open Service Broker API implementation, which is also on Microsoft's GitHub. Yeah, so thanks to Ning and Thomas. Give them a round of applause.