Hello and good morning. My name is Bhavani Alapragada, and I run DevOps for one of the product units at Nasdaq. Many of you have heard of Nasdaq, and Nasdaq is normally associated with the stock exchange, but Nasdaq is so much more than a stock exchange. We have market technologies: we own the software behind our trading platforms, and behind the stock exchange itself, which was the first completely electronic stock exchange. We have listing services, and then we build a range of financial products, all the way from products that enable secure collaboration between boards of directors, to products that help with SEC filings, to products for investor relations and the C-suite in general. Nasdaq has grown both through acquisitions and through a lot of internal development, so if you look at the spectrum of products and technologies we have, it's pretty daunting. I'm probably dating myself here, but we have a couple of applications written in ColdFusion. Yes, true story. We are well invested in the Linux and Java stack, but we are also very heavily invested in .NET and the Windows operating system. So when, as an organization, we consciously decided to move towards containerization, there was the .NET question. I was in a meeting when somebody said, "Oh, just containerize it." Yeah, sure, why not? .NET was definitely our challenge, and we wanted to containerize the .NET stack without doing a lot of refactoring of our applications; we wanted the really optimal, efficient path. I will not walk you through the torturous path we took to shortlist the available vendors, but we did land on Pivotal Cloud Foundry. We spoke to Pivotal and asked them to do a POC for us.
That POC was supposed to cut across all of our functions: cloud infrastructure, information security, the traditional data center infrastructure, the platform apps team that helps deploy our applications, the development group, the test group, and every business unit too. I didn't think they would ever talk to us again after the kind of POC we requested, but thankfully they stuck with us. And the beauty of it was the dimensions along which we evaluated PCF. One was that Nasdaq and its applications deploy across all three major cloud providers, so whatever solution I picked, I wanted it to give both the development group and the operations group a similar experience. You don't want to relearn everything every time you go to a new cloud provider. The other thing is that, as a stock exchange, our security posture is pretty strict. There is a joke at Nasdaq that the infosec team does not trust the infosec team. So it was extremely important that we get their buy-in, and the cloud infrastructure team's as well. More than that, I also wanted a solution that could provision my infrastructure and deploy my .NET apps in one fell swoop. Earlier we were using Terraform for our provisioning automation and other conventional deployment tools for the application deployment automation; we wanted one tool that would do both. And of course there was the .NET paradigm and the challenge of doing it without too much refactoring and re-architecting of the entire application. This is where we landed on PCF. There are solutions out there that I'm sure can do pretty well on some of these dimensions, but what came out of the POC with Pivotal was that they treated themselves as strategic partners. They were not somebody who was going to sell us software and walk away.
And in my experience, and I've done this a long time, the solutions that work best for any organization are the ones where the vendor aligns themselves with the success of the company. That is something we liked. In fact, I was joking with Madhav the other day: we almost treat him, and I'm sure Pivotal won't like this, we sometimes forget that he works for Pivotal, because he works so closely with our development group. Next slide, please. This collaboration between us worked extremely well. Cloud Foundry is open source, and it is super important to us that we contribute back to that community, and we were able to do that. One cool example of what we were able to do in partnership was how quickly we got the Windows 2016 stemcell certified on AWS. Pivotal was trying it, Nasdaq was trying it separately, and it didn't work very well. Then we joined forces, went to them together, and it worked. So Madhav will walk you through some of the details of both the PCF solution and the pipelines we were able to implement at Nasdaq. Thanks Bhavani, thanks for the introduction. Yes, the partnership has been great; the stemcell is just one example. When we started this discussion, the question came up: containers for .NET, is that really possible? We said yes. But then, how? The business requirement given to us was: take my .NET application, run it in the cloud for me, I don't care how. I think they borrowed that from Onsi's haiku. And we actually followed that model: whether it's a .NET application or a Java application, we provide the cf push experience to the developers. You cf push your .NET application onto the cloud; it doesn't matter where the cloud is, where PCF is running. The platform will containerize the application for you. It will secure the containers for you.
And it will schedule the container workload across the Diego cells that are already installed in the platform. When the application artifact, your .NET build artifact, is pushed to PCF, the Cloud Controller grabs it, stores it in the platform blobstore, and the staging process starts. Staging is where the marriage of the artifact and the buildpacks takes place. In this case we have taken a multi-buildpack approach: the HWC buildpack for legacy .NET Framework applications, plus the AppDynamics buildpack for monitoring the applications. The droplet created out of that goes back to the platform blobstore, and based on the declared desired scale of the application, Diego starts scheduling the workloads across multiple Diego cells. It picks each Diego cell optimally based on the workload already on that cell, places the droplet there, and containerizes it. That's how the OCI-compliant Windows container is created on the Diego cell. The number of instances, as I said, is based on the declared desired state. And it will not only containerize the application; it will also monitor the application after the container is started. It will create an HTTP endpoint for you, secure that endpoint with TLS termination, and make sure there are four layers of high availability, just as for any other application that runs on PCF. So from the developer's point of view and the operator's point of view, the model is the same. The difference is that here, for a legacy .NET Framework application, instead of creating a Linux container we create a Windows container. The Windows container obviously needs to run on the Windows operating system, which is the feature of PASW, PAS for Windows. PAS for Windows automatically creates the container image on top of the container base image.
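The placement step described here, Diego choosing cells based on the workload already on them, can be illustrated with a small Python sketch. This is only the intuition, not Diego's actual auction algorithm, which also weighs memory, disk, and availability-zone balance:

```python
# Toy sketch of Diego-style placement: for each desired app instance,
# pick the cell currently running the fewest containers.

def place_instances(cells, desired_instances):
    """cells: dict of cell name -> current container count.
    Returns a list of (instance_index, cell_name) placements."""
    placements = []
    load = dict(cells)  # copy so we can track placements as we go
    for i in range(desired_instances):
        # choose the least-loaded cell right now
        target = min(load, key=load.get)
        placements.append((i, target))
        load[target] += 1
    return placements

placements = place_instances({"windows-cell-1": 3, "windows-cell-2": 1}, 3)
# the first two instances land on the emptier cell, the third balances out
```

The real scheduler is an auction among cells, but the effect is the same: declared desired state in, balanced container placements out.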
And it adds a layer on top of it; we'll take a look at that later. But the apps are deployed with a single command: cf push. That experience is also true for modern .NET applications written in .NET Core. Again, the cf push experience works, but here we have the option of running .NET Core applications in Linux containers. Beyond having the application containerized, with log streaming and health monitoring taken care of, the advantage of Linux containers is the Envoy proxy that sits right inside the container: an Envoy sidecar that provides app-instance mTLS. So with this in mind, we have both kinds of applications running on the platform today. We have .NET Framework applications running on the platform; that is how we started the replatforming effort. We had to make a few changes. As Bhavani said, she didn't want to go through a whole lot of changes to make the applications work on the platform. For the initial set of internal-facing applications, we were able to replatform them with some tiny changes. For example: don't write anything to disk; write logs to the console; don't keep application state within the application instance. Those were the kinds of changes made during replatforming. And just last Friday we went live with... Yes. One of our biggest strategic applications is based in and around investor relations, with a spectrum of users going all the way from CFOs to individual traders. That went live on PCF, and we worked for it. The going-live itself was not actually the breaking news; the breaking news was that Heather, my manager, and I barely heard about it. It was after the fact. Anything that quiet during a deployment, I am a fan of. So that's about it. In terms of replatforming...
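The replatforming rules just listed (write logs to the console, keep no state in the instance) are classic twelve-factor constraints. Here is a minimal, language-neutral sketch of the pattern; the real apps are .NET, and the `SessionStore` below is a hypothetical stand-in for an external backing service such as Redis:

```python
import sys

def log(message):
    # Never write to a local log file; the platform streams stdout.
    print(message, file=sys.stdout)

class SessionStore:
    """Stand-in for an external backing store. Because state lives
    outside the app instance, any instance can serve any request."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

store = SessionStore()
store.put("user-42", {"cart": ["AAPL"]})
log("session saved for user-42")
```

The point is the shape of the change, not the code: once logs go to stdout and state goes to a backing service, instances become disposable and the platform can scale and reschedule them freely.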
All the new applications coming onto the platform will now follow the .NET Core model, which is the modern way of developing .NET applications. So let's take a look at what the container looks like for Windows. You have heard about droplets and buildpacks. Have you? How many of you? Okay. The same concepts apply here: you have a buildpack, and you have a droplet created by marrying the buildpack to the application binaries. But the layering of the rootfs is slightly different. Instead of a Linux rootfs, you have the Windows Server Core container base image. Here I'd like to mention that Microsoft has two kinds of containers, Hyper-V containers and Windows Server containers. We use Windows Server containers, which means the kernel is shared across all the containers running on a Diego cell. So the Windows Server Core base image is the bottom layer, and on top of it new features are added to make the .NET application work, such as the HWC .NET modules, URL Rewrite, and HTTP compression. Both of those modules are used today in these applications. That forms the entire rootfs, which is sort of a mini file system, but it cannot run on its own: it provides the basic functionality for the container to work, but it still needs the shared kernel. That is why these Windows containers run on a Windows Diego cell, which has the kernel. So that's the Windows container and how it's manufactured. Now let's talk about the automation. That's the cool part of this whole story. We have automation going all the way from platform installation, platform upgrade, and operating system upgrade to application deployment. We use Concourse for that. Why Concourse? Because it provides a unified solution for all the things I just talked about, and we create Concourse pipelines.
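The Windows rootfs layering described above can be summarized as an ordered stack. The layer names here are paraphrased from the talk, not official image tags:

```python
# Bottom-to-top layers of a Windows container on PAS for Windows,
# as described in the talk (names are descriptive, not real tags).
windows_container_layers = [
    "windows-server-core-base-image",  # shared-kernel base (Windows Server containers)
    "hwc-and-iis-modules",             # HWC .NET modules, URL Rewrite, HTTP compression
    "droplet",                         # app binaries married to the buildpack
]

def bottom_layer(layers):
    """The base layer everything else sits on."""
    return layers[0]
```

Because these are Windows Server containers rather than Hyper-V containers, nothing in this stack includes a kernel; that is supplied by the Windows Diego cell underneath.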
It's a pipeline-heavy solution, not a configuration-heavy solution, and Concourse is really great for that. So what kind of automation do we have? Well, we've got pipelines that create other pipelines, which create other pipelines. Why? Because we have something called the master pipeline manager, which sits at the root of all the pipelines. The master pipeline manager is nothing but a Concourse pipeline, but it is called a pipeline manager because it creates other pipelines. It creates two kinds of pipelines. The first kind, on the left-hand side in the purple box, are the PCF installation and deployment pipelines. Whether it's an operating system upgrade, a PCF tile upgrade, or a new PCF tile installation, those pipelines are meant for environment setup. On the right-hand side, for each team that onboards onto PCF, there is one team pipeline manager. A team represents one application group; for example, we have IRI, Investor Relations Insight, as a team pipeline manager. There are multiple modules, multiple applications, within that pipeline manager, each represented by an individual pipeline per PCF environment. So last Friday, when we went live with a few .NET Core applications, each of those applications was represented here by one application pipeline per foundation: one for Sandbox, one for Devint, which is a higher environment, and one for QC Prod, which is the final environment. So let's take a look at the PCF automation repos. The PCF automation uses a Concourse setup running in a separate VPC, with its own TFS repos configured right next to the Concourse setup.
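The fan-out just described, one pipeline per application per foundation, multiplies out mechanically. A toy model in Python; the team, app, and foundation names are illustrative, not Nasdaq's actual identifiers:

```python
def generate_pipelines(team, apps, foundations):
    """One Concourse pipeline per application per PCF foundation,
    all stamped out by the team pipeline manager."""
    return [f"{team}-{app}-{env}" for app in apps for env in foundations]

pipelines = generate_pipelines(
    "iri",
    apps=["web", "api"],
    foundations=["sandbox", "devint", "qc-prod"],
)
# two apps across three foundations -> six pipelines
```

This is why the pipelines-that-create-pipelines design pays off: nobody hand-writes that cross product, the pipeline manager regenerates it from the YAMLs in the repo.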
It runs on AWS, and the TFS 1 repo is used by the master pipeline manager to store the YAMLs that are used to create other pipelines, in this particular instance the platform automation pipelines. TFS 2 is used by the second-level pipelines to actually set up tiles. S3 is the storage for JSON files, which we'll talk about, and CredHub is used for secrets. It's important to note that none of the pipelines you will see use plain-text credentials or plain-text secrets. Concourse uses CredHub to access the credentials needed to set the pipelines, as well as the credentials used by the pipelines themselves. Everything comes from CredHub; there is no plain text in any pipeline. There are also internal repositories used to maintain the binaries; that's what the feeds URL is. So let's take a look at how this automation works. This is the cool part, and it's cool because we managed to abstract the platform automation, usually really geeky stuff, up to a business-process-workflow level, so that business users can now understand what's going on, and they want to adopt that model. At the top we have the three foundations we discussed earlier: PCF Sandbox, PCF Devint, and PCF QC Prod. On the right-hand side we have three pipelines. One is the master pipeline manager we discussed earlier. Then there is the promote pipeline, which promotes something from a lower environment to a higher environment; we'll take a look at that. And the last one is the actual pipeline to set up something on a PCF foundation, whether it's the PCF foundation itself, a particular tile on a foundation, or a stemcell upgrade. The master pipeline manager watches the TFS 1 repo, where, as we discussed, the pipeline configuration for the other pipelines is stored. This happens all the time.
It is continually monitoring it. If the platform operator wants to add a new tile or make changes to a tile, he goes to the TFS 1 repo and makes the changes to the YAMLs. Once a change is made, or a new pipeline or YAML is added there, the master pipeline spins off three pipelines for each YAML. For example, say I have to upgrade PCF, or the PAS tile, or install a new PAS isolation segment. For that, I add a new file to TFS 1, and for that installation the master pipeline manager creates three pipelines: one for Sandbox, one for Devint, and one for QC Prod, with promotion pipelines in between to take the installation from lower to higher environments. Now, the interesting part is that each of these pipelines, as soon as it is created, watches its respective S3 bucket. What is the function of the S3 bucket? That's where the platform operator puts a JSON file, pretty much just key-value pairs, saying: I want to change the PAS version to, let's say, 2.4.x; here is the SHA for that PAS tile; and I want to use this stemcell version, with this SHA for the stemcell. That's it. The JSON contains nothing else. Once the JSON is uploaded to S3, because the Sandbox pipeline is watching that bucket, it immediately kick-starts its work. What does it do? It picks up the pipeline configuration from TFS 2 and installs or upgrades, whatever that action is, on the Sandbox foundation. Everything clear so far? It picks up the required upgrade and the required configuration from TFS 2, gets the secrets from CredHub, and upgrades the tile on the Sandbox foundation. And when it upgrades the Sandbox foundation, we have set the scale such that all of this is done with zero downtime to applications, whether it's a PAS upgrade or a PASW upgrade.
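The trigger JSON the operator uploads is deliberately tiny: tile version and SHA, stemcell version and SHA, nothing else. It might look something like this; the field names are my guess, not Nasdaq's actual schema:

```python
import json

# Hypothetical shape of the trigger file the platform operator
# drops into the Sandbox S3 bucket to kick off an upgrade.
trigger = {
    "tile": "pas-windows",
    "tile_version": "2.4.x",
    "tile_sha": "abc123...",          # placeholder checksum
    "stemcell_version": "2019.4",
    "stemcell_sha": "def456...",      # placeholder checksum
}

payload = json.dumps(trigger)   # what actually lands in the bucket
restored = json.loads(payload)  # what the watching pipeline reads back
```

Keeping the trigger this small is the key design choice: all the real configuration lives in TFS 2 and CredHub, so the file in S3 is purely an intent signal that the watching Concourse pipeline can act on.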
Or the operating system upgrade itself. All of that happens, thanks to BOSH, with zero-downtime upgrades. Once that is done, the platform operator makes sure the platform is healthy after the upgrade, and the developers check that the applications are healthy. This is a mandatory process we follow to make sure we are not breaking anything. Once both of them have certified it, the platform operator kicks off the promotion pipeline. The promotion pipeline pretty much copies that JSON from the lower environment's S3 bucket to the higher environment's S3 bucket, in this particular instance to the Devint S3 bucket. The change has now been made to the Devint bucket, and the Devint pipeline is already watching that bucket. So what happens? A new file has come in; the Concourse pipeline watching it triggers the action, which is whatever that pipeline is supposed to do: pick up the tile, pick up the tile configuration from the TFS 2 repo, and upgrade that tile, now on the Devint foundation. Make sense? The same process repeats. After the upgrade, the platform operator does the routine check, the developer does the routine check, and once everything is good, the platform operator just clicks the button on the promotion pipeline. Once the promotion pipeline is kicked off, as I said earlier, it copies the JSON, and the process continues right through the production deployment. How cool is that? Platform automation, OS upgrade automation, all of it managed like a BPM process. And this is not all; it's just the beginning of the story, because this only covers the platform automation.
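The promotion pipeline itself does very little: it copies the trigger JSON from one environment's bucket to the next, and the pipeline watching the destination bucket does the rest. A sketch with in-memory dictionaries standing in for the per-environment S3 buckets (the bucket and file names are illustrative):

```python
# In-memory stand-ins for the per-environment S3 buckets.
buckets = {
    "sandbox": {"upgrade.json": '{"tile_version": "2.4.x"}'},
    "devint": {},
    "qc-prod": {},
}

PROMOTION_ORDER = ["sandbox", "devint", "qc-prod"]

def promote(buckets, key, source):
    """Copy the trigger file one environment up the chain. The
    Concourse pipeline watching the destination bucket then runs
    the actual upgrade; this function only moves the signal."""
    dest = PROMOTION_ORDER[PROMOTION_ORDER.index(source) + 1]
    buckets[dest][key] = buckets[source][key]
    return dest

promoted_to = promote(buckets, "upgrade.json", "sandbox")
```

Because each environment's pipeline only ever reacts to its own bucket, promotion is a single, auditable button press rather than a re-run of the whole deployment with different parameters.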
Now, the same tools, the same unified solution, are used to deploy applications and promote them from lower environments to higher environments. Here we talk about personas, because there are multiple players interacting with each other. The first persona is obviously the developer. Guess what the developer does? He writes code, and pushes the new code to the repo. Then there is the release engineer, who is responsible for the master pipeline for his particular team. In this case, a release engineer did the one-time task of creating the scaffolding for the pipeline manager for his team, which is IRI. That pipeline manager creates the app-specific pipelines; we'll take a look at that. The platform operator is the same person who did the platform upgrades we talked about on the previous slides. He's responsible for the PCF setup, the Concourse setup, the master pipeline setup, and the PowerShell scripts used to create the other pipelines. This is where the platform operator at Nasdaq has made some contributions. His name is John Sherry, and I want to thank him; he's made some contributions, and they're in the process of contributing them back to the community. So the platform operator maintains all of that, and then there is an application operator, who manages the go-live tasks. We'll take a look at what that means, but he manages the go-live tasks and takes care of the APM events, log events, and all of that. So let's look at the personas and how they interact. What we have been able to do, beyond just the technology and the pipelines, is also the culture and process change. That's very important, because we want to make this work at scale. Right now it's working for one team, but we want to make sure that when new applications onboard onto PCF, they don't have to go through the learning curve.
And that is where the contract between personas is so crucial; that is where people, processes, and technology come together. The contract between the platform operator and the release engineer is this: the platform operator tells the release engineer, "I expect your repo to be in this structure. As long as your repo is in this structure, everything will be automated." What does the release engineer do? He creates a TFS repo in the structure he's been given and hands the repo URL to the platform operator. The platform operator says, "Okay, you've done the right job. I can take zero action, and your pipelines will be created by my master pipeline. Here are my vars files for PCF deployment that you will need when you push your application, and here are the CredHub locations you will use in your pipeline." There's also a contract between the developer and the release engineer: where does the release engineer receive the build artifact, and how is the application supposed to be pushed? The build artifact is received on S3, and the deployment manifest, which is essentially the instruction for the cf push, is also given by the developer, who puts it in his own pipeline. With that contract in place, every time a new team or a new group of applications comes on board, they have a process to follow; as long as they adhere to the process, everything else is automated. We'll talk about the repos in the context of the pipelines, but there are three different repos here. The first and second run in the VPC of the Concourse setup, because they're used by the master pipelines, the pipeline managers. The third repo, TFS 3, runs in the Nasdaq data center; that's where the developers push new code. Nasdaq's S3 is the location where the build artifacts are stored.
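The repo-structure contract boils down to: put your YAMLs in the agreed layout and automation takes over. A toy validator of that idea in Python; the expected paths here are invented for illustration, since the talk doesn't specify the actual layout:

```python
# Hypothetical contract: each app needs one pipeline YAML per foundation.
REQUIRED_PATHS = [
    "pipelines/{app}/sandbox.yml",
    "pipelines/{app}/devint.yml",
    "pipelines/{app}/qc-prod.yml",
]

def missing_paths(app, repo_files):
    """Return the contract paths the team's repo does not yet contain.
    An empty result means the master pipeline can take over with
    zero manual action from the platform operator."""
    required = {p.format(app=app) for p in REQUIRED_PATHS}
    return sorted(required - set(repo_files))

gaps = missing_paths(
    "web",
    {"pipelines/web/sandbox.yml", "pipelines/web/devint.yml"},
)
```

The value of a machine-checkable contract is exactly what the speakers emphasize: onboarding a new team becomes a structural check, not a negotiation.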
Let's take a look at the actual automation. As we discussed, on the left-hand side we have the three foundations at the top: Sandbox, Devint, and QC Prod. On the right-hand side we have the master pipeline manager, the team pipeline manager, which is for a particular application group, and the individual app-specific pipelines that are part of that team. And then there is the business unit; we'll talk about that. The master pipeline manager, as we discussed earlier, is continuously watching the TFS 1 repo, its own repo, where it expects any team to have their YAMLs declared for new pipelines to be created. This doesn't happen on a day-to-day basis; it's just watching. So the release engineer creates a new pipeline structure. Say a new team comes on board tomorrow and wants to take advantage of this: they create a bunch of YAMLs in the directory structure defined by the contract, and as soon as they do, the master pipeline kicks in and creates the team pipeline manager for that team. The team pipeline manager itself watches another repo, the one for the individual application pipelines it creates. When application-specific YAML changes are made to that repo, the team pipeline manager spins off three pipelines for each application: one for Sandbox, one for Devint, and one for Prod. The Sandbox foundation pipeline watches the S3 bucket to receive the build artifact. The developer pushes code to the local TFS repo, which builds the .NET artifact and pushes it to S3; the Sandbox pipeline picks it up and cf pushes the code to the Sandbox foundation. It's all automated. The developer checks the app, gives the go-ahead, and promotes it; on promotion, the next pipeline kicks in and cf pushes the code to Devint. Sorry about that. I had a bad throat last night.
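The artifact-triggered deploy at the end of that flow reduces to building a cf push invocation once a new build lands in S3. An illustrative sketch; `-p` (artifact path) and `-f` (manifest) are standard cf CLI flags, but the real pipeline's exact flags and manifest handling may differ:

```python
def cf_push_command(app, artifact_path, manifest_path):
    """Build the cf push invocation the Sandbox pipeline would run
    after a new build artifact arrives from the developer's TFS
    build. Illustrative only; app and file names are hypothetical."""
    return ["cf", "push", app, "-p", artifact_path, "-f", manifest_path]

cmd = cf_push_command("iri-web", "artifacts/iri-web.zip", "manifest.yml")
```

Note that per the persona contract, the manifest comes from the developer and the pipeline merely executes it, which is what keeps the release engineer's pipeline generic across applications.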
So the same process continues: the application pipeline is promoted from lower environment to higher environment. Finally, when the application is pushed to production, the application operator does the due process with the business unit, the business unit gives an approval, and then the application running in production actually goes live through changes to Akamai, the CDN layer. They make the URL rule changes that flip the traffic over. How cool is that? So, what Madhav was talking about with the pipelines: when most of us in engineering select a vendor, we don't want a tool; we really want a solution. There are myriad tools out there that can solve your technology problems. What we wanted solved was the process change, the culture change, and the ability to replicate and grow at scale, and that is what we have gotten from PCF. The other important part, which we thought would be too geeky to go into here, is the integrations with the tools Nasdaq already has. We have long-term license agreements with the AppDynamics of this world, the Splunks of this world, the New Relics of this world, and the out-of-the-box integrations with the popular tools in the marketplace also helped us stand up the platform and scale quickly. One incident that happened during a deployment: the team decided to scale the MongoDB layer, and the DB layer is not containerized, as you know, and they also wanted to scale the application layer. I think it took them all of one to two minutes to do it. The other thing is that even now we cannot have any downtime. Imagine Nasdaq going down for fifteen minutes during trading. Not a good story, and Twitter lights up with us; you can imagine how much fun that is.
But the way we do it today is we take each web server out of the load balancer, run a script to patch it, and put it back in the load balancer again. It's an involved process. If instead you can do it with the push of a button, with no downtime at all, through a process you can depend upon, that serves us very well. Some of the things that are perhaps more unique to Nasdaq are our work processes. The numbers you have here come from John Genoves and me; John works for Pivotal and is our interface. We did the excruciating task of documenting every single task that needs to be done to stand up the infrastructure and to deploy an application, in hours. And those numbers are really true: we went from requiring 30 days to stand up a full-fledged environment set, all the way from the lower environments to Prod, to being able to do it in two days. Our target was always to do smaller releases, so that the risk is lower and the approvals are much easier to get from our customer base, and PCF has become an enabler for us to go in that direction. I'd be lying to you if I told you it came out of the box, we installed it, and then went home and had whatever our Indian desserts are. It had its challenges. But what made the journey easier was what I started with: when you have somebody who is willing to innovate and work with you, that is what makes it a much smoother journey. Thank you for listening to us, and thanks to our Madhav too. Thank you, Bhavani. So, any questions? We will take them happily. Hi, good talk. There's a lot of talk at this show about things like Kubernetes and Istio integrating with Cloud Foundry, and I wonder: is there Kubernetes in your environment anywhere? Do you plan to mix that in with Cloud Foundry, or do you do it already? That's a very good question. We do have Docker and Kubernetes.
We started off with a DIY project, doing everything by ourselves, and now we are looking at the PKS solution that Cloud Foundry gives us. I will not go into the challenges of do-it-yourself projects, but yes, we are looking at PKS. The slide deck: where can we get it? We'll share it with the Cloud Foundry team; they upload it onto the CF Summit site. Any other questions? Thank you. There may be some rotator cuff injuries preventing arms from being raised; I see this a lot. A rash of injuries. Thank you so much. That was fantastic.