Hello. Thanks for staying with us until 5.30; it's good to see you here. If you've come to see a talk about platform operations and you'd like to see a real-life case study of that at Gemalto, you're in the right place. If you don't want to see that, please leave and find a talk that is what you're expecting. We're going to talk a bit about who we are, and we also want to find out a little bit about who you are, so I'll ask you a couple of questions just so we understand who's in the room. So firstly, Vincenzo, who are you and why are you here?

So I'm an automation architect at Gemalto. Gemalto is a leader in digital security, and we provide a lot of different products. I'm in the enterprise and cybersecurity business unit, so we deal mostly with access, authentication, and data protection. And I'm here to present the DPoD solution, a solution that we built on top of Cloud Foundry with the help of Dan and EngineerBetter.

And my name's Dan Young. I work at EngineerBetter. We're a consultancy who work a lot with Cloud Foundry customers, including Pivotal Cloud Foundry customers, and that's the version we'll be talking about today. We've helped a lot of our customers learn to operate platforms in a cloud-native way. What we do, and what I'll be talking about today, is getting a team and a working process that really engenders this idea of a platform being a product that you operate for your customers inside the business, not the traditional notion of IT operations. Hopefully you'll be able to take away some examples of what it means to really operate a platform like that: not just how the team behaves, but what the team does, and what's different about that compared to traditional IT operations. Vincenzo, what's your specialist topic today?

So I will talk about Data Protection on Demand. I will show you why we proposed this platform and what kind of advantages we expect you to have by using the service. I will also show you a high-level view of the architecture of the solution and how we engineered it on top of Cloud Foundry using Cloud Foundry concepts.

So the DPoD platform that Vincenzo is going to go into detail on is underpinned by Cloud Foundry; in this case, it's Pivotal Cloud Foundry. The way we encourage our customers to work with this platform is to think of it like a product. They organise their team to have a product mindset. That's an autonomous team: they have control over everything about the way the platform is deployed, all the technologies around it, and everything that's plugged into it, and they operate with that product mindset. So we often encourage either the customer we're working with, or ourselves, to provide a product manager: someone who's really going to think about what the internal needs of the customer are in this organisation, and then operate a relationship with that customer that is defined by what we call the platform contract. So there's an agreement between the consumers of the platform, who are the developers, and the operators of the platform, that it will do certain things for them. At EngineerBetter, we've been thinking recently about how to express this, and this idea has come from someone called Colin, who is the CTO of Cloud at Pivotal.
And this is a picture which describes what we believe DevOps is evolving into. Originally, if you forget about the top right and the bottom left, we just had application developers trying to get their stuff into production by talking to IT operations teams. That contract line didn't exist; it was a wall, like the Berlin Wall between these two teams. There was no real relationship there at all, just people throwing things over this wall and trying to get stuff done. Then DevOps came along and we said, okay, maybe if we merge those two teams together they'll work really well, because they'll just be one cross-functional team. The problem with that is that then everybody's operating everything in the entire technology stack again and again and again, across the whole organisation, for every application. That doesn't make sense either. But what happens if you get really, really good at DevOps, and you're very good at configuration management, and you're amazing at containers, and you can plug all this stuff together? You end up with Cloud Foundry. That's where you get to if you're really good at all of that stuff. So why don't you just start using that anyway? And then if you can operate Cloud Foundry with a product mindset, you're able to present to your internal users that notion of a platform contract and an API. If they can't do stuff with the API, it's not worth bothering about.

So what we're focusing on in this presentation is that bottom product team there in the dotted line. We're encouraging Gemalto and the engineers we've been working with to think about how to form that team. The philosophy we have in that team is all about small batch sizes and fast feedback. Everything we do is trying to encourage a way of working that allows us to learn faster. So the automation we have and the continuous deployment of the platform: the purpose of that continuous deployment pipeline is to form a feedback loop, so that you can learn very quickly about what you're doing. That's the first part of it.

The next part is having a sustainable engineering culture. This is in contrast to what you've had in IT operations for a couple of decades, which is heroics. In IT, we've always had heroes. Some heroes are the nicest people you've ever met, but they are not healthy for a team, because they become silos of knowledge and then that person becomes indispensable. It's the idea of the bus factor: if that one person got hit by a bus, everything would stop. And if things are going badly, that one person gets completely overloaded with all the work. So we encourage teams to get away from that idea, and one of the ways of doing that is through pair programming. We will pair with our customer teams, and we've been doing that with Gemalto's team, so that the knowledge is diffused throughout the team and no one person becomes the hero who has to do all the work.

And the way we work is by having a single prioritised backlog of work, which means we're always working on the most important thing, and we always do that in the simplest possible way. When you have one clear prioritised backlog of work and you are totally open to the idea of continually changing the priorities of that list, you can embrace change. This is also very different from traditional IT ops, where everyone says no all the time. In a team that operates like this, you can say yes.
You say, okay, if that's the most important thing this morning, it's going to the top of the list; but look what's happening, this other priority is getting pushed down the list. Do you still want to do that? Oh, you don't. Okay, it goes back up the list. We use the backlog to communicate clearly with our customers inside the organisation who are asking for these conflicting things. It forces them to think about what's really important, and it becomes a communication tool for us. So we're able to say yes to change, but we also make it clear what that means: there's a trade-off in that decision. We've done this with the Gemalto team, who are based in Ottawa, so they sent some people over to us who we've been pair programming with, and it's quite possible we'll send some people back over to them. In the meantime, we're doing remote pair programming to make sure we get that continuous fast feedback from the team, because pair programming enables that as well: if you're pairing with someone, you learn very quickly when you're doing the wrong thing. So that's the way the team has been working.

I was going to give you a brief look at how we've actually been working with the technology. So, how many people in the room are operating Cloud Foundry at the moment? Okay. And how many people are consumers of the platform, like developers? Okay, not so many; I wouldn't expect that. And who's an architect? Okay, so we've got half architects and roughly half operations people. And how many people are using Concourse? Okay, a good number of you. Hopefully there are some things in here you'll expect to see, and some things might be new to you; come and ask us afterwards if it's new stuff.

So every pipeline, every foundation we build for the customer, is completely and utterly reproducible, and we can have as many of them on demand as we want. We don't name them after their semantic meanings, so we have no dev, staging, or production platform; we just pick a naming scheme, and we've chosen arthropods. That enables us to decouple the names of the platforms from what they actually do, which is really useful when you've got three production environments and you can just map them onto their arthropod names. So we've got complete reproducibility: every single one of them is a bit-for-bit replica of the next one, because we've got a single YAML pipeline for everything. We've got some cost optimisations, so we put non-production environments to sleep at night: we shut down all the instances at night when they're not being used. We've got continuous upgrades from Pivnet, the Pivotal Network, which we're caching into S3, and then we're applying those as they come out. So we're narrowing the window of exposure to CVEs down to mere hours, whereas in most enterprises it's days, if not weeks and weeks, of being exposed to CVEs. And for Gemalto, that's pretty important, right? Because you're a security organisation, so you don't want to be exposed to CVEs.

Then at the end of the pipeline, as you might expect, we're continuously testing: we're doing continuous smoke testing for acceptance, so we run that every five or ten minutes in addition to other monitoring. So we know: can I push an app? Can I stop an app? Can I start an app? Can I scale an app? These are basic functions, and when we're doing this, we're testing that platform contract. Am I delivering the things that I promised I would do for the customer?
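For illustration, here is a minimal sketch of that kind of smoke test, written in Python around the cf CLI. The app name, the app path, and the choice of driving the CLI from a script are assumptions for the example, not the actual test suite run against DPoD.

```python
#!/usr/bin/env python3
"""Minimal platform-contract smoke test sketch.

Assumes the cf CLI is installed and already logged in and targeted at a
test org/space, and that a trivial app lives in ./smoke-app. The app name
and the check list are illustrative only.
"""
import subprocess
import sys

APP = "smoke-test-app"  # hypothetical app name


def cf(*args):
    """Run a cf CLI command, failing the smoke test if it errors."""
    result = subprocess.run(["cf", *args], capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"FAILED: cf {' '.join(args)}\n{result.stderr}")
    return result.stdout


# Can I push, stop, start, and scale an app? These are the basic promises
# of the platform contract, checked every few minutes by the pipeline.
cf("push", APP, "-p", "./smoke-app")
cf("stop", APP)
cf("start", APP)
cf("scale", APP, "-i", "2")
cf("delete", APP, "-f")

print("platform contract smoke test passed")
```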
The last thing you can see there in the production pipeline, or rather the staging pipeline or any of the other intermediate pipelines, is this stopover pattern. Originally, if you've got multiple environments and you want to promote change from dev to staging to production in Cloud Foundry, what you could do was just build a gigantic pipeline and use groups in Concourse to express the different pipelines. That's not great. So what we did instead was this stopover idea: you snapshot the versions of all the resources that went into making that pipeline green. If you snapshot all the versions by querying the Concourse that you're running, through the ATC API, you can then export those versions and use them as parameters to pass into your next pipeline. This is really advantageous because then we can promote change: we know exactly what made that pipeline go green in staging, and we pass exactly the same versions of all the resources into another pipeline by means of parameters.

So that's me; I've spent about half the time talking about the team and about the technology. What Vincenzo is going to do now is talk a bit about what DPoD does on top of that platform layer for Gemalto.

Right, thank you. So DPoD stands for SafeNet Data Protection on Demand, and what we built on Gemalto's side is a cloud-based platform that provides a wide range of on-demand key management and encryption services. The main idea is that you get access to a lot of data protection services without having to deploy any hardware on your premises, with immediate access to a lot of different functionality that covers your data protection use cases, especially considering new requirements coming with regulations like the GDPR, to mention just one of them. And all of this through a simple online marketplace, with a user interface where you just point and click, choose your service, and then you are ready to use the data protection service in your applications.

When we started the DPoD project, we had quite a tight timeline: we had to take this product from an idea to an actual deployment in production in a very short timeframe. And we had a set of business goals that we wanted to achieve. We had to ensure proper tenant isolation, so that the platform would allow us to properly isolate the different tenants we would serve; a click-and-deploy approach, so that our customers could easily activate the services they needed through a user interface; a centralised platform, with all these services managed in a central place; and a utility, pay-as-you-go model, so that our customers don't need to make an initial investment, as is normally the case with an HSM device, and at the same time get virtually unlimited capacity and infinite scalability. Those two accommodate the different kinds of needs of our different customers. And finally, the possibility of always being up to date, for many reasons, but especially from a security perspective: always being able to provide the best services for data protection.

So in this slide, I'm trying to give you an idea of the main workflows we set up for the DPoD solution. We have three main actors. The first one is the Gemalto operations team, here represented by a Gemalto service provider admin.
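To make the stopover pattern Dan described a little more concrete, here is a rough Python sketch of the version-snapshotting step. The team, pipeline, and job names are placeholders, the auth handling is simplified, and the exact ATC endpoints and response fields can vary between Concourse versions, so treat this as an illustration of the idea rather than the pipelines' actual implementation.

```python
#!/usr/bin/env python3
"""Sketch of the 'stopover' promotion idea: snapshot the resource versions
that made a pipeline's acceptance job green, and emit them as parameters
for the next pipeline in the chain.

Assumes an ATC API reachable with a bearer token; all names below are
placeholders, not the real Gemalto pipelines.
"""
import json
import os

import requests

ATC = os.environ["ATC_URL"]              # e.g. https://concourse.example.com
TOKEN = os.environ["ATC_BEARER_TOKEN"]   # obtained out of band, e.g. via fly
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

TEAM, PIPELINE, JOB = "main", "staging", "acceptance-tests"  # placeholders

# Find the most recent successful build of the acceptance job.
builds = requests.get(
    f"{ATC}/api/v1/teams/{TEAM}/pipelines/{PIPELINE}/jobs/{JOB}/builds",
    headers=HEADERS,
).json()
green = next(b for b in builds if b["status"] == "succeeded")

# Ask the ATC which resource versions went into that build.
# (Field names may differ slightly across Concourse versions.)
resources = requests.get(
    f"{ATC}/api/v1/builds/{green['id']}/resources", headers=HEADERS
).json()

# Flatten the input versions into a params map that the next pipeline can be
# pinned to, e.g. written to a file and passed to `fly set-pipeline`.
params = {}
for inp in resources["inputs"]:
    for key, value in inp["version"].items():
        params[f"{inp['name']}-{key}"] = value

with open("promotion-params.json", "w") as fh:
    json.dump(params, fh, indent=2)
print(json.dumps(params, indent=2))
```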
And this is the workflow that is triggered when we onboard a new tenant. When one of our customers buys the service, we have an initial provisioning activity which consists of creating a tenant. We create a domain inside the Pivotal Cloud Foundry platform, we set some initial credentials for the new tenant, and we trigger an automatic deployment of applications into a tenant-dedicated organisation and space inside the Pivotal Cloud Foundry platform. At this stage, we provide our customer with some initial credentials, and on the customer side a user with the tenant administrator role will use those initial credentials, change them after the first access, and then be able to administer this online marketplace of services for the customer organisation. Within the customer organisation, the tenant administrator is also able to create groups and users that will have access to the different services available on the marketplace. So in a way, the tenant administrator manages the set of services available on the platform and provides access only to a subset of users within the customer organisation itself. These users play the role of tenant application owners, and they are the final users of the service: the users who are actually using the data protection services.

In this specific workflow, we are showing how they create one of the different services available in the marketplace, which is the HSM service. The HSM service gives the tenant application owner an instantly provisioned HSM service that can be consumed directly from the tenant's applications. What happens, and we'll see it in more detail on another slide, is that as soon as the tenant requests the creation of a new service, the DPoD platform creates a partition on one of our HSMs and compiles a bundle, which we call the client package, that the tenant application owner can download. This bundle contains the libraries that allow the client application to communicate with the HSMs.

So the main building blocks of this solution are: a component which is implemented entirely on Cloud Foundry, which is DPoD itself, and we run this in the AWS public cloud; DPoD in AWS provides the entire management side of the application, what I just described. And then we have the HSMs, which are fronted by a set of microservices hosted in a Gemalto data centre, connected to the AWS public cloud over a VPN connection. The customer application owners access the DPoD management interface directly over the internet with the credentials assigned to them. Then, once they download their client package, they extract the crypto client libraries that their applications use in order to communicate with the HSMs hosted in the Gemalto data centres.

So how did we leverage the features of Pivotal Cloud Foundry in order to create this service? What we did was to deploy, inside a specific organisation and space, some of the components of the DPoD solution, which follows a microservice architecture.
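As a rough illustration of the per-tenant provisioning just described, here is a short Python sketch driving the cf CLI. The tenant name, domain, app name, and paths are all hypothetical, and the real DPoD onboarding is done by its own management microservices rather than a script like this.

```python
#!/usr/bin/env python3
"""Rough sketch of the tenant onboarding flow: create a tenant org and
space, create a tenant domain, give the tenant administrator an initial
credential, and push the marketplace applications into the tenant's space.

All names and paths are illustrative only.
"""
import subprocess


def cf(*args):
    subprocess.run(["cf", *args], check=True)


tenant = "acme"                                 # hypothetical tenant name
admin = f"{tenant}-admin"
initial_password = "change-me-on-first-login"   # rotated by the tenant admin

# One org and space per tenant keeps tenants isolated from each other.
cf("create-org", f"tenant-{tenant}")
cf("create-space", "marketplace", "-o", f"tenant-{tenant}")

# A tenant-specific domain for the tenant's marketplace UI (example domain).
cf("create-domain", f"tenant-{tenant}", f"{tenant}.dpod.example.com")

# Initial credentials for the tenant administrator role.
cf("create-user", admin, initial_password)
cf("set-org-role", admin, f"tenant-{tenant}", "OrgManager")

# Deploy the marketplace applications into the tenant-dedicated space.
cf("target", "-o", f"tenant-{tenant}", "-s", "marketplace")
cf("push", "tenant-marketplace", "-p", "./marketplace-app")
```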
Without going too much into the details of the components, the interesting thing is that the microservices we deploy inside this Gemalto organisation and space are in charge of two things. During the tenant provisioning phase, they are in charge of creating a tenant organisation and space, into which we deploy the microservices that make up one of the services available on the marketplace, like, in this case, the HSM service. The microservices inside that organisation and space are then completely under the control of the tenant. So in a way, we use Cloud Foundry by creating an organisation for each of our tenants, and the tenants have access to the functionality of all the modules running inside their dedicated space. The final, important thing is that we implemented a service broker, which in the case of the HSM service brokers access to the HSM gateway, the component which has access to the real HSMs deployed in our Gemalto data centres.

So, about the future: today, the service and the architecture I've shown you are deployed in Canada. We are currently building the service out in a new region in Germany, in Frankfurt, and we are going to add more regions depending on customer needs and customer demand.

Is that to do with the proximity between the Cloud Foundry and the HSMs? Correct. At the moment we have everything in Europe, but it's also talking to HSMs that are in the US. So at this very moment we're deploying a new production environment using the same Concourse patterns that I was talking about: take a new Cloud Foundry, spin it up in a North American AWS region, and then connect it up. This covers the deployment of the AWS part, the Cloud Foundry part, in all the different AWS regions.

Then we are going to add new data protection services. Today we have the HSM service, and we are adding additional services that we have on our roadmap for 2017 and 2018. We are also going to provide services like transparent data encryption; today we just provide the possibility to use the HSM service as a key vault or to store the root of trust of your PKI. And we are also working together with EngineerBetter on a better network topology, to improve the way we build our deployments in AWS.

So, as you might expect for an enterprise that is combining public cloud with data centres and external agencies, the networking can become quite complex, and the way it's evolved has been starting with VPN connectivity into AWS and evolving towards Direct Connect. Up until now we've been taking these VPN connections directly into the VPC that the PCF is deployed into. That doesn't give us a lot of flexibility, because if there's any disruption to that VPC, the VPN connection is going to be interrupted as well. We want to be able to consider those VPCs, those PCFs, as fairly disposable and fairly ephemeral; the VPN connection shouldn't be. So what we're going to try and do in collaboration with Gemalto is have a transit VPC: we pass our data centre traffic into a VPC that can then distribute and route traffic to multiple Cloud Foundries depending on where they are, potentially on a global scale. And that's going to change the way we deploy at the network level.

And then the last point is continuous audit mechanisms on the DPoD platform. An important thing that I missed, talking about the new data protection services, is the FIPS compliance of the service. Today it's FIPS 140-2 Level 2 compliance, and we are aiming for Level 3 compliance by the beginning of 2018. And there's GDPR compliance as well, I believe. Yes.
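To give a feel for what a service broker like the one just mentioned involves, here is a minimal Python (Flask) sketch of the standard Cloud Foundry service broker endpoints: a catalog, a provision call, and a bind call that hands credentials back to the application. The service IDs, gateway URL, and credential fields are invented for the example; this is not Gemalto's actual HSM broker.

```python
#!/usr/bin/env python3
"""Minimal service broker sketch in the spirit of the HSM broker described
above, exposing the standard Cloud Foundry service broker endpoints.

The HSM gateway interaction is a stand-in: the real broker would ask the
gateway to carve out a partition; here we just fabricate a credential.
Real brokers also authenticate requests, which is omitted here.
"""
import uuid

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/v2/catalog")
def catalog():
    # Advertise one service with one plan in the marketplace.
    return jsonify({"services": [{
        "id": "hsm-on-demand",          # illustrative IDs, not Gemalto's
        "name": "hsm-service",
        "description": "On-demand HSM partition",
        "bindable": True,
        "plans": [{"id": "standard", "name": "standard",
                   "description": "A dedicated HSM partition"}],
    }]})


@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    # In the real broker, this is where a partition would be created on an
    # HSM behind the gateway in the Gemalto data centre.
    return jsonify({}), 201


@app.route("/v2/service_instances/<instance_id>/service_bindings/<binding_id>",
           methods=["PUT"])
def bind(instance_id, binding_id):
    # Hand back credentials the tenant application uses to reach the HSM
    # gateway; the values here are obviously placeholders.
    return jsonify({"credentials": {
        "hsm_gateway_url": "https://hsm-gateway.example.com",
        "partition_id": instance_id,
        "partition_password": uuid.uuid4().hex,
    }}), 201


if __name__ == "__main__":
    app.run(port=8080)
```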
So the point of continuous audit is that, traditionally, audit is treated as a point-in-time activity: you do a check of everything and then you say, oh, we're done now. If you're continuously deploying the platform and you're continuously testing the platform, why aren't you continuously auditing the platform? So what we want to do is build into Concourse proof for the auditors that we are compliant with all the things we're supposed to be compliant with, and that we're checking it continuously. We just need to know the rules we need to comply with, and we'll build them into our system testing.

If you want more details about the DPoD service, I've provided a few links here. There is an entire site where you can get all the details and also some product briefs on DPoD. There is also a registration form to sign up for a 30-day trial of the service, and there is a video on YouTube which gives you a glimpse of what the user interface looks like and what kind of services you can find right now on the DPoD platform. We'll update these slides, put them on SlideShare or something, and tweet about it if you want them.

Okay, any questions? Yes, we've got a mic. Is it on? Not sure, give it a go.

I'm possibly going to ask a really stupid question. There's no such thing as a silly question. I'm not an expert in either field, but I was wondering if there was any overlap between what you were looking at for continuous audit and Compliance Masonry, which is a thing I've heard of but don't know much about. I haven't heard of it, but potentially; what does it do? Can you tell us about Compliance Masonry? It automates compliance documentation, so it's a way of using the OpenControl schemas to build the kind of documentation you need and to tell you where you're missing controls. That sounds very useful. If we didn't do that natively inside the Concourse jobs, we might even go as far as writing a resource for something like that; it's fairly common for advanced Concourse users to write a resource for enterprise tooling that they need to interact with. But that sounds really good, generating the audit documentation. Yeah, so it would maybe not be a competing tool, but something that could be compatible.

Anyone else? Okay. Well, we'll be here immediately after the talk if you want to come and grab us for questions. If not, you can find us on Twitter; come and grab our Twitter handles. We should have put them on the slides, but we didn't, so that's a mistake. But yeah, thank you very much for coming. Thank you. I hope you have a good evening and enjoy the rest of the conference.
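As a closing illustration of the continuous-audit idea, here is a small Python sketch of the kind of check a Concourse job could run on every deploy, turning an audit rule into a failing build rather than a point-in-time report. The host and the two controls are examples only; the real rules would come from the auditors and the compliance frameworks discussed above.

```python
#!/usr/bin/env python3
"""Sketch of a continuous audit check, runnable as a Concourse task.

The controls below are examples: the platform API must be reachable over
TLS with a verifiable certificate, and must not serve content over plain
HTTP. The host name is a placeholder.
"""
import socket
import ssl
import sys
import urllib.request

API_HOST = "api.run.example.com"  # placeholder for the platform API host

# Control 1: the API must be reachable over TLS with a valid certificate.
# An uncaught exception here fails the build, which is the point.
ctx = ssl.create_default_context()
with socket.create_connection((API_HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=API_HOST) as tls:
        print("TLS OK:", tls.version())

# Control 2: plain HTTP must not serve the API. A refused connection or a
# redirect to HTTPS is acceptable; serving content over HTTP is a failure.
try:
    resp = urllib.request.urlopen(f"http://{API_HOST}/v2/info", timeout=10)
    if not resp.geturl().startswith("https://"):
        sys.exit("FAILED: API served over plain HTTP")
except OSError:
    pass  # refused or unreachable over HTTP is fine

print("audit checks passed")
```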