Okay. Then I think we should start. Welcome everybody to the Automotive Pagoda. My name is Michael Steffens, and I'm working for the leading supplier of tools and software in the domain of electronic control units and vehicle networks. The Automotive Pagoda is the Vector cloud, and I would like to show you how this cloud is built, starting, as with any reasonable building, with the foundation: tell the story of that building, the motivations, requirements, and the like, give an introduction to its architecture and how we have chosen the material, and finally draw the conclusions.

Vector's customers are car manufacturers, and that means pretty much all car manufacturers around the world, and suppliers of car manufacturers, commonly called tier ones. We sell our cloud services together with Vector products. That means you get to use our cloud services by purchasing a Vector product the regular way.

Now let me briefly introduce, with a couple of examples, what our customers are doing with cloud services and why they are using them. Let's start with remote diagnostics. Electronic control units have fault registers which can be read for diagnostic purposes, and remote diagnostics enables a kind of remote debugging of cars. Remote debugging is required whenever the engineer in the workshop, at the assembly line, or during a test campaign can't make sense of the diagnostic output and needs help from the factory. A back-end engineer from the car factory can then connect to a diagnostic session opened via the remote connection point, using either a mobile device or a PC.

Another example is the logger service, mainly used during car development, the development of new models or new makes, where logger devices are mounted in cars, connected to the vehicle network, and are just logging events.
As the number of ECUs in the car increases, the amount of testing required increases as well, while the time available does not. The test fleet becomes an important economic factor, and returning each car to the factory for log analysis is not feasible. So our logger service allows putting logger configurations in the cloud for individual cars to pick up when configurations change, and, in the other direction, whenever they have mobile connectivity, cars upload their logs for the factory to download and analyze.

The third interesting example is something you could, coming from the IT world, imagine as a kind of Subversion for electronic control units. ECUs are driven in their behavior by data, by calibration data representing physical properties. They are calibrated in order to meet certain requirements with respect to emissions, for example, or consumption or performance, and they are also generated in variations, for example for certain markets such as EU vehicles, certain engines, or certain forms of the drivetrain. All these calibration data need to be tracked, archived, and merged across different calibration sessions, and the calibration data management service is pretty much the central repository for calibration data, providing these functions.

The Vector cloud belongs to the SaaS category of cloud services; that is, we provide software to our customers. We don't provide virtual machines or networks; from the customer or end-user perspective, it all looks like software. The software is delivered with the help of middleware, itself running on an infrastructure-as-a-service platform. As I said in the beginning, our cloud services are dedicated to use with Vector products, so you won't find a sign-up page; you get access by purchasing a product that is cloud-enabled. And it targets mainly programmatic clients. Programmatic clients means in most cases not a web browser, but an application that is emitting HTTP requests in its back end.
These clients can be PC applications extending their functionality or, as shown before, devices mounted in cars. Our cloud framework is supposed to support both legacy applications that existed long before we started cloud activities and new applications developed specifically for the cloud. And we need to offer customers the option to use these services either in a public offering that we provide and connect to via the internet, or to run them on their own premises.

In doing so, the Vector cloud framework is charged with the task of mitigating risks that are specific to cloud operation, that is, the risks that would make the difference for a customer between running the service in our public offering and running it in their own IT department. This means we need to consider security and tenant separation, or tenant isolation. First of all, the industry we are working in is quite risk-averse and highly competitive. As a customer, I always want my data protected against any kind of unauthorized access. You know the usual responses: use authentication and encryption of data in transit. But also, as a customer, I don't want any competitor to see that I have subscribed to a certain service at all, because that would allow a competitor to draw conclusions about what I'm doing during the development of my cars. That means our SaaS front end must not expose any means to guess or probe for tenants. It must not be possible to check whether a competitor of mine is using the Vector cloud by doing a DNS lookup, for example.

There are two parts to this: we have to address a tenant with a request, and we have to address a particular service for that tenant. We address tenants by encoding them in the credentials. I'm showing here as an example a client certificate where the tenant, here a company named "demo", is included in the subject of the client certificate; when using username and password, it's encoded in the username.
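To make the credential encoding concrete, here is a minimal Python sketch. It assumes, purely for illustration, that the tenant sits in the organization attribute of the certificate subject and that basic-auth usernames follow a `tenant\user` convention; the actual field layout in the Vector cloud may differ.

```python
# Sketch: deriving the tenant from the presented credentials.
# The subject layout and the "tenant\user" username convention are
# illustrative assumptions, not the confirmed production format.

def tenant_from_cert_subject(subject: dict) -> str:
    """Tenant from a parsed client-certificate subject, e.g.
    {"O": "demo", "CN": "device-0815"} -> "demo"."""
    return subject["O"]

def tenant_from_basic_auth(username: str) -> str:
    """Tenant from a structured basic-auth username, e.g.
    "demo\\alice" -> "demo"."""
    tenant, _, _user = username.partition("\\")
    return tenant
```

Either path yields the same tenant identifier, so the rest of the request pipeline never needs to know which authentication method the client chose.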
Regarding addressing the service, I'm showing as an example one of the very few web applications that we have in the Vector cloud, the tenant user admin service, the application UI that we use to allow our corporate customers to manage their own user accounts and credentials. Here the actual application, the tenant user admin web client, is not in the host portion of the URL but is part of the URI: it's the first path element. This is a generic principle that we carry across all applications.

What does programmatic authentication mean on the client side? It means we focus on methods that have been around for decades and that you can get readily from pretty much every HTTP library for all kinds of programming languages: basic authentication and TLS client certificates. The interesting thing here is that the choice of the method is entirely at the discretion of the client. It's not us deciding which method the client uses; it's the client. When the client presents a certificate, we will use that. If it doesn't, we fall back to basic authentication.

As the request traverses the entry, the service gate of our cloud framework, the credentials are substituted. Inside, when forwarding the authentication to the actual applications that provide the payload, we use single sign-on based on the OpenID Connect method. Using OpenID Connect allows us to use access tokens to forward the authorization claims. That is, information like the username, group memberships, or email address that the user presented at the entry can be forwarded along service chains, if the first service receiving the request needs to contact secondary services in order to complete a request. For performance reasons, access tokens are cached: they have a configured lifetime, and during that lifetime they are reused from the cache as long as the user presents the same credentials on subsequent requests.
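The token caching just described can be sketched as follows. This is a simplified model, not the production implementation: `fetch_token` stands in for the actual OpenID Connect token-endpoint call, and the credential hashing and lifetime handling are illustrative.

```python
import hashlib
import time

# Sketch of an access-token cache: a token obtained for a set of
# credentials is reused until its configured lifetime expires, as long
# as the same credentials are presented again.

class TokenCache:
    def __init__(self, lifetime_s: float, fetch_token):
        self.lifetime_s = lifetime_s
        self.fetch_token = fetch_token   # stand-in for the OIDC token request
        self._cache = {}                 # credential digest -> (token, expiry)

    def get(self, credentials: str) -> str:
        key = hashlib.sha256(credentials.encode()).hexdigest()
        now = time.monotonic()
        entry = self._cache.get(key)
        if entry and entry[1] > now:
            return entry[0]              # lifetime not expired: reuse token
        token = self.fetch_token(credentials)
        self._cache[key] = (token, now + self.lifetime_s)
        return token
```

Hashing the credentials keeps raw passwords out of the cache while still letting repeated requests with identical credentials hit the same entry.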
Facing our application developers, our framework has to be generic, because the product departments that do the actual development of our cloud payload have completely different ideas of what they want to do and what kinds of technologies they want to use. It has to be useful for pretty much any kind of HTTP service. It has to be adaptive, in the sense that it is very easy to publish an application in our cloud from the inside, and that services such as reading the access token or browsing the tenant's user directory are offered to the application but not demanded of it. In the simplest case, if a particular web server provided by a product department does not depend on any of this information, they can just drop in a simple web service. And it covers common needs such as identity and access management, so that customers using more than one service at a time do not need individual logins for the different Vector applications.

From a management point of view, the rules of the game are: make it as cheap as possible; develop as little on your own as you can; use things out of the box; provide global presence, but with low latency in every region, because the applications we develop tend to be sensitive to response times when providing a reasonable user experience. Be able to scale up in case a big customer demands a lot of storage, traffic, or compute power, but also be able to scale down in case a certain region has very low demand or is very slow in picking up demand. And basically no facilities, no data centers and the like, and also no IT operations run by our company. We are not an IT company, and we are also not a cloud provider as such.

So how did we do that? Let's start with the way you get into the cloud and how requests are then processed in the cloud. A few things are obvious. Your request will first end up at a reverse proxy that will then forward the request somehow.
The request will finally be answered and worked on by a tenant application instance, shown here at the right. In our present model there is a dedicated instance for every application and for every tenant. That makes tenant separation easy from the application development point of view: you just don't need to care for it. The reverse proxy needs the services of a service discovery in order to find the right tenant, and it also needs the services of an authentication and authorization service in order to check user credentials and to decide whether to forward a request into the cloud or not. All of this is running on an infrastructure platform providing virtual machines, a virtual network, storage in the form of virtual disks or objects, and an orchestration framework. And you can guess what kind of IaaS platform is behind that.

Refining the image a bit, we see that we have put a thin façade service between the reverse proxy and the service discovery and authentication/authorization services, in order to decouple them from a software design point of view, but also in order to narrow down the interface from a security perspective, such that a hijacked reverse proxy cannot easily be used to propagate an attack into the internals of our infrastructure. The authentication and authorization service is based on LDAP directories. There is a dedicated LDAP directory for every tenant, that is, for every corporate customer. These directories are the persistence layer for the tenant user admin service, the web application I showed you a few slides before, which, technically speaking, is from our perspective just a cloud application, an HTTP application like any other. And it sits side by side with the tenant application instances in the tenant area.
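The forwarding decision described above, authenticate first, then resolve the dedicated per-tenant instance, can be sketched in a few lines. The registry and authenticator here are simple stand-ins for the service discovery and the authentication/authorization service; all names and addresses are made up for illustration.

```python
# Sketch of the routing decision at the entry: a request is forwarded
# only if the credentials check out, and only to the dedicated instance
# registered for this (tenant, application) pair. Returning None in all
# failure cases avoids leaking whether a tenant exists at all.

def route_request(tenant, app, registry, authenticate, credentials):
    if not authenticate(tenant, credentials):
        return None                      # reject, no hint why
    return registry.get((tenant, app))   # dedicated instance, or None

# Illustrative stand-ins for service discovery and the auth service:
registry = {("demo", "logger"): "http://10.1.2.3:8080"}
authenticate = lambda tenant, creds: tenant == "demo" and creds == "secret"
```

Note that an unknown tenant and a bad password produce the same answer, which matches the requirement that the front end must not be usable to probe for other tenants.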
All of this is running, as said, on the infrastructure as a service. Colored red here are all the components that we have written in house; everything else is third party. So let me show which third-party elements we have used, the building blocks, choosing the material.

Let me start with OpenStack. I'm not going through every OpenStack component, I guess you know them, but through those with, say, the differentiating and most important advantages for us. The first is Neutron, which we prefer due to its very explicit model. What I mean by that is it has project networks, subnets, routers, IP ranges, load balancers, and security groups managing port access, which you can plug together just as you would with physical components. And all components that you haven't connected aren't connected. There are no hidden routes, there are no ports popping up out of the blue providing some platform services or routes into the internet. Virtual machines can have ports in different networks inside a project. And we can use private provider networks as Vector backbones spanning across projects, quite a cool feature that our cloud provider has set up together with us. It means that we can set up virtual machines in a kind of transit area, a dedicated OpenStack project with multiple project networks and virtual machines bridging these networks, for example the reverse proxy, which finally hands traffic off into our private backbone, a provider network in OpenStack terminology, which is not connected to the internet. We do have VPN connections from Vector into these private backbones, which then link the transit project to the individual tenant projects.

The other nice feature was introduced with Keystone version 3: the stacked tenancy introduced by the domain concept.
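This explicit plug-it-together model maps directly onto orchestration templates. As a hedged illustration, not taken from Vector's actual stacks, a minimal Heat fragment wiring a project network, a subnet, and a router together might look like this (names and the CIDR are invented):

```yaml
heat_template_version: 2016-04-08

resources:
  tenant_net:
    type: OS::Neutron::Net
    properties:
      name: tenant-net

  tenant_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: tenant_net }
      cidr: 10.0.0.0/24

  tenant_router:
    type: OS::Neutron::Router
    properties:
      name: tenant-router

  router_if:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: tenant_router }
      subnet: { get_resource: tenant_subnet }
```

Nothing here is reachable that is not explicitly connected: without the `RouterInterface` resource, the subnet has no route anywhere, which is exactly the property described above.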
Vector has subscribed to a domain at a public cloud provider, while we map our SaaS customers to projects inside that domain and make use of the project separation provided by OpenStack in order to implement our tenant isolation. The public cloud we are using is provided by a Swedish company that is also present at the summit, City Network, residing in Karlskrona. They provide the infrastructure as a service, that is, things like virtual machines, multi-tenancy, storage, and networks, but they also provide us with managed hosting. Managed hosting means they are trained in operating our Heat stacks and our Ansible playbooks in order to set up and support our infrastructure and our applications. We have provided the software-as-a-service layer on top of that, doing HTTPS to the end clients and providing user authentication, the payload, and the billing, and we have put together all the orchestration.

For the reverse proxy we are using quite a common choice, nginx; I think most reverse proxies in the world run on nginx. The most important feature we are using there is its capability to customize request processing with the nginx Lua module in a very deep manner. We can access all request metadata from our Lua modules and derive back-end requests, through a façade service, to the service registry and the authentication service. These are performed for every incoming request before the decision whether to forward a request, and where to forward it to the upstream server, is actually made.

The federated view on identity and authentication is provided by Keycloak, which is a JBoss-based identity provider for OpenID Connect and SAML.
Keycloak is a Red Hat project, and one very nice feature here is that it is able to split its management into realms, which we can then nicely map to our tenants, connecting the per-tenant LDAP directories that I showed at the beginning on the architecture slide to individual dedicated realms.

For the service registry, we are using Consul, with an authoritative ACL data center sitting inside our service gate and kind of satellite servers sitting in each tenant area. In Consul nomenclature these are Consul data centers, and Consul describes the connection between them as WAN links; in our case it is our backbone network, not really wide-area communication. This allows service publishing to be done inside each tenant area, the catalogs for the respective tenants being synchronized with the central catalog, and any kind of malicious cross-registration, that is, capturing requests for another tenant by trying to publish a service for someone else, being prevented with Consul access control lists. So, as with Keystone, we are just mapping an existing organizational model in order to implement our multi-tenancy.

What have we learned from that, and what are the conclusions? There are a few success stories, and the biggest one, I think, is that we have been operational and in production with this infrastructure for pretty much exactly two years, with points of presence in Europe, America, and Asia, which allows us to provide, well, not really the minimum latency possible, but reasonable latency across the world. We could do so while providing a central infrastructure for orchestration, provisioning, and logging, which I haven't shown here in detail. It's possible because our provider network backbone is globally interconnected, a nice feature provided by City Network.
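Coming back to the Consul setup for a moment: a tenant-local registration against Consul's agent HTTP API (`PUT /v1/agent/service/register`, authorized via the `X-Consul-Token` header) can be sketched as below. The tenant-prefixed service name, the token value, and the addresses are illustrative assumptions; the point is that the ACL token presented with the registration bounds which names a tenant area may publish.

```python
import json

# Sketch: composing a Consul service registration for one tenant area.
# Endpoint path and JSON field names follow Consul's agent HTTP API;
# the naming convention and token are invented for illustration.

def build_registration(tenant, service, address, port, acl_token):
    path = "/v1/agent/service/register"
    headers = {
        "X-Consul-Token": acl_token,     # tenant-scoped ACL token
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "Name": f"{tenant}-{service}",   # ACL policy restricts this name
        "Address": address,
        "Port": port,
    })
    return path, headers, body
```

A registration for another tenant's name would be rejected by the ACL policy attached to the token, which is the cross-registration protection described above.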
Our offer has been surprisingly well received by customers, even though we are operating in a risk-averse and competitive industry, because for many customers, just as for us, not running IT on their own premises is a bigger plus than keeping all data inside their own IT, provided they accept the underlying security model. We were also able, and this is something that is not possible everywhere, to seamlessly adopt legacy IT server applications. That is, we have services that were originally written for traditional enterprise IT environments, some even just to be installed on a PC, here running as cloud services, and they run within the same cloud architecture as newly designed cloud applications. Our globally interconnected provider networks are, from what I have seen, at least a year ahead of the public infrastructure market. It sounds like quite a simple idea to have these private networks and have them globally interconnected, but if you actually try to get that from one of the big IaaS players in the world, you will notice that it became possible either only recently or is still not possible at all.

But there is also room for growth, of course. The strict mapping of tenants to projects, which we introduced in order to make tenant separation easy for application development, has a downside: it imposes issues for scaling down. Even for applications that have very, very little load, serving just a handful of requests per day with nearly no data associated with them, reserving dedicated virtual machines for their processing is really overkill. This can be, and has been, solved by a more sophisticated use of Consul access control lists than before, which now allows us to also support multi-tenant instances. Then, of course, it is the application itself which needs to cater for secure isolation while consuming far fewer virtual resources.
We are also considering a container back end and Kubernetes in order to scale out. This has been requested by some of our product departments, and it also means adapting to the new mainstream of deployment; what our product development teams often say is: we want to deliver a Docker image, please run that. We are looking into a cloud API for test and simulation, a product where Vector provides the entire environment an ECU is running in, in order to simulate, for example, the responses of peer ECUs. We are looking into supporting MQTT, and how to properly implement tenant separation there, along with the kind of adaptation that we have on the HTTP level, because, you know, the world of the Internet of Things is slowly migrating into the automotive sector as well. And on the platform side, we would have database as a service high on our wish list, to be provided by our public cloud provider.

I thank you very much for your patience and for joining me on this tour through the Automotive Pagoda.