Okay, then, talk number one. I hope I'm pronouncing their names right. We have Gaurab and Bozhidar from Pivotal talking about BOSH links, a topic that is also dear to my heart, because usually you don't have just one BOSH deployment but multiple ones, and then you end up with questions around how these pieces are actually connected. With that quick intro, I'm handing it over to the folks to talk about the details. Thank you very much. So, a brief introduction: you heard about us, but we're going to talk about a new development in links called the Links API. Oops, and this is us. This is Gaurab, and I'm Bozhidar. We work at Pivotal in the Toronto office, and we have experience on different teams in Cloud Foundry. A little bit of background on links. Links were introduced to help operators reduce duplication across deployments and to share arbitrary information. For example, instead of hard-coding static IPs and wondering where you might securely store them, links reduce errors from operator duplication. This was introduced in May 2016, since BOSH v255.5. Another thing links can simplify is automated service broker deployment, so you can have that functionality. Quick example, which most of you have hopefully used links and know about them. There are essentially two types of links. You can have internal links within the same deployment. Within a job spec, you declare the name of the link, the type of the link, and what properties it provides, and you can have default options. You have consuming jobs, which again specify the name and type of the link to consume. And you might have a consumer job template, such as a configuration file or something else, that actually reads the link information and renders it into a template. And you have the deployment manifest where, if we examine it, we can see the providing jobs.
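The provider and consumer job-spec declarations just described might look roughly like this (the job, link, and property names here are made up for illustration; the exact spec format is documented on bosh.io):

```yaml
# Hypothetical provider job spec (jobs/database/spec)
name: database
provides:
- name: db_conn        # link name
  type: conn           # link type
  properties:          # job properties exposed through the link
  - port
  - password

# Hypothetical consumer job spec (jobs/web-app/spec)
name: web-app
consumes:
- name: db_conn
  type: conn
```

The consumer's job template then renders the link contents, for example with `<%= link("db_conn").p("port") %>` in an ERB template.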
You can have aliases on a link: if there are multiple jobs which provide the same type of link, for example front-end and back-end databases, you will need aliases. The consuming job, on the other hand, references the alias you provided. And in the next slide, we'll see what happens if you want to connect links across deployments. These would be external links. Similarly, in the provider and consumer job specs you define your link information. You have a separate provider deployment and a separate consumer deployment, and they'll be much the same as we saw in the previous slide. One difference is that in the consuming deployment you have to specify from which deployment you're consuming the link. And we have a similar job template in the consuming job in the consumer deployment. Moving on, next we'll talk a little about the motivation for the Links API, the heart of this talk. It was introduced to automatically share information between different deployments, and those could be deployments not managed by the same director. You'll see examples of the API and how you can create links with it outside of deployment manifests. For link content, we still have much the same properties that you would provide, but the Links API also helps with BOSH DNS, so you can have DNS addresses provided by the links instead of just IPs. The Links API also enables future development so operators can better visualize how changing a link might affect deployments: who is consuming links, who is providing links within their foundation. A couple of our colleagues will be elaborating more on that in a separate talk, which we'll mention a little later on. In terms of changing a link, with the Links API better visualization would help you see what happens if, for example, a password, a port, or AZs change. So there are a couple of issues, for example: how do changes propagate?
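A cross-deployment pairing like the one just described might look roughly like this in the two manifests (the alias, deployment, and release names are illustrative; `as`, `shared`, `from`, and `deployment` are the manifest keys documented on bosh.io):

```yaml
# Provider deployment: the link must be shared to be visible across deployments.
instance_groups:
- name: db
  jobs:
  - name: database
    release: db-release
    provides:
      db_conn:
        as: backend_db       # alias the consumers will reference
        shared: true         # expose the link outside this deployment

# Consumer deployment: names the alias and the provider's deployment.
instance_groups:
- name: web
  jobs:
  - name: web-app
    release: app-release
    consumes:
      db_conn:
        from: backend_db
        deployment: db-deployment
```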
With existing links, there is less awareness of the impact of changes to a link definition or to a providing job. And there is less information about potential downtime, because unless you manually check, or you already know the deployments which consume the link, you might not know how much downtime there could be if a link changes. The current solution with existing links is redeploying the consumers of the link to get the new information after the link is updated. The Links API was, and currently still is, under development. It's mostly stable; by that I mean it has been developed for about a year, I think. There are still small updates going on, and there's talk about a Links API v2. Visualization, which I briefly touched upon, is one application of the new Links API. Operators can better calculate the footprint of their links and how many consumers of a link there are, so they can better gauge downtime. The Links API allows link creation outside of the manifests. So if you have the appropriate credentials to make the API call, you can create a link and, for example, get the address of a service instance you know about and start using it outside of a deployment, or within a deployment not managed by the same director. The Links API also allows monitoring link consumers. And now I'll pass it off to Gaurab to talk about some of these updates and how the Links API helped reimagine links. Thank you, Bozh. So the main problem with the previous link implementation was that the components associated with a link were generated during runtime, and the changes that get propagated between deployments were handled within the same deployment. There was no state management for that, except for cross-deployment links, where your deployment itself used to contain the link contents as part of the deployment, and we used to store that.
But this caused lots of issues, like multiple regeneration of the same link component and of other properties. So in order to refactor this whole thing, we considered which common properties within the instance groups should not get recalculated every time you deploy, and we fragmented the whole link component into various sections. Now we have link providers, with their own intents; then we have link consumers, and their association generates a link. So whenever we have a consumer that is associated with a provider, that is considered a link. And rather than adding this whole component and replicating it again and again on each instance, we associate it with the instance through a separate table. So it's only the association that is replicated, not the actual component. This also significantly reduces the amount of calculation time required for your deployment, because calculating a link itself is very costly. As previously said, what is a provider? We kind of reimagined the whole intent of what provider and consumer might be. A provider is basically whatever properties you want to expose, either externally or internally, as part of the provider component. If you don't want to actually expose any properties but just the IPs, you can still create a blank provider that will just expose the IP addresses, or the DNS addresses, or the instance IDs of your instances; and that can be consumed either within the same deployment or externally through the API. So other consumers can know just your IPs, not the additional properties.
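A "blank" provider of the kind just described might be declared roughly like this (the job and link names are illustrative):

```yaml
# Hypothetical job spec for a provider that exposes no properties,
# only the instances' addresses (IPs or DNS) and IDs.
name: registry
provides:
- name: registry_address
  type: address
  # no 'properties' list: consumers see addresses and instance metadata only
```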
Similarly for a consumer: your job specifies a consumer and lists the properties that you want to consume, and BOSH will add all the properties you specified in the consumer and update your deployment. There are a couple of objects associated by default with each link. One of them is the address, which gives you the DNS entries of the link you asked for. This address can be either IPs or a DNS name, depending on the provider deployment. So if a provider explicitly says "I just want to deploy with IPs", then when you query this address you would get just IPs, not a DNS address. You can also filter by AZs. Then there are property accessors associated with each link: if your link provides multiple properties, you can query those properties by name, and this will pull up your provider and extract the properties from it. You can also list the instances associated with a link, and each instance has some default properties, like its name, ID, index, and AZ. For AZs, you can specify multiple AZs and it acts as an OR operator: based on the AZs you specified, it will give you matching instances. Also, you can see which instance is your bootstrap node: a Boolean value indicating whether the current instance is the bootstrap instance or not. This property is mostly used in clustered environments, where the initial node needs to know that it is the first node in the cluster and does some configuration based on which one the initial instance is. So based on all this reimagining, we introduced the API, and I think Bozhidar will go forward and add more details on what the API endpoints look like and what properties are associated with them.
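To make the per-instance metadata just described concrete, here is a small Python sketch, not part of BOSH itself: it mimics how a consuming template might pick the bootstrap node and filter instances by AZ. The field names follow the shape of the link contents shown later in the demo, but treat them as an assumption rather than an official schema.

```python
# Illustrative sketch only: per-instance link metadata (name, id, index,
# az, bootstrap) is assumed from this talk's examples, not a formal spec.

def bootstrap_instance(instances):
    """Return the instance flagged as the cluster's bootstrap node, if any."""
    return next((i for i in instances if i.get("bootstrap")), None)

def in_azs(instances, azs):
    """Filter instances whose AZ is in the given list (OR semantics)."""
    wanted = set(azs)
    return [i for i in instances if i.get("az") in wanted]

instances = [
    {"name": "mysql", "id": "a1", "index": 0, "az": "z1", "bootstrap": True},
    {"name": "mysql", "id": "b2", "index": 1, "az": "z2", "bootstrap": False},
    {"name": "mysql", "id": "c3", "index": 2, "az": "z3", "bootstrap": False},
]

print(bootstrap_instance(instances)["id"])                  # a1
print([i["id"] for i in in_azs(instances, ["z1", "z3"])])   # ['a1', 'c3']
```

A clustered job would use the bootstrap flag exactly this way: only the instance where it is true runs the one-time cluster initialization.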
Thank you, Gaurab. So we sped through the beginning, which hopefully most of you knew about; the examples of links and the properties that Gaurab talked about are also listed on bosh.io. This presentation is also posted on the schedule website if you want to refer to it afterwards. In terms of the new Links API, this table shows the different API endpoints, and we'll go into more detail about each one. There are a few GET endpoints, which can list providers, consumers, and links within a deployment, and the addresses those links provide. There's one POST endpoint at the moment, to create a link, and one DELETE endpoint, to delete a link. At the moment, these APIs require admin privileges; we'll talk about how this might change with Links API v2. Jumping into the first one, listing the link providers: in terms of request parameters, you provide your deployment name, and you get a response similar to this, listing the providers in that deployment and some metadata about them. Similarly for consumers: the query parameter is again the deployment name, and you get a similar response about all the consumers within the deployment. The third endpoint is listing links, again within a deployment; you get a response with metadata. The IDs here are important, and they tie back to responses from the previous APIs: each entry has a provider ID and a consumer ID, as well as the link ID. Jumping into the POST: assuming we all have the right credentials, we provide the required parameters, which are the link provider ID and the name of the new consumer. The name is arbitrary; the type should always be passed in as external. If we post this, a new link is created, and you receive the ID of the newly created link. At the moment, there's only an API endpoint to retrieve the link address. It requires you to pass the link ID, which you might get from querying all the links in a deployment, or you have it from a link you just created.
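A request body for the link-creation POST just described might look roughly like this (field names follow the talk's description of provider ID, consumer name, and the external type; the ID and name values are made up, so check the Links API documentation in the resources for the exact schema):

```json
{
  "link_provider_id": "27",
  "link_consumer": {
    "owner_object": {
      "name": "external_consumer_1",
      "type": "external"
    }
  }
}
```

On success, the response carries the ID of the newly created link, which you then pass to the address and delete endpoints.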
There's an optional parameter for AZs here: you can provide the name of one AZ or multiple, and you can also filter by health status; here are some of the accepted values. As a response, you get the DNS address of the link. For all of these examples we use a relatively new feature of the BOSH CLI, bosh curl, which has been available since BOSH CLI v5.3.1. Lastly, how to delete links. There's an API endpoint to delete, and it will only act on links which were created using the Links API. So if you try to delete a link which is consumed and provided within the manifests, like the old-style links (I'm using "old" just as a word; they're certainly still being used), that is, if you try it with the ID of an existing link that was not created with the API, the request will fail. This endpoint only deletes external links created through the API. You get these response codes in various scenarios; if it is successful, you get a 204. We'll briefly touch upon some of the improvements coming up in Links API v2, for which work is currently ongoing. One is improved authorization for endpoints. As I previously mentioned, right now you usually need to be the admin user to successfully list endpoints and create new links using the API. In v2, you would have more deployment- or team-specific permissions, so it's a little more granular. There's also a new instances endpoint. Details about all of these can be found in the bosh-notes repository; we'll have a link to that at the end of the presentation. And as usual, you can reach out to the BOSH team on the open source Slack, so you can participate in the conversation about the direction of Links API v2 and share what's going well and what's not. One thing worth mentioning, which is connected in a way to the Links API, is links and variables. The recent work in this area allows variables to provide parts of themselves; right now it's limited to certificates.
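As a rough sketch of the address and delete calls just described (the paths, the AZ parameter, and the 204 come from the talk; the link ID and the returned DNS name are made up for illustration):

```
# Retrieve the address of link 27, filtered to one AZ:
#   bosh curl '/link_address?link_id=27&azs[]=z1'
# Sample 200 response body:
#   { "address": "q-a1s0.mysql.default.dep-1.bosh" }

# Delete a link that was created through the API:
#   bosh curl -X DELETE /links/27
# Returns 204 on success; fails for links defined in manifests.
```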
But, for example, they can provide the SAN or common-name portions of a generated certificate, and that can be consumed within links. In a similar fashion, links can be consumed within variables as well, and we'll see an example coming up next. This is still under development, but the basic functionality has been available since BOSH director v267. To show you an example of this: we have the usual variables section, and as you can see, this is a generated certificate which consumes an alternative name from a link. This could be used within service brokers, for example. And now I'll pass it off to Gaurab to talk about this application of links. Thank you. So there are various use cases of this API endpoint that were taken into consideration. First of all, the most important was designing it for service brokers. Any service broker which is not maintained within the BOSH environment, or within CF, can query the endpoint and create links for the components which reside within the BOSH world, and share those contents with different components which the service broker itself is managing or aware of, so that a component that resides within the BOSH world and one outside the BOSH world can talk to each other. It also helps decouple your service instances from your app instances: any changes to the service instances, which are managed outside the BOSH world, can be made separately from the app instances managed by a different world. So it acts like a glue point: you can create an external link and then hand those link properties to an app, which means they don't have to be co-located within the same ecosystem, and changes on the BOSH side have little impact on the app. And with the DNS feature already in BOSH, the addresses associated with the links that are created are DNS addresses.
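A manifest fragment like the one on the slide might look roughly like this (the variable and link names are illustrative, and the `consumes` syntax here is my reading of the feature, so verify it against the bosh-notes resources at the end):

```yaml
# Hypothetical variables section: a generated certificate whose SAN
# is taken from a link-provided address (basic functionality since
# BOSH director v267, per the talk).
variables:
- name: broker_ca
  type: certificate
  options:
    is_ca: true
    common_name: broker-ca
- name: broker_tls
  type: certificate
  consumes:
    alternative_name:
      from: broker_address    # link whose address becomes a SAN
  options:
    ca: broker_ca
    common_name: broker.internal
```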
So rebinding an app is no longer going to be a significant problem, because if the address associated with the service instance, or the link, is a DNS name, you don't have to rebind your app every time you redeploy your service. Less downtime for your app, and you can manage your services better, in the background, without impacting the app. Another very significant use case of this API endpoint is visualization. It helps you understand what your deployment is actually exposing. When you have something like cf-deployment, which has around 90 instances across 15 to 20 instance groups, it's significantly hard to see which links are being provided and which properties associated with those links are being exposed, and whether there are any security issues, like exposing your password through an endpoint. This visualization, backed by the API endpoint, can help you understand which properties you're exposing to the external world or outside the BOSH environment, and it also helps you understand what the consumers and providers associated with the deployments are doing and what they are sharing. For more visualization details, and what it might actually look like, you can go to our colleagues' talk, which is in the daily room at 2:30. The other use case is calculating the impact of deployment changes. With the API endpoint you can calculate the footprint of the changes a deployment can cause within the foundation. A single password change might look very insignificant to you, because it's just your own app; but if 30 other deployments are consuming your properties, then its impact footprint is significantly larger. And as a developer you are usually unaware of this footprint: if your property changes, what significant changes other deployments might have to make to keep consuming it.
So this gives you a better impact calculation before you make any changes, and you can also communicate with the other consumers, saying: hey, we're changing this property, it's not going to be called "port" anymore, it's called "super_port" now. These kinds of changes can be easily shared with other teams, and you can better manage downtime or anything else associated with the deployment. On-demand brokers can now create multiple links for the same service instance, so they can filter which properties they want to share. You can also request a single AZ: if your service instance is deployed across three AZs and you want app A to associate with just AZ one, you can filter which instances you want to associate with the app simply by filtering the AZs. So those are the case studies for the Links API, and I guess now it's demo time. The setup for the demo: we have two different BOSH directors, and we are using a MySQL deployment to share cluster changes. We deploy two different MySQL clusters and then, with the Links API endpoint, create external links from director one to director two. This gives you the link, and the properties that are actually required for your clusters to communicate with each other. And since live demos are always very risky, we have a recording. Okay, so at this point we have director one, that is bosh-1, and another director on the left side, bosh-2. The first one is already deployed with the MySQL cluster environment and has two running instances. So what we're going to do is look at the link providers associated with them through the API endpoint, and this gives you a list of all the providers in deployment one. The provider we actually want to consume needs to be shared: if the shared flag is not turned on on your provider, then
creating a link against that provider will cause issues and you will get an error. As an example, we selected this one, because its shared flag is false; so if you try to create a link with it, it will fail. Now we have selected another provider, which is shared; that was the one we wanted to create a link from, and its ID is 27. Downstream, we now try to create a link with the payload. We got an error for that, and when we change the ID in the payload to 27, it actually creates the link. These are the link properties; this is the actual link content. We filtered down to the link content to see which properties are being exposed and which properties you get by default for each link. You have instances; you have addresses associated with each instance. Since we are explicitly specifying static IPs, you only see IPs here, plus the other properties we mentioned in the previous slides, like AZs and whether a node is the bootstrap node or not. Here, one of the instances has bootstrap set to true and the other to false, along with the other link properties associated with this link. The properties we wanted were just the Galera cluster health properties, and we update those in our second manifest, so we're going to deploy the second cluster. To save time, we already updated all the properties in it, so we'll just deploy at this point. Before that, we were checking cluster one to see whether the instances get connected with the change in properties. We can see now, from the previous state, the cluster size increasing from two to three, and there is a deployment going on. So as soon as the instances get updated, they start connecting back to this cluster, even though they are managed by two different BOSH directors. So you
can have downtime in one of the clusters and still have apps running, because the other cluster is still fine and managed by a different BOSH director. This update takes time depending on the IaaS, so we're going to skip ahead, because I think we are running out of time. Just to check whether the Galera status command is lying to you or not, we created tables in cluster one, and they should automatically get synced to cluster two without any changes, because the clusters are now connected to each other. So we create a table in one cluster and query it on the second cluster. We can see the database and table getting created on cluster two, and when you add entries to this database, if you insert a value into cluster one, it should get synced to cluster two. Yeah, that should be it for the demo; I think we are done with that. The last slide is just some resources, which you can access from the presentation uploaded to the website. I think we've got maybe a minute or two for some questions. Thank you very much.