I work for a company called Bink, and this presentation is about how Linkerd helped us partner with Barclays and deliver the service we required on a cloud native platform.

So I'll start with: who am I? I head up Infrastructure at Bink, and I've been working there for three years now. When I started we were very much in our infancy on the DevOps side of things. We've since successfully migrated from an on-prem to a cloud native stack, and this presentation covers some of the problems we found along the way and how Linkerd helped address them. Before Bink I worked at Oracle on MySQL for seven years as an implementation engineer and pre-sales consultant, working with various aspects of MySQL on various platforms. Before that I worked for Level 3, a tier-one ISP, on their CDN function. So I've done a few different things, which basically means I know a little about a lot.

So what does Bink do? I've always thought Bink is a good concept with a bit of a silly name. Essentially, Bink links loyalty and membership cards to payment cards. I'm sure anyone in the audience, or listening to this now, has loyalty cards, whether for coffee shops or supermarkets or anything like that. Actually remembering to carry these with you and present them can be a problem. I know personally I haven't always got the loyalty points I should have, or taken advantage of loyalty schemes, simply because of the apathy of having to carry the cards around or remember to present them at the time of payment.

So how does payment linking work? With Bink, the customer links their payment card to their loyalty profile.
This is done either inside the Bink app or through one of the banking channels we have, for example a mobile banking app. So you've added your loyalty cards and linked them to your payment cards, your Visa cards for example. Then all I need to do is shop as I normally would, whether contactless or chip and PIN. Bink processes that transaction data and matches it to a customer profile. We then send this information to the merchant, the merchant processes the transaction data linked to the profile, and loyalty points can be issued automatically. So essentially you can just pay with your payment card as you normally would, and you don't need to present any of your loyalty cards at the point of sale in order to get the loyalty points.

One use of payment linking is for a merchant to upgrade an existing loyalty scheme: very simply, we can plug in and transform ongoing schemes with payment linking, very frictionless and very easy. We're also aware that not all merchants, especially the smaller ones, will have their own loyalty scheme at this stage. Payment linking allows us to create a new loyalty scheme, with Bink providing that scheme to them, so they can still get the benefits of payment linking.

Bink partnered with Barclays in 2018, and we are projected to reach 25 million customers in the UK by Q3 2022. We have also just signed a deal with Lloyds Bank, so we have quite a large market share now. The Lloyds Banking Group deal will boost our overall customer reach to over 25 million people in Q3, and provide extra functionality inside the Barclays and Lloyds banking apps so you can link your loyalty cards to your payment cards and shop without having to present the loyalty card itself.
So it's quite a simple concept, one I really thought was interesting, and like anything it requires a technology stack to deliver it. So how did Bink build out the infrastructure to support all this loyalty card payment stuff?

I'll start with a brief history lesson. In terms of DevOps, I'm old enough to remember when you had to provision bespoke hardware in server rooms; one of my first jobs was actually working with IBM mainframes at IBM. Essentially, if you wanted to deliver anything you had to physically buy the hardware and install it in your own office server room. It was very bespoke, and the time to build and the time to scale were massive. I'm sure most of the audience are too young to remember any of that. In the 90s and 2000s there were ISPs and co-location facilities where you could house your own hardware in a data centre. When I joined Bink, this was still pretty much what we were doing: we had bought hardware, rented load balancers and so on, and that hardware was what we provisioned our services on. Very quickly this became a bottleneck, and even in a data centre bad things can happen; this is an example of what happened in one data centre a few years ago. So you have to provision for failover, resilience and all that kind of stuff yourself. We weren't at the point of wanting to do that. We were a fintech startup, and we didn't particularly want to re-architect application stacks from the ground up, which is obviously what you used to have to do.
Very soon, given the complexity and how long it would take to grow (we were talking weeks if you wanted an additional server added, things like that), we started to look at the cloud. Honestly, to be totally transparent, we chose Azure as a cloud provider because at the time they were prepared to chuck lots of free credits our way, giving us the time and support to get onto a cloud stack and migrate our applications over at zero cost. There are lots of arguments about which cloud provider is best, Amazon versus Azure versus Google Cloud, but for us it's definitely been a learning journey. I think Azure had some catching up to do in terms of their Kubernetes implementation; we actually ended up rolling our own clusters, and this is where we implemented Linkerd. We are currently looking at AKS and migrating onto it now, as we feel Kubernetes as a service has caught up with what we want to do, but the long and short of it is we ended up on Azure.

What we found very soon is that the cloud isn't perfect, and at Bink we rapidly discovered this. One of the benefits of deploying on bare metal is that, while you have to deploy everything yourself, you also have access to everything. If there are slow network issues, you generally own the network hardware; if there are issues with the OS, you're the one who installed it, so you have a good view of everything that's going on. One of the issues we found quite quickly was transient network failures. Applications would show gateway timeouts. We were seeing DNS timeouts. Essentially, our application stack hadn't been built with any kind of resilience in mind, and the DevOps team were under a fair bit of pressure over how we were going to address this.
At the time we were rapidly growing, with new features all the time, so the thought of having to go back to our application stack and code in retries and resilience, service by service, wasn't really plausible. We even got to the point where we considered moving to another cloud provider, or even moving back to bare metal colo, just so we felt we had more control over what was going on. This isn't a particular criticism of Azure; we've since seen this across various cloud platforms. But in essence we weren't able to get the uptime and the reliability we needed, and we had applications hanging, things like that.

As we're working with banks, it very quickly became a requirement to talk TLS across everything, even internally, so every microservice and every application had to be encrypted point to point. We looked at configuring Gunicorn for this, but that was looking quite difficult to do, and getting all of our applications to talk to each other that way was going to be hard. So in essence we were really struggling to deliver the service reliably, and as I said, questions were being asked about whether we needed a platform redesign, or about the suitability of the cloud hosting platform. Either way, we had problems we needed to address.

So one of the things we did was look at the various products that were out there, and Linkerd was one of them. Our senior DevOps engineer looked into Linkerd as well as some alternatives, and I remember him coming to me almost within a week and saying how quickly he was able to implement it and how the problems literally went away. By introducing a service mesh we could implement the application retry logic in the mesh, and we saw immediate positive effects.
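To make the retry effect concrete, here is a minimal sketch (not Bink's or Linkerd's actual code) of how transparent retries turn a roughly 90% per-request success rate into near-100%, assuming the failures are transient and independent:

```python
import random

def flaky_call(failure_rate=0.1, rng=random.random):
    """Simulate a request that transiently fails ~10% of the time."""
    return 200 if rng() >= failure_rate else 500

def with_retries(call, max_attempts=3):
    """Retry on a 5xx response, roughly as a mesh proxy's retry logic would."""
    status = call()
    for _ in range(max_attempts - 1):
        if status < 500:
            break
        status = call()
    return status

random.seed(1)
raw = [flaky_call() for _ in range(10_000)]
retried = [with_retries(flaky_call) for _ in range(10_000)]
raw_rate = raw.count(200) / len(raw)
retried_rate = retried.count(200) / len(retried)
print(f"without retries: {raw_rate:.1%}  with retries: {retried_rate:.1%}")
```

With independent 10% failures and up to three attempts, the chance of all attempts failing drops to 0.1 cubed, one in a thousand, which is why the effect in production felt immediate without touching application code.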
We found Linkerd really easy to install, and the automatic proxy injection mitigated any complex setup. We were able to implement mTLS through Linkerd: because Linkerd was the service mesh, any application talking to any other application was automatically TLS encrypted, which again was fantastic for what we needed.

Another thing: as we were maturing, and when I came in, we really didn't have any SLO metrics or decent dashboards or alerting, and it was quite obvious to us that we needed to do something. Because Linkerd sits as a service mesh, it gave us the perfect place to develop our own internal SLOs and dashboarding, and because of the Prometheus integration we were able to set up really good alerting on issues that Linkerd would raise. Fairly recently we've invested in a subscription to Buoyant Cloud, which gives us a really nice dashboarding feature and the ability to go further: looking at the metrics, seeing what's going on, sharing dashboards, and having good insight into everything that's happening. So for us it was really fantastic, and really was a quick win.

Everyone loves a live demo, so instead of just talking about what Linkerd has done, I want to give you a live run-through of how implementing Linkerd really is quick and easy, and, if you have an application which is suffering failures, transient network failures or anything else, how quickly it can be done. I always swore I'd never do a live demo, but we'll have a go, so this will be up next. What I want to illustrate very simply in this demo is how easy it is to deploy Linkerd as a layer over the application to mitigate the connection issues we were seeing.
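As a sketch of how that automatic proxy injection is switched on (the namespace name here is hypothetical, not from the talk), it comes down to a single annotation telling Linkerd to inject its sidecar proxy into every pod created in the namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: chaos-demo               # hypothetical namespace for the demo apps
  annotations:
    linkerd.io/inject: enabled   # Linkerd injects its proxy into new pods here
```

With the control plane installed (linkerd install piped into kubectl apply), any workload rolled out into this namespace gets the proxy, and with it mTLS and retries, without any application changes.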
To give you a bit of context, what we were seeing across the cloud was transient network errors: occasional timeouts, connections being dropped, stuff you'd normally expect, although to be fair we didn't expect it to be quite so bad initially. As stated before, we could have implemented something on the application side, software retries or retry logic in the application stack. The issue is that ours is a microservices architecture, with various different applications across various different squads, so we'd have had to implement it in different ways, and there were time constraints around this as well.

What I'm going to show is a really simple demo that one of my DevOps engineers put together. Effectively, it's an application that's designed to fail one in ten times: it makes a request, and one in ten times it gets a 500; the other nine times it gets a 200. This simulates the application dropping one in ten connections. I'm then going to apply the chaos-server service profile, which simply deploys the Linkerd service profile into Kubernetes, and then I'll tail the logs, and you can see how instantly Linkerd handles the failures with retries and gives us an effective 100% success rate.

So, here's one I prepared earlier. If you run kubectl get pods, there are no resources available yet, so we'll simply apply the chaos client, and do the same for the chaos server. This is a very simple client-server application where the client makes a request and the server returns a response. If we run kubectl logs for the client, hopefully that should be up... it's not quite up yet, give it a minute. In the meantime we'll do a get pods and you'll see that they're all starting or running and available in there. Yep, there we go.
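The demo app's behaviour can be sketched like this (a toy stand-in, not the actual demo code): every request has a one-in-ten chance of returning a 500, and the app keeps running counts of each status, which is what the tailed logs show.

```python
import random

class ChaosServer:
    """Toy stand-in for the demo app: each request has a one-in-ten
    chance of returning HTTP 500, otherwise it returns 200."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.counts = {200: 0, 500: 0}

    def handle(self):
        status = 500 if self.rng.random() < 0.1 else 200
        self.counts[status] += 1
        return status

server = ChaosServer()
for _ in range(1_000):
    server.handle()

total = sum(server.counts.values())
print(f"200s: {server.counts[200]}  500s: {server.counts[500]}  "
      f"success: {server.counts[200] / total:.0%}")
```

Without a mesh in front of it, the client sees this raw roughly-90% success rate directly, which is what the first half of the demo shows.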
So you can see here that the number of 200s returned is going up, but you'll also see that the number of 500s is going up as well, along with the total number of connections. If we do a watch on linkerd viz routes, you'll see as this comes up that we've got an effective success rate of 89% and an actual success rate of 89%. So pretty much one in ten transactions is failing, and this is pretty much what we were seeing in production. By simply applying the chaos-server service profile, Linkerd now manages those connections. If we enable that, you'll see straight away that the number of 500s is now pegged at 976, and as I reload you'll see the effective success rate going up, 91%, 93%, and it keeps growing as more of the retried requests succeed. Up to 95% now. So it's that easy and effective to apply Linkerd to give you that level of resilience and service retries.
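For reference, the kind of service profile applied in the demo looks roughly like this (the service name and route are assumptions based on the demo, not the exact manifest); marking a route as retryable is what lets the Linkerd proxy retry the failed requests, and the retry budget caps how much extra load retries can add:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Must match the service's in-cluster FQDN; "chaos-server" is assumed
  name: chaos-server.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /
    condition:
      method: GET
      pathRegex: /
    isRetryable: true          # the proxy retries failed requests on this route
  retryBudget:                 # caps retries so they can't amplify an outage
    retryRatio: 0.2            # at most 20% additional requests as retries
    minRetriesPerSecond: 10
    ttl: 10s
```

Once applied, linkerd viz routes reports both the actual success rate (what the server returned) and the effective success rate (what the client saw after retries), which is why the two numbers diverge in the demo.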
As I stated before, there's an additional benefit for us: it also provides TLS encapsulation for every point-to-point connection, so it's really straightforward for us to reach the level of compliance we need without having to worry about application-level security between all of the microservices. What this is now showing you is that one in ten requests is still effectively failing, but because Linkerd is doing the service retries, we're now at 100% success.

Now, I appreciate this is a crude, simple demo, but hopefully it illustrates why Linkerd was a no-brainer for us, and how simple and effective it was as a service mesh on top of what we were doing, with zero disruption to our application stack. On top of that, the newer Buoyant Cloud gives us really good dashboard stats as well. I'm sure Linkerd are well represented here at KubeCon, so what I'd recommend is that you go and have a look at the Linkerd booth, and have a look at Buoyant Cloud. For us, having metrics in the service mesh is one of the best ways of effectively seeing how applications are running, and one thing that's probably out of scope for this demo is how we monitor SLOs and service-level metrics using Linkerd and the service mesh. This was just quick and dirty, showing you what we can do. If you've got any questions, hopefully not too complicated ones, then please ask. Otherwise, I know this sounds like a complete sales pitch for Linkerd, but honestly, where we've used open source technology that has really helped us, I always feel we should try and give something back and talk about the technology. For me this was just a no-brainer, and it really benefited the business. Thank you very much, cheers.