Okay, should we get started? Welcome. I won't bite you, so if you feel like it, move closer, and we might even need a microphone, right? If you attended the keynote this morning, bear with me, because I'm going to go through the same material now, but at half the speed, and dig a little deeper into certain topics that are interesting to discuss. And I'm sure there will be plenty of time after my rant to ask questions or bring up topics that we can discuss. So lean back; if you have seen it before, you have seen it before, the pictures haven't changed, I think. Anyway, my name is Claes, and I am CIO of SBAB. SBAB is a mortgage bank in Sweden. We're about 500 people, plus a little bit, and have around 400,000 customers, a little less than that. So we're not huge, quite small when it comes to footprint, but our balance sheet is thick, with about 35 billion euros in the portfolio, which makes us number five in Sweden in economic terms. We were founded in 1985, which is not so long ago, but in 30 years you can build up a huge pile of legacy if you really, really work hard on it, and that's what has hit us. We have been online from the start: no offices, no physical meetings. The customers called us in the 80s, and they still do, to get their loans, but along the way the internet, smartphones and such came along, so of course we offer services through websites, smartphone applications, social media, et cetera, as well. We have found, and we are dead sure of this, that for a small digital player in a digital market, which banking is, the recipe for survival and victory in the end is to be damn fast.
So speed and flexibility are what will separate the winners from the losers in the end, and we can see that in other markets as well. Remember Amazon when they came, for example: they built up a tremendous ability to push things to production really fast, Facebook as well in the early days, and so on; there are many examples of this. So speed is the winning concept for us as an online bank in Sweden. Speed in our book is made up of two main ingredients: architecture and way of working, and they are about equally important, according to our experience at least. I will touch on both, and jump into the way-of-working part first. Like many digital companies, we try to learn every day to adopt the agile toolkit in order to get fast. There are four important areas; there are more areas around agile, of course, but these are the ones we focus on and try really, really hard to learn. The first one is to build up enough courage and guts to put things in production at a very early stage, and very small, to try out what happens: how do the customers react to what you deploy? So find small pieces of code or services, put them out, measure the outcome, measure the behavior, and be sure you can withdraw them fast if it goes wrong. That grows confidence in deployment, in our experience at least. So, quick on the puck, as we say in ice hockey in Sweden: get the stuff out, see how the customers react, and navigate further from there. A very important skill to grow, and as I said, it brings confidence into your deployment and your development. It's also of great importance to build sensible teams that can do their stuff to a great extent without having to depend so much on others.
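As a rough illustration of the "put it out small, measure, withdraw fast" idea, a percentage rollout flag with a kill switch lets you expose a new feature to a slice of customers and pull it back instantly if it misbehaves. This is a hypothetical sketch; the class name and the "new-onboarding" feature are invented, not SBAB's actual code.

```python
import hashlib

class FeatureFlag:
    """Percentage rollout flag with a kill switch for fast withdrawal."""

    def __init__(self, name: str, rollout_percent: int):
        self.name = name
        self.rollout_percent = rollout_percent  # 0 disables the feature entirely

    def kill(self):
        """Withdraw the feature immediately if the metrics go bad."""
        self.rollout_percent = 0

    def is_enabled(self, customer_id: str) -> bool:
        # Hash the customer id so the same customer always gets the same answer.
        digest = hashlib.sha256(f"{self.name}:{customer_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < self.rollout_percent

flag = FeatureFlag("new-onboarding", rollout_percent=10)  # try it on ~10% first
exposed = sum(flag.is_enabled(f"cust-{i}") for i in range(10_000))
flag.kill()  # something went wrong in the metrics: withdraw instantly
```

The point of hashing rather than random sampling is that exposure is stable per customer, so the measured behavior of the exposed group is consistent from request to request.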
So what we typically do is find a set of customer processes, value streams or whatever you call them, and build teams around those that are fully able to take the customer journeys from idea to launch, right? We have corporate goals; these teams have their own goals, very often customer-oriented goals, and they do whatever they can with the staff in the team to fulfill those goals. That means the teams ideally are populated with UX, business experts, service experts, product experts, maybe hackers, all working together as a team to get things out as quickly as possible and then take care of them afterwards. That last part is also very important for us: not handing over to somebody else, not handing over to maintenance. Handover may not be the primary enemy of speed, but it's a very bad thing to do if you want to be fast. The first time you hand something over to maintenance, it will all be happy, dandy, nice and easy, and the second time maybe as well. By the tenth time you do it, the maintenance department will have grown several times as big as your development teams and will start to become a bottleneck, at least in our experience. And I think this is the way most modern companies actually try to run their business. So we have taken on the whole tech stack ourselves, except the bottom layer, which I will get back to; the things that mean something for the customers, everything that touches the customer experience, we do ourselves. It is also very important that these teams are built in such a way that they are not mutually dependent. Of course it's not possible to get rid of every dependency, but to as low a degree as possible, so they can run autonomously without having to ask somebody else along the way, without having to wait for somebody to fix something for them next week; they should be able to run their own shop to the end. It doesn't work 100% ideally either; there is no ideal world.
So, to cope with the case where the red team needs to fix something in the blue team's code, we have a kind of in-house open source philosophy: it's okay to fiddle around in somebody else's code as long as they are the ones who accept the pull request and take it into the branch. So autonomous teams, fully competent and independent, are crucial to reach the speed we want. It looks really easy in theory, and of course, why doesn't everybody do it? But if your business started off in a certain manner 20 or 30 years ago, very product-oriented normally, it's really, really hard to twist people's minds around to: hey, I'm in this team, this is what I'm going to do most of my time at work. I work in this team, I'm not running around everywhere else involved in a thousand projects; it's the line and the team that really matter. That might even result in people not being 100% loaded all the time, which is quite nice actually, but they are at hand when needed, which is crucial for lead time minimization. The third area is for me and other leaders to learn to lead by goals and not throw activities onto these teams: be very clear and elaborate about the goals, and make sure we have the KPIs in place to measure progress towards those goals, and that they are continuously measurable. A typically bad KPI is "we put this thing in production in four months". A good KPI is something you can follow on a daily basis and discuss with the team, to see if things start to go bad or start to go even better than expected. It's a means, a tool both for assistance from us as leaders and for celebration when the KPIs actually beat the trajectory. So: set goals, make the team buy into the goals, get the acceptance, and then the goal is holy. That's where you're running, right?
And you do your utmost as a team to reach it. This also creates innovation. It's not given from the beginning how to reach the goal. Say, for example, that we want to onboard twice as many customers as today with the same staffing in customer service. That's a good KPI: it's continuously measurable, you can immediately see if you are on track or off track, and if the team gets off track, they have to collaborate to find better ways to get back on track. So it's a good creativity tool as well. For us as managers, it's sometimes quite hard, as maybe some of you know, to keep out of the team's doings, the how and the what. The HiPPO factor: I'm sure you've heard about the HiPPO, right, the highest paid person's opinion? So avoiding hippoing, banning hippoing from happening, is also a good cultivator of speed. Finally, it's also of great importance to learn to deliver value early in the process. If you do a big initiative, it's not as good to start over here and deliver something two years later; instead, focus on learning how to deliver things along the way that the customers or users can actually use. Extremely important, because otherwise many projects die halfway because they're not relevant anymore. On to the architecture part. Like many other places (okay, we're quite a young bank, but if you can imagine the amount of legacy that can be built up in 32 or 33 years, it's surprisingly much), we, like everybody else in the old banking business, have large monolithic systems where people have put lots of functionality with cross dependencies, stuff that doesn't really belong in these systems, but hey, what to do? There were no other systems 20 years back, maybe. So functionality has grown, and grown wild, which has resulted in huge chunks of code with maybe six or 700 methods nested in evil dependencies. And it's a monolithic deploy structure as well.
You can't choose to deploy parts of it; you have to throw it all into production at the same time, every time. And when several teams are working in different corners of these big monoliths, it's almost certain that things will go wrong when you put them in production. Then rework awaits, chaos awaits, and a bad customer experience is around the corner. So instead of that, we strive to build something decoupled: small, neat, nice microservices that are independently deployable and where teams can take ownership. Decoupling is the watchword, and it can't be repeated too many times. It's true. So for our part, what we try to do is, first of all, determine where we want to be fast. It's towards the customer: in the integration layers, in the customer interfaces, and also in the customer service tools, since we're still very heavy on telephones. The mortgage business in Sweden is telephone-heavy across the board, because it's the biggest deal people make in their lives, obviously. At least these generations want to speak to somebody, not just a chatbot, before they take out several million crowns of loan. So customer experience and customer service tools, that's where we focus on being fast: all digital channels, for example, and all the integrations that glue these things together to ensure we have a 360-degree customer view across channels. Meanwhile, we leave the stuff in the basement to vendors and partners who take care of it for us, like basic bank functionality, for example; there are several large vendors that do that better than we can, and it doesn't need to change so much. If it changes seldom, it's standard; let somebody else handle it, and focus your own power and creativity on what really matters to you. That's the way we reason around this.
So a little more than a year ago, we started to use an OpenStack platform, to be able to deploy our decoupled services, our microservices, with a great degree of automation, by ourselves, and with control over what we do in a way we haven't really had before. To be fast, we try to get out of our old legacy systems by building new, sensible microservices alongside them, rerouting the traffic piece by piece, and so slowly getting out of the old and into the new. That is as opposed to big-bang releases, which we avoid, fear and hate; big bangs are seldom the recipe for success, in my book at least. When you come to a decoupled infrastructure and decoupled software, it's important to enable teams to work independently on their components. Remember, I talked about customer journeys, that each team should concentrate on their value streams, their customer journeys; the same goes for components. So the components supporting the brown customer value stream are handled by the brown team, and the purple ones by the purple team, and they should not have to go to each other every day to collaborate, but instead run their own business to as great an extent as possible. So ownership of components, independence between components and independence between teams are good ideas when you want to be fast. Then you can have 20 teams running in parallel, and maybe four of them are facing trouble, but the remaining 16 are doing really well. So all in all, the speed goes up, right? And the throughput accelerates as well. An important detail in the development process is to automate everything that is automatable. Soon, probably in five years or so, the coding itself might be automated to some extent, but for now we only automate deployment.
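The "reroute the traffic piece by piece" approach described above is often called the strangler-fig pattern. A minimal sketch of it is a routing table that sends migrated paths to new microservices while everything else still falls through to the legacy monolith; the path prefixes and service names below are invented for illustration.

```python
LEGACY_BACKEND = "legacy-monolith"

class StranglerRouter:
    """Route migrated path prefixes to new services; default to the monolith."""

    def __init__(self):
        self.migrated = {}  # path prefix -> new microservice

    def migrate(self, prefix: str, service: str):
        """Cut one more slice out of the monolith."""
        self.migrated[prefix] = service

    def route(self, path: str) -> str:
        # Longest matching prefix wins, so /loans/quotes can move before /loans.
        matches = [p for p in self.migrated if path.startswith(p)]
        if matches:
            return self.migrated[max(matches, key=len)]
        return LEGACY_BACKEND  # everything not yet migrated stays on the old system

router = StranglerRouter()
router.migrate("/loans/quotes", "quote-service")
router.migrate("/customers", "customer-service")
```

Because the default is always the legacy system, each migration step is small and reversible: deleting an entry from the table routes that slice back to the monolith, which matches the "no big bang" philosophy of the talk.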
We build Jenkins pipelines, auto-build everything, auto-test almost everything, have automated security checks via Sonar, build Docker containers, deploy them automatically onto our test, pre-prod and production environments, and also automate the alarm setup with Sonar and Grafana around these services. So the developer commits, pushes the stuff to Bitbucket and merges it to a development branch, and everything runs automatically after that. Very important, because the waits in manual deployment really don't bring any value. On the contrary, the more you wait, the slower you get, obviously, and waiting is the number one enemy of speed: waiting for somebody to put your code in production, or for some expert who is not at hand when you need him or her, is really, really bad for speed. So this is basically what we do. We orchestrate the Docker containers with Swarm, which we find is good for us: quite nice and easy, not so bloated and techy. The results from these exercises, which I showed in the keynote as well, have for us at least been astonishingly good, I would say. Up until September last year, we made monthly releases: gathering up code over a whole month, baking it together over a weekend, taking down the systems, and the customers were sad, or rather they counted on it: okay, the bank is obviously not up during the weekend anyway. So we shuffled it all into production, prayed to God, turned the lights on, and things went bad all the time. We broke the internet site, broke the customer service tools, broke everything, every time, and people kind of counted on that. So we extended the testing period by five or ten hours every fourth weekend. We couldn't go on like that if we wanted to be fast. So we banned that in October last year and forced ourselves instead to deploy things one at a time, when they were ready, independently.
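The chain just described, where everything runs automatically after the merge and nothing reaches production past a red stage, can be modeled as a toy pipeline runner. The stage names mirror the talk (build, tests, Sonar scan, Docker build, deploys), but the stage implementations here are stand-in lambdas, not real build steps.

```python
def run_pipeline(stages):
    """Run stages in order; return (succeeded, log). Stop at the first failure."""
    log = []
    for name, stage in stages:
        ok = stage()
        log.append((name, ok))
        if not ok:
            return False, log  # nothing after a red stage runs
    return True, log

stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("sonar scan", lambda: True),
    ("docker build", lambda: True),
    ("deploy to pre-prod", lambda: False),  # simulate a failing deploy
    ("deploy to production", lambda: True),
]
ok, log = run_pipeline(stages)
```

The essential property is the early return: a failed pre-prod deploy means the production stage never executes, which is what makes it safe to let every commit flow through without a human gatekeeper.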
Over the last quarter of last year, we benchmarked how much we really pushed out, measured in the number of user stories, and found it to be around 140 per month in October, November and December. So we set a goal to double that during this year, and that goal was almost smashed already in March, because people really like working this way. The development teams can do the stuff themselves; there's no man or woman in the middle making it happen for them. Push it out to production, see what happens, get the alarms in place, see how it reacts, see how the customers react. That's really a driver of creativity. So we established a new level around March or April, around 250, sub-300 something, and then after the summer the teams started to excel and got even more stuff out. So I had to raise the goal. I pushed 500 onto them, but had to back off from that, so we negotiated around 400 by the end of the year: not releases, that's not completely true, but user stories per month by the end of the year. So hey, 400, cool, buy in on that. And in October they smashed it with 421, which in our terms is like tripling the delivery capacity over one year without adding many more hands on deck, which is fantastic in my opinion, really, really nice. Now, throughput is not the ideal KPI, of course. So we are collaborating on finding a good way to define lead time and to measure it in a meaningful way, because in my book at least, lead time is the ultimate KPI, and median or average lead time with as small a standard deviation as possible is the tool you really need to drive speed from idea to launch, from initiative to presenting it to the customers. Once we have learned this, we can scale, right?
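The lead-time KPI discussed above can be sketched very simply: given timestamps for when a user story was started and when it reached production, compute the median and mean lead time plus the standard deviation. The sample dates are invented; this only illustrates the measurement, not any actual SBAB figures.

```python
from datetime import date
from statistics import mean, median, stdev

def lead_times_days(stories):
    """stories: list of (started, launched) date pairs -> lead times in days."""
    return [(launched - started).days for started, launched in stories]

stories = [
    (date(2018, 3, 1), date(2018, 3, 6)),
    (date(2018, 3, 2), date(2018, 3, 12)),
    (date(2018, 3, 5), date(2018, 3, 9)),
    (date(2018, 3, 7), date(2018, 3, 28)),
]
days = lead_times_days(stories)
report = {
    "median_days": median(days),
    "mean_days": mean(days),
    "stdev_days": stdev(days),  # small spread means predictable delivery
}
```

Tracking the standard deviation alongside the median is the point the talk makes: a low median with a huge spread still means customers cannot rely on when things land.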
The more legacy we get out of, the more microservices we produce, the more ownership we can portion out, the more we can scale and the more teams we can onboard, because you know as well as I do that just throwing in 100 new developers won't make the ship go faster, most likely the contrary. So scaling with components as small as possible, with ownership and a good organization along the customer journeys, is probably not the worst way to reach the speed we want to achieve, anyway. So I think that's about it, and we still have quite a bit of time. Obstacles: let me take a few real examples, because it hasn't exactly been a walk in the park doing this. For example, when speed really took off in August and September, we also saw the number of incidents we produced in production increase. The bugs were kind of skyrocketing in line with the deployment speed. So to cope with that, we introduced, believe it or not, an old-fashioned CAB, a change advisory board, for the legacy components that we put in production. We have also automated deployment for the legacy, not on OpenStack but on the older platforms, so that the teams can do it by themselves. But it's very dangerous and very, very tricky to do. So with the change advisory board, and I know it's boring as hell, we got the incidents down, and the funny thing is that the October figures were achieved with this change advisory board in operation. Now, we don't want to stay there, of course. The day we don't have anything to discuss anymore in the CAB, we will take it away and let the teams go on completely by themselves. But until we see that, we will keep that kind of control, because as soon as we put something out that doesn't work, it affects customer service and our telephone queues just skyrocket, which is bad customer experience.
Other funny learnings along the way: as I mentioned in the beginning, there are people who have not worked agile from the start or are not used to it. It's sometimes really, really hard for these fantastic experts on products or within their business areas to get a grip on how to do this. Should I work with these engineers every day for the rest of my working career? Yeah, pretty much so, instead of being the guru across all the different business areas. That is a challenge as well. And avoid thinking in departments. I mean, we build our teams cross-department: the hackers are in my department, the UX and business experts are in other departments, the customer service representatives are in a third department. Getting people to understand that the department boundaries are not meaningful, that you work together as a team like this, not in your old system-oriented way, is also quite a mental challenge for people. But we're getting there, getting better every day, and it's a fantastic journey to be part of: to see what's easy and what's not, what are bad ideas and good ideas, what's disastrous and what is key for success. I think I will pause there. Yeah, thanks. So if there are topics you want to discuss, or questions or anything, just feel free. Sorry, say that again? Monitoring, yes, yes: you have to have kick-ass monitoring for something like this to work, and what is our experience around that? Yeah, good question. The question is how to monitor that the stuff really works, right? First of all, as I said, we automate the alarm setup with Sonar and Grafana for all services that we put out in production. So services need to have certain kinds of pings and beeps and such, to make sure they are alive and operating the way they should. That's the basis for the monitoring. Alongside that, we have huge heaps of legacy that do not support this yet.
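As a rough illustration of those "pings and beeps", a minimal health monitor might turn consecutive failed pings into alarm states before anyone is paged. The service names and the failure threshold below are invented for the sketch, not the actual Sonar/Grafana configuration.

```python
def classify(consecutive_failures: int, threshold: int = 3) -> str:
    """Map a count of consecutive failed liveness pings to an alarm state."""
    if consecutive_failures == 0:
        return "healthy"
    if consecutive_failures < threshold:
        return "degraded"  # warn, but don't page anyone yet
    return "down"          # raise the alarm

# Pretend ping results: service name -> consecutive failed pings so far.
fleet = {
    "loan-quote-service": 0,
    "legacy-core-banking": 2,
    "customer-portal": 5,
}
states = {name: classify(fails) for name, fails in fleet.items()}
```

Requiring several consecutive failures before declaring a service down is a common way to suppress the false positives mentioned in the next answer.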
So there, of course, we have to have in place old-style keep-alive pings and such for the systems that are not in the Sonar world. So we live in two or three different monitoring worlds when it comes to service quality and uptime. Good question though; it's really, really tricky. And getting rid of old, meaningless alarms is also a challenge, right? You get false positives in your face all the time. Yeah, please. Can you tell us a little bit about the problems you've had with the legal requirements in your country? Yeah, for sure, that's my favorite topic. Being a bank, the banking market is heavily regulated, as we all know, and it differs a bit between European countries in how the local FSAs interpret the rules. For Swedish banks, it's still very tricky to jump out into the Amazon cloud, for example. I think there is one bank that claims they are there now, but they have fought their knuckles bloody against the FSA to make it happen. So that's an interesting thing for us to observe, but the fines and penalties for not being really thorough and not having all the agreements in place that back you up are really going to hurt. So what we do: that's actually one of the reasons we chose OpenStack, because we got the opportunity to run our OpenStack installations in a private cloud hosted by City Network in Sweden. They could give all the guarantees that we and the FSA needed: the data stays in Sweden, or at least in Europe, nobody in the USA can see this stuff, and the data leakage and security questions were all covered. Everything was in place, so we could just jump onto the platform and run. We are, of course, like every other bank, continuously looking at what it would take now that Amazon, Google and the others, Salesforce, have realized there is a huge banking market in Europe if they only make some small tweaks in their agreements. So we'll see what happens. Probably it will open up in a year or two, I guess.
So around the cloud thing, my regulatory perspective is: make sure you have a vendor or a partner with whom you can sign safe contracts. From the security aspect, I'm not worried at all, because for all the big cloud providers, that's what they actually live on. Being safer than what I could accomplish myself in my basement data center is their bread and butter. So that's not such a big worry, but the compliance risk is there; it's quite tangible. You said that you have organized your teams top-down over the entire stack, from business down through development to operations, with each focusing just on their product, on what they can do best. How do you make sure the different teams don't go off on a tangent and do totally incompatible stuff, or re-implement stuff, if they're fully independent? Good question, good question. Yeah, this is a somewhat idealized, theoretical picture, with pipes that the teams and the business work within. In practice it comes down to collaboration, even between teams. The majority of the stuff the teams do, they can do by themselves, but you still need to be very transparent about what you do. And we have frames around what you can actually do from a technical perspective as well. I mean, it's not 100% okay in my book for every team to work with and deploy whatever they want. We do Java and React JavaScript; Node and Go are actually on the plate right now for starting to build microservices, for example, and that's stepping a bit outside our tech sphere, so to say. So there are boundaries around what the teams can do. As for the question of what happens if two teams are doing more or less the same thing: I put the same question to Amazon when I was there a year ago with some tech guru, and he went, okay, what's the problem? Two is better than zero.
But of course, close collaboration and information exchange between the teams, and cross-team meetups to say, hey, we're doing this now. Okay, we thought about the same thing, can we use your service, et cetera. That is of course really important. For us, it's still quite easy, since we are only, what is it, 80 developers or something like that, so everybody practically sits in each other's laps anyway. But it can become a problem when you scale up, absolutely. Please. Do I foresee us moving this, five years down the road, onto a public cloud, and is it even possible? Absolutely, yes. As soon as the regulatory stuff is in place and the compliance risk is not there anymore, yes. Then of course there is a cost aspect to it as well. I mean, CPU capacity is expensive today, storage is cheap, so it's also a balance between what you want to put in one cloud and what you want to run in some other cloud with another pricing model, which is a jungle of its own. But we're definitely going to be more cloudish, yes. Please. Good question: how many teams do we have, and what's the typical size? I would say we have probably nine or ten service-oriented teams right now. As for size, some of these teams contain two development teams, so they are what we call product area teams. The development teams are usually four to five people each, and one of these product area teams might contain 10 to 15 people, which is a bit much, but that's as far as we have gone on our agile journey. Ideally, if there is an ideal size, it's probably sub-10, with people broadening their competencies to overlap each other. So the UX guy or girl is not purely UX, and the backend Java hacker is not only hacking backend; they cross over competence borders. And that's also quite a challenging thing to learn.
But I think people will evolve over time and see, hey, it's quite fun to do the JavaScript stuff as well. So they broaden their competence, and their career in that case. Did we follow any specific framework, like SAFe? The answer is no, because we are quite few. On the tech side, everybody has worked agile more or less from the start, as have all the people we hire, and SAFe wouldn't really add any benefit for us, since we're not that big, we're sub-100, and it introduces things like an enforced rhythm, for example, that might not be the ultimate key for speed. So we try very strongly to cultivate the teams independently, so they can run as fast as they can without syncing so much with others. Getting back to the regulatory part, I love that as well. How is our experience with external auditors in a quickly changing, agile environment? Because in the traditional world, it's often very difficult to explain to external auditors why you do things the way you do and what your idea was, and with agile development, where you change everything every other month or so, it's even more difficult. Yeah, good point, absolutely. Auditors always add to the challenge a little. Now, what I have found with the auditors, both in this role and in my former role at a stock-trading company, a bank as well, is that the auditors are happy as long as you can describe how you do things, the processes around it. And we are not process-less in this world either; rather the contrary. It's more and more automated, and the kind of handling people do in their daily job is further from production, which is a paradox. But since we have automated everything all the way from commit to deploy, more or less at least, with some kind of duality checks and such, we can describe it quite easily. And then the auditors check: are you really working the way you say you are? Yes, obviously you are.
So I find it almost easier to satisfy the auditors' needs now, compared to when we had an operations department that put things manually into production every week and it was untraceable. It's easier with automation to fulfill the requirements, I'd say. Is there anything more I should have told you? Please. What did we do to achieve a higher speed in the deployment process? If we look back one year: as I said, we gathered up heaps of code, and then we had lots of teams involved, and experts on how the systems worked together trying to figure out whether it was going to work or not, and when we put it in production it always went boom, which imposed lots of rework that took capacity away from further development, right? We were always on our heels the week after deployment, so one week per month basically went to waste. What we have done now is to automate the deployment process in such a way that we have, I think, 120 pipelines. Every service has its own deployment pipeline with automated testing, automated monitoring, automated security and so on, which makes it a lot quicker for each team to do the stuff themselves. They push a button, a product owner comes and pushes another button, and there it goes. The steps involved are, I guess, standard tools for everybody: Jenkins is the driver behind each pipeline, so automatic builds, automatic deploys to development and test, then merge to the trunk in each of these pipelines, and further on out to pre-prod and production. Once we have built it, it's fairly easy to use. And very quick, I would say: a deployment normally takes three to five minutes, including all test steps and all builds. There are exceptions that take maybe 15 minutes, and things that take two seconds, but that's the average, at least.
And since we practice this several times a day, we get better and better at using the tools, avoiding pitfalls along the road, and even sharpening the deployment mechanism itself. I'll get back to you in a second. To make this work, the teams are not building the deployment infrastructure themselves. We have an expert team at the bottom of the tooling, a technology team or a platform team if you want, whose mission is to make life as good as possible for the development teams. These front runners are the ones who build the deployment pipelines and the processes around them, helping the teams so they hardly have to care about the production setting at all. Please. On OpenStack, which distribution? We are currently on Pike, and we have one foot in Queens, depending on our operator, who is extending their service to several data centers where Queens is installed. So, Pike currently. Next question. I don't have that level of detail, sorry. If you come up with more thoughts, wonderings and questions, please just ping me on LinkedIn; I will get back to you, I promise. Thanks a lot for having me. Nice to meet you. Thank you.