So, we've taken this very much from what we want to achieve as a business, and how we take that through our technology landscape. For those who don't know Elsevier: Elsevier is part of the RELX Group, a pretty large group consisting of a number of different companies, with revenue of over 10 billion and over 30,000 employees overall. We're a data and analytics company, so data is at the heart of everything we do. We've transformed over the last few years from a publisher into a data and analytics company, so everything we do as a business now is geared towards data.

With Elsevier, what are we trying to achieve? I'm taking a slightly different angle from some of the other presentations today. Rather than going straight into what we've done in OpenShift, I'll talk more about what we're doing as a business: how do we move fast as a business and deliver technology fast to enable business outcomes? That's what we're trying to achieve. We've got our microservice and integration platforms that help enable that, so that our different products and our internal and external systems and data platforms can really make use of data.

We had some clear goals from the start. How do we make development faster? How do we get through the development life cycle and get services into production as fast as possible, at high quality, in a way that's cost-effective for the business? And how do we use our cloud platforms? We've migrated everything to the cloud.
When we started this journey, we started with a hybrid solution: part in-house, part in the cloud and part SaaS. Since then, we've actually closed our internal data centre. All of our platforms are now cloud-based, and we're also using a number of different SaaS providers. Our integration platforms enable this across the different cloud services and SaaS services that we utilise.

One of the key aims was to use open technologies. One thing we wanted to avoid was locking ourselves into particular proprietary vendors and proprietary technologies. This market is evolving fast all the time; we want to be able to move with it and bring in the right kind of skills from the market to help us get there.

We took a phased approach. Phil mentioned we started working together about three years ago now. We started off with our enterprise service bus, the old SOA platform: everything SOAP-based and quite heavyweight. What we found when we started looking at this was that it was very challenging. We had large monoliths, development took a long, long time, and everything was very proprietary. When we were trying to find people to help move us along, people with the right skills were either expensive or difficult to find.

The first part of that journey was engaging with Red Hat to look at how we would start to set up our integration platform. We initially decided to go down the Fuse route with JBoss Fuse, building what I'd call macroservices: not fully independent, but quite a way along that journey. OpenShift V2 existed at the time, but in my opinion it was too early; I don't think the technology was mature enough then to consider as the way forward at that point.
But where we are now, it's reached that maturity level, and I think the size of the group here today shows it. Over the last 12 months it's really gained traction, and it's a mature product out in the market.

Moving to our Fuse platform enabled us to package up small services as OSGi containers. It was a middle ground with containers: we were packaging multiple services into those containers and utilising the open source technology stack as well. We built a lot of our services around a Camel architecture, with Apache Camel routes developed in Java and packaged into OSGi. In terms of core development this was really good, but we still had fairly long development cycles. The test and deployment model was still quite difficult to manage, and that ended up taking quite a bit of time.

So about 14 months ago, we did a week's proof of concept with Phil and the team. We decided to ask: is OpenShift the right way to go now? Let's try and prove this out. We took a few of our existing capabilities and looked at what it would take to migrate them to OpenShift. This really worked for us.

So where are we now? We actually went into production on OpenShift in late last year, and we're migrating a number of our services from our Fuse platform into OpenShift. What we're seeing now is that demand across the business is actually starting to increase as people see what we're doing, and we've had some quite big gains by doing this as well. Our services are now completely independent; each container is completely independent, so if one breaks, it doesn't break other things, which is a real business gain. We're also moving everything away from SOAP when it comes to integration. We've got some legacy that still uses it, but REST is now the standard.
As you see with a lot of the newer software vendors that you integrate with, most of them are going down the REST route, and REST helps you build things a lot faster; testing is a lot faster as well. One of the advantages of going down this route is that the build, test, deploy cycle is extremely fast, which gives us really big gains in going at the speed we want across our different businesses.

When we started on a microservices strategy, what we wanted to avoid was just running in there really fast and getting things out there fast, because the challenge then is actually reining people back. You do it fast, but you end up in the Wild West: everybody does their own thing, it gets very difficult to manage, you build up lots of technical debt and you never catch up again. You never get that opportunity to come back. So before we actually migrated out to production, we wanted the right building blocks in place, and we took some time to do that. We wanted clear development standards: how do people develop their services? What deployment model do they use? What quality criteria do they need to meet? We put all that tooling in place.

Then there are our DevOps processes. In DevOps we need to ensure environments are provisioned, and can be provisioned quickly. We have a couple of clusters, a non-prod cluster and a prod cluster, and we have automated pipelines that help us deploy to those very fast. Developers can build, deploy, test and then promote to production when they're ready.

Once you've got things in production, you need to ensure you've got something robust. The business is not going to be happy if you put something out there and it ends up flaky, falling over, causing outages. You've got to have the right monitoring in place to make sure you're covering everything and can meet the SLAs the business needs for its different services at the end of the day.
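Going back to the quality criteria and promotion model mentioned a moment ago: the idea of gating promotion to production can be sketched in a few lines. This is an illustrative sketch only; the criteria, thresholds and class names here are hypothetical, not Elsevier's actual standards or tooling.

```java
// Hypothetical sketch of a promotion quality gate: a build is only
// promoted to the prod cluster if it meets agreed criteria.
public class PromotionGate {

    // A build's quality signals, as a pipeline might report them.
    public static final class BuildReport {
        final boolean testsPassed;
        final double testCoverage;   // 0.0 .. 1.0
        final int criticalIssues;    // e.g. from static analysis

        public BuildReport(boolean testsPassed, double testCoverage, int criticalIssues) {
            this.testsPassed = testsPassed;
            this.testCoverage = testCoverage;
            this.criticalIssues = criticalIssues;
        }
    }

    // The gate: every criterion must be met before promotion.
    public static boolean canPromote(BuildReport report) {
        return report.testsPassed
            && report.testCoverage >= 0.80   // hypothetical coverage threshold
            && report.criticalIssues == 0;
    }

    public static void main(String[] args) {
        System.out.println(canPromote(new BuildReport(true, 0.92, 0)));  // meets the gate
        System.out.println(canPromote(new BuildReport(true, 0.55, 0)));  // coverage too low
    }
}
```

In practice a check like this would sit inside the automated pipeline, between the test stage and the promote-to-prod stage, so that following the standards is enforced rather than optional.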
One of the other challenges we found was that as this started gaining demand, it grew beyond us. At first we set up a centre of excellence to deliver integration services and microservices, but over the last two years or so this has scaled way beyond our control. Everybody is building different services. Everybody wants to deploy to this kind of platform. So we've adapted our approach to delivery: now that we've got all these standards and building blocks in place, we run a distributed delivery model. We can go out to other areas of the business and say: here you go, you can develop and deploy onto this platform. All you need to do is adhere to these standards, and if you follow the standards, things will go through. That way we're not the bottleneck in the middle, slowing people down from what we want to achieve.

So we're talking about OpenShift here, but what I want to do is paint this as part of our wider strategy. My remit within Elsevier is wider than just OpenShift; I look after all our data platforms, our business intelligence and analytics, and the integration aspects. OpenShift itself is part of that wider strategy. It enables our core data APIs and microservices. We also put an API gateway in front of this, which allows us to expose the core APIs from our services and simplify that way of integrating.

We also have a data lake, where lots of different people can publish data. Some of that data gets exposed via APIs through our OpenShift platform; some is there purely for analytics purposes. We have our traditional data warehouses as well, where core visualisation and analytics happen against the data, and we've also started putting a lot of knowledge graphs in place across our data lake.
We have a lot of different data capabilities across different data sets in the organisation, and different identifiers that allow these to come together. So we put a set of knowledge graphs in front of this, so you know where to go across the different data sets to join the dots, bring that data together and then expose it for analytics, visualisation or APIs. There are different ways to consume data across the organisation, and you need to make it as easy as possible for different consumers, whether that's system-to-system communication or user access for performing analytics, machine learning and big data work, and also for presentation and visualisation of data. It's about moving away from traditional reporting to insights from data.

I want to talk a bit through our roadmap: where we started and where we are now. We started in 2015. We implemented JBoss Fuse, put our initial strategy in place and set up a centre of excellence for integration. This worked very well. It gave us a really good way in, and we migrated everything off our existing ESBs onto JBoss Fuse. That platform has been running since late 2015, and we then decommissioned our ESB. Then in 2017 it was the right time to say: right, OpenShift is coming to a really good maturity level and starting to get a lot of traction out in the market; we feel it's the right time to look at this. We did our proof of concept, and it proved successful. We thought: yes, this now seems the right way to go. From where we were initially, we'd reached a halfway house, and this would help us move to where we need to be for the long term. So we did this, then we engaged with Red Hat and said: right, let's get this platform into production. We put all our processes and everything in place.
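Coming back to the knowledge graphs mentioned a moment ago: at their simplest, they boil down to "same-as" links between the identifiers that different data sets use for the same entity, which is what lets you join the dots. The following is a minimal illustrative sketch of that idea, not Elsevier's actual implementation; all identifiers and data values are hypothetical.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of identifier linking: a "same-as" table maps an identifier
// in one scheme to its equivalent in another, so two data sets keyed by
// different identifier schemes can be joined.
public class IdentifierLinker {
    private final Map<String, String> sameAs = new HashMap<>();

    // Record that idA and idB refer to the same entity.
    public void link(String idA, String idB) {
        sameAs.put(idA, idB);
        sameAs.put(idB, idA);
    }

    // Join two data sets: for each entry in datasetA, look up the
    // equivalent identifier and pull the matching value from datasetB.
    public Map<String, String> join(Map<String, String> datasetA,
                                    Map<String, String> datasetB) {
        Map<String, String> joined = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : datasetA.entrySet()) {
            String counterpart = sameAs.get(e.getKey());
            if (counterpart != null && datasetB.containsKey(counterpart)) {
                joined.put(e.getValue(), datasetB.get(counterpart));
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        IdentifierLinker linker = new IdentifierLinker();
        linker.link("author:42", "orcid:0000-0001");
        Map<String, String> names = Map.of("author:42", "A. Researcher");
        Map<String, String> affiliations = Map.of("orcid:0000-0001", "Example University");
        System.out.println(linker.join(names, affiliations));
    }
}
```

A real knowledge graph holds far richer relationships than a flat same-as table, but the consumption pattern is the same: resolve identifiers first, then bring the data together for analytics, visualisation or APIs.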
We got this into production, and at the moment we're migrating our Fuse services across, which we're due to finish by the middle of this year. We've got new services coming in, and we're migrating our JBoss Fuse services over. One of the big advantages here is that we haven't had to redevelop our services. When we started off, we chose Apache Camel as the right way to go, and JBoss Fuse packaged these up as OSGi within those containers. Deploying these to OpenShift wasn't a redevelopment; it was more of a repackaging. We've taken what we had in OSGi and repackaged it as Spring Boot, which gives you that fast cycle to build, test and deploy. A lot of this has already been tested in the past; it works and it's proven out there, so you're not starting again.

We have taken some opportunities to simplify. As we've put an API gateway in front of this, one of the things we've really used the opportunity to do is simplify some of our security models. With our initial integration services, we had all different types of security: basic auth, WS-Security, certificate-based security, all these different types. So we decided to go with an OAuth model. We deploy that at the gateway, and then we can have a simple security model between the gateway and our microservices. Our microservice platforms can then have one common method of security rather than 20 different ones, and it's the gateway that manages it. You get a lot of different metrics across the gateway as well, and you can manage throttling and a lot of those other aspects.

Since we went live with this platform, things have started evolving. The next big challenge we're seeing in our business now is how do we scale this? Everybody's coming to us. Everybody's looking at containers. We've got to make sure we can keep up with this demand, and make sure we manage demand in the right way.
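The gateway-side simplification described a moment ago can be sketched as a normalisation step: the gateway accepts whatever external credential style a caller uses, validates it, and forwards one common internal scheme to the microservices. This is an illustrative sketch under assumed names; the header names, credential values and the internal trust mechanism here are hypothetical, not Elsevier's actual gateway configuration.

```java
import java.util.Map;
import java.util.Set;

// Toy model of the gateway pattern: many external credential styles in,
// one simple internal security scheme out.
public class GatewaySketch {
    // Hypothetical credential stores; a real gateway would validate
    // against an OAuth authorisation server or key registry.
    private static final Set<String> VALID_API_KEYS = Set.of("key-123");
    private static final Set<String> VALID_BEARER_TOKENS = Set.of("token-abc");

    // Returns the single internal auth header the microservices see,
    // or null if the external credentials are rejected at the gateway.
    public static String normalise(Map<String, String> externalHeaders) {
        String auth = externalHeaders.get("Authorization");
        String apiKey = externalHeaders.get("X-Api-Key");
        boolean ok =
            (auth != null && auth.startsWith("Bearer ")
                && VALID_BEARER_TOKENS.contains(auth.substring(7)))
            || (apiKey != null && VALID_API_KEYS.contains(apiKey));
        // One internal scheme regardless of how the caller authenticated.
        return ok ? "X-Internal-Auth: trusted-by-gateway" : null;
    }

    public static void main(String[] args) {
        System.out.println(normalise(Map.of("X-Api-Key", "key-123")));
        System.out.println(normalise(Map.of("Authorization", "Bearer wrong")));
    }
}
```

The design benefit is the one described in the talk: the microservices behind the gateway only ever implement one security model, while the gateway absorbs the variety (and collects metrics and applies throttling in the same place).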
One of the other key things is that everybody's jumping on containers as the next big thing, but we want to make sure containers are used for the right things. They're probably not the right choice for everything, and it's about reining people back a bit so we use containers for the right things and we're not putting massive databases or massive application servers onto the platform. At this point in time, I think those are best managed off-container, the way they are at the moment. So throughout the rest of 2018 and 2019, that's our clear goal: let's scale this throughout the business, work out how we can do it, and make sure this will support our business for the future.

Since we put this in, one of the things we wanted to do was take a step back and ask: have we actually achieved the business value we set out for? In most circumstances, I would say absolutely. Development is so much faster now. We're probably seeing between 25% and 50% of the development time we used to need with JBoss Fuse, and most of that comes from the simplified build, test, deploy model I mentioned earlier. We've also enabled simpler testing, and automated testing around some of this.

One of the key things we've done around testing, which really helps us, is this: every time we developed something new across integration, a lot of the business partners wanted end-to-end testing across all the different systems involved. That used to take a lot of time, with a lot of external dependencies on people and processes, which didn't help us move fast. With clearly defined data contracts in place, what we do now is test against the contract. We have microservices that provide simulators and stubs for external systems. We can test against those and verify the data output stored against them, which gives you that fast turnaround. That's been a really big gain as well.
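The contract-testing idea above can be shown in miniature: the integration code depends only on an agreed contract, and in tests a stub stands in for the external system, so no end-to-end environment is needed. This is an illustrative sketch; the contract, data and names are hypothetical, not one of Elsevier's actual services.

```java
import java.util.List;

// Toy model of contract testing: the integration logic is exercised
// against a stub that honours the same data contract as the real
// external system, giving a fast, dependency-free test cycle.
public class ContractTestSketch {

    // The agreed data contract with the external system.
    public interface CustomerDirectory {
        List<String> customerIdsFor(String region);
    }

    // Stub standing in for the real external system during tests.
    public static class StubCustomerDirectory implements CustomerDirectory {
        public List<String> customerIdsFor(String region) {
            // Canned data matching the contract's agreed shape.
            return "EU".equals(region) ? List.of("C-1", "C-2") : List.of();
        }
    }

    // The integration logic under test; it only knows the contract,
    // never whether it is talking to the stub or the real system.
    public static int countCustomers(CustomerDirectory directory, String region) {
        return directory.customerIdsFor(region).size();
    }

    public static void main(String[] args) {
        CustomerDirectory stub = new StubCustomerDirectory();
        System.out.println(countCustomers(stub, "EU")); // no external dependency needed
    }
}
```

Because the stub and the real system share one contract, a service that passes against the stub should behave the same in production, which is what removes the slow end-to-end test dependency the talk describes.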
We've got a lower cost of infrastructure by doing this as well. One thing you've got to be very careful of is that when you've got more containers, you've got a bigger memory footprint. But overall you're not running lots and lots of different sets of infrastructure; you've got one set of infrastructure across your cluster, and it allows you to scale that down once you've got that maturity.

The other thing is better business engagement. We have some clear SLAs we've got to meet for our business. By doing this, we've now got auto-scaling: if some services are getting hammered with lots and lots of demand, we can auto-scale to meet it. What we used to find was that a container would fail and somebody had to be called out: right, can you restart that container? Now we've got auto-recovery. You say you want three pods; if one dies, it gets thrown away and up comes another one. You're never going to be in that situation now where people need to come out and restart containers as and when they're needed.

All in all, we've found it's been a very good journey. We've had some challenges along the way, which you always will, but from what we were looking to achieve, I think we've come a long way. I know there isn't time for any questions, but if you want to reach out to myself or Phil, we're around for the rest of the week; feel free to reach us on our contact details, or come and have a chat if there's anything you want to learn. Thank you very much.