When waterfall was all the rage, there was a bit of a revolt, so it's quite nice that I'm talking about this in Boston and I'm a Brit. We'd been doing waterfall for many years; I was working on projects doing waterfall deliveries, and there were two main issues with that. One was you could spend years, and we quite literally did, not delivering any value to the business, and two, you never knew where you were. So we'd do a release and say the next one's going to be a year later, and then we'd get to about 11 months in and go, well, actually it's not, we're going to move it out; so we'd move it out three months, and then three months, and then three months, and we'd end up shipping about two years late. So Agile said, well, let's break that down into manageable chunks, let's accept that there are humans involved, and design a process that works well for humans. And they came up with Agile as a process: being able to deliver value on a regular basis, which might be fortnightly, or if you're in the US, bi-weekly. And that was great, and now we could deliver new capabilities on a regular basis, but you then bumped up against the operational side of things, which couldn't get that value into production and therefore, again, deliver that value to the business. 
And so DevOps came along as this concept of: let's automate everything, do continuous integration and continuous delivery, and now we can deliver that capability into production as soon as it's ready and deliver business value. But there was another problem. Things were ready to go into production, but there was a bit of a process issue, not always a problem but quite often one: if you've got new capabilities going out, is there capacity in your data centre to actually support that new delivery? This is where cloud comes in, because now you don't have to rely on your data centre; you don't have to acquire new hardware, get it delivered a few months later, get it provisioned a few months after that, and then eventually put the new thing into production. You can now scale your capacity up and down based on your needs, just by using cloud capabilities. So you now had a rapid way of going from developing some capability to getting it into production on new infrastructure, but there was still one problem: you hadn't done anything about the architecture of the thing you were delivering. Typically people were still building monolithic applications, and there was a need to break those down into manageable chunks, and this is where microservices come in. And if you're doing microservices as described by people like Spotify, then you also need to make organisational changes: split your organisation up and align the pieces with the capabilities that you're delivering. You end up with squads or teams that own the microservices you want to create. So that's the background, at least from my perspective, to the sorts of things you need to be thinking about in order to do cloud native and be successful with it. You don't have to do all of those, but I think the more of them you do, the more likely you are to be successful. 
So let's have a look at cloud native environments, then, and the kinds of capabilities you should be looking for from a cloud native environment. If you're doing microservices, you're going to want an environment that actually provides microservice capabilities to you: APIs and so on that help you develop and provide microservices, but also consume them. You want an environment that's going to start fast and shut down clean. What we mean here is that you're going to be deploying a lot more of these things, and you want them to start quickly and shut down cleanly; the adage goes, treat them like cattle, not pets: if they get sick, you want to be able to shoot them. And if you shoot one of these things, you don't want stuff lying around that causes you problems when you start up a new one. Another characteristic you want is a, I'd say proportionately, small footprint. You want the things you're standing up to require just enough disk space, just enough CPU, just enough memory for what you're trying to do. For example, if you've got a small microservice that's just exposing a REST endpoint and doing some calculation for you, you don't want to pay the cost of a full, maybe Java EE, application server to deliver that, because if you're then scaling up and creating tens, hundreds, thousands of instances of those things, each one is going to be a tax. Next one: facilitate dev-prod parity, including through externalised config; a bit of a mouthful. What we're saying here is, if you think about what I said earlier about the DevOps pipeline, where you're doing continuous integration and continuous delivery, you want to build the artifact that's going to be put into production as early as possible and not change it as you go through the pipeline. 
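As a side note, here's a minimal sketch of what externalised configuration can look like in code. The key names, defaults, and the pluggable lookup function are invented for illustration, not from the talk:

```java
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: the same built artifact reads stage-specific values
// (port, downstream service locations) from its environment instead of being
// rebuilt per pipeline stage. Key names and defaults are made up.
class AppConfig {
    private final Function<String, String> env;   // e.g. System::getenv in production

    AppConfig(Function<String, String> env) { this.env = env; }

    int port() {
        String v = env.apply("PORT");
        return v != null ? Integer.parseInt(v) : 9080;   // fallback for local dev
    }

    String serviceUrl() {
        String v = env.apply("SERVICE_C_URL");
        return v != null ? v : "http://localhost:9081/serviceC";
    }
}
```

Injecting `System::getenv` for real use, or a `Map::get` in a test, means the identical jar runs unchanged in every stage of the pipeline.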
Because if you're rebuilding that artifact at the last stage and then putting it into production, you're effectively invalidating the testing that's gone before. One of the mechanisms for that is externalising configuration: things that are going to vary as you go through your pipeline. If you put those into external configuration that can be injected into the artifact you're going to put into production, then you don't have to rebuild that artifact. So if you want to change the port, for example, you just provide that as external configuration, or change security configuration, the location of services and so on; you externalise all those capabilities. And lastly, it can be easily containerised. You may be using Cloud Foundry, you may be using other cloud environments; you don't want the choice of environment you're using to develop your microservice or your cloud native application to limit where you can deploy to, so you want to make sure it can be containerised. All right, so that's the background to environments and the characteristics of environments you should be looking for. Generators are a popular way to get started. So, you've decided you're going to do a cloud native application, you want to start writing something, and a lot of people use generators. There are a number of generator options, and I'll go through a few of them now. If you want to do a Spring Boot application, you can start off with JHipster. JHipster supports a couple of application types: you can do web applications, which they refer to as monolithic applications, and you can do microservices. What JHipster does is use different generators to produce essentially a full application using Spring Boot, Angular and Bootstrap. Another option is one provided by the company I work for, IBM, which is a CLI, a command-line interface, called bx dev. 
This one will generate for many different languages, so it will generate Swift, Node, Java, and Groovy, I think, as well. And it will do different application types: web applications; microservices, which can use MicroProfile or Spring; and BFF, back-end for front-end, applications. That's an application pattern for developing mobile applications, for example, where it generates a back-end service in support of the front-end, and you use the same language technologies to implement both of those pieces. Next up, Spring Initializr. This is a very popular way to get started with Spring applications. If you go to start.spring.io, you can choose the technologies you want to use, choose the version of Spring Boot and so on, choose whether you want a Maven build or a Gradle build, and it will generate a starter project; you can download a zip file of the project, unzip it, and build it. It primarily gives you a skeleton Spring Boot app and a Maven build, which will then pull down the dependencies. Next one, Maven archetypes. Maven archetypes are a generic generation capability, but there are also a number of Maven archetypes that have been written to generate different project types, for example web applications, microservices and so on. If you want to write your own generators, you can use Maven archetypes to do that too: just pick one that's out in open source and modify it to generate whatever you'd like. And then lastly, Yeoman. Yeoman, again, similar to Maven archetypes, is a generic generator capability, but there's also a suite of many, many generators provided for Yeoman. Yeoman is actually used under the covers by JHipster, and it's also used by bx dev. 
The advantage, I think, of using things like bx dev or Spring Initializr is that they do their generation in the cloud, so you don't have to install Yeoman locally. OK. So, you've got started by generating a project. Let's have a look at what it means to provide microservice technology. I said earlier on that you want good microservice capabilities, so I'm going to use MicroProfile as an example; as I said, Spring Boot provides similar capabilities through things like Hystrix and so on. I'm going to describe it in three classes of capabilities. The starting point is that you're probably going to want to write some REST APIs: exposing REST APIs or consuming them. So, Eclipse MicroProfile. What MicroProfile said is: to do cloud-native applications, you don't want all the Java EE technologies, all those capabilities; we'll cherry-pick a few, similar to how Spring has done things, and pick the ones that are going to be best for doing cloud-native microservices. So it picked CDI as a component model, JSON-P for processing JSON payloads, and JAX-RS 2 for writing REST services, but also for calling REST services. The JAX-RS 2 client isn't particularly nice, though, so the Eclipse MicroProfile collaboration also defined REST Client, which is quite a nice, neat type-safe client for calling microservices. All right. So, you're now able to write REST services and consume REST services, but if you're putting these into production, you're going to have hundreds of these things collaborating, maybe thousands, and they're going to be frequently evolving if you're doing agile delivery of these things. And that introduces requirements for new APIs. For example, if you've organised your teams around these microservices, you want to be able to define API boundaries between those microservices. 
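To make the idea of a type-safe client concrete, here's a plain-JDK sketch, not the MicroProfile REST Client API itself, of how an annotated interface can be turned into a client with a dynamic proxy. The `@Path` annotation here is a stand-in for the JAX-RS one, the `SystemService` interface is invented for illustration, and the transport is pluggable so the sketch runs without a real server:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;
import java.util.function.Function;

// Stand-in annotation playing the role of JAX-RS @Path.
@Retention(RetentionPolicy.RUNTIME)
@interface Path { String value(); }

// The "type-safe client": callers just see an ordinary Java interface.
interface SystemService {
    @Path("/serviceC/properties/{name}")
    String getProperty(String name);
}

class TinyRestClientBuilder {
    // The transport maps a URL to a response body; a real implementation
    // would issue an HTTP GET to that URL instead.
    static <T> T build(Class<T> iface, Function<String, String> transport) {
        Object proxy = Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[] { iface },
            (p, method, args) -> {
                String template = method.getAnnotation(Path.class).value();
                String url = template.replace("{name}", String.valueOf(args[0]));
                return transport.apply(url);
            });
        return iface.cast(proxy);
    }
}
```

Building with a fake transport shows the mechanics; the real MicroProfile REST Client derives the URL and performs the HTTP call for you from the JAX-RS annotations on the interface.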
OpenAPI; is anybody here familiar with OpenAPI? Got one. Anyone familiar with Swagger? Oh, a little bit more, right. Okay. So, OpenAPI is an evolution of Swagger; it's a standardisation of the Swagger capabilities. We're up to OpenAPI v3, which adds some more capabilities for describing APIs. There's default generation of the API definition, and you get the definition in YAML; I think you can also get it in JSON format. And if you want a more advanced definition of the API, you can use annotations to augment the generation. Fault Tolerance. If you're familiar with things like Hystrix, Fault Tolerance gives you the capabilities to do things like timeouts, retries, fallbacks, the bulkhead pattern, circuit breakers and so on. Fault tolerance is important if you've got lots of microservices collaborating and maybe one of them goes down briefly or becomes unreliable for a period of time; you want to be able to handle those failures gracefully, so you need some kind of fault tolerance capability. JWT gives you a security capability: you can flow JSON Web Tokens between microservices, extract role or group information and user information out of them, and use that to decide whether to allow that person, or a person in a particular group, to call some of the operations within your microservice. So it's a way of securing your microservices. And then lastly, in this layer, Config. As I mentioned earlier, externalising your configuration is important as you push things through the pipeline. What MicroProfile Config does is let you have static config or dynamic config: static config is read on startup, dynamic is read each time you access it. 
Static is probably good enough for most use cases in microservices. It also lets you define different config providers, extend those providers, and layer the configs so you can have overrides and so on. Lastly, if you've got tens or hundreds of thousands, or whatever, a large number of collaborating services, you need a strong operations focus. And so in the last layer, if you like, there are three specifications and implementations in Eclipse MicroProfile. One is OpenTracing, which lets you trace requests as they go through your microservices so you can understand the flow of a request. There's Health Check, which lets you define what it means for your microservice to be healthy. That might include: can I connect to the database I'm using, or can I connect to and call a service that I depend on, and so on. You can do quite fine-grained tests in there and then report health back. If the health check says the service is unhealthy, it returns a 503, so you can point, for example, your Kubernetes liveness or readiness probes at the endpoint and use that to determine whether your container is healthy. And then lastly, Metrics. Metrics lets you get hold of and understand what's going on inside the runtime: what's my JVM looking like, my heap, my garbage collector and so on. You can also use Metrics to define application metrics, which will be returned as well. By default they come out in Prometheus format, but you can also get a JSON format. All right, so I'll do a quick demo of some of those capabilities. Actually, I didn't need to close that. What I'm going to do is start two services up. So that's the first one, and don't ask why there's no service B; I think it's historical reasons. And that's the second one. So basically I have two services, A and C, where C is called by A. 
C will return a system property that you request, for the runtime it's running in, and A has some fault tolerance capabilities. So let's have a look. Yeah, I did start that one up. So just a quick look at service C. What service C does is return a response containing the system property you've requested. It also has this mode: in this service we've coded some behaviours to exhibit things like slow responses or failure responses and so on, and service A can request what type of behaviour it wants from it, just to help with demo purposes. We'll also have a look at what I've done on the service A side. All right, so service A: here's the service, and it's actually going to call this helper class. Inside the helper class we've got methods that exhibit various different types of fault tolerance capabilities, for example timeout capabilities, retry capabilities and so on. So the first thing I'll do on service A... is that big enough? Can you see that? Nope, I'll make it a little bit bigger. Okay, so that's requesting wlp.install.dir, which is a system property that WebSphere Liberty, or Open Liberty in this case, puts into the system properties. That's the location the installation is running out of, and you can see it's service C's: service A has managed to call service C, and it's returned the appropriate value. That was calling the method we know works. Now I'll call a different one: no retry. What no retry does is actually tell the target service to fail every other request, so you can see now that every other request is failing. And you can see in the helper how no retry is done: it's just a standard method, but it's passing in a thing that tells the target to fail. 
So now, with this retry annotation, I'll call the with-retry method, and we'll see it succeeds every time. Internally what happens is it makes the request, the request fails, and then it retries, and on the second request each time it's succeeding, so everything works fine. Now we'll try one other scenario: with-timeout. What this is going to do is tell the target to delay the response, and initially it's going to time out. So there's a bit of a pause and then we get a timeout; it's failing every time now, after this brief pause. What we'll do now is with-timeout-and-fallback, adding in a fallback. It's going to call the target service and fail because of the timeout, and then it's going to call the local service to get the fallback; we know it's called the local service because we can see that the installation directory is now the installation for service A. That's showing, simply, how you can... let me just show you the code for that one. We code a timeout: we say after 500 milliseconds I want to time out the request, and if I time out the request then I want to call this fallback method, and as I said, the fallback method just returns the system property we were requesting, but for the local service. Okay. The other thing I was going to show you was health. I'm going to go to the health endpoint for service A, and you can see everything's happy. It's doing a test on service C, because in order for service A to be considered healthy it needs to be able to use service C. What I can do now is take service C down, and when we do the request we'll see it's done a test against service C, seen that service C is down, and therefore the overall outcome is that this service is down, and that can be reported back on probes, as I mentioned, for Kubernetes for example. Okay, how are we doing for time? 
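The retry and fallback behaviour in that demo can be sketched in plain Java, without the MicroProfile annotations, to show the semantics; the class and method names here are invented for illustration:

```java
import java.util.function.Supplier;

class FaultTolerance {
    // @Retry semantics: re-invoke the call up to maxRetries extra times on failure.
    static <T> T withRetry(Supplier<T> call, int maxRetries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
            }
        }
        throw last;
    }

    // @Fallback semantics: if the primary call fails, answer locally instead.
    static <T> T withFallback(Supplier<T> call, Supplier<T> fallback) {
        try {
            return call.get();
        } catch (RuntimeException e) {
            return fallback.get();
        }
    }
}

// Stand-in for service C in its "fail every other request" mode.
class FlakyServiceC {
    private int calls = 0;

    String systemProperty() {
        if (calls++ % 2 == 0) {
            throw new RuntimeException("service C failed this request");
        }
        return "value from service C";
    }
}
```

With one retry, every logical request succeeds on its second attempt, which is what the with-retry demo showed; the real @Timeout annotation adds a time budget on top of this, which is harder to show in a few lines.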
Okay, so the last topic I want to cover is packaging for deployment. One of the things you need to consider when choosing what you're going to build is what your target environment is and what kind of artifact you want to create. It's quite common nowadays to create runnable jars, and I'm going to talk about a couple of artifact types. As I mentioned, you want those things to be appropriately sized for what you're trying to do, right-sized, as I'm calling it here, and I'll just talk about a few examples of how to get right-sized runnable jars. I mentioned Spring Initializr earlier on. If you use Spring Initializr, and in this case I've selected Jersey for JAX-RS and I've selected Pivotal Cloud Foundry circuit breaker capabilities, it generates a project with a POM file, because I've chosen Maven. That POM file has dependencies on the things you've said you need, so when you do the build it pulls down those dependencies, and the runnable jar you get only includes the capabilities you said you required. There are other environments with similar capabilities. In this particular example, this is the Liberty server: you can specify what features or capabilities you want to use. In this case I'm saying I want to use MicroProfile 1.3, which is the set of capabilities I described earlier. In the build you can say I want it to be minimal, to minify it, so it only includes those capabilities, and I want it to be runnable, so I get a runnable jar as a result. In this particular example you end up with a runtime that's about 45 megs. And then one last example: WildFly Swarm fractions. If you use JBoss technologies, the WildFly server, you can define dependencies in your POM on what they call Swarm fractions, and it will build a runnable jar including just those capabilities. 
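For reference, a minified runnable Liberty package along the lines described above starts from a server configuration like this; the server description, port, and archive name below are invented for illustration:

```xml
<!-- server.xml: only the microProfile-1.3 feature, and what it pulls in,
     ends up in the minified package. -->
<server description="demo server">
    <featureManager>
        <feature>microProfile-1.3</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080"/>
</server>
```

Packaging with `server package defaultServer --archive=serviceA.jar --include=minify,runnable` then strips the runtime down to those features and makes the result directly runnable with `java -jar`.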
So, as I mentioned, containerisation, or Docker: you'd be hard pushed to find a cloud environment that doesn't support Docker, so you also want to consider what you might do for Docker. It's quite interesting when you get to looking at Docker; there's a bit of contention between building runnable jars and doing Docker containers. What I mean is that when you build a runnable jar, your jar file essentially contains runtime capabilities: it might contain your servlet container, it might contain the JAX-RS implementation, Jersey in the Spring Initializr example I showed earlier. To get good performance out of Docker, you want to define your Docker image layers in such a way that the last layer only contains the things that change most frequently, which is your application code. But if your layers build in the entire runnable jar, then that last layer contains a lot of essentially middleware, libraries and things which don't change frequently, but which are going to be changed all the time, each time you do a rebuild. And that means you're not taking good advantage of the capabilities Docker has around caching of images. So going for maybe a thin WAR approach, or, you'll see talk in places about what they call hollow jars; I'm not convinced that hollow jars are necessarily the way to go, but it's something worth considering. Okay, so the last demo; I've got one and a half minutes, I think. I talked a little bit about dev-prod parity and wanting to build, as early as possible in the pipeline, the artifact that eventually goes into production. But what you can also do, and I'll give a quick demo of this, is actually do your development using Docker containers. 
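That layer ordering can be sketched as a Dockerfile; the Open Liberty base image is real, but the file paths and `serviceA.war` are hypothetical names for illustration:

```dockerfile
# Runtime layer: the server image; changes rarely, so Docker caches it.
FROM open-liberty:kernel

# Configuration layer: changes occasionally.
COPY src/main/liberty/config/server.xml /config/

# Application layer last: the only thing that changes on every build,
# so only this thin layer is rebuilt and re-pushed each time.
COPY target/serviceA.war /config/dropins/
```

Putting the application `COPY` last is what keeps rebuilds and image pushes small, which is the caching point made above.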
So you can, for example, have a development Docker container that depends on the image your production container is going to use. In this particular example, if I just show you what I'm doing here: I'm going to run a Docker container I've pre-built that depends on the server environment I'm going to use in production, so it's using the same image for the server environment as production, but I'm going to map in all the things my runtime is going to look for that correspond to my application capabilities, and those come into the Docker container through volumes. So if I just run that: I've done a build of the project, I've built my Docker image, and I have an application. Oops, I need the code. So I have a simple greeting application, and because the server will pick up the artifacts and run those as part of the application, and because the Java language server, which VS Code, Eclipse and IntelliJ use in similar ways, will compile things when I change the files, what I can do, although it's running in a container... so I'll show you the application: it's just returning a greeting based on a name passed in. I can actually make changes to the code and have those picked up. Essentially the use of the container isn't inhibiting me, and it's giving me a development environment that's going to be very close to what I'm running in production. So there, I've just made a code change, the language server has compiled it, put the class file in the right location, and the server has picked that up and is showing the change. So that's a way of getting better dev-prod parity as you're doing your development. Okay, so I'm slightly over; I'll just finish on the last slide. 
Okay, so in summary: if you're going to go down a cloud native route, it's best to consider both the organisational and technological changes you need to make to increase your likelihood of success; I talked about Agile, DevOps, cloud and microservices, and the alignment of your teams and so on. You can get a head start by using generators; they're a nice way to get projects started and going, and you need to choose a generator that's appropriate for the technologies you're going to use. You want to leverage microservice technologies, as we saw with the MicroProfile examples, for the development of your microservices but also for the management side of them. Choose packaging appropriate for your cloud, which might be a runnable jar, might be containers. You want to maximise consistency through your delivery pipeline, so you're not invalidating the testing that happens earlier in the pipeline. If you're deploying into Docker, you want to strive for thin application layers to take advantage of the Docker caching capabilities. And lastly, don't forget the humans. So, any questions? Okay, thank you very much.