[The talk opens with introductory remarks in Welsh.] I think we tend to use them. Authentication with third parties has started becoming quite a key one, because nobody likes writing new authentication systems. Obviously Drupal has one built in, so that makes your life so much easier. The rest of us don't like writing new authentication systems, so we will use someone else's. I've seen people put in Drupal or WordPress or something like that and just use the user management system within those CMSs, talking to it over an API, because most of the leading CMSs that are worth dealing with have got APIs now. Social media interaction is another common one. Marketing loves engagement, so we have to facilitate that, and those platforms tend to have APIs. If you want to talk to Twitter, you get an API. Facebook is the same with their Graph API.
We tend to deal with those as well. General remote APIs and enterprise stuff: grabbing content out of Drupal is just an API call away. I deal with a lot of banking, finance and insurance work, and they're all API-based, so that we can put something modern in front of the 1980s software. APIs are very common and very useful, and I write most of my code in PHP, so I tend to integrate with PHP. We've got three ways of doing that within PHP. The file_get_contents function does more than just retrieve the contents of a file on disk; it's a full HTTP client. It's not the easiest one in the world to use, by any stretch of your imagination, but it is remarkably fully featured if you can be bothered to work out how it works. The documentation is okay, but it suffers a bit when you start trying to get into the options of file_get_contents. It's also a bit weird that to do a POST request you use file_get_contents; there's a slight disconnect in your head, and at the maintenance level, so that's a little bit complex. We end up using curl a lot. Curl is the gold standard for HTTP clients. The command line tool is great; the library behind the command line tool is also really, really good. It is used everywhere. There are bindings to every single language ever invented, I think. It's fantastic. The PHP interface to curl is very minimal. One of the really, really good things about PHP is that it's really easy to integrate a C API, a C library, into the PHP space. That's why we have so many extensions, and why people call PHP a glue language: we can integrate with so many C APIs. The curl integration is very much "let's take the curl C API and put it in the PHP space". It is not easy to use at all. Does anyone here use the curl functions in PHP to do API calls? You know what I'm talking about. curl_this, curl_that, curl_setopt this and the other. It's not easy.
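As a sketch of the contrast just described (the endpoint URL here is a placeholder, not a real API): a POST with file_get_contents driven by a stream context, followed by the same request written out with the curl_* functions.

```php
<?php
// file_get_contents as an HTTP client: a POST request needs a stream context.
$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/json\r\n",
        'content' => json_encode(['name' => 'value']),
    ],
]);
$body = file_get_contents('https://api.example.com/things', false, $context);

// The same request with the curl extension: rather more boilerplate.
$ch = curl_init('https://api.example.com/things');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(['name' => 'value']));
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
curl_close($ch);
```

Neither snippet checks status codes or handles errors, which is exactly the boilerplate that piles up in real code.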
There's a lot of boilerplate code that you end up having to write, and look up, and wonder what it does when you come back to it later, because nobody remembers to comment anything in their code bases. Then there are the PHP libraries. Enterprising people have come along, taken file_get_contents, taken curl, et cetera, and written wrapper libraries around them. There are a lot of them. Lots of people have written HTTP libraries. Every common framework has got its own one, and then there are a number of independent ones. What I'm going to talk about is Guzzle, which is a PHP library. Guzzle was written by the people who write the AWS PHP SDK. They wanted to write an SDK for dealing with AWS's API, they weren't happy with any of the options, so they wrote a very good API client. If you're going to use AWS and their SDK, then you're going to be using Guzzle underneath. I'm pretty sure that Guzzle is integrated into Drupal 8 as well. I'm not sure. I'm pretty sure it is. Someone's nodding, so there's a good chance I'm right. If it isn't, then it's a simple composer command away. It's always Guzzle. It's an HTTP client, so it uses curl or the PHP stream handler under the hood, so you don't have to do any of that. It has persistent connection support. It has concurrent and asynchronous requests, which I love. Absolutely fantastic feature. Again, it uses the underlying libraries, but it exposes them in a way that normal people like me can actually understand how to make concurrent or asynchronous requests. Guzzle is extensible. It has a concept of middleware built into it, so you can modify the data flow going to and from your API. Finally, it uses something called PSR-7. PSR-7 is one of the PHP standard recommendations from the FIG group. There was a talk on the FIG by Larry Garfield, probably yesterday; it might have been the day before. One of the two, anyway. Has anyone heard of the FIG and the PSRs? A few of you. It's a committee. They generate standards by committee. Some of them are good.
Some of them are not so useful. PSR-0 was the famous one about auto-loading, which was a fantastic improvement to the entire PHP ecosystem. We could start sharing libraries without having to go to phpclasses.org and download them as a file, because Packagist exists and Composer exists solely because of the Framework Interoperability Group's auto-loading PSRs. Then there's a whole load of other PSRs which are of no interest to anyone, and then we get to PSR-7. PSR-7 is HTTP request and response message standardisation. So that's really, really good for interoperability. Why Guzzle? Why do I use Guzzle over just using curl directly? Mainly for these reasons. Firstly, it is so much easier to use. The auto-complete within PhpStorm, or NetBeans, or whatever IDE you're using. Surely someone is using Zend Studio. Much, much easier to use, because you can auto-complete it. It just works. It's lovely. Asynchronous requests, in a way that you can understand, make a material performance gain in your applications if you have lots of API calls to do. So that's really, really nice, and Guzzle exposes that in a very intuitive way. And as I say, PSR-7: I think interoperability is really important. So the fact that Guzzle is a PSR-7 compliant library and component is important to me. It's much easier to test your code if you can mock out your API calls. It's really hard to test something that uses the curl calls directly. It's really difficult. If you've got Guzzle, or any other PHP client library, in the way, it's much, much easier to test your code. And that's quite important for long-lived and well-maintained applications. Finally, Guzzle is popular. It's downloaded a lot. It is in a lot of different projects. And popularity isn't important by itself, but popularity means that other people have found the bugs before you have, and they've probably been fixed. Popularity also means there's probably an answer on Stack Overflow. Both of those are quite important to me.
It is fine to be right on the leading edge of any given technology, but it's much harder there, and much harder to deliver on time and within sprints without doing lots of spikes. So I quite like well-tested, well-debugged, well-supported tools, and Guzzle is one of those. So, an API client is just an HTTP client. Let's talk about HTTP. I don't know how much you know about HTTP, so forgive me if I'm teaching you to suck eggs. HTTP has been around a very, very long while. 1990 was when it was invented. Tim Berners-Lee invented HTTP so that we could do the web, very, very roughly. Awesome. Fantastic. It took about six years before someone bothered to write down how it worked. I don't know how that happened. It's the underpinning of the web, and they did 1,944 other RFCs before they got around to writing down how a web server worked, which was really interesting. So in May 1996, they finally documented HTTP 1.0, which was a retrospective specification: they basically documented what was actually happening out in the real world. Which is why, a mere six months or so later, HTTP 1.1 came out, because they did the two in parallel. So they were writing up how the world was currently using HTTP, and then they wrote down how it should be used, to make it a little bit more consistent. So HTTP 1.1 came out in January 1997, and it has been the best protocol ever invented. And I say that because that was back in 1997, it is now 2016, and we all still use HTTP 1.1. That's a nearly 20-year-old protocol that has worked really, really reliably. That's pretty fantastic. It's so good they had to document it three times. I don't know why this happened, but they improved it in RFC 2616, where they actually documented it a bit better. And then again, they rewrote all the specs in the RFC 723x series: 7230, 7231, 7232, et cetera, which document different bits of HTTP in great detail. I'm guessing that you people don't actually read RFCs for fun. Just a guess. I'm one of those people that quite likes RFCs.
I read them a lot. RFC 7231 should be read by every web developer on the planet. It would make life much, much easier if everyone read that RFC. 7231 is the one that documents the HTTP methods and documents all the status codes. I would love it if every web developer out there knew what the status codes were and used them in the right places. You don't have to read the whole spec. They're not that interesting, but 7231 is probably worth a read. Finally, we got to HTTP/2. HTTP/2 came around in May 2015, as RFC 7540, and it's slowly gaining adoption. As you can see, it's fairly recent. Not that much of the web is using it, and it's SSL-only for all practical purposes. But it's important. So, has anyone here used any HTTP/2? Okay. We're not going to talk about it much, because it's not that interesting for us. But HTTP/2 is binary, not text. That is a key difference from HTTP 1.0 and 1.1. The really, really nice thing about HTTP/2 is that it was designed and evolved from the SPDY protocol by Google, which means that it's got multiplexing. So you can do multiple requests across the same TCP/IP connection, which is really helpful when you've got lots of little images, for instance. I'm not a front-end person, but even I've heard of that idea of creating one big image with lots of little images inside it, and slicing it up with CSS. There's probably a name for that. It's probably a pattern. Sorry? Sprites. There you go. It has a name. Sprites. I'm not a front-end person. That goes away with HTTP/2. With HTTP/2, you just throw all those GIF files and little PNG files, wherever they are, straight down the wire, because we can hold the TCP/IP connection open, which makes it really quick. Because the connection isn't constantly set up and torn down, servers can also push things directly to the clients with HTTP/2. That's quite cool. If someone requests an HTML page, it's fairly likely they're going to be requesting the CSS file pretty soon.
With HTTP/2, the server can say, "by the way, you probably want this soon", and start sending it down the wire whilst the browser is still trying to decode the HTML to work out what the name of the CSS file is. When the browser catches up and asks for it, half of main.css is already down the wire on the computer. That's quite a cool feature. There are a lot of nice things in HTTP/2. One of the important ones: it uses the same HTTP status codes. All the stuff you know about HTTP 1.1 hasn't been superseded. I really, really mean that. For this talk, we do not care at all about HTTP/2, because at the PHP level it looks identical to HTTP 1.1. It looks the same. It is a text-based protocol at our level, which is wonderful. I'm not sure HTTP/2 would have had any adoption whatsoever if they had broken every single website in the world. You can see why it went that way. So, HTTP messages. RFC 7230 says this: HTTP is a stateless request/response protocol that operates by exchanging messages. This is why this protocol has lasted so long. It is really simple. It's stateless; there's no concept of state. So that makes it very easy to implement. You simply send a request and you get a response. That's it. That's the entire protocol. This is why it's so popular: it's so easy to use. The request and the response are just plain text, which makes life really, really easy, and makes HTTP/2 a right old pain, because that's gone binary. So you now need special tools to actually see what's happening on the wire, when we didn't used to need them, because it all used to be plain text. So, an HTTP request: the first line has got a method, like GET, PUT, POST or whatever. Then you've got the URI, and then you have the version number, HTTP/1.1 in this example. Then you have a set of headers. Headers are simply key-value pairs, and they are completely extensible. You can just invent new ones. That makes them very helpful.
Interestingly, the delimiter between the different values of a given HTTP header is determined by the RFC for that header. It is not a common standard. So in my example there, we've used a comma between value 1 and value 2. Not every header uses a comma between multiple values. Most do, but not every one, because each header is defined separately. Then we have a blank line, and then we have the message body. We can put whatever we like there. Nine times out of ten, we put HTML. Easy. The response goes the other way. So we have a status line. We put the version number at the front this time, just to make it different. Then we put the status code. We are all aware of status codes. Obviously the most famous one is 404. The one we all hate is 500. Occasionally we see a 200, and then we go, "yes!". So we all know our status codes, and then there's a reason phrase, which tells you the text meaning of the status code. So 404 is Not Found, 200 is OK, 500 is Internal Server Error, et cetera. Fun fact: that text never, ever changes. It is always the same text. In HTTP/2, they don't bother sending it any more, because you can have a look-up table. A fun difference between HTTP/1 and HTTP/2 is that we don't send the text for the status messages any more. It's a bit pointless. Headers and the body, again, are exactly the same sort of format, just going the other way down the wire. Easy. Here's a typical request. This is one where I've used my Firefox to connect to my website. So the first line is the request line. It's a GET request, HTTP/1.1, and the URI is just a slash. Back when HTTP was invented, there were so many IP addresses and so few servers that nobody ran more than one website on a server. Obviously that changed over time, so we have the Host header now to determine which particular website on that particular IP address is the one that we actually want served. And then you've got a whole load of other headers that do stuff. My response goes back the other way.
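The request format just described looks something like this on the wire (the header values here are illustrative, not taken from the slide):

```
GET / HTTP/1.1
Host: www.example.com
Accept: text/html, application/xhtml+xml
X-My-Header: value1, value2

(blank line, then the message body, if there is one)
```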
There's my 200 OK in my status line, and then I've got all my headers again, like my Server header, which is nginx in this case. It's version 1.11.2, so you can probably tell which version of Ubuntu I'm using. Keep-alive timeouts, things like that, and which content type it is: it's text/html. Content-Type is a really important header. And then I've got my body at the bottom, my HTML itself. My website actually has a bit more HTML than that, but it's boring to read and we're out of room on the slide anyway. Status codes. We're all familiar with status codes? Yes. We group them into five groups, and they're always a three-digit number. The first digit tells you the type of status it is. So the 100 series are informational. They're not practically useful; you've probably never come across one. The 200 series are successful. So if you get a 2xx message, your request succeeded with the server in some form or another. The 300 series means that you need to go somewhere else to finish this request off. That doesn't mean that the request wasn't successful; it just means that to complete the process we must go somewhere else as well. Then you get the errors. The errors are in the 400 series and the 500 series. Very, very loosely, the 400 series means that the client made a mistake, and the 500 series means that the server made a mistake. As you're writing an API, by the way, nearly all the error status codes you will use are the 400 series, because invariably you are protecting the clients from themselves. You very rarely write 500 error messages, because nearly all of them are infrastructure-based, and hopefully you have ops people to solve that problem. If you don't have ops people, then you should probably hire some. Headers. These are the popular headers. Accept is a list of media types; as a client, I can tell the server that I only understand these particular formats.
The media types and the formats are really important for having a web that actually works automatically, without humans involved. It's the bit that allows a web browser to determine whether it's displaying a page or whether it's giving you a file download. It's all done off the Accept header, and on the other side there's Content-Type, down the bottom, which is sent back to tell the browser which particular format this data is coming in. Because it's just plain text, it could be anything. So we use the media type in the Content-Type header to tell us that information. Then there's a whole load of cache control ones; they're quite useful for speeding up the web. HTTP in general, because it is stateless, allows you to cache things really well, and you can put different layers of caching in because of the way the headers work. So we can put a caching server in front of our web server, and it will cache the relevant requests and not cache the others, based on things like the method, or the Cache-Control and ETag headers, and things like that. Varnish is obviously one of the more famous ones for doing that sort of work. It's all done via HTTP and very standard, well-defined headers. How do you do this in PHP? We all write PHP applications. We build websites, yes? Some of us write PHP. A few of us don't. PHP is a bit of a mess when it comes to all this. You can tell PHP grew up with the web; we very much are the web's language in a lot of ways. PHP is really, really good at this stuff, but it's showing its age. So you've got things like the superglobals. You've all probably heard of $_GET, $_POST and things like that. They are awesome. They work really, really well. Nearly all the time. $_POST is only filled in if it's a POST request that came from a form. If it's a POST request that came with some JSON data, $_POST is empty. So you have to go to php://input to find that JSON data.
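A minimal sketch of that workaround: reading a JSON payload from php://input when $_POST is empty.

```php
<?php
// $_POST is only populated for form-encoded POST bodies.
// For a JSON payload we read the raw request body ourselves.
$raw  = file_get_contents('php://input');
$data = json_decode($raw, true);

if (json_last_error() !== JSON_ERROR_NONE) {
    http_response_code(400); // the client sent something that isn't valid JSON
    exit;
}
```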
Nearly all the headers are in $_SERVER, but some of them aren't, so you have to use apache_request_headers(). Which works well with nginx? For the response, we use header() to send a header. header() is great. It sends a header, no problem. Well, it's got two exceptions. There are two wrinkles in the header() function where it does something different based on the status code that you put in there, and nobody knows what they are, because nobody has read the manual page. We've got headers_list() and headers_sent(), so we can manipulate headers. That's fine, and then we use echo to send out the body, because it's just plain text. It's quite easy. Right up until the point where we get that message that says "Cannot modify header information - headers already sent". So then we invented the output buffer to solve that problem. PSR-7 is a set of object-oriented interfaces to solve all those problems. It brings PHP into an object-oriented world when it comes to dealing with HTTP messaging. That's really important. There have been a lot of HTTP request and response objects in the PHP world as we've tried to do this; the most famous one is Symfony's HttpFoundation. It's a very, very good request and response object. It's a little bit bloated, because it's been around a while, but it's a very, very well thought out and well-featured one. PSR-7 is a standardised version of that, but works completely differently, because that's how standards work. This is what the set of interfaces looks like. They're interfaces. They are not classes, because it's about interoperability. We expect, or the FIG group expects, people to write their own libraries that implement these interfaces, and then they'll interoperate with each other. So there are a number of different client libraries that all implement the same set of interfaces. The message is obviously an encapsulation of the base message, because the request message and the response message look basically the same.
They've both got headers, they've both got a body, so we can probably put the two together and just call it a message, and then we can have specialisms for the request and the response. As a further specialism, we can have a server request to handle things like cookies and file uploads. UploadedFile is its own object, because we want to have a nice object-oriented interface for dealing with an uploaded file. And we also have an interface for a URI. You wouldn't have thought a URI was particularly complicated, but there's a scheme in there, there's a port in there, there's authentication information in there, there's a domain, there's a path, there's a query, there's a fragment. There are a few bits in a URI. So there's a nice OO interface to that as well. It makes life easy. PSR-7 is immutable. This is the big difference between PSR-7 and HttpFoundation. And this is important, because Guzzle uses PSR-7 and HttpFoundation is really, really popular. Fortunately, there's a bridge between the two. Immutability means that you cannot change the object once you have created it. That has got some advantages and some disadvantages. The obvious advantage is that once you've created the object, you can trust what's inside it. It never changes underneath you. You can imagine how easy that is to test. Very, very easy to test, very easy to work with. But it turns out that we do need to change things inside objects. On a user object we might want to change the username, or change the date of birth, or whatever. In terms of PSR-7, if we have a URI object, we might want to add a query to it. So in the example there at the top, I've created a URI object with a URI for the Joind.in API. The list of events is an events collection: it's a list of events. And I might only want the upcoming events. Joind.in is an event rating system, so there are lots of events in there.
If I want to add a query to a URI, I have to use the withQuery method, and withQuery will return a new object, because the objects are immutable. Nice and simple. We've now got a second URI object. Request works the same way. If I want to create my request object, then I want to make it a GET request and I want to have a URI for it; I want to set an Accept header; I want to set an Authorization header. I use the with methods, and I get a new object every time. That's key feature one of PSR-7. It is immutable, and it is really, really awesome. It is not slow. If we're using PHP 5.5 or above, this is really quick, because clone is really fast in PHP, and PHP is really, really good with object memory. So you'll find that this isn't actually a particular performance problem. They did lots and lots of tests before they went this way. That's fine. Secondly, your message body is a stream. Now, I told you that message bodies are simply plain text, but they've been modelled in PHP as a stream, and that was a really clever move, because streams allow us to keep the memory usage of our PHP application down when we are passing a really big data file through it, from the browser to, say, S3 or something like that. I don't know about you, but my client seems to create really, really big PowerPoint presentations, and then they want them uploaded so that their users can then download them. I'm unclear what the users do with these PowerPoint presentations, but I think the marketing department thinks that their users read them. Who knows? I don't want to have a memory limit that large just to get this file into memory in PHP so I can send it up to S3. That's just insane. A PHP stream enables me to pull that straight from the browser, straight through to S3, and only keep a small bit of it in memory.
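The with methods described above might look like this as code, assuming the guzzlehttp/psr7 package (the query string and the token are illustrative, not real Joind.in values):

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Psr7\Uri;
use GuzzleHttp\Psr7\Request;

// with* methods return a NEW object; the original is never modified.
$uri  = new Uri('https://api.joind.in/v2.1/events');
$uri2 = $uri->withQuery('filter=upcoming'); // $uri itself is unchanged

$request = (new Request('GET', $uri2))
    ->withHeader('Accept', 'application/json')
    ->withHeader('Authorization', 'Bearer TOKEN-GOES-HERE'); // placeholder
```

Each withHeader call clones the request, so the intermediate objects stay trustworthy all the way down the chain.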
It's a really, really useful feature of PSR-7, and obviously this is of particular interest to me, because if I'm using Guzzle and I'm doing file uploads, I'm normally sending them on somewhere, and, as I say, I quite like to have control of my memory. So that's the other key feature of PSR-7 which is really, really helpful. I've said this is interoperable. I'm a developer of a framework called Slim. It's a very small, API-focused framework; it's called a micro framework. It doesn't do very much. It's got its own implementation of PSR-7. Guzzle is another implementation of PSR-7, and as a result I can hook the request object that comes from Slim straight over to Guzzle, and the response object that comes back from Guzzle I can send straight back into the Slim pipeline. They just interoperate between themselves, and this is the promise of the PSRs in general: the idea that we can interoperate and use an object that came from one vendor in the context of another vendor, or component, or project in this case. So that's quite cool. PSR-7 is really, really nice. So, with all of that out of the way: we now know all about HTTP, and we now know how HTTP is modelled in PHP with PSR-7, as far as Guzzle is concerned. So let's talk about Guzzle. Guzzle, as I said before, is really easy to use. We instantiate a Guzzle client on line one, and then we can call the request method on that. We pass in the HTTP method, GET in this case, followed by our URI, which again is my events collection on the Joind.in API, and that gives me a response object. That's a PSR-7 response object, so I can call getBody on it, and I can json_decode it, and I've got an array, because the Joind.in API will give me JSON data. If you don't want to type out the word "request" followed by the HTTP method, you've got a number of shortcut methods built into Guzzle: get, post, put, patch and delete, which do what they say on the tin. So I can do $client->get() with just the URI, and it will do a GET request to that URI.
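That usage might look like this as code (a sketch; the endpoint is the Joind.in events collection mentioned above):

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client();

// Long form: HTTP method as a string.
$response = $client->request('GET', 'https://api.joind.in/v2.1/events');
$events   = json_decode((string) $response->getBody(), true);

// Shortcut methods do the same: get(), post(), put(), patch(), delete().
$response = $client->get('https://api.joind.in/v2.1/events');
```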
So it's very, very easy to use, quite intuitive in some ways. As I've said, Joind.in is an event rating website. The whole point of Joind.in is that we have events on there, we have talks on there, we have speakers on there, and people can comment on and rate the talks that they see, so that the speakers can get feedback. Obviously it's not used by this DrupalCamp, because you have your own system, but it's used quite widely at a number of conferences. A user profile page looks like this. One of the interesting things about Joind.in is that it is a decoupled website. There is an API and there is a website. The website doesn't have a database; the database that the website uses is behind the API. So all the calls on the website are API calls back to the API in order to get the data for display on the website. It's an open source project. It's built mostly by PHP people, which is why it is not all in Ember or React or Angular or whatever. But in principle we could completely replace the website and not change any of our data, or the way our data store works, or the way our business logic works, because we've already decoupled. We're starting to see this quite a lot nowadays. This is the profile page. It happens to be my profile page, and there are a number of blocks that you can see on the screen, these sort of squares. At the top there's my name, so we had to retrieve my user profile, my user details. Then there's a list of the talks that I've given, there's a list of the events that I've attended, and there's a list of the comments that I've left for other people. There's also another box there that you can't see, which is the list of events that I have hosted, but because I don't host events, that box is empty. The code behind the scenes you can see on the right there: it's five API calls. So we go and get the user profile, and then from the user profile we can then get the other data. So how would we do that in Guzzle?
Firstly, we instantiate our Guzzle client, and we add a new option to our Guzzle client called the base URI. If you're going to make a lot of calls to the same API, you can save yourself some typing by putting the common bit into the constructor, and then, from that point on, every call that you make with that client will automatically prepend the base URI onto the endpoint that you are trying to call. So here I'm calling the users endpoint on line 5, and it will automatically put in the base URI for me, so it will hit the fully qualified endpoint. I've got some query parameters, so the full URI is api.joind.in/v2.1/users?username=akrabat&verbose=yes. So when I call get, I put in the endpoint name, the URI "users", and then I have an array as my second parameter where I can put additional options. For a GET call I use query. For a POST call I'd probably use body. I can also set headers. Joind.in gives me JSON back, so I tell it that I accept JSON. I could also tell it I accept XML, or I accept HTML, or something, and Joind.in would either give me the data or it would tell me it is unable to do so and send me a 406. And then finally I get my response back. I can call getStatusCode and check that it's a 200. If I've got a 200, I can then decode my body using json_decode and retrieve my user from the array, and I will get that data back. So this is the data that comes back from the API, and this is a hypermedia-based API. When I say it's a hypermedia-based API, I mean that the payload that I get back has additional URIs, additional endpoints, to provide further information. So you will see that there are things like the talks URI and the attended events URI. So my API client can discover additional information about this user, which talks this user has given, which events this user has attended, by simply reading the data within the payload. So we can use that. Oh, there we go, a slide about it and everything.
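A sketch of that call, using Guzzle's base_uri option as described (the exact array keys in the decoded JSON payload are an assumption here, shown for illustration):

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// The common part of the URI goes into the constructor once.
$client = new Client(['base_uri' => 'https://api.joind.in/v2.1/']);

// Resolves to https://api.joind.in/v2.1/users?username=akrabat&verbose=yes
$response = $client->get('users', [
    'query'   => ['username' => 'akrabat', 'verbose' => 'yes'],
    'headers' => ['Accept' => 'application/json'],
]);

if ($response->getStatusCode() === 200) {
    $data = json_decode((string) $response->getBody(), true);
    $user = $data['users'][0]; // payload shape assumed for illustration
}
```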
So there are four interesting hypermedia URIs within that payload that we need for our profile page, and we now need to go and retrieve those four. Before we do that, let's quickly talk about errors. I don't know about you, but whenever I talk to an API, the first thing that happens is I get an error back. It takes me a few goes to actually get a successful request from a new API, because documentation of APIs is so atrociously bad. It's definitely a pain point in our industry, and that's not PHP, that's across the whole web industry. Documentation is hard. GitHub's API documentation is not bad. It's one of the better ones, but even there it still takes a couple of goes before you actually get a successful API response back. So, in Guzzle we use exceptions. An error is turned into an exception, and there is an entire hierarchy of exceptions that you can then catch, so that you can choose the granularity at which you wish to respond to the error. Everything comes off the standard RuntimeException that's built into PHP, and below that we have TransferException and SeekException. The SeekException is only used for the stream handling. So if there's a problem reading the file off disk that you're trying to upload, then you'll probably get a SeekException. All the errors related to doing a request and a response are within the TransferException, and most of them are inside RequestException, except for TooManyRedirectsException. That's such a well-named exception, isn't it? Too many redirects. Exactly what it says on the tin. So you can start writing code like this, where you can pick your granularity and handle the error at the level you want to handle it. Here I've done my client get call, it's failed, so I now need to wrap a try/catch around the client call, sorry, the get call, and if I catch a ClientException, then a 400-type error happened.
So I can probably do a retry, or I'm going to inform the user that they need to be logged in, or something like that. If I get a ServerException then it's a 500 error. Realistically there's nothing I as a client can do to fix that problem except tell my user to come back later, and hopefully someone will fix the other end. You live in hope. ConnectExceptions happen when you get networking errors, so if there are networking errors between the PHP and the outside world you'll get a ConnectException from Guzzle, and you can trap that independently and deal with it appropriately. Maybe you tell the ops team that they've screwed something up. If you're in DevOps, of course, you fix it yourself. And lastly we have the TransferException: if anything else happens, that one will always catch the error. So that's the general format of handling errors within Guzzle. This is quite a common pattern that we're starting to see in the PHP world, where we catch on exception type to determine different handling responses. It's quite a nice pattern. So to generate that profile page I'm going to have multiple requests. I get my user in lines 1 and 2, then I make four other API calls, get the rest of my data, and I can display my pretty page and everyone knows how awesome I am. Who knew? It's sequential, and it's a bit slow. Was anyone in the JSON API talk yesterday? That was quite a good talk about this sort of stuff. Sequential is relatively slow: we have to fire up an HTTP request for every one of those, and that's a separate TCP/IP connection as well. It's not ideal. We can do a bit better than that. We're writing a lot of API-first stuff nowadays, and we need to do better than sequential requests if we can, where we have to wait for and start up each request individually. And that's asynchronous requests, where we can set up multiple requests all at once and let Guzzle do the work. 
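The catch ladder described above can be sketched like this, assuming Guzzle 6; each catch block picks off a narrower class of failure before the catch-all TransferException at the bottom.

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Exception\ClientException;
use GuzzleHttp\Exception\ServerException;
use GuzzleHttp\Exception\ConnectException;
use GuzzleHttp\Exception\TransferException;

$client = new Client(['base_uri' => 'https://api.joind.in/v2.1/']);

try {
    $response = $client->get('users', ['query' => ['username' => 'akrabat']]);
} catch (ClientException $e) {
    // 4xx: our request was wrong; maybe ask the user to log in
    echo 'Client error: ' . $e->getResponse()->getStatusCode();
} catch (ServerException $e) {
    // 5xx: their end is broken; nothing we can do but come back later
    echo 'Server error, try again later';
} catch (ConnectException $e) {
    // networking failure between PHP and the outside world
    echo 'Network problem: ' . $e->getMessage();
} catch (TransferException $e) {
    // anything else Guzzle can throw
    echo 'Transfer failed: ' . $e->getMessage();
}
```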
When I say let Guzzle do the work, I actually mean the curl client library, because Guzzle is merely a wrapper around curl at the end of the day, and curl is awesome; I think I might have mentioned that. Curl is so awesome that it handles this for us, so we just proxy down to curl, which is what PHP does. You get a difficult problem in PHP? We proxy it down to another layer, or up to a different component like Varnish or whatever, or we put it into a queue like RabbitMQ. If you're doing your PHP right, you hand the really difficult bits to specialist apps. So asynchronous requests in Guzzle are based on promises. Does everyone here do JavaScript? A few of you. So you've probably come across the concept of promises already, because they're fairly big in the JavaScript world. I think ES6 does this a lot. As you can see, I'm not a JavaScript developer. There's a website called promisesaplus.com which is the standard definition of how promises are handled in JavaScript, and on that website it says a promise represents the eventual result of an asynchronous operation. The primary way of interacting with a promise is through its then method, which registers callbacks to receive either a promise's eventual value or the reason why the promise cannot be fulfilled. So it is simply an object that will sooner or later call a callback for you. That's it. They're not particularly complicated things, but there's an awful lot of words around them and they feel quite complex. They're really not; it's merely an object. So here, instead of calling get or post, we call getAsync or postAsync. Our change in terms of using Guzzle is that the method name gains the word Async at the end of it, and this now becomes an asynchronous request. The rest of the options for the method are exactly the same as we've used before, so there's no new learning involved; we're just going asynchronous. And that will return me a promise, not a response. 
So when I call client get I get my response back immediately. When I call getAsync I get back a promise, which will become a response in the future. That's why they're called promises: they promise that sooner or later something interesting will happen with this object. I can call getState on it and it will tell me it's pending, so nothing yet; we are still waiting for this operation to occur. A pending promise will resolve by either being fulfilled, successful, or rejected with some sort of reason. There's a method called then where you register how you want to deal with the result of this promise. We have two callables, an onFulfilled and an onRejected, and we just have to fill them in with what we want to do. It looks something like this. So there's a then, and we use simple PHP closures; they've been around since 5.3-ish now, I think, so they've become more common, as have anonymous functions. I just register a couple of functions with my then method. The first will have a response coming into it, because it is a fulfilled promise, so when the first callable is called I have a response. My second callable, lines 5 through 7, will be a closure with an exception coming into it, because an error has occurred in some form or another. Now, a fulfilled promise doesn't necessarily mean a successful HTTP response. A fulfilled promise will also end up covering client errors, because a client error is not necessarily a failure to fulfil the request: the request succeeded, it's just that the server responded with a failure. So if you get a validation error, for instance, that is a successful HTTP request. Be aware of that. The code looks something like this. It's fairly easy: inside the first closure, line 6, we do our json_decode, because that's where we get our response object, and in lines 10 and 11 we do something with the failure. 
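Putting those pieces together, an async call with the two callables might look like this. A sketch, assuming Guzzle 6; the endpoint and handling are placeholders.

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;
use Psr\Http\Message\ResponseInterface;

$client = new Client(['base_uri' => 'https://api.joind.in/v2.1/']);

// Same options as the synchronous call; only the method name changes
$promise = $client->getAsync('users', [
    'query' => ['username' => 'akrabat'],
]);

$promise->then(
    // onFulfilled: we have a response (which may still be a client error)
    function (ResponseInterface $response) {
        $data = json_decode((string) $response->getBody(), true);
        // ... do something with $data
    },
    // onRejected: something went wrong with the transfer
    function (RequestException $e) {
        echo 'Request failed: ' . $e->getMessage();
    }
);

$promise->wait();   // actually run the request and fire the callbacks
```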
To fulfil the promise, to make it run, we can call wait, which will go and run this request for us and then call our callables. For just one request it's a bit boring; realistically you wouldn't bother with all that for one HTTP request. It gets more interesting when you've got multiple, but I've only got 16 lines of code. We can do chaining with this system, which gets interesting. This is where you start being able to do if-this-then-that type stuff at an API level. Here I can get my user, and then set it up in a promise so that if the promise is fulfilled, the request succeeds, we go and get the talks for that user. But if the promise fails, if the request fails, it doesn't even bother trying to get the talks. So I've just saved myself an if statement; I don't have to bother testing for that because I've already covered it by putting it inside the promise. I can do concurrent requests. I can now get the talks, the events I've attended, the events I've hosted and all the comments I've made, all at the same time, almost. What I do there is assign multiple getAsyncs into an array, and then pass those into the unwrap method, which will iterate through them and return an array of responses for me, so I can get the responses back out of each one in turn. You can start doing some more interesting things at this point just by thinking quite simply about what you're doing with your API and your API integration. If you try to do this with curl manually in PHP, you'll do it sequentially, because it's so much easier. This is where Guzzle and PHP client libraries really start coming into their own. Pools allow us to handle an unlimited number of requests, where we don't know the number of requests that we need to deal with. Back in the concurrent example I knew there were four requests I wanted to make: talks, attended, hosted, comments. It's a closed set. 
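The concurrent version of the four profile requests might look like this. A sketch, assuming Guzzle 6 and its bundled promises library; the four URIs are placeholders for the hypermedia URIs found in the user payload.

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client(['base_uri' => 'https://api.joind.in/v2.1/']);

// Kick off all four requests; none has completed yet
$promises = [
    'talks'    => $client->getAsync('users/1/talks'),
    'attended' => $client->getAsync('users/1/attended'),
    'hosted'   => $client->getAsync('users/1/hosted'),
    'comments' => $client->getAsync('users/1/talk_comments'),
];

// unwrap() waits for all of them and returns an array of responses,
// keyed the same way; it throws if any promise is rejected
$responses = Promise\unwrap($promises);

$talks = json_decode((string) $responses['talks']->getBody(), true);
```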
It's quite easy; I can just run a concurrent request. But with a pool, I don't know how many requests I want to make; it's a variable within my system. In this particular example, let's say we want to get the list of Twitter handles for all the speakers at our event. The way we're going to do that is to get the list of talks for a given event, in this case event 6002. That's a simple API collection, and it will give me a list of talks. I can then iterate over all the speakers of all the talks, on line 7, and if there is a URI for that speaker I can go and retrieve that speaker's profile. I don't know how many I will get; it's variable. So I foreach over them and create a list of requests, which is simply an array. Now, you'll notice on line 9 I have to create a request object. That request object is a PSR-7 request, not an HttpFoundation request, so you have to remember we're back in our immutable world again. The Guzzle request object takes a method as its first parameter and a URI as its second parameter; nine times out of ten that's all you care about. If you did need to set a header on it, you would use withHeader, because you need to use the immutable methods. Now I create a pool. The way this works is I have my client object, I have my list of requests, and then I have an array which contains the fulfilled and rejected callables that we saw in the promises section. Lines 4 through 7 are a callable that gets executed for every request that succeeds, so I'm setting up what to do when all these speaker profiles come back at me. I haven't made any calls yet; I'm just setting everything up. In this particular case I'm just going to read the Twitter handle and put it into an array, nice and simple, on line 6. Finally, I can execute my pool using the promise method. I have no idea why Guzzle uses different words for the same idea on different objects. 
So we call pool promise and we get our promise object, we call promise wait, we actually run all the requests, and then we get our list of Twitter handles. This works really, really well, because APIs are all like this. We started off with a simple talks URI, one single API call, and then needed to make n extra API calls, but we don't have to worry about any of that, because we've just wrapped it up into a concurrent request pool, it's gone and done the work for us, and we've got a list of Twitter handles out at the end. The way Guzzle has done this behind the scenes is it's opened up a number of connections to my API. That's a variable I can set; the default is 5, which works out pretty well for most people. Then, as each request finishes, it drops another one onto that connection. You've got five connections running and it just packs in all the requests that need to get done, automatically, for us. We don't have to worry about which queue is getting shorter or anything like that; it handles it all for us. If you've got 100 API calls to make and five connections going, some are going to be faster and some slower; Guzzle will pack them appropriately to minimise the amount of time it takes to do all those requests. I think that's really impressive. It's really, really nice. I like APIs; I think they're really cool. That's one of the really key features of Guzzle, this whole concurrent API request handling inside of pools. I appreciate there's an awful lot of code there and none of you read most of it, because it's not that interesting to look at in a talk. All these slides are online so you can go and read them later; just go to akrabat.com, or search Speaker Deck, because they're there as well. So that's retrieval. All we've looked at so far has been retrieving data, because a lot of API integration is about getting data from the other place. 
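The whole speakers-to-Twitter-handles example might be sketched like this. Assumptions: Guzzle 6, and the Joind.in payload field names (`talks`, `speakers`, `speaker_uri`, `twitter_username`) are my guesses at the shapes the talk describes, not verified against the live API.

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;
use Psr\Http\Message\ResponseInterface;

$client = new Client(['base_uri' => 'https://api.joind.in/v2.1/']);

// One API call to get the collection of talks for event 6002
$talksResponse = $client->get('events/6002/talks');
$talks = json_decode((string) $talksResponse->getBody(), true);

// Build a variable-length list of immutable PSR-7 requests
$requests = [];
foreach ($talks['talks'] as $talk) {              // field names are assumptions
    foreach ($talk['speakers'] as $speaker) {
        if (!empty($speaker['speaker_uri'])) {
            $requests[] = new Request('GET', $speaker['speaker_uri']);
        }
    }
}

$handles = [];
$pool = new Pool($client, $requests, [
    'concurrency' => 5,                           // the default number of connections
    'fulfilled'   => function (ResponseInterface $response) use (&$handles) {
        $user = json_decode((string) $response->getBody(), true);
        $handles[] = $user['users'][0]['twitter_username'];  // assumption
    },
    'rejected'    => function ($reason) {
        // log and carry on; one failed profile shouldn't stop the rest
    },
]);

// Nothing has been sent yet; this runs the whole pool
$pool->promise()->wait();
```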
But occasionally we do need to send data to an API, so I would be remiss if I didn't at least show you how that works in Guzzle. As you can imagine, it's not particularly complicated, and everything you have learnt about retrieving data with Guzzle works for posting data with Guzzle, even down to promises and pools: you can set up a pool that will post 100 files up to S3 or something like that, and it will slot them in for the most efficiency in terms of bandwidth. But at a simple level, we simply create a client object, we set up our base URI, and we set up any common headers that we might need. In this case I've set up an Accept header and an Authorization header. If you're going to send data to an API, they're going to want to know who you are. It's fairly rare that people accept data from some random person on the internet nowadays, because that way leads to WordPress comment forms, or Disqus comment forms, or Reddit. So you're going to have an authorization header of some form or another; this one's an OAuth2 one. Then we have our request method. This time I'm doing a post, not a get, on line 10. I've still got my URI; in this case I'm going to send a comment in, so my URI is the talk's comments collection. I send a body. This is simple HTTP: it's called a body, that block of text at the bottom of the HTTP message, and that's the keyword within Guzzle. For the Joind.in API we have to send a JSON body; our payload has to be in JSON. So I have to set the content type to application/json in my header, because I'm a good API citizen. If I'm going to send JSON to a server, I'm going to tell that server that it's getting JSON, because that's polite, and I'm British, so polite. Sometimes. Occasionally we vote incorrectly. In this one I manually wrote that JSON. In practice nobody manually writes that JSON, because it's such a common requirement that we all build an array and call json_encode on it. 
That's built into Guzzle for you. Instead of using body as the field within our options array we can use json, and Guzzle will automatically convert the array to JSON for us and automatically set the correct header. So I don't need to set the content type header, because Guzzle knows it's JSON; it's just JSON-encoded it, and you don't need to tell it something it already knows. That makes life a little bit easier for the common use case. Uploading files works just the same way. The only thing you need to be aware of here is that the body is a stream. I mentioned this about PSR-7 earlier: body is a stream, not a string. So to upload a file you don't use file_get_contents, you use fopen, because fopen will return you a file pointer, a handle to a PHP stream. You just attach that handle directly to Guzzle, and it knows what to do at that point, will do the right thing for you, and you get all those memory-efficiency benefits out of the box. That's how that all works. So that's where I want to stop today. Can't walk away from the mic, sorry. So that's where I want to stop today. Guzzle, as I hope you've got a flavour of from this talk, is a very, very powerful HTTP client. It's built on standards that are becoming very common in the PHP community and I think will grow over time. HttpFoundation has a bridge to PSR-7. The new version of Zend Framework is PSR-7. The Slim framework is PSR-7. Laravel has got a PSR-7 bridge and will go PSR-7. Cake is PSR-7. We are starting to see a lot of PSR-7 going forward, so I think that's a really key feature of Guzzle and a useful thing for you to be learning. And Guzzle itself, with its ability to do asynchronous API calls and multiple simultaneous calls, is really, really helpful if you've got complex HTTP requirements. That's all I've got to say. Does anyone have any questions? I've got a slide for questions. There we go. 
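The posting side described above might look like this. A sketch, assuming Guzzle 6; the token, talk id, comment fields and upload endpoint are all placeholders, not the real Joind.in contract.

```php
<?php
use GuzzleHttp\Client;

$client = new Client([
    'base_uri' => 'https://api.joind.in/v2.1/',
    'headers'  => [
        'Accept'        => 'application/json',
        'Authorization' => 'Bearer <token>',   // placeholder OAuth2 token
    ],
]);

// The 'json' option encodes the array and sets Content-Type for us,
// so there's no need to set that header by hand
$client->post('talks/123/comments', [          // hypothetical talk id
    'json' => ['comment' => 'Great talk!', 'rating' => 5],
]);

// Uploading a file: the body is a stream, so pass an fopen() handle,
// not a string from file_get_contents()
$client->post('files', [                       // hypothetical endpoint
    'body' => fopen('/path/to/slides.pdf', 'r'),
]);
```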
Any questions? Or is everything clear and you're ready for lunch? Ready for lunch is fine, I appreciate it; it's quarter to two now. I have a hard time understanding the concept of the promises. The client still has to wait until all the HTTP requests are done before something is displayed, so what's the gain from promises? They're faster, because it can put multiple together. There are two things you get from promises that are interesting, to me at least. One is that if I've got multiple HTTP requests to do, more than the number of streams I've got, then I can pack them together. If I've got five requests to do and I've only got two streams, then request one goes out on stream one and request two goes out on stream two. I don't know which one will finish first, but the first one that finishes, I can immediately put request three onto it, so I don't have any delays between my two streams. So there's definitely a speed increase, because the HTTP calls are no longer waiting for one call to finish before my PHP takes over and makes the next call; the curl library underneath will do this for me. So there are multiple PHPs running at the same time? No, it's all done in the curl client library. There's one PHP; PHP is very single-threaded, I think we've all noticed this. Curl isn't. A really nice thing about this is that it's all running in libcurl, so it's not actually being done by the PHP, it's being done in the C. In the C library, it can run multiple concurrent requests simultaneously. So my PHP is running over here at the speed of PHP, and my client requests are happening over here at the speed of C. Actually, at the speed of HTTP, because that's so much slower than everything else in the world, being connections across the internet. So yes, it is faster, because we are doing multithreading within a single-threaded PHP context, because we have a multi-threaded C client library. I'm not sure, does that make sense? 
I understand the concept, but PHP has to finally wrap everything together? Correct. You get them all back and then you do something with them. So yes, you are right in the sense that I have to wait for all my requests to come back before I can display my page, because I'm still in the PHP context; I'm not in a JavaScript context. So if you take my page, which has multiple boxes, my user profile, my talks, my comments or whatever, I get all of them back before I display that page. How does that work? In what way? At any one time one PHP callback will be called, and they can come in any order? Correct, they will come back in any order. This I have tested. In my particular Joind.in profile page there are five API requests that have to happen. They come back in a different order every time I run it, based upon the load on my server, which I find quite interesting. They don't come back in the same order every time, so sometimes the list of talks comes back before the list of events, because the curl library underneath will just make the requests and, when one comes back, send the next one for me. We can talk about that. How can PHP wrap everything together when it doesn't have threads? Guzzle hooks into the curl C library underneath, and at the C level, when I set up each request with curl_multi_exec I can attach a callback to it, so I attach a different promise, a different onFulfilled callback, to each one. When the PHP interface to curl gets that callback, it will then call my PHP closure for me. If you want to go into more detail we can probably take this outside, because the conversation might go on a little bit longer and people are definitely waiting to come into this room; clearly there's a really popular talk about to happen. It will take me a while to understand the concept. 
Yes, it's quite a powerful concept, because it's something we don't do in PHP: it's hard without a C library, and it's even harder if you don't have a good PHP library that makes it easy to integrate with that C library, and that's what Guzzle does for us so well. The minute you've got multiple API requests to make before you can display your page, Guzzle's asynchronous system works really well. If you don't have that problem, if you don't have multiple API calls to make, then it doesn't help. Let's talk later. So thanks very much for coming; enjoy your lunch, or the next talk if that's where you're going.