Okay, so I've got the chat open in another window here, and I've got one eye on that and the rest on the screen. I hear a few people saying there's no noise — I see noise now, so you didn't miss anything. The bit I was saying whilst we had that little glitch was that I decided to fill out this talk title with as many words as I could fit in the two lines. Per the title here, I'm going to be talking a lot about Azure, and particularly about the way I'm managing Have I Been Pwned, which we're going to talk about in a moment. What I really want to touch on is things like PaaS — I want to talk about the good things I found in PaaS and then where I've run into problems, so this is somewhat of an old talk. I want to talk a lot about Functions, because I'm moving a lot of stuff into Functions these days, and I'm really, genuinely excited about that. We're going to talk about why this is cool, particularly from a scaling perspective; we're going to talk about Goldilocks — that will make sense in a moment — and storage, and I'm also going to do a bit on Cloudflare as well, because that makes Azure extra fast. So let me jump into it, and I want to start by talking a little bit about Have I Been Pwned. Now, a bunch of you will probably be familiar with this service already. If not, you can go to the site, plug in your email address, and it will tell you if you were in any one of 307 data breaches which we have seen in the past. Well, it was 307 — I added 42 million more records last night, and that's blown things out a little bit further, so it has grown somewhat. It will tell you if your data has been leaked publicly, and it does this by, when you plug in your email address, going through and searching a whole bunch of data stored in Azure Table Storage in the back end. It pulls it back super fast and tells you where you've been exposed. Now, I'm not going to delve too much into that today, because this is stuff that I've talked
about for many, many years. I want to talk about some of the newer stuff, and I'm going to begin by talking about scaling PaaS. Now, just to pause for a moment: if you guys have questions as we go along, please ask — that makes it more interesting, otherwise I just sort of sit here and talk to myself. So ask them, and Javier will either interrupt, or if I see them pop up in the Twitch window on the side, I'll try and answer them on the fly as well. And incidentally, couch sniper, I can see your tweets — they're not tweets, are they, they're Twitch messages. I've got a headset on now, so you shouldn't be getting any echo back. So, Have I Been Pwned runs on PaaS, and what that means is that it sits on logical servers that run up in — I'm doing air quotes here — "the cloud", which is super cool because it means I can have as much or as little cloud as I want. Now, one of the things I did really early on was a lot of work with autoscale, and I want to talk a little bit about how autoscale worked really well, and then some of the challenges I've run into and how that's led me down a different path. So this is what my autoscale in Azure looks like today. The premise of autoscale is that you have a number of logical machines; if those logical machines become overburdened, you can add more logical machines automagically, and then when they've got too many resources — way more than you need for the traffic — you start to take them away. So for example, in my case here I've said: look, if my CPU gets above 45% on average across however many machines there are, give me another one; and then if we go the other way and it starts to get too low, below 25%, take one away. And then I've maxed out the maximum — I can have up to ten of them, like just keep adding machines until my site actually runs like it should. Now, within these criteria we can get very specific. So for example, I've said: look, I want to try and scale up pretty aggressively, because it means that
there's going to be a bunch of traffic coming suddenly, and I want to be able to handle that — that's why my 45% threshold is reasonably low. You'll see I've got a duration of five minutes in there, and this is about the shortest duration you can have. What I'm saying here is that if I see an average of more than 45% CPU over five minutes, increase the instance count by one, then cool down for a few minutes — give it another few minutes before we go through this cycle again, and then I can increase it again. When I back off, when I go the other way, I'm a lot less aggressive, and really this strategy is erring towards having more infrastructure than I need rather than less, because I want — let's try that again — I wanted to be able to support the traffic. So when you look at the back-off strategy, I'm saying: hey look, I've now got to get below 25%, and it's got to stay below 25% for 15 minutes, and then I'm going to decrease the count by one, and then I'm going to give it another half an hour before I apply this strategy again. So the idea was: let's just have more instances than what we need, just to make sure I can deal with the traffic. Now, in theory this is good, and in practice it has been pretty good — certainly if you consider the logical progression of going from dedicated infrastructure running on-prem, where you had physical hardware, to virtual machines, to the next step along the cloud paradigm, which is PaaS — this worked really well. But what it meant is stuff like this. Now, this is from only a couple of weeks ago — you can see the date down the bottom, which says it's the 31st of August — and this is an email I got because traffic started ramping up. You'll see here it says my default server farm has gone from a capacity of one unit to a capacity of two. And then you'll notice also — check the time, because that's the interesting bit.
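As a side note, the scale-out and scale-in rules just described can be sketched as a simple decision function. This is a minimal sketch, not what Azure runs: the thresholds, windows, and instance cap are the ones from the talk, but the function name, the per-minute sampling, and the omission of the cool-down timers are my own simplifying assumptions.

```python
# Sketch of the autoscale rules described above: scale out when average
# CPU over the last 5 minutes exceeds 45%, scale in when it stays below
# 25% for 15 minutes, capped at 10 instances. (Cool-down periods omitted.)
def autoscale_decision(avg_cpu_samples, instances, max_instances=10, min_instances=1):
    """avg_cpu_samples: per-minute average CPU (%) across instances, newest last."""
    # Scale out: average over the last 5 minutes above 45%
    if len(avg_cpu_samples) >= 5 and sum(avg_cpu_samples[-5:]) / 5 > 45:
        return min(instances + 1, max_instances)
    # Scale in: every sample in the last 15 minutes below 25%
    if len(avg_cpu_samples) >= 15 and all(s < 25 for s in avg_cpu_samples[-15:]):
        return max(instances - 1, min_instances)
    return instances
```

Note the asymmetry he describes is right there in the windows: five minutes of data is enough to add an instance, but it takes fifteen quiet minutes to remove one.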
So the time was 9:13 a.m. And then I got another one — this is 9:17 a.m., so four minutes later. It's gone: hey, we got you another one; your average CPU utilization was still over 45 percent, you need more cloud. And then what happens is we go from 9:17 to 9:21, four minutes later — add more cloud again. And then it happens again for a last time, so I went from four up to five. Now I'm running with five logical instances, and they've scaled out in the space of, what's that in total, about 12 minutes. This sounds really good, but what actually happens is that as we go along, all we do is keep adding cost. Because what I'm doing here is using an S3 instance of an Azure App Service, and it costs 40 cents an hour. So when I added that first instance, I started paying 40 cents an hour extra, and then the second one came along and it's another 40 cents, and so on and so forth. And this is sort of the beauty of commoditization, where it's like: look, you can have as much cloud as you want, and you just pay for the bits that you have. Now, what ended up happening here is I had this sudden influx of traffic, and then inevitably the influx of traffic disappeared. So we start going back the other way, and I'm getting all of these alert emails — each one of these screen caps here is actually an email. It comes back and says: all right, we're going to take away an instance, take away another instance. Notice, actually — if I go back one — the time here: 9:56 in the morning. The next one is now 10:27, because my cool-down period is so much longer; I'm really trying hard to make sure I don't back off too quickly. So 10:27, then 10:58, which of course is about half an hour later, backs off another one, and then backs off another one. So in your mind — and you'll actually see a graph of this in a minute — picture adding more and more on top of each other and then starting to take them away. And of course, as we take them away, then the money
comes back off, and I start saving myself 40 cents an hour. Now, one of the things I really got excited about when I started building Have I Been Pwned on Azure — incidentally, because I'm from Australia we pronounce "Azure" a bit differently; you guys will know what I mean if you're not from Australia, and I will not change my Australian ways — what got me very excited about Azure when I started building out Have I Been Pwned five years ago was this commoditization value proposition. The whole idea of: let's just go and add cloud, we'll only pay for the cloud we use, and then it backs off and everything is awesome. Now, as I said earlier, this was awesome compared to the old way of doing things, but let me show you what was actually happening with my CPU utilization when this scale-up and scale-down happened. So I'm going to pull up a graph here — the graph we're seeing is straight out of the Azure portal and reports on CPU utilization. You can see the way that utilization ramps up really quickly and then ramps back down. This is just a sudden influx of traffic: it could have been someone hammering the API, it could have been some sudden popularity of the service — I'm going to give you an example of how that happens later on. Now let's compare that to the instances of the App Service used. This is what my instances graph looks like, and it's effectively the graphical representation of the emails we just saw: it scales up from one all the way to five very quickly, and then it starts backing off again. Now, as I mentioned, these are all logical instances costing 40 cents each for this S3 App Service instance. But there's a really interesting pattern here
I want to try and point out. If we draw a line vertically down here at the peak of the CPU utilization, everything on the left of the line is not enough cloud. I really didn't have enough cloud there; I had to keep adding instances in order to deal with the load, and even though this load ramped up — you can see it goes from just about nothing up to very, very high — I had to keep adding instances to deal with it. And the reason I had to keep adding instances is that CPU utilization started to increase beyond my comfort level. But then you look at the other side of it and you go: well, that utilization actually disappears very quickly — by about 7:26 there on the timeline it's coming way, way down. So really, on the right-hand side I've got too much cloud. I'm always dealing with this situation of either not having enough cloud, which means I'm going to be dealing with traffic problems — or rather performance problems — or having too much cloud, which means I'm going to be paying money I don't need to pay. Now, I'll give you a really good example of just how difficult this made things, and I'm going to start with this guy. I didn't know who this guy was originally, but he's a guy called Martin Lewis, and he runs a show called The Martin Lewis Money Show in the UK. In November 2016, the show reached out to me and they said: look, we're going to put Have I Been Pwned on the show, and we have a habit of crashing websites. And I'm like: I got cloud, man — I'll just keep adding cloud and you won't crash me, no problems, I got autoscale. And when the show was on — I remember I was sitting in my kitchen; it was early in the day for me, because they're in the UK, on the other side of the world — I'm looking at my Google Analytics and I'm seeing something like 200 people on the site, and I see a little uptick.
Yeah, like a couple of dozen extra people, and I'm like: well, you know, that wasn't a really big deal then, was it? And then suddenly I go from 200 people to over 12,000 — and this happened in the space of seconds. It was less than a minute; it was really super fast. Now, this causes interesting problems, because if you recall, the whole premise of autoscale in Azure was: let's keep adding instances as the traffic starts to ramp up. This is what my traffic looked like — when you look at that blue CPU graph, it went from like nothing to a hundred percent really, really fast. Now think about what that means in terms of adding instances: can you add enough instances fast enough to deal with traffic that changes that quickly? And the short answer is no, I couldn't. So this is what my HTTP requests looked like: I lost about 33,000 requests. I got absolutely smashed. Now, incidentally, one of the challenges with this particular type of traffic is that when Have I Been Pwned ends up on a TV program, you get all of these people sitting around in the UK — it turns out it's actually a very popular show — and they see the program, they see the Have I Been Pwned address pop up on the screen, and 20, 30, 40, 100,000, however many British people all pick up their phone at once, and everyone puts the address in at the same time. So this is not a nice organic flow of traffic, and it made my life very, very hard. So let's move on from there and start talking about serverless, because serverless actually starts to solve the underlying problem that I kept running into with PaaS. There's a nice way of picturing what that underlying problem is, and it's called the Goldilocks principle. The Goldilocks principle is about having just the right amount: not too little, not too much — just right. Not too hot, not too cold.
You'll know the story. So what we really want to do with architecture in general is try and get that balance right, because if you remember from before, I always had either not enough — and I was losing traffic — or too much, and I was paying for things I didn't need to pay for. So that brings me to a demo. I'm actually going to pull a browser over and give you a little bit of a demo of how this works, and I want to talk about a service within Have I Been Pwned called Pwned Passwords. So I'm going to pop Have I Been Pwned up on the screen over here. Pwned Passwords is a feature I added originally in August last year, and it's gone through a few little additions since. If you go up to the Passwords link — and everyone can play along with this if you want — there's a password field here. You can plug a password in, and it will go into the database and see if that password has been exposed before. So, for example, I'll pick something that's going to be absolutely terrible, submit that, and it says: hey, this password has been seen 49,938 times. Now, when you do this search, it's hitting an Azure Function — and I'm going to talk about Functions in a moment — but I want to talk briefly about the mechanics of how this search works, because on the surface of it, it looks like you've literally just handed your password over to some arbitrary third party. So if I pop open the dev tools and zoom this in a little bit here... we'll clear those errors — I don't know what they are, let's imagine they weren't there, and I'll look at them later.
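As an aside before the network-tab demo: the client-side mechanics he's about to walk through can be sketched in a few lines. This is a minimal sketch of the idea, not the site's actual JavaScript — the SHA-1 prefix/suffix split is as described in the talk, while the function names and the "SUFFIX:COUNT" response parsing are illustrative assumptions.

```python
import hashlib

def range_query_parts(password):
    """Hash the password client-side and split the hex SHA-1 into the
    5-character prefix that gets sent and the suffix that never leaves
    the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def check_response(suffix, response_body):
    """response_body is the service's reply: one 'SUFFIX:COUNT' line per
    hash sharing the prefix. Returns the breach count, or 0 if absent."""
    for line in response_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

For example, `range_query_parts("P@ssw0rd")` yields the prefix `21BD1` — only those five characters go over the wire, and the comparison against the returned suffixes happens locally.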
Go to the network tab, and I'm going to run a search here. Now, what you'll notice when we look at this request — and it is a GET request, it's not sending anything in the body — is that the path that's been requested is just these five characters. The password I entered was "password" with a capital P and an at symbol, so the hackers don't know what it is — and incidentally, they do; they've worked out that a zero instead of an O is just character substitution. The way this works is: you put in the password, and then client-side the password gets hashed — it gets SHA-1 hashed. The first five characters of the SHA-1 hash are then taken — that's the first five characters you can see in the path here — and they get passed over to the service. The service, in its response — if I scroll this down a little bit — comes back with all the suffixes of SHA-1 hashes of passwords that are in the system. Now, if I scroll to the bottom here, what you're going to see is that there are somewhere just under 500 — 497 — different SHA-1 password hashes that begin with the first five characters of the hash I searched for. Now, this model is called k-anonymity. It means providing just a little piece of information to the service — not enough to identify what it actually is — and then the service comes back and says: hey, I've got all of these things that may match the thing you looked for; you can now compare your whole thing to one of these results. And what we're seeing here is the end of a SHA-1 hash — this is the suffix; we already know the prefix, we sent that in the request — and after that is how many times that password has been seen before. And somewhere in this result set here —
There's going to be one number which is very large, and that's the one I just searched for. So I wanted to make that clear: you're not actually sending me any passwords. Now, of course, you need to trust the web app, because you are entering the password into the web app — but a lot of people hit this API directly. A really good example is EVE Online: the multiplayer game EVE Online hits this API tens of thousands of times a day to make sure that when their subscribers are logging into the system, they're not using a password that's been seen before, because that could put them at risk of account takeover. So this is really awesomely cool — it's about five million requests a day to this service. Now I want to jump back to the slides for a moment and talk about the mechanics of how this actually runs, and why it makes sense to run it on Azure Functions. So here's a good little example. This graph is in some ways a little bit similar to the graphs we saw before with PaaS, where there's a sudden influx of traffic. This is function execution counts — it's literally just how many times the service has been requested — and the obvious pattern here is that it's very flat at a very low amount, and then suddenly it gets absolutely smashed: we go from probably 20 requests or so per unit of time up to 1,400 requests, almost instantaneously over the space of a few minutes, and then it backs off again really quickly. So what we're going to do here is talk about how this differs from PaaS in terms of the advantage it gives me in running the service. This is one metric we're going to look at: the execution count.
So that's how many times it's run. This next graph is a different one — and if that transition went a little bit faster than what the bandwidth allows: this is actually a different graph. This is execution units, and execution units are really interesting because they're a different way of measuring the effort the infrastructure has to invest in order to service the requests. Execution units are measured as an amount of memory over a period of time. On this particular graph you'll see there's 1.14 billion down in the bottom left-hand corner — and incidentally, every time I see something expressed in billions and it's something that's going to hit my wallet, I do have a bit of a heart-in-mouth moment — but let's talk about what this actually translates to in dollar terms. This is 1.14 billion megabyte-seconds — megabyte-milliseconds, actually: 1.14 billion megabyte-milliseconds. So this is how much memory has been used, over how long, in order to service those requests. It's going to look very similar to the execution count on the previous graph, because there's a pretty constant amount of resources used for every single request. Whether I've got a very small number of requests or a very large number, this graph is going to roughly match the one we saw on the previous screen. So let's talk about pricing, because this is where the Functions side of things gets kind of interesting. Functions get priced based on two metrics, and one of those metrics is execution count. On that first graph I had nearly 67,000 executions, and you pay 20 cents for every million of them — so I had to pay six cents to support almost 67,000 requests. I am quite happy with six cents.
That's not a difficult discussion that I then have to have with my wife. The other metric, of course, is the one I just mentioned: function execution units. Now, I consumed 1.14 billion megabyte-milliseconds, but you pay based on gigabyte-seconds, and you pay a very, very small fraction of a cent for every gigabyte-second. Bottom line is, on top of the six cents I end up having to pay eight cents for the gigabyte-seconds of function execution units. And incidentally, every cost I show here is going to be in USD, because it's the currency most broadly understood across the globe. So in total, that cost me 14 cents — I had to shell out 14 cents to do 67,000 requests. Now, the beautiful thing about this is that it's pay-per-execution. That's all it is. We've suddenly moved away from this premise of: you have this great big logical unit, which is an Azure App Service, and you might be using a very small amount of it, and then as you start using more of it you might need to get another big logical unit. Now what you're doing is just paying for when you execute code. And this is great, because you get an enormous amount of linear scale without it slowing down and without you having too much. Now, to be clear, this is the consumption model of Azure Functions — everything I'm talking about here is the consumption model.
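The consumption-plan billing maths above can be sketched as a small calculator. This is a sketch under assumptions: the per-unit rates (US$0.20 per million executions, US$0.000016 per gigabyte-second) and the free monthly grants (one million executions, 400,000 GB-s) are what the consumption plan advertised around this time — check current Azure pricing before relying on them, and the function name is mine.

```python
# Assumed consumption-plan rates: US$0.20 per million executions and
# US$0.000016 per GB-second, with free monthly grants of 1M executions
# and 400,000 GB-s applied before anything is billed.
def function_cost_usd(executions, mb_ms, free_executions=1_000_000, free_gb_s=400_000):
    gb_s = mb_ms / 1000 / 1024                 # MB-milliseconds -> GB-seconds
    billable_exec = max(executions - free_executions, 0)
    billable_gb_s = max(gb_s - free_gb_s, 0)
    return billable_exec / 1_000_000 * 0.20 + billable_gb_s * 0.000016
```

Plugging in the talk's figures — roughly 67,000 executions and 1.14 billion MB-ms (about 1,100 GB-s) — both metrics fall inside the free grants, which is why the bill can end up at zero.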
I think that is the most awesome model, because that is literally pay-per-use. Now, what's even cooler about this is that there are free grants. So I don't actually have to delve into my pocket for the 14 cents, because you get one million free executions per month of an Azure Function — which, roughly speaking, is probably about 15 times more than what I actually needed. You also get 400,000 gigabyte-seconds per month, which is a little bit less than 400 times more than what I needed. The point I'm trying to make here is that you will be amazed at how much you can get for free out of Azure Functions, because those free grants cover a huge amount — and everyone gets the free grants; I don't get anything special, this is literally what everyone gets. And then if you go over it, well, the cost is extremely low. So the Pwned Passwords service in Have I Been Pwned has run on this from day one, and that's kept the cost massively low. Because of the way Functions work, it would cost me a huge amount more if I were to put this on the old model of PaaS. Now let me move on, because I want to talk about a couple of other things too, and I am keeping an eye on that Twitch chat stream there as well.
So if you do have questions, you can let me know over there. I want to talk about data storage, because data storage is kind of an interesting one, insofar as many people have very traditional views of it. Here's what I mean by that: I used to work in a very large corporate environment, and the view there was always that if you're going to store data, you're going to need a SQL database. Right? And many of you have probably seen this before — it's almost like, in people's minds, the only way you get to store data in an enterprise is with a massive relational database management system. Now, there are times when that makes a lot of sense — SQL is very good for a lot of things — but it's also overkill for a lot of things. So very early on in Have I Been Pwned, I decided to do everything in Table Storage. When you search through those 5.x billion records at the moment, you're searching Azure Table Storage. That's very cheap to run, and it really scales beautifully. Now, I also decided to use Table Storage for Pwned Passwords, and I'm going to show you what that looks like. What you're seeing here is partition keys, which are the first five characters of the SHA-1 hash — remember, I showed you how you search by the first five characters of the SHA-1 hash — and in this case we just start alphabetically, so we've obviously started with the zeros. Then the row key is the suffix, then you get a timestamp — every Azure Table Storage row gets a timestamp — and I've added a count. So, for example, on the third row, whatever password is behind that hash has been seen 630 times. Now, when I was running Table Storage and I have a look at it in App Insights, this is what my function execution looks like. What we're seeing here is a record of how many times the function execution has occurred over this period of time — this is over the period of a day.
You can see there's 403 thousand requests, with an average of 122 milliseconds, on top of Table Storage. Now, that was pretty good — I think searching through half a billion odd records in only 122 milliseconds is probably not too bad. But I do like to optimize all the things. So one of the things I started looking at is: how can I get that 122 milliseconds way down? Can I do a better job than what I was doing in Table Storage? And again, just to clarify a little bit here: we are searching by the first five characters of a SHA-1 hash, and the partitioning was using those first five characters — so when you did a search, all I had to do was pull the entire partition. Logically, it worked pretty well. Now, I decided to give Blob Storage a go. So I thought: what would it look like if, instead of having all of these rows in Table Storage, I put things in Blob Storage? And what I do then is just create blobs like this. This is effectively the same data from the previous screen, but now what I've done is create a blob, name it with the first five characters of the SHA-1 hash plus ".txt", and put all the data directly in that blob. Now remember, on that previous slide the function execution time was 122 milliseconds. Here's what happened when I went to Blob Storage: it went down to 54 milliseconds. So I sliced more than 50 percent of the execution time off by putting this large amount of data into flat files that sit on the file system. And sometimes when I tell people this it blows their minds, because they're like: but you're not meant to do that! You know, it's meant to be in some queryable kind of structure. Well, it doesn't matter, because of the nature of this data — and this is a very specific sort of data: I create this great big collection of data, and then it sits there and doesn't change for a long period of time. And by shaving more than 50 percent off the execution
time, think back to what that does to the cost of Azure Functions: for the function execution component of the pricing, the cost reduces by more than half. It doesn't change the total number of executions, but it changes the duration for which the execution needs to run, and that has a direct impact on cost. So that's all I wanted to talk about on storage; I just wanted to make the point that there are ways of optimizing spend which are very different to what people would be used to in the past. Particularly when you're paying for the number of executions and how hard the execution works over a period of time, suddenly you get to save a lot of money if you can make things go faster. Okay, so I'm just going to pause here for a second, because I saw a question come up on the Twitch window. Actually — Javier or Jeff, are there any questions that I've missed that I should cover at the moment? Sweet, okay. Let's move on, because I'm conscious of time and I've got more slides — incidentally, I actually designed this talk so that I would have more slides than time, with the hope that people would ask me questions anyway, and I'd just do the most important stuff first. Which brings me to this next one, because this is a really cool thing, and it has made an enormous difference to the way I run the service: it's the best way to make Azure go fast. Now, I'm probably going to make Javier and Jeff fall off their chairs when I show this next bit, but I reckon the best way to make Azure go fast is not to hit Azure. I want to explain what I mean by this, because it makes a huge amount of sense, and I'm going to talk a little bit about Cloudflare. Cloudflare is a service which runs in 152 locations around the world — every single purple dot you see on this map is a Cloudflare edge node. Now, just to give a little bit more background on Cloudflare first: there is actually a nice Microsoft synergy here. Microsoft has actually
provided funding to Cloudflare, so they have some faith in the company — they've put money into them. Cloudflare is a reverse proxy, and what that means is that when we think about something like Have I Been Pwned, which sits over here in the West US data center, and then we think about me sitting down here in Australia: when I make a request to Have I Been Pwned, I don't necessarily need that traffic to go all the way to the Have I Been Pwned service. I can have that traffic go first to Cloudflare's edge node, and they may respond from there without even hitting Azure. You'll see in a moment how that makes sense. Just before I go on and talk about the mechanics of using things like cache, I want to touch very briefly on this next slide, because I thought it was kind of interesting. This is a future plan for Cloudflare, and their CEO Matthew Prince shared it recently: they're going to build Cloudflare out to 250 cities and get within less than 10 milliseconds of 99 percent of the global population. Now, keep this in mind as I progress, because what we're going to see here is that if 99 percent of your audience is within 10 milliseconds of what can ultimately be a cache of your data, suddenly some really cool stuff can happen. One last bit of background on Cloudflare: they serve a lot of data. This is 258 billion encrypted requests over a 24-hour period — and I had to Google it, but that is about a quarter of a trillion. So there are a quarter of a trillion requests they're serving in a 24-hour period; they have massive global scope. Now let's get to the pointy end and talk about what this means when you run a service through Cloudflare. This is a graph of Pwned Passwords — I wrote a blog post recently about how to make Pwned Passwords and serverless and stuff go very fast, and this was in the post. What you're seeing here is a graph over a
period of one week. If you look at the top-left figure, there are 32.4 million requests in the last week. The next figure is the cool one: 32.2 million requests were cached. What that actually means is there's a 99.62% cache hit ratio — so 99.62% of requests could go to one of those little purple-dot edge nodes around the world and have a result returned directly from cache, rather than hitting the origin server. And — I'm not saying "cash", I can see that comment there, it's "cache"; I will hold on to my Australianisms all the way through this, you guys know what I mean — what we're really seeing here is a massive reduction in the amount of traffic that has to go to Azure. That translates to a massive reduction in the bill, and a massive increase in performance. Think about Matthew's tweet from before as well — getting 99% of the population within 10 milliseconds of a Cloudflare edge node: what would that do to your performance? That does awesome things. Now, I was kind of curious about that 99.62%. This is really good, but how do I get the last 0.38%, or somewhere very close to it? And I noticed there's this one tiny little bit of the graph here — you see how there's a little bit of light blue? For some reason my cache hit ratio — cache, cache, I can't stop thinking about that — dropped in that little slice of time. They may have had a higher-than-usual cache eviction ratio, for example, and that did cause it to drop a little lower than what it could be. But I also don't want to diminish the significance of how important it is to have Functions behind this, because sometimes things like this happen. I snapped this only a few days ago, and we can see that for the first part of this graph it's dark blue all the way, and it's like: all right, this is awesome, I'm just getting a massive cache hit ratio. And then, for some reason, just after 9 a.m.
I guess, at around this point in the graph, the cache was purged or something like that, and everything I had in cache disappeared, and suddenly I've got a massive uncached ratio. You can see over time, as you get to the right of the graph, that the height of that light blue line starts to diminish, so more stuff is getting cached. But that meant I had a sudden, absolutely ginormous increase in traffic to that origin website.

Now think back to the Martin Lewis Money Show thing, where when I got sudden influxes of traffic on PaaS it actually caused some real problems: I either didn't have enough resources or I was paying too much money. This is what it looked like on my Azure Functions when it hit. Have a look at that bottom left line to begin with: we're there at about 50 requests per unit of time, and then suddenly we're at 10,000. So the question we're sort of asking here is: if we use a strategy like this, does the underlying infrastructure, does the origin service, have the capacity to suddenly handle a 200-fold increase in traffic in what might be single or low double-digit milliseconds? An almost instantaneous maxing out of the traffic.

There are also a couple of little spikes in there as well, and I think those green spikes were, for some reason, degradation of the service accessing the blobs. I'm not entirely sure why that happened; it doesn't correspond with traffic, it's something on the back end. But it's very transient as well. And of course, if you serve from cache and there is this sort of transient outage, where your origin request times go up,
well, then you get a lot of isolation from that, because a lot of stuff is coming from the edge anyway. So this is sort of the point I was making around making Azure go fast by not hitting Azure, but also having the ability to scale up instantaneously if you do suddenly get an influx of traffic. And to be clear as well, the Cloudflare service is something you can get into for free. So a lot of what I'm showing here won't cost a cent if you lay that on top, and then of course you save on function execution units and execution counts.

Okay, so I want to go into something just a little bit different here, which is about rolling over. One of the things that I started to do, and this is really only in very recent weeks now, is, as I was mentioning before, I was getting excited, like, hey, these Functions are really cool, they solve a bunch of problems. But it was only Pwned Passwords which was running on Functions. So just to put that back in context, if I bring my browser window open: this was hitting an Azure Function, and I get all the awesomeness we just spoke about. The front page was hitting Web API running in the web app itself, and what that meant was that this front page was going to have all the same sorts of performance problems as what I was talking about earlier on with things like the Martin Lewis Money Show.

Now what happens if you want to roll over? Part of the challenge that I had is that when someone hits the API on Have I Been Pwned, this is a public API designed for people to consume. And just in case you're curious about doing this: if you go to the API docs, you'll see a whole bunch of info about how to go through and consume it, getting all breaches for an account, for example, and you'll see that there's a URL here:
haveibeenpwned.com, yada yada yada. Now that maps directly through to the app service, to Web API running in Azure. If I want to roll this over to a Function, that's going to run on a different host name, on a different service. So how am I going to do that? Continuing the "serverless all the things" paradigm, I wanted to go from Web API at that path to Functions at that path. How are we going to do it?

So there's this really cool concept called a Cloudflare Worker. A Cloudflare Worker is serverless code, just like Azure Functions are (and pro tip: serverless does use servers, it's just that you never have any visibility of them), but it's serverless code on the edge. That means every one of those 152 data centres around the world runs your code. I added code that looks like this, and I'm just going to explain it very briefly. It listens to incoming requests. It gets the requested URL and lower-cases it, because I don't want this to be case sensitive. It looks for the old URL, haveibeenpwned.com slash blah blah, and it replaces it with the new URL of the Function. What it does then is the Cloudflare Worker, from the edge, calls into that new location. And the joy of this is that you can roll over to Azure Functions without actually having to do anything to the original app. All you're doing is saying, hey, when the request comes in through Cloudflare, pick it up, send it somewhere different. And then obviously I return a response which still adheres to the same contract that's in the API, so it's still the same JSON structure and still the same response codes. But now what I can do is just roll over in one clean sweep without having to change anything on the actual Have I Been Pwned end.

So I think I've actually managed to time that bang on 45 minutes; we may only have seconds left. Are there any questions that people have about any of this?
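The Worker just described can be sketched roughly like this. To be clear, this is a minimal sketch of the idea, not the production code, and the Functions hostname below is a made-up placeholder:

```javascript
// Sketch of the URL-rewriting Cloudflare Worker described above.
// The Functions hostname is a hypothetical placeholder, not the real one.
const OLD_PREFIX = 'https://haveibeenpwned.com/api/';
const NEW_PREFIX = 'https://example-hibp-func.azurewebsites.net/api/'; // assumed

// Pure rewrite logic: lower-case the requested URL so routing is
// case-insensitive, then swap the old prefix for the new Function URL.
function rewriteUrl(requestUrl) {
  return requestUrl.toLowerCase().replace(OLD_PREFIX, NEW_PREFIX);
}

async function handleRequest(request) {
  // Call into the new location from the edge; the response still honours
  // the same JSON contract and response codes as the old API.
  return fetch(rewriteUrl(request.url), request);
}

// In the Worker runtime this is wired up as:
// addEventListener('fetch', event => event.respondWith(handleRequest(event.request)));
```

The key design point is that the original app never changes: the contract lives at the edge, and only the origin behind it moves.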
And, uh, Javier and Jeff, I'm throwing that question about questions to you.

What are the main concerns? So, you know, to be honest, one of the things that took me a while was the rollover process, and part of the concern about the rollover process was that I had dependencies on old paths, which is basically, if I go back one slide, this is the answer to it. Incidentally, if you go to my blog and you look for Azure, there's a tag for Azure, and you'll see that I have written before about rolling over and doing things like A/B testing. You can use Workers like this on Cloudflare to do things like say, let's just take 20% of my traffic and send it to the new path. Because if I can take just 20% of the traffic, then that's fantastic; now I can basically use some guinea pigs without breaking it for everyone.

The other concern that comes to mind is that Functions are pay-per-execution, and they scale beautifully insofar as it's just entirely linear: you just keep loading on traffic, and under that consumption model you just get given more servers in the serverless model, you know, underlying instances of the service. The one thing that admittedly keeps me up at night sometimes is: what happens if I just suddenly get massive, unexpected traffic, and then I look at my bill and it's crazy? And on Azure I've had heart-in-mouth moments, particularly when you see function execution units expressed in billions or even trillions. So I do have a concern about that. The mitigation is that you can configure alerts in Azure, so you can say, hey, I would like to get an alert if I start seeing more than n requests over a period of time. That doesn't necessarily solve the problem, but now you can have the discussion about whether this is traffic worth paying for, or traffic I need to take some sort of mitigation against.

All right, so I am conscious of time. Were there any other questions?
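That 20% guinea-pig split can itself be sketched as a Worker. Again a hedged sketch, assuming a hypothetical new-API hostname; in a real Worker you would feed Math.random() into the picker per request:

```javascript
// Sketch of a percentage-based traffic split in a Cloudflare Worker.
// The new-API hostname below is an assumption for illustration.
const OLD_ORIGIN = 'https://haveibeenpwned.com';                  // existing Web API
const NEW_ORIGIN = 'https://example-hibp-func.azurewebsites.net'; // hypothetical Functions host

// Route `newShare` of the traffic (20% by default) to the new origin.
// `sample` is a number in [0, 1), e.g. Math.random() per request.
function pickOrigin(sample, newShare = 0.2) {
  return sample < newShare ? NEW_ORIGIN : OLD_ORIGIN;
}

async function handleRequest(request) {
  const url = new URL(request.url);
  const target = pickOrigin(Math.random()) + url.pathname + url.search;
  // Both origins honour the same API contract, so callers never notice.
  return fetch(target, request);
}

// Wired up in the Worker runtime as:
// addEventListener('fetch', event => event.respondWith(handleRequest(event.request)));
```

Because both origins return the same JSON structure and response codes, the split is invisible to API consumers, and the percentage can be dialled up as confidence grows.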
I see someone here who says, not a question, but a thank you. Oh, thank you. That's awesome, that's nice to hear. Jeff, have you anything else? Good question. Would you like a book? Silence.

All right, so hey, while we're here, it's not that we're sort of killing time per se, but if you go to my blog, let me find you the things that are actually worth reading on this. This blog post here, which you'll see on the front page if you scroll down a bit, "Serverless to the Max: Doing Big Things for Small Dollars", talks in a lot more detail about the things that I just showed you guys, so that's definitely worth a look. I've got a heap of numbers in there as well, and it's a typically long blog post of the kind I tend to write.

If you look at the Azure tag as well, this one on seamless A/B testing talks about what I just mentioned before: look, let's just take 20% of my traffic, for example. Another thing that I show in there, and this is a really neat way of doing it: I did a Cloudflare Worker that said if the request contains a particular cookie, then send it to the new API, the one running on Functions.
Otherwise, send it to the old one. And what this allowed me to do is just go and set a cookie in my own browser, so it's like, let's just test all my own requests, and I'll start dogfooding it, sending my own traffic through and seeing how it goes.

And then probably the last one I had to mention as well is the "I want to go fast" blog post. This talks about why I rolled over from Table Storage to Blob Storage, and it effectively talks about the nature of the data and the cost savings from reducing that execution time from 122 milliseconds down to, I think it was about 54 or something. But look, there's a heap of other stuff on the Azure tag, and if you scroll far enough back through it you'll see all the logic that went into using PaaS and autoscale. There's a blog post in there that has a lot more detail about how it got smashed by that Martin Lewis thing as well, so there may be a cautionary tale in there. And if anyone else has any other questions after this, you can find me on the Twitter as @troyhunt. Is there anything else, Javier and Jeff, from your side, guys?

Okay, so there are multiple ways of answering this. Let me show you, first of all, the formal, officially Cloudflare way. They have a pricing tab there; if you go into pricing you can see the options. The zero-dollars-per-month one is actually really interesting, and the reason why it's interesting is that there's a huge amount of stuff you can do with it. One of the things that people keep asking me is, look, you know, you wrote about Pwned Passwords, etc., and Cloudflare does give me some services for free to support the project, so how much would it actually cost? Because you're talking about this massively high cache hit ratio. What I'm trying to do with Cloudflare is run other services through there on the free plan. So for example, whynohttps.com.
There's a project I'm running with Scott Helme to track the world's largest websites that aren't using HTTPS. This website has a 99-point-something percent cache hit ratio, and it runs on the free plan. So this doesn't cost me, or anyone else who runs a service like this through there, anything. It's totally free, and that is a 99-point-something percent cache hit ratio. So you can do a huge amount of this for free.

The blog post about "Serverless to the Max" does specifically talk about what would happen if Cloudflare wasn't there, what the cost would be, and it's still really good, because Azure Functions are really cost-effective to run. And then it says, okay, well, what would happen if I wasn't getting some freebies from Cloudflare and I had to use either their normal mainstream free service or pay for it? And it's still a ridiculously small amount to support about five million requests to that API per day. And keep in mind, every one of those five million requests is searching through 517 million records as well, so the scales are pretty cool.

Alrighty, well, hey, thank you very much everyone for joining. If you have questions, please grab me on Twitter, and check out troyhunt.com and that Azure tag for heaps of other cool stuff.
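As a footnote, the cookie-based dogfooding trick mentioned earlier can be sketched the same way as the other Workers. The cookie name and Functions hostname here are invented for illustration:

```javascript
// Sketch of cookie-based routing in a Cloudflare Worker: requests carrying
// an opt-in cookie go to the new Functions origin, everything else goes to
// the old Web API. Cookie name and new hostname are hypothetical.
const OLD_ORIGIN = 'https://haveibeenpwned.com';
const NEW_ORIGIN = 'https://example-hibp-func.azurewebsites.net';
const OPT_IN_COOKIE = 'use-new-api';

// Check the Cookie header for the opt-in cookie by name.
function hasOptInCookie(cookieHeader) {
  return (cookieHeader || '')
    .split(';')
    .some(c => c.trim().split('=')[0] === OPT_IN_COOKIE);
}

async function handleRequest(request) {
  const url = new URL(request.url);
  const origin = hasOptInCookie(request.headers.get('Cookie')) ? NEW_ORIGIN : OLD_ORIGIN;
  return fetch(origin + url.pathname + url.search, request);
}

// Wired up in the Worker runtime as:
// addEventListener('fetch', event => event.respondWith(handleRequest(event.request)));
```

Setting that cookie in your own browser then routes only your own requests through the new API, so you can dogfood the rollover without touching anyone else's traffic.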