Hello there. I previously worked at both Distilled and Verve Search, on teams that collectively secured more than 22,000 pieces of linked coverage on sites like this. We created things like this. This is how some of the world's most loved artists structured their days. Here, we revealed the films with the highest on-screen death counts. I'm smiling, saying that. It's not nice, is it? Sorry. And finally, this was a panoramic gigapixel timelapse of London's skyline. Internally, at one point, we were calling it a gigalapse. I think that sounds like a medical condition, so I try not to refer to it like that.

Today, I work with companies and in-house teams that are making similar things. This puts me in a very privileged position. I have distance and perspective. I'm no longer trapped in the day-to-day agency grind. Rather than just seeing the challenges in a single company, I get to see the challenges across many companies. This allows me to see things much more clearly than I did before. Of course, there's a downside, right? There's always a downside. I've realised some things which make me very uncomfortable. I've been doing this for over a decade, and I've given many talks on stages like this one. Some of the things I've said over the years I stand by. Other things? Not so much. I wish there were a real-life disavow file for some of the things I've said in the past, but there isn't. What I can do is speak about some of those things, and really that's what today's talk is about: a myth I've perpetuated, a pretty huge mistake I made not once, not twice, but over and over and over again, and finally a misconception that I have helped fuel.

We're going to kick off with the myth. Let's do this. In 2018 I gave this talk, What Happens When a Werewolf Bites a Goldfish? Towards the end of the talk I said this: you don't need to be lucky; all you actually need to do is work really hard. That is a lovely message. The trouble is, it's not true. Here's the problem. Luck played a bigger part in my own successes than I've ever truly been comfortable admitting. What do I mean by that? Well, I showed you this one, smiling, previously. Here it is: Directors Cut. We revealed the films with the highest on-screen death counts, and this piece got more than 500 pieces of linked coverage. Delightful, right? What I want to do is tell you a little bit more about how those 500 pieces of linked coverage actually came about. Some of this coverage was secured directly by us. This is an early piece of coverage in The Independent, which is a UK news outlet, and it's centred on the winner, Guardians of the Galaxy, being surprising and controversial. But a huge chunk of the coverage this piece received wasn't directly generated by us. Because shortly after the piece launched, director James Gunn shared an early piece of the coverage, then spent two hours arguing with people about it on Twitter. He essentially spent two hours saying, no, Star Wars does not count. And most of the subsequent coverage the piece received was just journalists reporting on James Gunn's tweets. We got very, very lucky. Now, I suspect that the piece would still have achieved reasonable levels of coverage without James Gunn's tweets. I'm not saying our success was 100% down to luck, but nevertheless, luck played a huge role. And this is not an isolated example. When I was putting this deck together, it got me thinking: when did I start downplaying or outright ignoring the role of luck in successful campaigns?
Actually, I think it was pretty early on in my speaking career. Someone gave me this feedback: "When you talk about things like that, I feel like you're doing yourself a disservice. You make it sound like you don't know what you're doing." And the truth was, I didn't really know what I was doing. So the thing I absorbed from this feedback was that letting other people know I don't really know what I'm doing is a terrible idea. And so I began to present sanitised versions of how campaigns went down: stories that skipped over the parts where we got lucky, or framed that luck as some combination of good judgment and hard work.

What's the problem with this? When I fail to acknowledge the part luck played, I also fail to gain a deep understanding of why a piece was successful. And this lack of understanding impedes my ability to effectively judge ideas for future pieces. So this is a myth. I think this is closer to the truth: you need to work really hard and recognise when, where, and how luck contributed to the success of a piece. More on this in part two.

Speaking of which, let's go to part two and talk about the mistake I made not once, not twice, but over and over and over again. At various times on various stages, I've said things like this: study successful content and try to figure out why it worked. What's the problem with that? It sounds fine, right? It's not necessarily bad advice. It's just woefully incomplete. And because it's incomplete, people struggle to follow it. What does it really mean? How do you actually do this? I've not told you.

I want to travel back in time to 2015. This is the piece that I have been obsessed with since 2015. I was at Distilled at the time. It was created by a company called VinePair. It depicts every country's most popular beer, and it was wildly successful. As a team at Distilled, we had nothing to do with it, but we were fascinated by its success. Will Critchlow was on this stage earlier. Not late last night; I think he was drinking late last night. He was here towards the end of the day. He's a very smart man. This is what he had to say: "I saw this piece in a few places and I was puzzled about why it was so popular. I don't really know what all the fuss was about. Except beer." Now, I don't think the beer was the reason that piece was successful. Will tells me he doesn't think that either. I think he's just trying to big himself up. If beer were really the reason that piece was successful, then every piece about beer would be successful. I can promise you they're not. I've made many pieces about beer which were not successful.

The trouble is, we often unknowingly absorb thinking like this. This happens due to something called sense-making. Sense-making is the process by which people give meaning to their collective experiences. When we sense-make, we favour plausibility over accuracy in accounts of events and contexts. All of this is just a very academic way of saying that most of the time we don't think that deeply about anything. We're often quick to accept explanations that seem reasonable without questioning how valid those explanations really are. When thinking about why a piece was successful, we often jump quickly to an explanation like "beer", which sounds plausible but isn't accurate, and we don't realise because we've already moved on. One person getting one thing a bit wrong shouldn't be a problem, right?
Well, actually it can be, because sense-making is not just about how individuals make sense of things. It's about how organisations and groups of people make sense of things. Plausible stories are preserved, retained and shared by groups of people. We absorb these stories, and they affect the way we interpret future events. This leads us into some very weird situations, ultimately, because if we accept that beer is the reason this piece was a success, then when we see another piece about beer, our brains go: oh, there's a lovely pattern here. And as we move through the course of our careers, all future successful pieces about beer that we encounter add evidence to support this explanation, and over time it becomes more and more and more true.

I see this happening in a lot of directions at once. What else do these two pieces have in common? They're both maps. How many times have you heard "journalists love maps"? No, they don't. It's another pattern problem, right? If it were true that journalists really love maps, then they'd cover any map-based piece. Every map-based piece would be successful. That's definitely not the case. Humans are programmed to spot patterns, and we're pretty great at spotting them, but we find it much harder to distinguish whether or not those patterns are actually meaningful. Our ability to spot patterns doesn't always serve us well.

Now, I'm sure that no one in this room, apart from Will Critchlow, thinks that beer or map is the reason those pieces were successful. In the past, if you'd asked me why, I'd have probably said something like this: they offer journalists something they don't have the time or resource to create themselves. Which is true, but again, I've made several pieces about resonant topics which offered journalists something they didn't have the time or resource to create themselves, and they weren't all successful. It's just another pattern I've spotted, but it's not a meaningful one. Despite saying this a bunch of times, I wasn't great at figuring out why a piece was successful.

And this bit is part of the problem, right? Because we don't need to study the content. We need to study the coverage it generated. I think that when we study the content, we automatically switch into pattern recognition mode. And I was really guilty of this. I spent a lot of time looking at successful pieces and not nearly enough time looking at the coverage they generated. Often, all I'd actually be doing was looking at a successful piece and trying to slot it neatly into one of the patterns I'd previously recognised. This was a low-effort and very comforting way of feeling like I understood this stuff, but those patterns were not meaningful. And of course, I wasn't just hurting myself here, because it wasn't just me getting something a bit wrong on my own. I was part of a team, and we were sense-making. The patterns I was spotting, which weren't at all meaningful, were often being accepted and believed to be true.

I still think there's merit in this advice, but we need to somehow make sure we're arriving at a better answer than "beer" or "map" or "something journalists don't have the resource to create themselves". We need to avoid our tendency to slip into pattern recognition mode. And one way to do that, I think, is to ask better questions. So now, rather than asking myself why a piece was successful, I try to answer these six questions instead.
First up: what stories did journalists write when they covered this piece? Now, you could still answer this in a pretty shallow way. Please try not to do that. Don't just go, "they wrote about beer." It's not true. Read the stories. If you actually read the stories, you'll start to see some patterns emerge. With this piece, there were a bunch of different stories written. There was a lot of coverage designed to provoke outrage at the most popular beers, so there was definitely a sensational take. There was also, interestingly, on sites like The Washington Post, analysis of the two major breweries, AB InBev and SABMiller, who own most of these beers. So there were stories about the globalisation of beer and the beer market worldwide. And of course there were nostalgic travel stories.

Second: did the coverage of this piece feed into something else which was going on in the news cycle, like another news story or an event or a trend? Third: were there waves of coverage that led to the ultimate success of this piece? If one of those things hadn't happened, would it still have been a success? Number four, and this one's really important: what emotions did this coverage provoke? How did readers react? What emotions did these stories provoke in you? This is really important, I think. I showed this to my friend, who's a journalist, and she was like, oh yeah, that's a really good point, because that's largely how I determine which stories I'm going to cover. Which stories I'll write, which ones I won't. She said this: "I'm always thinking about what emotions a story is likely to evoke. Strong emotional reactions equal page views, and my page view targets are really challenging." Fifth: what verticals or types of publication covered this? Did the story get picked up in different ways by different verticals? And finally: did this piece get covered in multiple regions or countries? And if so, why, and how did that come about?

Quick recap; that went really quick. Answering these questions gives me a much clearer understanding of why a piece really worked, and it snaps me out of pattern recognition mode because I'm not looking for patterns at all. Now, of course, answering all those questions is way more work, and some of you might be thinking: do you really need to do all that? I'm not here to tell you what to do, but there are benefits should you choose to.

Here's a question that I'm asked pretty frequently: "This piece got a bunch of coverage a few years ago. Could we remake it?" How do you figure out whether or not remaking something is a good idea? Again, in the past, if you'd asked me that question, I'd have probably said something like: has enough time gone by? I used to think this was really important, because if someone else had done something similar fairly recently, I figured our chances of landing coverage might diminish. This sounds reasonable, but it isn't always the case. I noticed that in some instances, even when I considered that sufficient time had passed, the remakes we made didn't always land the levels of coverage we expected. And I noticed the opposite, too: in some verticals, very similar pieces seemed to get coverage even though I thought insufficient time had passed. So this isn't the question we should be asking. I should have been asking these questions instead.
What were the conditions which led to the success of the original piece, and what are our chances of replicating those conditions? Remakes don't fail because insufficient time has passed. They fail when we're unable to recreate the conditions which led to the success of the original. Answering these questions will help give you a clearer picture of what the original conditions were. Then you want to get a feel for your chances of replicating those. Are those conditions still alive and well? This is best explained with examples, so I'm going to whip through these real quick.

This is a piece called Highways to Hell. It's a mash-up of publicly available data on the most congested, most trafficky roads in the UK and Europe. Should we remake this piece? So, we're kicking off here; these are the questions we're trying to answer. First up: back then, the piece was covered by the automotive sections of national news outlets, and it was also covered by regional news journalists, who covered regional versions of the story. What about those journalists now? What are they writing? Interestingly, they continue to write up those types of studies, and they do so frequently. They don't seem to mind that the same roads and the same towns always win. Another quick side note from my friend the journalist. She told me that's not unique to automotive journalists: "Often, as journalists, we care more about how well specific types of stories are performing in terms of page views." Can you see a pattern emerging with journalists and page views? Yes, of course you can. The thing she's most interested in is performance. Whether or not a story is new isn't that important to her. She said to me: "Let's imagine I covered a study last week and someone pitches me a similar study this week. If my first story generated a lot of page views, I'd probably write up the second one, too." How do we know if a story got a lot of page views? We can't know for sure. A good proxy, though, is social shares. Broadly speaking, if something got shared a reasonable amount, that normally equates to page views. So we can trick our way around some of this. This was really interesting, I thought. She did ask me to highlight that this isn't the case in all verticals, and there are limits. I think she just didn't want me to spam her, probably.

Back to our example piece, though. Should we remake Highways to Hell? In this instance, it seems that the conditions that led to the success of the piece are alive and well today. So we can be reasonably confident that if we remake this piece with a fresh dataset, we'll probably achieve similar levels of coverage, even if the worst places don't change. A safe bet, right? Usual caveats apply.

Let's look at another example: Directors Cut. Should we remake this one? I'm hoping you already know the answer. If I've done my job right, you're already going: no, don't remake this, it's a terrible idea. As you know, early coverage centred on the winner being surprising, plus a tweetstorm from James Gunn. What are our chances of replicating that? Yeah: thumbs down. Studies about on-screen death counts are not something entertainment journalists perpetually cover, you won't be surprised to learn. Possibly they chose to cover it because the winner was surprising and controversial.
As such, if you do this study again and you fail to find a new winner, I think you'll almost certainly struggle to get coverage, because the result won't be a surprise, and James Gunn is not going to take to Twitter again for you. I would hope he's learned his lesson. Maybe. I mean, he's a terrible human, I don't know. But even if you do find a new winner, you still might struggle, because your new winner needs to be surprising and controversial enough that it will stand out, and entertainment journalists have plenty of controversial stories to cover. The landscape has changed. And even if you've got all that, you'll also need someone high-profile, ideally linked to your winning film, to see your coverage, take to social media and create a storm about your piece. I'm not saying those things can't happen. I'm just saying that, on balance, I think they're unlikely to happen. So this is not such a safe bet. Just to loop back, this is why I think it's so important that we recognise when, where and how luck contributed to the success of a piece. Because when we fail to acknowledge the part that luck played, and we fail to acknowledge the real reasons why a piece was successful, we'll also fail to judge future ideas effectively.

Right, that was a lot. Quick recap. The advice I gave previously was woefully incomplete. I think this is better advice: answering these questions will give you a much clearer understanding of why a piece worked, and you'll be less likely to fall into pattern recognition mode, which is important because patterns like "beer" and "map" are not helpful. And next time you're considering remaking a piece, rather than asking whether enough time has gone by, ask yourself these questions instead.

Part three. I want to talk a bit about a misconception which I've helped fuel. At the beginning of this talk, I showed you three successful pieces, these three here. There's something really important that I failed to tell you, though: those pieces are all outliers. Less than 10% of the pieces I've been involved with making in my 10-year career have performed like this. It was not my intention to mislead anyone or to give the impression that these sorts of results were normal. But nevertheless, that is what I've been doing. Every time I walk on stage and tell people about a very successful piece, I'm normalising it.

Here's another question that I'm frequently asked, and it breaks my heart: "My last piece generated no linked coverage at all. Am I terrible at this?" Because as an industry we largely focus on sharing our successes, our barometer for the actual success rate of digital PR is hopelessly broken. And today I want to try and help fix that. So I got in touch with a bunch of agencies and in-house teams and asked them to share their data with me. As datasets go, it's reasonably robust: I have combined data on more than 2,000 digital PR pieces from 11 different agencies. Your own results, should you choose to try and compare, may or may not compare favourably, and there are likely to be good reasons for that. As such, I would like you to be mindful of how you interpret and use this data. Most importantly, I do not want this data to be used as yet another stick to beat digital PR people with. We have plenty of them already.
Again, comparisons are dangerous; if you do compare your performance, this is just meant to be a better barometer to use. Let's do this. First up, I've split this dataset in two. We're going to look at results from asset-led pieces first. An asset-led piece is any kind of piece where there's something on the client's site. It might be a fully interactive, glorious thing that costs a lot of money; it might be a blog post. Either way, it's an asset-led campaign: there's something linkable. I'm not going to read this out, but maybe take a picture and you can see for yourself. The data here is from 1,398 pieces. I also collated stats on digital PR pieces without assets. These are press-release-only pieces, where we go straight to journalists with a press release. This is how the numbers break down for those; I've got data on 730 pieces without assets.

Now, this is the slide you want me to show, I'm sure. It's the comparison I don't think we should be making, but almost inevitably will. So, there you go. Just for the record, though, I don't think this means that asset-led pieces are better or worse than pieces without assets. I just think they perform differently. Pieces without assets seem to fail at a higher rate: 5% of asset-led pieces generated no coverage at all, versus 31% of pieces without assets. In a similar vein, they're less likely to generate 100 or more pieces of linked coverage: 8% of asset-led pieces managed it, versus just 1% of pieces without assets. The thing we need to acknowledge, however, is that pieces without assets are typically quicker, and therefore cheaper, to produce than asset-led pieces. So I don't think this is a sensible comparison to make. If at all possible, I would recommend doing both. I think both have their place.

But let's look back at this original question: "My last piece generated no linked coverage at all. Am I terrible at this?" Unequivocally, no. You are not. The truth is, everyone is failing at this, and they're probably doing so more often than you realise, because what you see are just people's successes, not their whole body of work. While we're here: believing stuff like "you're only as good as your last piece" (I saw this on Twitter and it made me very cross) really isn't a great idea, because if you tie your own self-worth to the results you achieve, your confidence and your self-esteem will be all over the place. This here is an ugly graph, and I apologise. It's the performance of 40 digital PR pieces I was personally responsible for over just a four-month period at Verve. The peaks are about 200 links. The troughs aren't. You can see it is bumpy.

I'd like to leave you with a couple of final thoughts. I said that our barometer for the actual success rate of digital PR is broken. The truth is, only 8% of pieces perform like this. These are outliers. If we're launching pieces with the expectation that they're going to achieve 100 or more pieces of linked coverage, we are going to be disappointed a lot of the time. And that's not the only way in which our expectations are unrealistic. In most of the organisations I've worked with, something like this has been the goal: every piece should generate a minimum of 10 pieces of linked coverage. Possibly you think that sounds reasonable. It really, really, really isn't.
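As a side note: if you want to benchmark your own results against bands like these, a minimal sketch might look something like the following. To be clear, the per-piece link counts in it are invented for illustration, and the band boundaries are just my assumption based on the figures above; swap in your own coverage numbers.

```python
# Rough benchmarking sketch. These link counts are NOT the talk's dataset;
# they are invented placeholders. Replace them with your own pieces' numbers.
from collections import Counter

links_per_piece = [0, 3, 14, 0, 220, 7, 41, 9, 12, 0, 68, 5]

def band(links):
    # Bands implied by the stats above: no coverage, under 10, 10-99, 100+.
    if links == 0:
        return "no coverage"
    if links < 10:
        return "1-9 links"
    if links < 100:
        return "10-99 links"
    return "100+ links"

shares = Counter(band(n) for n in links_per_piece)
for label in ("no coverage", "1-9 links", "10-99 links", "100+ links"):
    print(f"{label:>12}: {shares[label] / len(links_per_piece):.0%}")
```

Run over a whole programme of pieces rather than a single one, this gives you the same shape of breakdown as the dataset above, which is a far fairer thing to compare against than anyone's highlight reel.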
The notion that this goal is achievable is, I think, the most pernicious myth of all. This is the data I gathered again, just as a reminder. Only 60% of pieces in my dataset generated 10 or more pieces of linked coverage, and that's the asset-led dataset. For campaigns without assets, it looks even worse, right? Our goal, though, is 100%. Effectively, our goal is never to fail, which is utterly unrealistic. I wonder why we are setting these impossible standards for ourselves as digital PRs.

Technical SEO folks, if you're still awake: hello, I love you. I describe myself as a recovering technical SEO. Please raise your hands if you've ever implemented a technical change which had no impact. Yeah. Of course you have. It's not unusual. It's horrible, but it's not unusual. It happens all the time. If I were to ask you what proportion of the technical SEO changes you've made delivered an impact, possibly you'd go: God, I'm not sure how to answer that. I asked every technical SEO I knew until somebody gave me an answer. An adorable friend obliged. Here's how they responded: "About 60% of what I implement now delivers a measurable impact. 30% of what I implement is future-proofing; I do not expect it to deliver a measurable impact. I'm doing it to reduce the risk of losing visibility in the future. About 10% of what I implement, I expect to work, but it fails to deliver a measurable impact." This person also asked me to highlight that this is what they're experiencing in their career right now. Earlier on, maybe only 40% of the changes they implemented delivered a measurable impact. Now, this may or may not square with your experience. It's a sample of one. Another friend of mine said: "Right now, it certainly feels like most of the tech changes I implement are future-proofing. I focus on content projects, because that's what delivers an impact." As I said, I asked a bunch of people this question, and everyone agreed that the technical changes they implement which they expect to work (so, excluding the future-proofing stuff) still don't always deliver an impact. So I think it's fair to say that all technical SEOs experience failure, just like digital PRs.

Now, some of you might be going: but a single technical change is cheaper to implement than a single digital PR piece. And to that I would respond: sure, sometimes, but not always. Mainly, you just think it's cheaper because you don't actually calculate the full cost. A single technical change might incur a negligible cost, but that's not always the case once you account for the full resourcing, and the entire programme of activity is likely to incur significant costs. I actually think the notion that a single digital PR piece is expensive versus technical SEO is a little bit wrongheaded. They cost about the same, really. Nevertheless, I'm not suggesting that a single digital PR piece and a single technical SEO change are strictly comparable. They are not the same thing. They do have one thing in common, though: a reasonably high chance of failure. The difference is, we don't expect every technical SEO change we implement to have an impact. We expect some level of failure there. Why aren't we expecting some level of failure for digital PR, too?
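To put a number on just how unrealistic "never fail" is, here's a hedged back-of-the-envelope sketch. It assumes each piece independently has roughly a 60% chance of hitting 10 or more pieces of linked coverage and a 5% chance of getting nothing (the asset-led figures from my dataset); the 12-piece programme is invented for illustration.

```python
# Back-of-the-envelope: if ~60% of asset-led pieces hit 10+ pieces of linked
# coverage, what does an "every piece must hit 10+" goal mean for a programme?
# Assumes pieces succeed independently of one another, which is a simplification.

p_hit = 0.60   # chance a single piece hits 10+ links (from the dataset above)
p_zero = 0.05  # chance a single piece gets no coverage at all
n = 12         # hypothetical programme: one piece a month for a year

print(f"P(all {n} pieces hit 10+ links): {p_hit ** n:.2%}")            # ~0.22%
print(f"Expected pieces hitting 10+: {p_hit * n:.1f} of {n}")          # ~7.2
print(f"P(at least one total flop): {1 - (1 - p_zero) ** n:.0%}")      # ~46%
```

In other words, even a programme performing exactly in line with the dataset would miss the "every piece" goal almost every single time, and would quite possibly include at least one piece that gets nothing at all.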
It's unrealistic to expect that every digital PR piece will generate 10 or more pieces of linked coverage, just like it would be unrealistic to expect every technical SEO change we make to have a positive impact. We need to end our obsession with the results of individual pieces. It's fuelling unrealistic expectations and causing unrealistic goals to be set. It's also incredibly difficult to tie meaningful business results to a single digital PR piece, even one that got hundreds or thousands of links. Rather than focusing on the results of individual pieces, I believe we should, as digital PRs, be assessing the results we generate over a number of pieces, a programme of activity, much like technical SEO teams do. But most of all, we need to stop setting goals like this. I'm not saying you can't use something like this as a metric. You can say: if a digital PR piece we create generates 10 or more pieces of linked coverage, we'll consider it a success. And from that metric you can set some sort of more sensible, percentage-based goal. But this absolutely shouldn't be your goal.

I've been doing this stuff for more than a decade, and I still launch pieces that get no linked coverage at all. And I'm okay with it, because it is not possible to eradicate failure. So the next time you launch a digital PR piece that generates no linked coverage at all, or you implement a technical SEO change that doesn't deliver the uplift you were expecting, I hope you'll remember that this stuff happens to everyone, and actually it happens more often than you probably think. I would like to thank the agencies who very generously shared their data with me. I would like to thank these humans who helped me put this talk together. And most of all, thank you for listening to me.