I'm so excited to be talking with everyone today. We have about 30 minutes together, and I'd like to talk about the components that build a successful organic search strategy. You already know about me, so we can skip that. I work for a company called Merkle, and we work with clients of all shapes and sizes. The only really important part here is that the content that went into building this deck came from the problems and challenges those clients are facing today. Let's dive in.

SEO is about understanding search bots and users, and how they react to a particular online experience. We, as search professionals, are charged with bridging the gap between search bots, people, and our web experiences. In creating this deck, I kept coming back to the idea of a cyborg: a being with both organic and mechanical parts that, through that synergy, is able to overcome its limitations. That leads to the SEO cyborg: a team or an individual who is able to work seamlessly across technical and content initiatives to drive the most optimized search performance possible.

We've been given the classic SEO model, which is quite elegant. It comes in threes, and most wonderful things come in threes: Venn diagrams, puppies, the like. Crawl, index, rank: simple and elegant. At the same time, it doesn't describe the breadth of work we're expected to do as search professionals. So I took this model and expanded it, almost Voltron-like, separating out components and adding a render phase, a signaling phase, and finally a connect phase. That last one has been coming up a lot at MozCon: the idea that people are a really important component of search. The SEO cyborg, at its essence, is all about technical and content SEO, and knowing where to strategically pinpoint our efforts to drive organic performance. So on the venture we're going on today, we'll talk about potential areas of focus, the challenges each of those areas has, and actionable strategies for resolving them as much as possible.

We're going to start with technical. Technical, as an overarching concept, is all about our relationship with search engine bots, and in many ways it can be more straightforward, which by no means makes it any easier. How many technical SEOs do we have in the room? And how many of you think your job is easy? Yeah, exactly. It's challenging because of all the nuances that exist. If we step back and think about the different CMS platforms, the different JavaScript (or other) frameworks, and all the new technologies, we can imagine the factorial number of possibilities that challenge us on a daily basis.

Add to that that it's very difficult to quantify the value of technical SEO, because the initiatives are typically behind the scenes or shipped across multiple sprints, which makes proving the value extraordinarily challenging. But if I posed you a question today: how much is identifying a 403 being served to bots worth? Or what about a misplaced noindex tag on a really critical, important page? It essentially comes down to the fact that technical SEO is like roots: the foundation upon which content is able to flourish.
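To make that misplaced-noindex scenario concrete, here's a minimal sketch (the page title is invented) of the kind of leftover directive a technical audit catches. One line like this in the head quietly pulls an otherwise healthy page out of the index:

```html
<head>
  <title>Flagship Product Page</title>
  <!-- Leftover from staging: this single line tells bots not to index
       the page or follow its links, regardless of the content below. -->
  <meta name="robots" content="noindex, nofollow">
</head>
```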
And once Google can access, understand, render, and index your content, it's kind of like a tumbleweed at that point: look at it go. (Animation fun.)

One of the other challenges we have is that search at scale is really quirky. What happens in one industry, for a site of a particular size and popularity, may not happen for another site in the same industry of appreciably similar size. One thing I find really quirky is that Google does not instantaneously render all JavaScript upon crawling, which creates a weird situation we all end up dealing with. They shared this with us at Google I/O 2018: they're basically queueing everything, and the render phase happens in a second wave of indexing. That carries the connotation that, if there's a queue, our sites are being prioritized to some extent. And we know bigger sites receive somewhat different treatment: if you chop up the data and look at clicks over a certain period against how often sites are getting crawled, you can see a pretty clear trend that really popular sites get crawled more often. That's interesting because it fits the stale-while-revalidate model Google is using with the Google AMP cache as well as PWAs.

So technical SEO at its core starts with the ability to crawl, which is all about the ability to simply find a page. We know pages can be found via links or via a redirected page, which is fairly benign information, but it can be really useful when you're analyzing a particular site: if you see weird URLs popping up in the index, or in a crawl you've run, check backlink reports, internal linking reports, and redirects into that URL; that might help you identify the situation. Pages can also be found by providing a list, an XML sitemap, to search engines. All of these methods work: you can submit sitemaps in Google Search Console, and they work with Bing too. And if multimedia is critical to your site, you can also provide a multimedia sitemap.

The second critical component of crawling is that pages and resources need to be obtainable. That starts with the robots.txt file, as well as making sure pages return a 200 status code. Interestingly enough, I kept coming back to this slide, because HTTP status codes can also be really powerful signals to search engines: with 404s, 410s, 500s, and 503s in particular, you can actually signal to Google what your intent is.

You also want an organized information architecture, which for most of us comes down to a simple, clear main navigation, making sure that all links are encoded in HTML, and having relevant on-page internal linking, so that a user can flow through the journey the same way a bot would. And I feel like HTML sitemaps are a little underappreciated: although they're almost archaic in a way, they offer a lot of potential. If your tracking shows that users are going to your HTML sitemap and then to a specific section of the site, that may indicate that something in your main navigation is an area to improve.
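As a sketch of why they're worth keeping around: an HTML sitemap is just plain, crawlable anchor links grouped by section (the sections and URLs below are invented for illustration), so both users and bots can reach deep pages in one hop:

```html
<nav aria-label="HTML sitemap">
  <h2>Products</h2>
  <ul>
    <li><a href="/products/widgets/">Widgets</a></li>
    <li><a href="/products/gadgets/">Gadgets</a></li>
  </ul>
  <h2>Support</h2>
  <ul>
    <li><a href="/support/returns/">Returns</a></li>
    <li><a href="/support/contact/">Contact</a></li>
  </ul>
</nav>
```

If analytics shows users landing here and then jumping straight to, say, /support/returns/, that's a hint the main navigation may be burying that section.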
And then, of course, footer links round out that picture. We could talk about information architecture for hours; there are thousand-page books on it, and professionals who focus on it for life. But really it comes down to a deep understanding of users, how they use the site, how they flow through that journey, and how we can reduce friction along the way.

It would be silly to talk about crawling without addressing the idea of crawl budget, which essentially comes down to the fact that search engines don't want to break your site. They don't want to generate a denial-of-service attack, which would be kind of silly of them. The idea is that they don't want to send so much traffic that they shut down your servers, so if they sense your servers slowing down, they'll back off a little. They've also told us that we really don't have to worry about this unless we have billions of URLs, so it's perhaps an overhyped idea. But I like worrying about things, so why not worry about crawl efficiency? Crawl efficiency asks: how easy is it for a bot to traverse your site and experience? That comes down to things like how your pages are prioritized (are the important ones as close as possible to the root, and linked from there?), having no unnecessary crawl traps, and having the most consolidated, well-linked-to experience.

All right, what's the next phase of our journey? Rendering, which relates to a search engine's ability to capture the desired essence of a page. The big kahuna in the room is JavaScript, and the question that comes up so often is: is Google able to render my JavaScript? So I've provided a few checklist items here. First, if you take a direct quote from your content and put it into a search engine, does your result come back? That's a really quick gut check. After that, ensure you're using HTML links: Google has mentioned that although they can crawl JavaScript links, they prefer HTML links. You can also check what the search engine bot is being served: if you switch your user agent to some sort of bot (assuming your site allows that), you can check that the document object model contains links hard-coded in the HTML. I'd also suggest using Google's new tool that is kind of hidden within the mobile-friendly testing tool: a JavaScript console, which you can find under the View Details panel. It's a really useful additional gut check. And finally, if you're using any sort of server-side rendering, just check in occasionally and see what the bots see; hang out, have a good time. Nothing's worse than serving the full experience to users but serving nothing to bots and confusing them.

Another common thing that comes up is infinite scroll. Googlebot is a lazy user; it will not scroll to get that content. So the first questions to ask yourself are: do I need every single thing within my infinite scroll to be indexed, and what exactly do I need indexed? If you can chunk the content out into separate pages, that would probably be the best solution, but Google also offers us some other options. One is to chunk the content into component pages and, if it's a set, use pagination markup to identify it as one, as sketched below.
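Here's a minimal sketch of that option, with hypothetical URLs: each component page declares its neighbors in the head with rel=next/prev link tags, so search engines can treat the chunks as one set:

```html
<!-- In the <head> of https://example.com/articles/page/2 -->
<link rel="prev" href="https://example.com/articles/page/1">
<link rel="next" href="https://example.com/articles/page/3">
<!-- The first page in the set carries only rel=next;
     the last page carries only rel=prev. -->
```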
Or, if you can somehow fit all the content onto one page, you can just do a View All page and make it a single page.

Lazy-loaded imagery is in a similar vein. Google offers a solution here too: put the images in noscript tags, which is actually really cool, because if you turn off JavaScript the image just appears out of nowhere. Or you can use JSON-LD structured data to indicate that there's an image on the page. Actually, this past week someone was telling me they had put an image only in their Twitter summary card, and it was appearing within search, which I thought was an interesting reminder that Google is looking at structured data as well as the location of those images.

Second, for rendering: CSS. I'll leave a few notes here, mostly relating to images. CSS background images aren't being picked up in search, CSS animations aren't being interpreted, and layouts are still really important, especially in relation to excessive mobile ads.

You might have guessed images were coming, since they appeared in both the JavaScript and CSS sections, and I have a few additional points to add (I didn't include everything, because I didn't want to overwhelm you on one slide). There have been tons of breakthroughs in machine learning for image recognition, but as far as I know, that hasn't been appearing within search, which probably relates to the fact that the problem requires a ton of computing. So for our purposes, what we need to know is that alt text, metadata, and surrounding content are still really important parts of optimizing for image search. Also, I found this kind of odd, but inline images aren't being picked up, which I hope Google changes eventually.

And finally, for rendering: personalization. If we look at digital overall, there's a trend toward targeting real, known individuals and not simply proxies. We've been put in a weird position, because Google has taken on most of the work of serving the appropriate results to users, particularly for non-authenticated experiences. But just so everyone's aware: Google doesn't save cookies across sessions, and a cookie, of course, is a proxy. So the interesting takeaway is that we have to make sure, as of now (it'll probably change in the future), that we have a default base user experience.

Indexing. Probably the shortest section. It relates to getting your URL into Google's database so it's part of the competitive set that can rank. Getting a URL into the index largely comes down to ensuring your pages are both crawlable and renderable, that they provide useful information, and that nothing is preventing them from being indexed. How many people have had a random noindex tag on a page during a launch? Yeah, we all suffer. Oh my gosh, lots of hands. And then submit that sitemap: you can submit a sitemap within GSC, or you can submit an individual URL in GSC via Fetch as Google, but the sitemap route is pretty efficient.

I also included Google's mobile-first index in this section, if only because the word "index" is in the name; it's really more of a crawling thing, about being able to organize the information. But really, it's all about Google prioritizing the mobile experience. What everyone here needs to know is that if your mobile experience does not equal your desktop experience, and you have a separate mobile site, just make sure you ensure parity between the two: content, structured data, internal linking, GSC settings, and that your link tags are appropriate for that particular situation, as sketched below.
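For the separate-mobile-site case, the standard bidirectional annotations look like this (URLs hypothetical): the desktop page points to its mobile counterpart, and the mobile page canonicalizes back to the desktop URL:

```html
<!-- In the <head> of the desktop page, https://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- In the <head> of the mobile page, https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page">
```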
Speaking of link tags, that brings us to signaling, which is all about suggesting the best representation of a page. Probably the most popular of this bunch is the rel=canonical annotation, and I chose some bullet points here that I thought were interesting and relevant. First and foremost, Google maintains the prerogative over choosing which URL is ultimately the canonical URL, which can be a little quirky when they don't get it completely right. And theoretically, Google only picks up canonical tags in the HTML. I add an asterisk there because there have been studies where people have seen Google pick up canonical tags injected elsewhere, so I feel like we should phrase it more as: we should try to put our canonical tags in the HTML, because that's the way the system was designed to function. And typically, if you take a non-canonical URL carrying a canonical tag and run it through Fetch as Google, Google will actually go out and crawl the canonical page, which I found really exciting when I was doing some testing.

Next most popular: rel=next/prev, which marks pages as a collective set of information. I find it interesting that the set is not considered duplicate content, and that if page two is more relevant to a particular query, it can rank. Kind of interesting to think about.

Most notorious of this group is hreflang, which indicates the appropriate language and country of a particular page. The reason it's so notorious is that it can be very unforgiving: it's very strict in its details. So make sure you're following the documentation and using the online checkers, because you want everyone to be served the appropriate experience. There's a small sketch of the annotations after this section.

And finally, the one that always throws me for a loop when doing analyses on a site, because it's something I don't traditionally think about: GSC settings. When they're honored, they can be very powerful signals to Google. So make sure that only the people who should have access do have access, that nobody is going willy-nilly in there, and that all of that information is correct as well.
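Here's that hreflang sketch, with hypothetical URLs: each version of the page lists every alternate, including itself, and the annotations must be reciprocal across the whole set; one mistyped language or country code and the pair can be ignored:

```html
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/" />
<link rel="alternate" hreflang="de-de" href="https://example.com/de-de/" />
<!-- x-default: the fallback for users who match none of the above -->
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```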
And that brings us to content, which is all about appealing to the bots as the best response, as well as developing a relationship with users. It comes down to: does your page have the best response, and are you known for this topic, whether via authority or in the minds of users? I split this into two categories: rank and connect.

Starting with rank, it's all about how search engines arrange pages. Some of the on-page things we can influence include textual content, multimedia, meta elements, and structured data. In terms of text-based content, we want to make content that is understandable to both search engines and people. That means answering questions directly; writing short, logical, simple sentences; making sure our subjects are very clear and not left to be inferred (which can sometimes confuse search engines); and creating scannable content. If there's any uncommon vocabulary, link it or define it within context. Use semantic HTML, the most popular elements being heading tags, unordered lists, ordered lists, and tables. And add relevant schema.org structured data; if you want to talk more about structured data, totally catch me after the show. I only have one slide on it here, plus the small markup sketch at the end of this section.

Make content accessible, so that everyone can enjoy the same experiences anyone else would be able to partake in. A lot of that relates to keyboard functionality, various forms of structured markup, text alternatives for non-text media, and ensuring adequate size and color contrast. And if your site hasn't had this, I definitely recommend having a professional check it out, because it can be a really eye-opening experience to see what someone else is going through.

That brings us to how to find relevant queries, which comes down to a combination of research and intuition: with intuition plus data, we can test and ultimately make better strategic decisions. There are really three facets and ways of looking at this: keyword and landscape research (we saw amazing presentations on that yesterday), analytics and performance (Dana DiTomaso is going to be talking a little about analytics), and users and audience. I cut the keyword and analytics material for time and focused on users and audience, because I think it rolls into the next section really nicely.

So, audience research. We need to find out who our users are, where they are, why they buy, how they buy, and what they ultimately want. Then we can look at tertiary questions: what do they value? What functional consequences, personal consequences, and personal values are they looking for? What is their relationship with technology? (How we write for a millennial is going to be very different from how we write for a baby boomer.) What do they do online, how do they engage with online experiences, and what other brands do they engage with? Lisa's presentation yesterday talked about how willing people are to collaborate, and getting those synergies together can be so powerful. Then we take all that information, aggregate it, and map it into a user journey, which lets us define goals for each section, identify common traits individuals share, and target our content more efficiently.

And then this is one of my favorite charts; it's basically a tool to prioritize content. What it says is that for content that's branded and transactional, you want to make sure you get it right; as you traverse backwards toward non-brand, research-oriented queries, it becomes more about having a strong content program. I also like to ask the question: what is best in class? We've been talking a lot at MozCon about the most interactive, engaging experiences, and I think that's a really powerful thing to consider, because we don't want to be focused only on getting found; we want to be found and be impactful. And I just love this chart about starting from our current state and setting realistic goals so we can eventually transform into what we need to be. We don't need to do everything at once, especially for things that may not be a priority.
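Before we move on, here's that small markup sketch: a minimal, illustrative example (the article and author are invented) of what semantic HTML plus schema.org structured data can look like together on one page:

```html
<article>
  <h1>How to Choose a Widget</h1>
  <h2>What to look for</h2>
  <ul>
    <li>Durability</li>
    <li>Price</li>
  </ul>
</article>

<!-- JSON-LD structured data describing the same content -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Choose a Widget",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```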
And that brings us to connecting, which is really about people and resonating with people. Connecting is all about understanding that we are human, that we have certain limitations, and that we are constantly filtering, managing, multitasking, processing, coordinating, and trying to extract the information that's useful to us. If we think for just a second about the lights, the people, everyone moving around here, everything going through our brains at this exact second, and how we still have to filter out what's useful to us, it's pretty incredible. Our brains are designed so that almost nothing useless comes in, which is actually pretty awesome.

As marketers, we have to figure out how to become psychologically sticky, and first and foremost that means getting past the mind's natural filter. A positive aspect of being a pull channel is that individuals are already seeking us out: they're already looking for help, advice, information, something to buy, something to find, and we can be part of that journey with them. And at that point, once they've found us, we have one job: be memorable. Interestingly enough, the brain holds on to whatever is relevant, useful, or interesting, and a searcher's interest is already piqued, even if they're not consciously aware of why they're searching a particular topic. So we have it pretty good there too, and a genuine opportunity to be there for people.

This is a really simple philosophy, but I think it extends, and could extend, well past SEO: being a great brand is like being a great friend. The relationship has similar cycles: meeting, bonding, nurturing, building trust, maintaining the relationship, and even rekindling it. So a simple question to reflect upon, and to ask in surveys, is: what adjectives do your online customers use to describe your brand, and are they similar to what they'd use to describe a friend? Available, dependable, honest, trustworthy, listens, compassionate. Because ultimately, we want to be part of their journey, not just one point in time; we want to build trust and friendship throughout the process.

So overall, what we've talked about today: we want to make sure our sites are crawlable, renderable, and indexable; that all the signals we're sending are clear and consolidated, with nothing unnecessarily confusing to search engines; that we're answering related questions; that all content is relevant, useful, or interesting; and that we treat users as a friend. And all of this has been about man and machine: creating experiences that users find interesting, relevant, and compelling, while also making sense to search engines. That's all for today.