Hi, everyone, and welcome back to the Google Search News series. I hope life is treating you reasonably well wherever you are. I'm your host today, John Mueller, here from Switzerland. With this show, we want to give you a regular summary of what's been happening around Google Search, specifically for website owners, publishers, and SEOs. If you find these useful, which I hope you do, and if you'd like to stay up to date, then make sure to subscribe to the channel.

I hope your year both ended well and is starting off well. What a unique year it was, right? If you're watching this in the far future, then first off, congratulations for making it that far. And secondly, as you can see, in early 2021, we're still recording from home in Switzerland. In this episode, we'll be covering some neat new things around the foundation of Search, namely crawling and indexing, as well as another relevant part of Search, namely links. If you're curious to find out more, then stay tuned.

As a bit of background, crawling is when Googlebot looks at pages on the web, following the links that it sees there to find other web pages. Indexing, the other part, is when Google's systems try to process and understand the content on those pages. Both of these processes have to work together, and the boundary between them can sometimes be a bit fuzzy.

Let's start with news about crawling. While we've been crawling the web for decades, there's always something we're working on to make it easier, faster, or more transparent for site owners. In Search Console, we recently launched an updated Crawl stats report. Google Search Console is a free tool that you can use to access information on how Google Search sees and interacts with your website. This report gives site owners information on how Googlebot crawls their site. It covers the number of requests by response code, the crawl purposes, host-level information on accessibility, examples, and more.
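To give a rough idea of the kind of data this report surfaces, here's a small sketch (not an official Google tool) that tallies Googlebot requests by response code from server access log lines in the common "combined" format. The log lines here are made-up examples; in practice you'd read them from your server's log file.

```python
import re
from collections import Counter

# Hypothetical access log lines in the common "combined" format.
log_lines = [
    '66.249.66.1 - - [12/Jan/2021:10:00:00 +0000] "GET / HTTP/1.1" 200 5120 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [12/Jan/2021:10:00:05 +0000] "GET /old-page HTTP/1.1" 404 320 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.7 - - [12/Jan/2021:10:00:07 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]

# Match the request line plus status code, e.g. '"GET /path HTTP/1.1" 200'.
pattern = re.compile(r'"[A-Z]+ \S+ HTTP/[\d.]+" (\d{3})')

counts = Counter()
for line in log_lines:
    if "Googlebot" not in line:  # crude user-agent filter; see note below
        continue
    match = pattern.search(line)
    if match:
        counts[match.group(1)] += 1

print(dict(counts))  # e.g. {'200': 1, '404': 1}
```

Note that filtering on the user-agent string alone is only a heuristic, since anyone can claim to be Googlebot in their user agent; verifying the crawler properly involves a reverse DNS lookup on the requesting IP address.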
Some of this information is also in a server's access logs, but getting and understanding those logs is often hard. We hope this report makes it easier for sites of all sizes to get actionable insights into the habits of Googlebot.

Together with this tool, we also launched a new guide specifically for large websites and crawling. As a site grows, crawling can become harder, so we compiled the best practices to keep in mind. You don't have to run a large website to find this guide useful, though. We'll add a link in the description if you're keen.

And finally, still on the topic of crawling, we've started crawling with HTTP/2. HTTP/2 is an updated version of the protocol used to access web pages. It has some improvements that are particularly relevant for browsers, and we've been using it to improve our normal crawling, too. We've sent out messages to websites that we're now crawling with HTTP/2, and we plan to add more sites over time if things go well. As you can see, there's still room for news in something as foundational as crawling.

And now let's move on to indexing. As mentioned before, indexing is the process of understanding and storing the content of web pages so that we can show them in the search results appropriately. For indexing, I have two items of news to share with you today.

First, requesting indexing in the URL Inspection tool is back in Search Console. You can once again manually submit individual pages to request indexing if you run into a situation where that's useful. For the most part, though, sites shouldn't need to submit pages manually and should instead focus on providing good internal linking and good sitemap files. If a site does those well, then Google's systems will be able to crawl and index content from the website quickly and automatically.

Secondly, in Search Console, we've updated the Index Coverage report significantly. With this change, we've worked to help site owners be better informed about issues that affect the indexing of their site's content.
For example, we've removed the somewhat generic "crawl anomaly" issue type and replaced it with more specific error types. There's a bit more about this update in our blog post, which I've linked in the description below.

Finally, I mentioned links in the beginning. Google uses links to find new pages and to better understand their context on the web. Next to links, we use a lot of different factors in Search, but links are an integral part of the web, so it's reasonable for sites to think about them. Google's guidelines mention various things to avoid with regard to links, such as buying them, and we often get questions about what sites can do to attract links.

Recently, I ran across a fascinating article from Giselle Navarro on content and link building campaigns that she saw last year. While I obviously can't endorse any particular company that worked on these campaigns, I thought they were great examples of what sites can do. It's worth taking a look at them and thinking about some creative things that you might be able to do in your site's niche. I added a link in the description below. Creating awesome content isn't always easy, but it can help you to reach a broader audience. And who knows, maybe get a link or two.

And just a short note on news about structured data. As we mentioned in one of the previous episodes, we've decided to deprecate the old Structured Data Testing Tool and to focus on the Rich Results Test in Search Console. The good news is that the Structured Data Testing Tool isn't going away, but rather finding a new home in the schema.org community.

And that's all for now, folks. In closing, I'd love to hear more news from you all, especially around this video series. Which parts did you find particularly useful? Which parts less so? What should we focus on more this year? Please let me know in the comments below, or drop me a note on Twitter. I really appreciate all feedback, yours too.
Finally, if you'd like to see more of these episodes or catch up on the new series on sustainable monetized websites, make sure to subscribe to the channel. I look forward to seeing you all again in one of the future episodes of Google Search News. Bye.