Hello, and welcome to the 3x3 SEO tips for JavaScript web apps. I'd like to start with a few words about Google Search, SEO, and JavaScript in general before we dig into the three tips I have for you tonight, plus a little bonus tip at the end.

OK, first things first: Google Search does support JavaScript. Since May 2019, we have also been using an evergreen Chromium to render all pages, making sure that we can index content generated by client-side JavaScript as well as content in static pages. Most modern frameworks will work fine out of the box, and SEO isn't any more of a concern than it is for non-JavaScript pages.

So you may wonder: why are so many SEOs nervous when they hear that a website will be implemented using JavaScript frameworks? Well, the short answer is, because things can still go wrong. There are lots of potential challenges and aspects that should be considered from an SEO perspective when building or changing a website. These are not limited to JavaScript sites, though, even if they sometimes do manifest after a site relaunch that happens to be leveraging client-side JavaScript. Such issues include marking pages as noindex by accident, blocking our crawler from retrieving crucial JavaScript resources or API endpoints, or setting up the server in a way that prevents us from accessing your site and its content properly. Again, we see these problems in JavaScript-driven websites as much as in non-JavaScript-driven websites, and they can usually be prevented by having a plan in place for implementing and testing a new website or a new website version.

OK, so we've got that out of the way. Let's look at the first tip, the first thing you should check in a new or an existing project: does every page have useful metadata? Two bits of metadata are particularly important for your website in Google Search: the title and the meta description, as these show up in search result pages.
They are not important for ranking purposes, but they do help your users identify the page they might want to go to in order to find the information they are looking for. If all your pages show a generic title and description snippet, that isn't exactly helpful for users trying to find the best page for their search intention. If you do have specific titles and descriptions, on the other hand, it is perfectly clear to users which of your pages would best serve their interest.

So how do you get that implemented? In Angular, you can use the built-in Title and Meta services. Import them from the platform-browser package and inject them in the constructor. Once you've done that, you can use them in the lifecycle hooks of your component and populate them with information available in the component, so that they reflect the page content. Here, for instance, we set the title and the meta description according to the currently shown recipe.

In React, the React Helmet library can help with this. Once you've installed and imported React Helmet, you can use the Helmet component in the JSX of your component's render method. Again, we are setting the title and meta description with information from the current recipe.

In Vue.js applications, you can install vue-meta from npm and specify a metaInfo option in your component. Similar to the previous examples, here we are setting both the title and the description snippet with recipe-specific information from the component data.

But title and description aren't the only pieces of metadata you might want to set. Tip two is that in some cases, like on error pages, you may want to prevent Google Search from indexing the pages. And if multiple URLs point to the same piece of content, you may want to specify which of them you consider to be canonical. Let's look at how to do this. Let's start with the error pages.
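Before we get to those, the title-and-description approach is the same idea in all three frameworks: derive the head tags from the component's data. Here is a minimal, framework-agnostic sketch of that pattern; the recipe object and the site name are assumptions for illustration, not from any framework API:

```javascript
// Hypothetical recipe data; in a real app this would come from your API or store.
const recipe = {
  name: 'Chocolate Cupcakes',
  summary: 'Rich chocolate cupcakes topped with vanilla frosting.'
};

// The shared idea behind Angular's Title/Meta services, React Helmet,
// and vue-meta's metaInfo: build page-specific head tags from page data.
function buildPageMetadata(recipe) {
  return {
    title: `${recipe.name} | My Recipe Site`,
    meta: [{ name: 'description', content: recipe.summary }]
  };
}

const head = buildPageMetadata(recipe);
console.log(head.title); // "Chocolate Cupcakes | My Recipe Site"
```

Each framework then takes an object like this and renders it into the document's `<head>` for you.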
I'm pretty sure you would like to avoid having error pages show up in search results, and so do we. Often, we can spot error pages and exclude them automatically, but it is better to make sure they don't show up by excluding them explicitly on your side.

If you're using Angular, you can use the Meta service for this. When you specify a robots meta tag and set its content to noindex, the page will be skipped by the indexing pipeline in Google Search. In React, React Helmet gives us the same ability: we add a meta tag for robots and set it to noindex if the recipe does not exist. For Vue.js, it's vue-meta's metaInfo method that we extend to set the robots tag to noindex when necessary.

OK, now let's deal with the canonicals, because we can pretty much build on the implementations we've just seen. Let's say we have a bunch of different URLs pointing to the same content, like /recipes/cupcakes or /recipes?id=1337. How can you tell Google Search which of these you would prefer to be shown in search results?

For Angular, we can build ourselves a link service. The service allows us to inject link tags into our pages. This implementation is basic, but it does the job. In our components, we inject the service and then use it to create a new link with the relation canonical and the URL we'd like to show up in search results for this page. React Helmet has this built in: in React applications with React Helmet, add the relevant link tag in the Helmet section of your component's render template, and there you go. For Vue.js applications, you can add a link property in the return object of your metaInfo method; include the rel and href attributes and set them to canonical and the desired URL, respectively.

With that, we've got the basic metadata nailed down. So tip three looks at one more way of having Google Search highlight our content: structured data.
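Before we move on, the noindex-and-canonical decision we just walked through boils down to the same logic regardless of framework. Here is a hedged sketch of it as plain JavaScript; the URL scheme, domain, and function name are illustrative assumptions, not part of any framework's API:

```javascript
// Decide which robots/canonical head tags a recipe page should carry.
// An error page (no recipe) gets noindex; a real page gets a canonical URL.
function headDirectivesFor(recipe) {
  if (!recipe) {
    // Recipe not found: keep this error page out of Google's index.
    return { robots: 'noindex' };
  }
  return {
    robots: 'index, follow',
    // The one URL we'd prefer to show in search results,
    // even if something like /recipes?id=1337 serves the same content.
    canonical: `https://example.com/recipes/${recipe.slug}`
  };
}

console.log(headDirectivesFor(null));                 // { robots: 'noindex' }
console.log(headDirectivesFor({ slug: 'cupcakes' }));
```

Each framework then renders these values as `<meta name="robots">` and `<link rel="canonical">` tags in the page's head.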
Using structured data makes your pages eligible to be shown as rich results if the content falls within one of the many supported verticals, such as recipes, books, articles, products, and many more. Note, however, that implementing structured data does not guarantee that your page shows up as a rich result.

To implement this in Angular, we can build ourselves another small service to inject the necessary JSON-LD script tag into our pages. The service can then be used in our components, similar to the other services we've seen in this presentation so far. In React, you might add the information in your render method as part of your Helmet markup. For Vue.js applications, you can pass structured data into your template by specifying the JSON-LD markup in your data lifecycle hook and then piping it into the rendered HTML using the v-html mechanism. If you want to learn more about structured data, the available verticals, and the implementation details, check out our search gallery at goo.gle/search-gallery.

Oh, I promised a bonus tip. It's for those of you who use Vue.js. Vue Router lets you pick different strategies for how the routing URLs in your application are structured, so let's make sure that crawlers like Googlebot can actually crawl them. One way of configuring it is to use fragments. These have the upside of being trivial to set up on the server, as the server isn't really concerned with such URLs. That's because fragments, the things you see following a hash symbol, are ignored on the network level. They only address content that is part of the document at the URL before the hash symbol. Unfortunately, that means crawlers treat pages that carry all the routing information for specific content in the fragment as if they were all the same URL. To give you an example, a URL like /recipes/cupcakes is very clearly different from, say, /recipes or /.
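Coming back to the structured-data tip for a moment, here is a minimal sketch of what generating Recipe JSON-LD from component data might look like; the function name and the fields shown are illustrative assumptions, and the full set of required and recommended properties for each vertical is documented in the search gallery:

```javascript
// Build a schema.org Recipe JSON-LD string from page data.
// This is the kind of string you would inject into a
// <script type="application/ld+json"> tag in the page.
function recipeJsonLd(recipe) {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'Recipe',
    name: recipe.name,
    description: recipe.summary
  });
}

const jsonLd = recipeJsonLd({
  name: 'Chocolate Cupcakes',
  summary: 'Rich chocolate cupcakes topped with vanilla frosting.'
});
console.log(jsonLd);
```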
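And the fragment problem itself can be seen directly with the standard URL API: everything after the hash symbol never reaches the server, so a crawler requesting a fragment-routed URL only ever sees the path before it:

```javascript
// Fragment-based routing: the route lives after '#', invisible to the server.
const hashUrl = new URL('https://example.com/#/recipes/cupcakes');
// History-mode routing: the route is a real path the server can see.
const historyUrl = new URL('https://example.com/recipes/cupcakes');

console.log(hashUrl.pathname);    // '/'  — looks like just the home page
console.log(hashUrl.hash);        // '#/recipes/cupcakes' — client-side only
console.log(historyUrl.pathname); // '/recipes/cupcakes'
```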
Crawlers see these as distinct, different URLs. Fine. But what if our URL is /#/recipes/cupcakes? It's all in the fragment there, so it looks like the home page when the crawler takes the URL apart. Crawlers will just request the home page, and that means all the different recipes and all the other content are invisible to the crawler. Luckily, by selecting history mode in the router configuration and making sure that your server serves all URLs correctly, you can fix that. That way, crawlers can process all your URLs and see all of your content.

All right. So those were some of the basic tips for your JavaScript web apps. I'm sure that was quite a bit, and you probably want to dig deeper and learn more. So let me give you the takeaways again and a few more pointers. First and foremost, JavaScript and SEO aren't enemies. Most modern frameworks will just work out of the box. But it is crucial to test your implementation and keep an eye on how your site is doing in Google Search. Our testing tools and Google Search Console are fantastic for doing just that, and they're all free. We also put together a JavaScript SEO basics guide to get you and your coworkers started on this topic. Should you encounter any challenges, check out our troubleshooting guide for more information on the most common issues and solutions. We also have a YouTube video series on JavaScript SEO that you can check out on our YouTube channel, youtube.com/GoogleWebmasters. There are more episodes to come in the future.

Thank you very, very much for watching this video, and do remember to build cool stuff for your users. Have a great time. Stay safe. Bye-bye.