If you are building a website or web app using JavaScript, you should take a few basic steps to make sure your content is discoverable via search. Let's look at a few SEO techniques to help users find your content.

All of your pages should have a descriptive, helpful title that briefly states what the page is about. For example, on recipe pages, avoid a generic title such as "Barbara's Baking Block". Instead, put the name of the recipe in each page's title so it's clear what the page is about. You should also provide a specific description of what the page contains. For example, what makes this recipe special, or what are its main characteristics? This gives people something to help them identify the best page for their goal. Both of these can be done by adding title and meta description tags to your markup. You can check your pages for those tags by right-clicking, choosing Inspect, and then searching for `//title` and `//meta` in the Elements panel to find them.

If you do not see all of your content in the markup, you are probably using JavaScript to render your page in the browser. This is called client-side rendering and is not a problem per se. Rendering is the process of populating templates with data from APIs or databases, and it can happen either on the server side or on the client side. When it happens on the server, crawlers as well as your users get all the content as HTML markup immediately. In single-page apps, the server often sends the templates and JavaScript to the client, and the JavaScript then fetches the data from the backend, populating the templates as the data arrives. As explained in the first episode, indexing for JavaScript sites happens in two waves. Content that requires JavaScript to be fetched will only be indexed in the second wave, which might take some time.
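The title and meta description tags described earlier might look like this for a recipe page (the recipe name and wording are illustrative, not from the video):

```html
<!-- Hypothetical recipe page: names and text are placeholders -->
<head>
  <!-- Descriptive title: the recipe name, not a generic site name -->
  <title>Barbara's Classic Sourdough Bread Recipe</title>
  <!-- A short, specific description of what makes this page useful -->
  <meta name="description"
        content="A beginner-friendly sourdough recipe with a 12-hour overnight rise, five ingredients, and step-by-step photos.">
</head>
```

Both tags live in the document head, which is why searching the inspected markup for `//title` and `//meta` finds them quickly.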
In later episodes, we will cover how to overcome this, and often also improve the user experience and loading performance, by using techniques such as dynamic rendering, hybrid rendering, or server-side rendering for single-page apps.

Another important detail is to allow Googlebot to crawl pages from your website by linking between your pages properly. Make sure to include useful link anchor text, and use the HTML anchor tag with the destination URL of the link in the href attribute. Do not rely on other HTML elements such as div or span, or on JavaScript event handlers, for this. Not only will crawlers have trouble finding and following these pseudo-links; they also cause issues with assistive technology. Links are an essential feature of the web and help search engines and users find and understand the relationships between pages.

If you are using JavaScript to enhance the transitions between individual pages, use the History API with normal URLs instead of the hash-based routing technique. Using hashes, also called fragment identifiers, to distinguish between different pages is a hack that crawlers ignore. Using the JavaScript History API with normal URLs, on the other hand, provides a clean solution for the same purpose. Remember to test your pages and server configuration when using JavaScript to do the routing on the client side. Googlebot visits your pages individually, so neither a service worker nor JavaScript using the History API can be relied on for navigation between pages. Test what a user would see by opening URLs in a new incognito window: the page should load with an HTTP 200 status code, and all the expected content should be visible.

Using semantic HTML markup properly helps users better understand your content as well as navigate it more quickly. Assistive technologies like screen readers, as well as crawlers, rely on the semantics of your content. Use headings, sections, and paragraphs to outline the structure of your content.
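The History-API routing described earlier can be sketched as follows. This is a minimal illustration, not a production router: the route table and the `render()` function are assumed app code, not part of the video.

```javascript
// Minimal sketch of client-side routing with the History API instead of
// hash-based URLs. Route names and render() are illustrative placeholders.

// Pure helper: map a normal URL path to the view to render.
function resolveRoute(path) {
  const routes = {
    '/': 'home',
    '/recipes': 'recipe-list',
  };
  return routes[path] || 'not-found';
}

// Browser-only wiring: intercept same-origin link clicks and update the
// URL with pushState, so users and crawlers see clean, crawlable URLs.
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a');
    if (!link || link.origin !== location.origin) return; // leave external links alone
    event.preventDefault();
    history.pushState({}, '', link.pathname); // normal URL, no '#'
    render(resolveRoute(link.pathname));      // render() is assumed app code
  });

  // Handle the browser's back/forward buttons.
  window.addEventListener('popstate', () => {
    render(resolveRoute(location.pathname));
  });
}
```

Because every route is a normal URL, each page can also be requested directly from the server with a 200 status code, which is exactly what the incognito-window test above checks.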
By using HTML image and video tags with captions and alt attributes to add visuals, you help crawlers and assistive technology find this content and surface it to your users. In contrast, if you use JavaScript to generate your markup dynamically, make sure you aren't accidentally blocking Googlebot in your initial markup. As explained in the previous episode, the first round of indexing does not execute JavaScript. Markup such as a noindex meta tag in the initial payload can prevent Googlebot from running the second stage with JavaScript. Following these steps will help Googlebot understand your content better and make your content more discoverable in Google Search.

Hi Googlebot! Hi Googlebot! Did you see the new Webmasters video series? No, I did not! What? You missed out on so much stuff! Really? Yes! Oh no! How could we have prevented that? Well, subscribe and follow our videos.
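The noindex pitfall mentioned above looks like this in practice: if the initial server response contains a robots meta tag like the one below, Googlebot stops before the JavaScript-executing second wave, even if your scripts would later change the page (page content here is a hypothetical illustration):

```html
<!-- Initial server response for a client-side-rendered page -->
<head>
  <!-- PITFALL: this tag in the initial payload tells Googlebot not to
       index the page, so the JavaScript-executing second wave never runs -->
  <meta name="robots" content="noindex">
  <title>Barbara's Classic Sourdough Bread Recipe</title>
</head>
<body>
  <div id="app"><!-- content rendered later by JavaScript --></div>
</body>
```

If a page should be indexed, make sure the initial markup does not ship a noindex directive that you only intend to remove with JavaScript.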