We are giving a session about Drupal SEO Pitfalls and how to avoid them. The session is based on things we encountered on our own websites and also during SEO audits we did. This will be a dual session with me and Walter, who is joining remotely. So, Drupal SEO Pitfalls, the extended edition.

About us. First of all, me: I'm Brent. I'm a Drupal architect and Drupal trainer at Dropsolid. I also spoke at DrupalCon Amsterdam, Dev Days Lisbon and DrupalCamp Ghent 2018. So, how about my colleague?

Hi everyone, I'm Walter. I hope the audio is coming through well and you can hear me. Is that okay, Brent? Yeah, it's perfect. Okay, perfect. So, I'm joining from Belgium. I didn't make it due to the circumstances, as you all know. But I'm an SEO strategist and evangelist, also at Dropsolid, so I'm a colleague of Brent. And I like to speak about all things related to SEO. That's why I didn't want to miss out on the opportunity to call in with Brent. So, during this session, Brent will talk about the more in-depth technical approach and I will talk more about the SEO approach.

Maybe to annoy Walter a bit, let's do a show of hands. Who here is a developer? Okay, the biggest part of the people are developers. And the SEO specialists or marketing people in here? We have one person who is really into SEO, right? And the rest are site builders or Drupal users, I guess. Okay, so mostly developers, also some site builders and one SEO person. Okay, let's get right in. Can you go to the next slide? Yeah, okay, let's dive right in.

So, the first thing we're going to talk about is public entities. By default, entities might be publicly available on their own URL. For example, if you have a team module, it could generate public nodes for each separate team member. So, if you have, for example, 20 people in the team, that could result in 20 separate pages, one for each team member, while they are only really used for a team overview page. It could result in things you see at the bottom of the slide there: /node/42 or /taxonomy/term/42. So, this generates a page for each entity separately, and these are actually low-value, thin content pages. And these can be indexed by Google if you're not careful with your setup. So, really, you don't want these pages on your site, because they are a waste of resources: a waste of bandwidth, a waste of crawl budget, and a waste of database storage, among others. To recap: in general, you want the pages on your website to be as valuable as possible. And if you have a lot of these team content pages, then, well, that's not good for your user experience, and it's not good when Google crawls and indexes these low-value pages.

So, the solution to this is actually preventing visitors from accessing pages that shouldn't be accessed. This can be done with modules such as the Rabbit Hole module. When, as in the example Walter gave, you have a team page and every team member is a separate node: if the node page isn't designed, isn't themed or isn't meant to be accessed, take away the access to that page and it will not be indexed by Google either.
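Rabbit Hole lets you configure this per content type without any code, but as a rough sketch of what that behaviour boils down to (assuming Drupal 9 or later, a hypothetical custom module called mymodule and a hypothetical team_member content type), an event subscriber can simply 404 the standalone node page; it would still need to be registered in mymodule.services.yml:

```php
<?php

namespace Drupal\mymodule\EventSubscriber;

use Drupal\Core\Routing\RouteMatchInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\HttpKernel\Exception\NotFoundHttpException;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Returns a 404 for the standalone page of "team_member" nodes.
 *
 * The members are still rendered inside the team overview, but their
 * /node/42 pages no longer exist for visitors or for Googlebot.
 */
class TeamMemberPageSubscriber implements EventSubscriberInterface {

  protected $routeMatch;

  public function __construct(RouteMatchInterface $route_match) {
    $this->routeMatch = $route_match;
  }

  public static function getSubscribedEvents() {
    return [KernelEvents::REQUEST => 'onRequest'];
  }

  public function onRequest(RequestEvent $event) {
    // Only act on the canonical node page, e.g. /node/42.
    if ($this->routeMatch->getRouteName() !== 'entity.node.canonical') {
      return;
    }
    $node = $this->routeMatch->getParameter('node');
    if ($node && $node->bundle() === 'team_member') {
      throw new NotFoundHttpException();
    }
  }

}
```

In practice, Rabbit Hole gives you the same result (404, access denied or a redirect) per bundle through the UI, so this sketch is mainly to show what "taking away the page" means.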
Next: all pages should always be an editable entity if that is possible. Oftentimes, pages on your website will be generated based on other content, and these will not be an editable node in the backend. For example, a home page is an example of that, or overview pages like we talked about earlier. When a page isn't editable as a node, there is no easy way to edit the meta tag information or to configure the XML sitemap inclusion settings for this page. A content editor should be able to edit the meta title or the meta description, things like that. And if a page isn't editable as a node, then the content editor doesn't know how to do that, or he isn't able to do that.

There are actually two separate solutions for it. The first one is to use the Drupal core Layout Builder. With Layout Builder, you have a page, which is just a node, and you add certain blocks to it. So, you create your page, add a view to it, and then the end user can just edit the page and add some meta tags; not a problem, since all these pages are perfectly manageable for SEO and meta tags. The other solution, which is the one we use more at Dropsolid (we hope Layout Builder will be very good to use in the future, but at the moment there are still some problems with translations and other things), is that we actually use paragraphs for our pages, and we have one paragraph that uses either the Block Field or the Viewfield module. That's a module that allows you to reference a view block and use it in a field in the paragraph. So, you add your normal basic page, you add the paragraphs to it, and then you can add an overview. An added bonus of this is that the content editor can add blocks above or below the overview as well. So, there are two advantages.

Yeah, let's talk a bit about search, to be more specific: indexable internal search. By default, internal search result pages, or SERPs as Google likes to call them, are often indexable by search engines. Again, this results in low-value, thin content pages which can be indexed by Google. We're talking about internal search here, so the search functionality on your own website, not on Google. You don't want these low-value pages in the index, for the same reasons as before: you want all the pages on your website to be of high quality, and you don't want low-value, thin content pages to be crawled by Google. Google even explicitly mentions this in its guidelines: do not let Googlebot index internal search results. So, if Google tells us to do it, we should do it.

Yeah, there are different solutions. When you're just using views for search pages (which I said in previous slides not to do: use view blocks instead of view pages), or if they're already used on your website, you can install the Metatag module and the Metatag Views submodule. Then, in your views, you can edit the meta tags and check the boxes to prevent search engines from indexing the page and prevent them from following links on the page. So, as you can see, it is possible with a view page to edit the SEO and the meta tags, but it's easier on a separate node page. If you use a separate page, then you just have to install Metatag and it's the same procedure, only a bit easier. And when the meta tags are on view pages, they live in config, so your content editor can't change them. So, this is the advised solution.

Just a small recap: if you're not doing this, if you're not blocking Google from indexing those pages, then in theory your website contains an infinite amount of pages, because you can search for anything, which will all generate a URL, and if you're not blocking it from being indexed, then your website is an infinite amount of indexable pages, which is of course not something you want.
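If you'd rather enforce this in code than through the Metatag Views checkboxes, a minimal sketch could be a page-attachments hook that adds a robots meta tag on the search route. The route name used here is core's default node search page and is an assumption; substitute the route of your own search view or Search API page:

```php
<?php

/**
 * Implements hook_page_attachments_alter().
 *
 * Adds <meta name="robots" content="noindex, nofollow"> on internal
 * search result pages so Google doesn't index them.
 */
function mymodule_page_attachments_alter(array &$attachments) {
  $route_name = \Drupal::routeMatch()->getRouteName();
  if ($route_name === 'search.view_node_search') {
    $attachments['#attached']['html_head'][] = [
      [
        '#tag' => 'meta',
        '#attributes' => [
          'name' => 'robots',
          'content' => 'noindex, nofollow',
        ],
      ],
      'internal_search_noindex',
    ];
  }
}
```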
Okay, continuing on the subject, let's talk about indexed test environments and pages. Development and staging environments are often crawlable and indexable by search engines, sometimes because of configuration issues, sometimes because of laziness; we all know how it is. That's not true, of course. Temporary content like test text is also something that often shows up in search results. Now, I think everybody can agree that we don't want staging environments or testing pages, like paragraph testing pages, to be in the Google index, or to even be publicly available for that matter. There is a wide range of reasons for this. For example, apart from SEO, you don't want people searching for your site ending up on a staging website and ordering something from a staging webshop; these are all things that can and will happen if you don't look out for them. I added some screenshots here from real websites, real URLs that are in the Google index. As you can see, there are some dev and staging environments here, and there is some placeholder text appearing on the right-hand side, of a webshop it seems. So, these are all things to look out for, because you don't want people to start shopping on your staging environment, of course.

The solution: there are two solutions. If you have a test page on your live environment, it's more difficult to deny access to it. So the solution there, again, is the Metatag module: just check the checkbox "prevent search engines from indexing this page". That's one solution, for live environments. For staging, you might think the solution is robots.txt, but we will cover why it's not the solution later in this presentation. The best solution is actually preventing access to the pages with an .htpasswd. If you want to set it up, you can Google it pretty quickly; it's pretty easy to set up, and the client will have to enter a username and a password before they can access the dev or staging environment. This will make sure Google can't access the pages and can't index them. So, that's what you want.

Next up, let's talk about assets being blocked by the robots.txt file. Sometimes, public fonts or images or other website assets will be located inside a folder that's blocked from crawlers by the robots.txt file. You see a screenshot here at the bottom right; in this screenshot, for example, at the bottom you see "Disallow: /themes". This means that everything that is in the themes folder will not be accessed by Googlebot, or rather cannot be accessed by Googlebot. Now, in general, we always want Google to view and understand the page just like a regular website visitor sees it. If you're loading assets while telling Google it can't crawl or visit those assets, then Google can't really view your website the way a person views it. This can result in notifications in Google Search Console, as you see in the screenshot on the right here, saying "page partially loaded": not all page resources could be loaded. This can affect how Google sees and understands your page.
So, we want Google to be able to understand our page entirely, so we don't want these assets blocked in the robots.txt. You should make sure all your assets, like images, icons and public fonts, are in a publicly available folder, so they are not blocked. And you should also keep an eye on Google Search Console for notifications regarding blocked resources.

Okay, next topic: module overload. There are a lot of modules in Drupal, and of course you want to add a lot of them, since all these modules add good things, but we have to watch out a bit; you have to be careful. Modules that have an impact on page loading are of course less good. A module that's only used in the backend doesn't really hurt, but a module that loads something when rendering the page does: every single thing that has to be loaded when rendering the page slows down your site, and that's not good for Google. This is a quote from Google: "We encourage you to start looking at your site's speed, not only to improve your ranking in search engines, but also to improve everyone's experience on the internet." So, it's really important for Google that your pages load quickly. What's the solution? Actually, it's pretty simple: just think twice before you install a module, whether it has an impact on page load, and whether you really need it. A classic example is the AddToAny module, which adds the share-to-Facebook links. We actually don't use it; we use a custom solution, since the custom solution we use just adds some links, while AddToAny loads some JavaScript, which is quite heavy for what is just a simple link. So, that's something to consider: if you add a module, does it impact the page speed in a positive or negative way, and do I really need it?
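To give an idea of what that custom solution can look like (this is only a sketch, not our actual code, and mytheme is a hypothetical theme name), plain share links can be exposed as template variables and printed as ordinary anchor tags, with zero extra JavaScript:

```php
<?php

/**
 * Implements template_preprocess_node().
 *
 * Builds plain share links instead of loading a share widget's JavaScript.
 */
function mytheme_preprocess_node(array &$variables) {
  $node = $variables['node'];
  // Absolute URL of the node being shared.
  $url = $node->toUrl('canonical', ['absolute' => TRUE])->toString();

  $variables['share_links'] = [
    'facebook' => 'https://www.facebook.com/sharer/sharer.php?u=' . urlencode($url),
    'twitter' => 'https://twitter.com/intent/tweet?url=' . urlencode($url),
    'linkedin' => 'https://www.linkedin.com/sharing/share-offsite/?url=' . urlencode($url),
  ];
}
```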
Okay, let's talk a bit about redirect chains, or chain reactions. If you don't pay close attention to the redirect setup on your server, this can result in multiple redirects following on each other. These are also called redirect chains. Now, these redirect chains are really not user and search-engine friendly, because when Googlebot, the crawler from Google, visits a page which returns a redirect status code, so for example a 301 or a 302, then Googlebot will add that next page to the bottom of its to-visit list for that website. Googlebot does not crawl all pages of your entire website every single day; it has a list of pages of your website and it just goes down that list. If it stumbles upon a URL that has a 301 redirect, it adds the new URL at the bottom of that list. For small websites, this isn't really a big problem, but for big websites this can result in very slow indexation times, because when Googlebot visits a page it wants to index but it returns a 301, that new page is added at the bottom of the list. This is a screenshot from Link Redirect Trace. It shows that we visited a page on HTTP and without www in front of it. This resulted in a 301 redirect to the www version, which then redirected to the HTTPS version with www. It would be better to combine these two redirects into one, so that if you visit the first URL, you are immediately redirected to the last one. So, combine redirects wherever possible.

The solution for this: we all, or most of us, have probably seen these lines in the default .htaccess of Drupal. It's fine to just uncomment them for small sites, but if you have bigger sites, it's better to think, like, hey, what are the redirects on my site, and change this (that went a bit too quick) into something more like this: if it's HTTP without www, go straight to HTTPS with www; if it's HTTPS without www, go straight to HTTPS with www as well; and so on. Just combine those double redirects to get the middle step out, so there's only one redirect instead of two. Yeah, and the fewer redirects, the better. I think Google also said that from five or six redirect hops, it will start ignoring your URL entirely. So, just try to combine them whenever possible.
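As a sketch of what "something more like this" can look like in Drupal's .htaccess, assuming your canonical domain is the HTTPS www version and there are no reverse-proxy specifics to take into account, a commonly used single-hop variant is:

```apache
# Send http://, http://www. and https:// (non-www) to https://www.example.com
# in ONE 301, instead of chaining a www redirect and an HTTPS redirect.
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
RewriteRule ^ https://www.%1%{REQUEST_URI} [L,R=301]
```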
Let's talk a bit about security leaks that can have a big impact on your SEO. If you allow public file uploads somewhere on your website, this can result in lower organic traffic when it's not set up correctly. For example, if files can be uploaded without some form of authentication or captcha, it could result in spammers uploading thousands of files and all these files being indexed by Google. A specific example: if you have jobs on your website and you allow people to submit applications and upload their resume, and this form has no authentication or captcha, spammers could use that form to upload torrents, for example. That's what you see here: a screenshot of a website that contains a lot of torrents that were uploaded, which is, of course, something you do not want. And it's not only bad for your website and your server, it's also bad for your SEO.

This is a screenshot from Google Search Console. It says: one issue detected; pages affected by manual actions are either demoted in Google search results or removed entirely. This means that Google noticed there was spam on this website and said: yeah, we're going to put a manual action on your website. And it could mean your page or your entire website is removed from the Google index, which could have a huge impact on your business, of course. Now, these manual actions really are manual, in the sense that it is actually a Google employee who is reviewing your website, noticing the spam and adding the manual action. This is not something that's automated; it's really an employee who adds those things. And they're also very hard to get rid of. You can get rid of them, but it's pretty hard. So, whatever you do, you do not want manual actions. Just a quote from Google itself: "Google issues a manual action against a site when a human reviewer at Google has determined that pages on the site are not compliant with Google's webmaster quality guidelines." So, it's a reviewer, a Google employee, who slaps your site with a manual action. So, try to avoid these spam scenarios. This is a really important one.

Luckily, there's quite a simple solution: when you have a form on your website, just add a captcha. There are two modules that I can suggest: the reCAPTCHA module, which you probably all know, and Simple reCAPTCHA, which is less known. I'll quickly go over the advantages and disadvantages of both. The reCAPTCHA module is the widely used one; it supports Ajax forms, and it's fine if it's only on one contact page. But the problem with this one, even though it's used quite widely, is that it actually disables caching on every page where a reCAPTCHA is added. I don't know if a lot of people know this, but if you have, for example, a form that's on every page and you add the Google reCAPTCHA module, which everyone uses, then all pages that contain that form will receive the no-cache tag. So, that's a pretty big impact on the speed of your website. Because we detected this issue as well, one of our developers created another module: Simple reCAPTCHA. It's less advanced than the reCAPTCHA module; as said, it doesn't support Ajax forms, which the reCAPTCHA module does. But it does allow caching. So, if you have a form on every page, I would advise looking at Simple reCAPTCHA, so that those pages don't get no-cached.

The other thing, and this is more for GDPR and all those things, is to think: if I have an upload form, where should those files be put? Should they be stored in the public files, or is it possible to put them in private files? If it is, for example, a resume of a customer or a job applicant, then you don't want those publicly indexed. For example, in my resume, my name is in there, my address is in there, my telephone number is in there. I would hate it if that ended up on Google just because I uploaded it through a form on a website. So, when possible, try to use private files, and it's also advised to put those folders outside of your docroot. This should be common knowledge, but I prefer to say it one extra time.
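The private files part is a one-liner in settings.php; the exact path is an assumption, the point is that it sits outside the docroot so files are only served through Drupal's access checks:

```php
// settings.php: store uploads such as resumes outside the docroot, so the
// webserver never serves them directly and Google can never index them.
$settings['file_private_path'] = '../private';
```

The upload field itself then also needs its destination set to "Private files" instead of "Public files".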
Okay, so now let's talk about the difference between robots.txt and noindex. Contrary to popular belief, blocking a page using robots.txt and adding a noindex directive to the meta robots tag are not the same thing. robots.txt instructions impact crawling, not indexing. Adding a noindex directive, for example by using a meta tag on a page, impacts the indexing of that page, but not the crawling of it. So this is a very specific and important difference: robots.txt is for crawling, noindex is for indexing. Now, it sounds pretty weird, but Google is able to stumble upon a link on some external website that links to a page which is blocked by robots.txt, and Google will still index it. The result will most likely be a snippet in the search results without a title or description (because Google can't read the title or meta description), but the URL can still be in the index, even though it's blocked by robots.txt. Now, I can hear somebody thinking: wait a second, I understand it could happen in theory that the link is indexed even though it's blocked, but does it really happen in practice? And actually, yes, it does happen in practice in some cases. For example, here we have a screenshot of Google.org, which doesn't have a description, doesn't have a title, it just has a weird little looking URL. If we dig into why this is the case and look at their robots.txt, which you can see on the screenshot, it says Googlebot picked up some strange home page URLs somewhere, and then it disallows the entire /home folder, hoping that those weird URLs would disappear from the index. But, of course, like I just said, this only impacts the crawling and not the indexing. So, Google is still able to index it, and it has indexed it, but it's just not able to crawl it, and there's no title or description. So, this is not really a good way of trying to remove a page from the Google index. Actually, if they would want that page to be removed from the index, they should remove the entire robots.txt instruction, so this /home disallow should be removed, and they should simply add either a noindex directive to that page, or they could just 404 the page; and if a page returns a 404 or a 410, Google will remove it from the index after a while. So, if you want pages removed from the index, do not block them using robots.txt.

Next up, some Google Analytics horror. Of course, correct data in Google Analytics or another analytics tool is very important when people are analyzing their SEO efforts. Now, I know this is not something developers usually take a look at, but when you have just launched a site, or a couple of months after, you should pay close attention to sudden drops and spikes in Google Analytics data, because sometimes there might be a configuration issue. You don't want your customer calling you a couple of months later about an issue you could have prevented. So, pay close attention to sudden drops and spikes. For example, this is a screenshot showing users of a website, and we can see that at the beginning of January 2019 there was a slight, or rather big, increase in the number of users. Now, this could lead the customer to say: well done, our content efforts, our SEO efforts have finally paid off. But in this case that really wasn't what was happening. What was happening was that the EU Cookie Compliance module was updated. The update of this module resulted in each page view starting a new session, as long as website visitors didn't accept the cookies. So, for example, if I browsed to the website and visited 10 pages, this would result in 10 sessions of one page each, instead of one session of 10 pages. So, this really messes with the data.

Now, there are a couple of possible solutions for this. The first solution is this one: to be GDPR compliant, we anonymize the visitor's IP address. This is done either in Google Analytics or in Google Tag Manager, depending on what you use for tracking. The left screenshot is Google Analytics, the one below it is Google Tag Manager, and we just set the anonymize IP setting to true. This makes sure that the last part of the IP address is anonymized, so it is no longer personally identifiable information, which is good for GDPR. Second, in the other step of this two-step fix, we whitelist the Google Analytics cookies in the Drupal cookie module. There you can see on the right-hand side we have three whitelisted cookies, and these are the Google Analytics cookies. This whole thing combined makes sure that Google Analytics is always executed, but the IP address is anonymized.

There is also another possible solution, which I will quickly explain. The second solution is more the developer's approach. There is an update, or a patch, for the Google Analytics module to integrate with the EU Cookie Compliance module. It will only start tracking as soon as people have clicked the OK button of the cookie consent. This will of course reduce your tracked page visits, since I don't know how many people really click the OK button; but as long as they don't click it, they are not being tracked. So, this is a more GDPR-compliant solution, but I don't think it's the best for your SEO insights, because ideally you want as much data as possible. So, anonymizing is often a better solution, but it can depend on your own situation.
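If the tracking snippet is placed by the Google Analytics Drupal module rather than through Tag Manager, the same anonymization can be toggled in that module; as a sketch, with the privacy.anonymizeip key being an assumption, so check it against your module version's config schema:

```php
// settings.php config override (assumed key): always anonymize the last
// part of the visitor's IP address, whatever the exported config says.
$config['google_analytics.settings']['privacy']['anonymizeip'] = TRUE;
```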
Now, let's take a look at untranslated content on your website. If you have a multilingual website, it should be translated 100% whenever that is possible. Because if content is not translated, the site will automatically show content in the default language, which is often English. This could result in English content being shown on the French section of the website, for example. Now, this is not ideal for a couple of reasons: for one, it could scare away non-English speakers, and it could also confuse Google. Because if you have a multilingual site and the French section contains untranslated content, then it could be the case that your header is in French and your footer is in French, but the content is in English, for example. That gives mixed signals to Google, because Google will wonder: is this an English page or is it a French page? And this will have an impact on your SEO rankings. So when possible, you should translate it, of course.

The solution depends, of course, on the situation. For this, you have to sit together with your client. The first question, of course: can you translate everything? Is that an option? If not, then maybe the solution is to deny access to untranslated pages. This can be done with a module such as the Content Language Access module. But preferably the client translates everything; when a client is really lazy, or it's not possible, then this might be the better solution. Yeah, to talk a little bit more about that first case: if you can't translate everything, you could also add a line of content to that page, in French of course, saying the page is not available in French, and add a link to the English version. So that's an option as well. Maybe that way you can follow up in Google Analytics and see if people visit that page, if people have an interest in it, and if yes, it might be a good idea to translate it.
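The Content Language Access module does this through configuration; purely as an illustration of the idea (a minimal sketch, with mymodule hypothetical and without handling special cases), the check could look like this:

```php
<?php

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Session\AccountInterface;
use Drupal\node\NodeInterface;

/**
 * Implements hook_node_access().
 *
 * Denies access to nodes that have no translation in the language the
 * visitor is currently browsing in, so English fallback content never
 * shows up on, say, the French section of the site.
 */
function mymodule_node_access(NodeInterface $node, $op, AccountInterface $account) {
  if ($op === 'view' && !$account->hasPermission('bypass node access')) {
    $langcode = \Drupal::languageManager()->getCurrentLanguage()->getId();
    if ($node->isTranslatable() && !$node->hasTranslation($langcode)) {
      return AccessResult::forbidden()
        ->addCacheContexts(['languages:language_interface'])
        ->addCacheableDependency($node);
    }
  }
  return AccessResult::neutral();
}
```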
So, next up, a couple of rapid-fire best practices. For tracking, you should use the Google Analytics module or the Google Tag Manager module; you should not use both of these modules together. If you do use them together, it makes your tracking setup very prone to errors, and we have seen time and time again that this happens. For example, you see a screenshot here from Google Analytics, and in this case, on the 6th or 7th of May, the Google Tag Manager module was added to the website and Google Analytics tracking was also added within Google Tag Manager. Basically this means that the pageview data was sent twice to Google Analytics: once because of the Google Analytics module, and once because of the tracking setup within Google Tag Manager. So if a client sees this, they might think: oh yes, our pageviews went up, we did a good job. But actually you are just sending twice the amount of pageviews to Google than you need to. So, be careful with this and use Google Analytics or Tag Manager, not both. In most cases Google Tag Manager is the best way to go, since you then consolidate all your tracking-related information in one place. The second one: when possible, aggregate and minify your CSS and JavaScript files. It makes your pages a little lighter, and Google likes fast websites. Yeah, and also make sure each page has a correct and well-configured canonical tag.

Now, depending on the complexity of your website, you might want to let an SEO specialist review all your canonical tags, because there can be a lot of small issues depending on your setup. To be honest, even for small websites I advise you to really take a close look at your canonical URLs, make sure they are set up correctly, and just let a specialist have a look at them. There are some specific best practices here. You should always use absolute URLs in the canonical tag. This is something Google actually explicitly says: use absolute URLs, not relative ones. You should also make sure that all these canonical URLs return a status code of 200. You don't want to set the canonical, for example, to a www version if that URL automatically redirects to the non-www version. Another example: you have an HTTPS website, but your canonicals are set to the HTTP version, the non-secure version. These will redirect to another page, so your canonical will not return a 200. So, make sure all your canonicals return status code 200. Also, in most cases you will want to omit URL parameters from the canonical URL. For example, if you have a webshop with some facet filtering on price or on color or on brand, these facet filters probably modify your URL; you don't want these parameters to be added to the canonical URL of that page. Now, there are some exceptions. Let's say you have a webshop and one of your most important product categories is, for example, white sneakers, and these are webshop facet filters as well. Then you might want the white-sneakers parameters to really be in the canonical, so Google can index that page separately, since it's an important product category. So, there are some exceptions, but in general you want to omit the URL parameters. And when that's the case, maybe also have a look at the Facets Pretty Paths module, to not just use query parameters but really use decent URLs.

The next rapid-fire one: the Pathauto module. Everyone should know it, everyone should be using it, so I'm not going to cover it in detail. But just don't forget to translate your patterns as well. This, for example, is a Dutch website and the URL is /products. Well, in Dutch (I don't know how many people here know Dutch) it isn't "products" but "producten".

Next, you should follow up on the amount of pages indexed by Google. This can be done using Google Search Console. Here you can see in this Coverage report that for this specific website there are 263,000 valid URLs indexed by Google. Now, if this seems a bit high, then maybe a Rabbit Hole setup is needed to remove pages from the index. If we look back at our first and second points, we talked about the team overview page and every team member being indexable and having a separate page; maybe that is the issue here if your page count seems too high, and maybe you need to adjust your Rabbit Hole setup. If it seems too low, if this screenshot shows too few pages in your eyes, then maybe some important pages are noindexed. Maybe there is a noindex set somewhere that is noindexing a lot of pages, so you could take a look there. But really keep an eye on the count here, in your Coverage report, and see how many pages are indexed and whether that roughly matches what you thought it would be.

Okay, the sitemap: take a look at your sitemap. It should contain correct URLs, but a lot of websites we've seen have "http://default" in the URLs in the sitemap. This can be due to a misconfiguration of the sitemap. Some more information can be found on this page, but I'm also going to cover the best solutions we found. Apparently this slide didn't make it, so I'll just tell it without showing it. The first solution: if possible, add your base URL to your settings.php. But in some cases that isn't possible. If that's the case... oh, there is a slide after all. The other solution is: if you're executing cron, don't forget to add the URI to your cron command. If you add this, the sitemap will generate without "default". If you forget to do those two things, 95% of the time your sitemap will contain "default" in the URLs, which is of course not a good practice. Yeah, that was the screenshot shown here: if you submit a sitemap to Google, or Google finds a sitemap, and it contains "http://default" for example, then it will show something like this in the sitemaps report: submitted URL not found (404). This is not something you want. This is a screenshot of Google Search Console, by the way.
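A sketch of those two fixes, assuming the Simple XML Sitemap module and Drush; the domain is a placeholder and the config key can differ per sitemap module:

```php
// settings.php: force the base URL used when sitemaps are generated from
// cron/CLI, so links don't fall back to "http://default".
$config['simple_sitemap.settings']['base_url'] = 'https://www.example.com';

// And/or pass the URI explicitly when running cron from the command line:
// drush cron --uri=https://www.example.com
```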
Next up, you want to create some checklists. This is common knowledge, I guess, but you should keep checklists handy for things that have to be done before you go live, things you have to check right when the site goes live, and things to check while the site is live, for a certain amount of days. So, for example, the things in Google Analytics and things like that: follow those up once every seven days or so in the first period after launch. And we go over these checklists both on the dev side and on the SEO side. These are important things to follow up on, because small things can have a big impact on SEO. So, follow up on these.

Okay. Make sure you use all the available tools to increase the speed of your website. As said a couple of times, Google takes the speed of your website into account. The default Drupal caching should be enabled for everyone, but keep in mind what we said about Simple reCAPTCHA and Google reCAPTCHA, because the latter breaks all of this. The other solutions are Varnish, Memcached or Redis: if possible for your website, try to install one or multiple of them. And the Advanced Aggregation module also gives a speed boost. But for most sites, the default Drupal caching will do; for bigger sites, I'd advise one of the other ones as well.
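For the Redis option, the wiring also lives in settings.php; a minimal sketch assuming the Redis contrib module is installed, the PhpRedis extension is available and a Redis server runs locally:

```php
// settings.php: use Redis as the default cache backend (Redis contrib module).
$settings['redis.connection']['interface'] = 'PhpRedis';
$settings['redis.connection']['host'] = '127.0.0.1';
$settings['cache']['default'] = 'cache.backend.redis';
```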
I don't know if there are any questions. Question: If you use the Pathauto module, and you mentioned using the Global Redirect module, so if you go to a URL like /node/123, are you then redirected to the alias made by Pathauto? Because having both present isn't the best for SEO. I'm not sure I'm following the complete question, but when you set up Pathauto, I think it will automatically set the canonicals and the URLs correctly, so when you go to the node page it will show the Pathauto-generated alias as the canonical. I think the canonical shows the correct one, not the node path but the real path, right? Yeah, the canonical should never contain /node/..., it should contain the real URL. When you set up Pathauto, it will do that correctly for you. I don't know if that answers your question already. Yeah, it's just that you mentioned the Global Redirect module, but both /node/123 and the Pathauto alias are accessible. Yeah, both are accessible, but the node path is always accessible, and the Pathauto alias will be there if you set it up correctly.

I don't know if there are other questions. Just to clarify: you're talking about the possibility of duplicate content, you want just one of the URLs? Yeah, then the canonical is very important. That's right, Walter: to prevent duplicate pages, the canonical is the most important part. Sorry, I didn't get the question. To prevent duplicated pages. Yeah, the canonical. So, duplicate content is when your website contains the same blocks of text on multiple pages, and if you have a couple of pages which contain exactly the same content, then you can say to Google: hey Google, I know I have duplicate pages, but you should only select this one page, and that one page is then the canonical URL. So the canonical URL is an indication for Google, saying: this is the URL you want indexed.

Someone else with another question? Yeah. How do you suggest handling pages that get removed from the website? If something is no longer valid in a month, or it's just something that's no longer there: do you simply remove it from the website and let it return a 404 response, or do you redirect it to something else? Well, best practice is to redirect to another one, just for the usability of the site, but it depends on what kind of page it was. I think if it's, say, a page with opening hours and you have another page that has the same information, then of course you try to redirect; but if the shop has been closed down, then you don't really need to redirect, or you redirect to the home page maybe. So it kind of depends on what kind of page you're talking about. But the default is to redirect. Walter, can you weigh in?

Maybe I can drop in. So, if I understand correctly, because the volume was pretty low, you were asking what the correct or best approach is when content is no longer on the website: either to remove it or to redirect it, right? Yeah, indeed. So that indeed depends on the type of content. Say, for example, you have a webshop with a product category that you removed, and you decided not to sell these products anymore: then you can just 404 the page, because it's not relevant anymore to rank for it in Google, since you don't sell these products anymore and you don't have another category that looks like it, so you can just 404 it. But say, for example, you stopped selling men's shoes but you still sell women's shoes: you could redirect all the men's shoes URLs to your women's shoes, for example, because it's a relevant page, they have something to do with each other. So if the pages are contextually relevant, you can usually redirect, but if not, you can just remove the content. Another example is when you, for example, place a news item on your website saying "we will be closed during the summer months of 2020". That page is no longer relevant once you get to the next year, 2021 for example, so you could just redirect the old URL to the new URL. Then you don't have to remove the information about your opening hours or your vacation period; you just redirect to the latest page containing the latest information. So: redirect whenever possible and whenever relevant, and if it's content that is really no longer relevant, then you can just 404 the page.

Someone else, another question? Not really? Then everyone, thanks for listening. If you have other questions, feel free to contact us on LinkedIn, through mail, or on Twitter as well. And I'll be around here; Walter obviously not, maybe another year. But everyone, thanks for your attention.