
The Complete Technical SEO Checklist for Better Rankings in 2021

November 02, 2021

Improving your technical SEO is the first step in any complete SEO strategy. Ensuring that your website is in tip-top shape helps lead to more organic traffic, ranking keywords, and conversions.

No matter what industry your brand or company is in, the principles of technical SEO have never been more critical. Google announced that its Page Experience update, which adds page experience signals as a ranking factor, would begin rolling out in May 2021.


The Ultimate Technical SEO Checklist (2021)

Optimize your page experience and Core Web Vitals

The new page experience signals combine Core Web Vitals with Google’s existing search signals, including mobile-friendliness, safe-browsing, HTTPS security, and intrusive interstitial guidelines.

To refresh your memory, Google’s Core Web Vitals are made up of three factors:

  • First Input Delay (FID) – This metric measures the delay between a user’s first interaction with the page and the moment the browser can respond to it. A page should have a FID of less than 100 ms to provide a good user experience.
  • Largest Contentful Paint (LCP) – LCP measures the loading performance of the most prominent contentful element on the screen. It should load within 2.5 seconds to provide a good user experience.
  • Cumulative Layout Shift (CLS) – This measures the visual stability of elements on the screen, i.e., how much content unexpectedly shifts as the page loads. Ideally, sites should maintain a CLS score of less than 0.1.

In Google Search Console, the Core Web Vitals report shows which URLs have potential issues based on these ranking factors.

There are a number of tools that can help you improve your site speed and Core Web Vitals, including Google PageSpeed Insights, Lighthouse, and Webpagetest.org. Some optimizations you can make include:

  • Implementing lazy-loading for non-critical images
  • Optimizing image formats for the browser
  • Improving JavaScript performance
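
If you want to monitor these metrics programmatically, the PageSpeed Insights API exposes the same field data that the tools above report. Below is a minimal Python sketch, assuming the requests library is installed and using a placeholder page URL; the response field names reflect the current v5 format and may change, and an API key (the key parameter) is recommended for anything beyond occasional checks.

  import requests

  # Query the PageSpeed Insights v5 API for Core Web Vitals field data.
  # "https://www.example.com/" is a placeholder; swap in your own page.
  PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
  params = {"url": "https://www.example.com/", "strategy": "mobile"}

  response = requests.get(PSI_ENDPOINT, params=params, timeout=60)
  response.raise_for_status()
  data = response.json()

  # loadingExperience holds real-user (CrUX) field metrics when available;
  # the exact keys below are assumptions based on the current response format.
  metrics = data.get("loadingExperience", {}).get("metrics", {})
  for key in ("LARGEST_CONTENTFUL_PAINT_MS",
              "FIRST_INPUT_DELAY_MS",
              "CUMULATIVE_LAYOUT_SHIFT_SCORE"):
      metric = metrics.get(key, {})
      print(key, metric.get("percentile"), metric.get("category"))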

Look for any crawl errors on your site

The second thing you should do is make sure your site doesn’t have any crawl errors. When a search engine tries to reach a page on your website but fails, it causes a crawl error.

Many tools can help you do this, including Screaming Frog, DeepCrawl, and seoClarity. After you’ve crawled the site, look for crawl errors. This can also be done with Google Search Console.

When scanning for crawl errors, you’ll want to:

  1. Correctly implement all redirects with 301 redirects.
  2. Go through any 4xx and 5xx error pages to figure out where you want to redirect them to.
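
One lightweight way to surface 4xx and 5xx responses between full crawls is to request each known URL and record its status code. A minimal Python sketch, assuming the requests library and a hypothetical list of URLs to check:

  import requests

  # Hypothetical list of URLs you already know about (e.g. from your sitemap).
  urls = [
      "https://www.example.com/",
      "https://www.example.com/old-page/",
  ]

  for url in urls:
      try:
          # HEAD is usually enough to read the status code without the body.
          resp = requests.head(url, allow_redirects=False, timeout=10)
          status = resp.status_code
      except requests.RequestException as exc:
          status = f"request failed: {exc}"

      # Flag client and server errors so they can be redirected or fixed.
      if isinstance(status, int) and status >= 400:
          print(f"ERROR  {status}  {url}")
      else:
          print(f"OK     {status}  {url}")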

Fix broken internal and outbound links

Users and search engines can both be adversely affected by a poor link structure. When people click on a link on your website and it doesn’t take them to the right URL, it can be frustrating for them.

You should check for a few different factors:

  • Links that 301 or 302 redirect to another page
  • Links that go to a 4XX error page
  • Pages that don’t have any internal links pointing to them (orphaned pages)
  • A deep internal linking structure, where important pages sit many clicks away from the homepage

In order to fix broken links, you should update the target URL or remove the link altogether if it is no longer active.
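
To find broken links on a given page, you can parse its anchors and check where each one leads. A minimal Python sketch, assuming the requests and beautifulsoup4 libraries are installed and using a placeholder page URL:

  import requests
  from bs4 import BeautifulSoup
  from urllib.parse import urljoin

  page_url = "https://www.example.com/"  # placeholder page to audit
  html = requests.get(page_url, timeout=10).text
  soup = BeautifulSoup(html, "html.parser")

  for a in soup.find_all("a", href=True):
      link = urljoin(page_url, a["href"])           # resolve relative links
      if not link.startswith(("http://", "https://")):
          continue                                   # skip mailto:, tel:, etc.
      try:
          resp = requests.head(link, allow_redirects=False, timeout=10)
      except requests.RequestException:
          print(f"UNREACHABLE        {link}")
          continue
      if resp.status_code >= 400:
          print(f"BROKEN   {resp.status_code}  {link}")
      elif resp.status_code in (301, 302):
          print(f"REDIRECT {resp.status_code}  {link} -> {resp.headers.get('Location')}")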

Remove any duplicate or thin content

Make sure your site has no duplicate or thin content. Many factors can lead to duplicate content, including faceted navigation, having multiple versions of the site online, and scraped or copied content. You should only allow Google to index one version of your site.

You can fix duplicate content by implementing the following strategies:

  • Setting up 301 redirects to the primary version of the URL. So if your preferred version is https://www.abc.com, the other three versions should 301 redirect directly to that version.
  • Applying noindex and canonical tags to duplicate pages.
  • Setting the preferred domain in Google Search Console.
  • Setting up parameter handling in Google Search Console.
  • Deleting any duplicate content where possible.
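
How you implement those 301 redirects depends on your server or framework. As one illustration, here is a minimal Python sketch that consolidates the http/https and www/non-www variants in a Flask app; the preferred host is an assumption, and the same idea can be expressed as an Nginx, Apache, or CDN rule instead.

  from flask import Flask, redirect, request

  app = Flask(__name__)
  PREFERRED_HOST = "www.abc.com"  # the preferred version from the example above

  @app.before_request
  def force_canonical_host():
      # Rebuild the URL on the preferred scheme and host, then 301 to it,
      # so http:// and non-www variants all collapse to one version.
      if request.host != PREFERRED_HOST or request.scheme != "https":
          canonical = f"https://{PREFERRED_HOST}{request.full_path.rstrip('?')}"
          return redirect(canonical, code=301)
      return None

Behind a proxy or load balancer, the scheme check usually also requires trusting the proxy headers (for example via Werkzeug’s ProxyFix middleware).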

Migrate your site to HTTPS protocol

Google announced in 2014 that the HTTPS protocol was a ranking factor. In 2021, if your website is still using HTTP, it’s time to upgrade.

The HTTPS protocol protects your visitors’ data by ensuring that their information is encrypted to prevent hacking and data leaks.
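
After migrating, it’s worth confirming that your old HTTP URLs permanently redirect to their HTTPS counterparts. A small Python sketch using the requests library, with a placeholder domain:

  import requests

  http_url = "http://www.example.com/"  # placeholder; your old HTTP address

  resp = requests.get(http_url, allow_redirects=True, timeout=10)

  # resp.history holds the redirect chain; the first hop should be a 301
  # and the final URL should be served over HTTPS.
  hops = [(r.status_code, r.url) for r in resp.history]
  print("redirect chain:", hops)
  print("final URL:", resp.url)
  print("uses HTTPS:", resp.url.startswith("https://"))
  print("permanent redirect:", bool(resp.history) and resp.history[0].status_code == 301)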

Make sure your URLs have a clean structure

Google states that “a site’s URL structure should be as simple as possible.” Complex URLs can cause problems for crawlers by creating unnecessarily high numbers of URLs that point to the same or similar content on your site.

As a result, Googlebot may be unable to completely index all the content on your site. Examples of problematic URLs include:

Sorting parameters. Several large shopping sites offer multiple ways to sort the same items, resulting in a much higher number of URLs. For example:

http://www.example.com/results?search_type=search_videos&search_query=tpb&search_sort=relevance&search_category=25

Irrelevant parameters in the URL, such as the referral parameters. For example:

http://www.example.com/search/noheaders?click=6EE2BF1AF6A3D705D5561B7C3564D9C2&clickPage=OPD+Product+Page&cat=79

In order to shorten URLs, you should trim these unnecessary parameters.
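
A small sketch of what that trimming can look like, using only the Python standard library; the list of parameters to drop is an assumption and should match whatever tracking or sorting parameters your own site actually uses:

  from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

  # Parameters assumed to add no unique content (tracking, sorting, clicks).
  UNNECESSARY_PARAMS = {"click", "clickPage", "search_sort", "utm_source"}

  def clean_url(url):
      parts = urlparse(url)
      # Keep only the query parameters that actually change the content.
      kept = [(k, v) for k, v in parse_qsl(parts.query)
              if k not in UNNECESSARY_PARAMS]
      return urlunparse(parts._replace(query=urlencode(kept)))

  print(clean_url(
      "http://www.example.com/results?search_type=search_videos"
      "&search_query=tpb&search_sort=relevance"
  ))
  # -> http://www.example.com/results?search_type=search_videos&search_query=tpb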

Ensure your site has an optimized XML sitemap

XML sitemaps tell search engines about the structure of your site and which pages you want crawled and indexed.

An optimized XML sitemap should include:

  • Updated content on your site (recent blog posts, new products, etc.).
  • Only 200-status URLs.
  • No more than 50,000 URLs. In order to maximize your crawl budget, you should have multiple XML sitemaps if your site has more URLs.
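
A sitemap that follows those rules can be generated with a short script. Here is a minimal sketch using Python’s standard library, with placeholder URLs and last-modified dates:

  import xml.etree.ElementTree as ET

  # Placeholder 200-status URLs and last-modified dates for illustration.
  pages = [
      ("https://www.example.com/", "2021-11-01"),
      ("https://www.example.com/blog/technical-seo-checklist/", "2021-11-02"),
  ]

  urlset = ET.Element("urlset",
                      xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
  for loc, lastmod in pages:
      url = ET.SubElement(urlset, "url")
      ET.SubElement(url, "loc").text = loc
      ET.SubElement(url, "lastmod").text = lastmod

  # Write the finished sitemap with an XML declaration.
  ET.ElementTree(urlset).write("sitemap.xml",
                               encoding="utf-8", xml_declaration=True)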

You should exclude the following from the XML sitemap:

  • URLs with parameters
  • URLs that redirect to another URL or contain canonical or noindex tags
  • URLs with 4xx or 5xx status codes
  • Duplicate content

You can check the Index Coverage report in Google Search Console to see any index errors with your XML sitemap.

Make sure your robots.txt file is optimized

The robots.txt file contains instructions for search engine robots on how to crawl your website.

Since every website has a limited crawl budget, it’s important to keep crawlers focused on your most important pages.

At the same time, make sure that your robots.txt file doesn’t accidentally block anything important from being crawled.

Here are some examples of URLs you should disallow in your robots.txt file:

  • Temporary files
  • Admin pages
  • Cart & checkout pages
  • Search-related pages
  • URLs that contain parameters

The robots.txt file should include the location of the XML sitemap. Test your robots.txt file using Google’s robots.txt tester.
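You can also sanity-check rules locally before deploying them. Below is a minimal sketch using Python’s built-in robots.txt parser against an example rule set; the disallowed paths mirror the list above and are illustrative only.

  from urllib.robotparser import RobotFileParser

  # Illustrative robots.txt content mirroring the list above.
  robots_txt = """\
  User-agent: *
  Disallow: /admin/
  Disallow: /cart/
  Disallow: /checkout/
  Disallow: /search
  Sitemap: https://www.example.com/sitemap.xml
  """

  parser = RobotFileParser()
  parser.parse(robots_txt.splitlines())

  # Check which paths a crawler would be allowed to fetch.
  for path in ("/blog/technical-seo-checklist/", "/admin/login", "/cart/"):
      allowed = parser.can_fetch("Googlebot", f"https://www.example.com{path}")
      print(f"{'ALLOW' if allowed else 'BLOCK'}  {path}")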

Add structured data or schema markup

Structured data helps give Google context about what a page is about and helps your organic listings stand out on the SERPs.

Schema markup is one of the most common types of structured data. Different schema markups exist for structuring data for people, places, organizations, local businesses, reviews, and so much more.
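
As an illustration, here is a small Python sketch that builds a JSON-LD Organization snippet of the kind you would place in a page’s head element; the organization details are placeholders.

  import json

  # Placeholder Organization details; schema.org defines many other types
  # (LocalBusiness, Person, Review, Product, and so on).
  organization = {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Company",
      "url": "https://www.example.com/",
      "logo": "https://www.example.com/logo.png",
      "sameAs": [
          "https://twitter.com/example",
          "https://www.linkedin.com/company/example",
      ],
  }

  # Wrap the JSON-LD in the script tag that goes into the page's <head>.
  snippet = ('<script type="application/ld+json">\n'
             + json.dumps(organization, indent=2)
             + "\n</script>")
  print(snippet)

You can validate the resulting markup with Google’s Rich Results Test before publishing it.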
