Building a new website? 10 reasons why you should consider SEO from the start!

Posted: 31 Jul 2014 08:11 AM PDT

Whether you're just starting out and building a website from scratch, or you're looking to give your current website a makeover, pulling your SEO team in from the start will save you time in the long run.

I see it time and time again: a website is built, then the SEOs are pulled in to get to work… backtracking and undoing things so they can be done properly. I'm not saying that all SEOs are great at building sites, but most developers have only a basic knowledge of SEO, so getting experts from both sides working together from the planning stages is crucial.

One of the most frustrating things about being left out until the later stages is that extra work is often required to fix the SEO holes, so the client has to make sacrifices due to budget constraints, which often leads to problems further down the line.

Here I cover some of the most common issues that can be easily avoided by pulling in your SEO experts as early as possible.

#1 Disallow robots

This is one of the most common issues that I see – the site being developed is left open for Google to crawl and list in the search results. If you have a live site with much of the same content, this could cause a huge duplicate content issue.

Adding a disallow instruction to the development site's robots.txt file will stop the search engines from crawling it. Don't forget to remove the disallow line when you're ready to go live!
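As a rough illustration (adapt it to your own setup), the robots.txt on a development or staging site only needs two lines to block all well-behaved crawlers:

    # Block all crawlers while the site is in development – remove before launch!
    User-agent: *
    Disallow: /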


#2 Test Pages

When a site is being developed, it is common practice to include example text to demonstrate how content will appear on the pages. However, another common issue occurs when test pages are not blocked from the search engines, sometimes causing them to be found by the crawlers, indexed and pulled into the search results.

These pages provide no value and having several of them indexed could bring down the overall authority of a site. Placing a 'noindex' tag on test pages will prevent the major search engines from indexing them.
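As a minimal sketch, the tag sits in the <head> of each test page (the optional 'nofollow' value also tells crawlers not to follow the links on that page):

    <meta name="robots" content="noindex, nofollow">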

#3 International Sites

If your website targets more than one country and/or language then you need to be sure that the right version of your pages is displayed to visitors in the search results. The 'hreflang' tag allows you to specify all of the alternate versions of your pages, highlighting which version should be displayed based on location and language. It also helps to prevent duplicate content issues, where the same or very similar content is used across the regional versions. A great tool for generating the correct code is the hreflang tags generator tool, created by Aleyda Solis.

The 'hreflang' tags can be placed within the code on each page, or within the XML sitemap. A default version can also be specified, using rel="alternate" hreflang="x-default", to highlight which version to display if users are searching from a language or country that has not been targeted.
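As a simplified sketch (the folder names are illustrative and match the example.com folders mentioned below), the tags in the <head> of each version might look like this:

    <link rel="alternate" hreflang="en" href="http://example.com/en/" />
    <link rel="alternate" hreflang="de" href="http://example.com/de/" />
    <link rel="alternate" hreflang="it" href="http://example.com/it/" />
    <link rel="alternate" hreflang="x-default" href="http://example.com/" />

The same set of tags should appear on every regional version of the page, with each page referencing all of the alternates including itself.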

It is also recommended that you use the geographic targeting option within Google and Bing's Webmaster Tools to specify the country that the site is targeting. If you're using folders to split up regions (example.com/en; example.com/de; example.com/it etc.) then you can register each folder as a separate site in Webmaster Tools and change the associated country setting to match.

Be careful when considering automatically redirecting users to the appropriate version of your site based on their location. This can cause all sorts of problems, including search engines being redirected to a single version of your site, causing only this version to appear in search.

#4 Keyword Research

Keyword research is one of the most useful and eye-opening tasks of all. It can help to shape your site and provide insights on where best to focus your efforts.

Understanding exactly which terms are being used to search within your industry and the associated volumes can help you to make informed decisions, allowing you to decide which keywords to target. Plan exactly which keywords you wish to target for each page and avoid optimising several pages for a single keyword, as this will confuse the search engines and dilute your efforts.

There are a number of tools that can help you with keyword research, including Google's Keyword Planner, Webmaster Tools and SEMrush, to name a few.

#5 Site Architecture

The golden rule when planning your site architecture is to make it as easy as possible for both users and search engines to navigate through your pages, while keeping the number of clicks required for users to find what they are looking for to a minimum. This is not only to give the user a good experience, but also to prevent authority from being diluted. Each step in the tree passes less value to the next level, leaving pages at the bottom of the tree with little authority.
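As a rough illustration (the section names here are hypothetical), a shallow structure keeps every important page within a couple of clicks of the homepage:

    Homepage
      > Category page        (1 click from home)
          > Product page     (2 clicks from home)
      > Blog
          > Blog post        (2 clicks from home)

If those same product pages sat five or six levels deep instead, very little authority would trickle down to them by the time users or crawlers arrived.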


The search engines are also less likely to crawl through multiple levels of a site, so some of the deeper pages may not be indexed at all and therefore not appear in the search results.

The keyword research will help you to decide which words to focus on when building your pages and categories, but it will also allow you to see which pages are more likely to attract higher search volumes. Based on these findings, you can draw up a navigation plan in order to pass the most value to your key pages.

#6 URL Structure

A clean, consistent URL structure will help users and search engines to understand what to expect from your pages while preventing potential duplicate content issues.

A few rules to follow:

  • Use hyphens to separate words in URLs and only ever use lower case.
  • End all URLs with a trailing slash and make sure all internal links use this format.
  • Set up server-side 301 redirects to force the correct versions of URLs to load – this will prevent broken links, loss of PageRank and stop duplicate pages from being indexed.
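A minimal sketch, assuming an Apache server with mod_rewrite enabled (nginx and IIS have their own equivalents), of a trailing-slash rule in .htaccess – lower-casing is usually easier to enforce in the CMS itself:

    RewriteEngine On

    # 301 redirect any request that isn't a real file and doesn't end in a slash
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*[^/])$ /$1/ [R=301,L]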

It is also important to note that using the primary keywords assigned to each page within the URL will help the listing to stand out in the search results, as keywords appear in bold when they match the user's search query.


#7 Navigation & Internal Linking

Navigation is not only useful for helping users and search engines to move quickly through the site, but it's also an effective tool for distributing value between pages. It's important to build up authority to your main pages, so including them in your navigation will result in them being linked to from every page on your website, with each link passing value.

As I mentioned earlier, you should use the findings from the keyword research to decide on the category names.

While you're unlikely to include links to every page on your site within the navigation, your internal linking strategy must take into account that every page that you want to appear in search will need to be linked to from somewhere.

#8 Avoid duplicate content

Duplicate content happens, but the problem is that often people aren’t aware of it. In my experience, the most common causes are:

  • URLs – as mentioned in #6, it's important to set up server-side redirects to force all versions of a single URL to resolve to one definitive version. If a single page can load using more than one URL, it can be treated as separate pages with duplicate content.
  • Parameters – these are commonly used for tracking and sorting, but while the URLs change, the content often stays the same. You can declare parameters in Webmaster Tools to make the search engines aware of them, but you can also use rel=canonical to highlight the correct version and ensure duplicate pages don't end up in the search results.
  • Pagination – used to segment results across several pages, pagination is commonly used on blog and news sections. However, these pages list chunks of content that already appears on other pages of the site (the post or article pages). Combine this with duplicate title tags and you're likely to have duplicate content (or simply thin pages that should not be appearing in search). There are several ways to handle this, including the use of the 'noindex' tag, rel="canonical" or rel="prev" and rel="next".
  • www vs non-www & http vs https – another common issue is when you're able to load your pages with or without the www, and/or over both http and https:
    >example.com
    >www.example.com
    >https://www.example.com
    >http://www.example.com

Again, 301 redirects and rel=canonical can fix this. You can set your preferred domain (www or non-www) in the site settings of your Google Webmaster Tools account, but it is still recommended that a server-side redirect is added to handle this.
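As a rough sketch, assuming https://www.example.com is the version you've chosen to keep and an Apache server (the page path in the canonical tag is illustrative), a self-referencing rel=canonical tag plus a single redirect rule will collapse the variants above into one:

    <link rel="canonical" href="https://www.example.com/your-page/" />

    # .htaccess: force https and www with a single 301 redirect
    RewriteEngine On
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]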

#9 Google Tag Manager

This is a useful tool that can reduce the amount of work required from your developers, speed up tag implementation and prevent your site from being slowed down by unnecessary code.

Simply create a Tag Manager account then add a snippet of code to your template. Now you can manage all of your tags from a single account, rather than having to call on your developers to add them manually.
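If you want developers to be able to signal on-page events for your tags to pick up, Tag Manager's data layer handles this – a minimal sketch, assuming the container snippet is already in your template and using a hypothetical event name:

    <script>
      window.dataLayer = window.dataLayer || [];
      // Push a custom event that a Tag Manager trigger can listen for
      dataLayer.push({'event': 'newsletterSignup'});
    </script>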

#10 Migration Strategy

This only applies to existing sites that are being updated in a way that changes the location of your pages – whether you're moving to a completely different domain, or simply updating the structure and design on your current domain.

A redirect strategy should be put in place to map the old pages to the new. Using 301 (permanent) redirects is generally the best way to handle this, because the 301 is the only redirect that passes value from one page to another. Generate a list of every page on your current site – export the live pages from your CMS or crawl the site (though a crawl doesn't always pull back every page). Using a spreadsheet, map all of the old pages to their new equivalents, then use this to build your redirect strategy.

If there is no exact equivalent page to redirect to, try to find the most relevant page instead, e.g. a retired product could redirect to its category page.
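A minimal sketch of what the resulting rules might look like on an Apache server (the paths are hypothetical – most platforms offer an equivalent way of defining redirects):

    # Page with an exact equivalent on the new site
    Redirect 301 /products/blue-widget.html http://www.example.com/widgets/blue-widget/

    # Retired product redirected to the most relevant category page
    Redirect 301 /products/old-widget.html http://www.example.com/widgets/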

Remove redirect chains – these are commonly caused by redirects being added each time a URL moves, causing crawlers to jump between all of the old pages before arriving at the new page. This can slow down load times and leak value passed through links. By removing the unnecessary steps in the chain, each old URL can redirect straight to its final destination.

An easy way to identify redirect chains is using Screaming Frog's redirect chains report – click 'Configuration > Spider', then under the 'Advanced' tab tick 'Always Follow Redirects'. Now crawl your site, then select 'Reports > Redirect Chains' to export the report.

 

Hopefully you're now sold on the idea that SEO isn't just something that's bolted on once a site is built, but rather a core discipline that can help to guide and inform you through the planning and implementation stages, right through to launch and beyond. So next time you're looking to build a site, you know who to call!

 

The post Building a new website? 10 reasons why you should consider SEO from the start! appeared first on White.net.
