Wednesday, July 3, 2013

Domain Migrations: Surviving the "Perfect Storm" of Site Changes


Posted: 02 Jul 2013 07:23 PM PDT

Posted by Ruth_Burr

Last week, I held a Mozinar talking about the SEO steps involved in transitioning from SEOmoz.org to Moz.com, and sharing some of the results we got. We got some great questions on the Mozinar, and I wanted a chance to answer some more of them as well as expand on some points that didn't fit into the Mozinar.


Throwing Best Practices to the Wind

As we spent more than a year planning the transition from SEOmoz to Moz, one thing I wanted to make sure everyone knew internally was that we were engaging in, well, maybe not worst practices, but we were pretty far away from best practices when it came to domain migration.

One thing most SEOs will tell you about domain migration is that you shouldn’t make a lot of big changes at once. For example, if you’re switching to a new domain, just switch domains; don’t try to change anything else at the same time. If you’re refreshing your design, just do that; don’t try to change your content or URL structure at the same time. And definitely, definitely don’t change anything else if you’re changing your top-level domain (TLD).

Screenshot from "Achieving an SEO-Friendly Domain Migration - The Infographic" by Aleyda Solis

Avoiding making this many changes to your website at once will mean that search engines have a much easier time finding, crawling, and ranking your new site, and that you’re much better positioned to diagnose problems as they arise.

Nevertheless, there we were: plotting a massive re-brand, site redesign, content overhaul, and domain change (complete with TLD switch) all at the same time. A perfect storm. It’s enough to make a person lose sleep (I know I did). At the same time, I’m glad we went through this, because it’s exactly the kind of thing some of you are going to end up dealing with as well. We needed to make all of these changes simultaneously in order to do what we wanted to do with the new product and re-brand, and that took precedence over SEO best practices. Instead of throwing up my hands and saying “well, we’re doomed,” I had to learn to do as much as I could with the situation at hand.

Doing the Long, Boring, Hard Work

The major portion of my work preparing for the domain migration was my big giant list of URLs.

Casey helped me pull a list of every URL on the site from our database, and I found a redirect target on Moz.com for each one. I would recommend pulling your URL list from your own database or server logs if it’s at all possible; it will give you a much more complete list of URLs than simply running a crawl using a program like Xenu or Screaming Frog.
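
If you go the server-log route, a small script can boil the raw requests down to a de-duplicated URL list. Here's a minimal sketch, assuming a standard combined-format access log; the file names are placeholders, not anything from the actual migration:

    # Boil a combined-format access log down to a de-duplicated list of URLs.
    # Assumes the request line looks like "GET /path HTTP/1.1", so the path
    # is the seventh whitespace-separated field.
    from urllib.parse import urlsplit

    urls = set()
    with open("access.log") as log:                  # placeholder file name
        for line in log:
            parts = line.split()
            if len(parts) > 6 and parts[5].startswith('"GET'):
                # Strip query strings so each page is listed once
                urls.add(urlsplit(parts[6]).path)

    with open("url_list.txt", "w") as out:
        out.write("\n".join(sorted(urls)))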

When I talk to people about the migration, they typically blanch at the big giant list of URLs. Is it really necessary to look at every URL on the site?

Well, no, not totally. In our case, there were large sections of the site (like the blog and Q&A) that were staying largely the same; we could just redirect everything at seomoz.org/blog/* to moz.com/blog/* without needing further detail. For sites that are simply changing from one domain to another without a major redesign/restructure (which, again, you should really do if you can), it becomes even easier: if your site’s staying exactly the same, you can just redirect everything to the same folder location on your new domain.
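
To make the section-level approach concrete, here's a minimal sketch of a redirect map that swaps prefixes for sections keeping their structure and falls back to a hand-built list for everything else; the prefixes and example URLs are hypothetical, not the actual SEOmoz-to-Moz mapping:

    # Build a redirect target for each old URL: whole sections that keep their
    # structure get a prefix swap; anything else falls back to a hand-built map.
    SECTION_RULES = {                       # hypothetical old prefix -> new prefix
        "/blog/": "https://moz.com/blog/",
        "/ugc/": "https://moz.com/ugc/",
    }

    MANUAL_MAP = {                          # hand-curated one-off redirects
        "/dp/old-orphan-page": "https://moz.com/learn/seo",
    }

    def redirect_target(old_path):
        for old_prefix, new_prefix in SECTION_RULES.items():
            if old_path.startswith(old_prefix):
                return new_prefix + old_path[len(old_prefix):]
        # None means the URL still needs a human to pick a target
        return MANUAL_MAP.get(old_path)

    print(redirect_target("/blog/domain-migration"))   # https://moz.com/blog/domain-migration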

I’m so glad that I did go through every page on the site, though, since I was able to get rid of a lot of old orphan pages, and help make sure the new site taxonomy was more inclusive so we didn’t have new orphan pages going forward. A site migration is a great time to 301 old pages that have outlived their usefulness to newer, more useful resources.

Traffic and Ranking Loss

I can’t stress enough how important it is to manage expectations around traffic and ranking loss during a domain migration. In the Mozinar, I mentioned that some PageRank is lost through 301 redirects (thanks Ethan for sending along this video from Matt Cutts explaining that the amount of PageRank that dissipates through a 301 is currently identical to the amount that dissipates through a link). This is usually not a huge deal for your most popular, best-linked pages, but can be an issue for deep pages that rank for long-tail terms, especially if the external links pointing to those pages are old or there aren’t very many of them.
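
As a rough back-of-the-envelope illustration (the exact discount Google applies isn't public; the 0.85 below is simply the damping factor from the original PageRank paper, used here as a stand-in), losing as much through a 301 as through a link looks like this:

    PR passed per hop ≈ d × PR of the redirecting page, with d ≈ 0.85
    one 301:            ~85% of the value carries over
    two chained 301s:   0.85 × 0.85 ≈ 72%

A deep page supported by only a few aging links has little equity to spare, which is why the long-tail pages feel this the most.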

With the Moz migration, the site restructure meant that we changed the internal link juice flowing from page to page as well. In some cases that was beneficial, such as with our Learn section, which gained importance as it moved from our footer to our (now-reduced) header. In other cases, however, it meant some pages losing internal link equity. Again, not a huge issue for the most important pages, but definitely impactful on long-tail terms. Between those two factors, the chance that our traffic and rankings wouldn’t be affected was pretty slim, and sure enough, they were.

Better User Engagement

The flip side to the traffic loss was that we saw a boost in engagement metrics. Cyrus ran a quick study on a subgroup of users who a) had arrived through non-branded organic search and b) were new visitors to the site, to mitigate as much as possible the influences of preconceived expectations and industry “buzz” surrounding the re-brand. Here’s what he found:

As you can see, nearly every section on the site saw a boost in pageviews and pages per visit, as well as a huge decrease in bounce rate. The only downside is that we did see a decrease in time on page, pretty much across the board. We have a few theories on that: It could be that the more people click around the site, the less qualified each page view becomes; or it could be that the redesign has, in many cases, made pages shorter and easier to read quickly. The fact that time on page has decreased while average visit duration and bounce rate have improved points to the lowered time on page not being an indicator of lower quality, so that’s good.

What About Changing Platforms?

I didn’t get much of a chance to discuss changing CMS/Platforms in the Mozinar, because we run the site on a custom back end and CMS. It’s a question we get a lot in Q&A, so I wanted to address it.

As with most domain migrations, it’s important to keep things as much “the same” as possible when migrating to a new platform or CMS. Ideally, your site would look pretty much the same to users before and after the change; you could start making improvements using your shiny new CMS after the migration takes place. One thing that’s especially important when changing platforms or CMS is to make sure the new back end isn’t appending extra things to your URLs. For example, you want to make sure your home page is still www.example.com and hasn’t switched to www.example.com/index or the like. Also be on the lookout for extensions such as .html or .aspx being appended to your old URLs by the new platform. That’s a really common cause of duplicate content on a new platform.
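
A quick way to catch this after a switch is to request a few known pages along with their common variants and flag anything that answers 200 instead of redirecting. A minimal sketch, assuming the Python requests library and made-up example URLs:

    # Flag URL variants a new CMS might serve alongside the canonical page
    # (trailing /index, appended .html or .aspx, and so on).
    import requests

    CANONICAL_URLS = [                      # hypothetical sample pages
        "https://www.example.com/",
        "https://www.example.com/products",
    ]
    SUFFIXES = ["index", "index.html", ".html", ".aspx"]

    for url in CANONICAL_URLS:
        for suffix in SUFFIXES:
            if suffix.startswith("."):
                variant = url.rstrip("/") + suffix          # e.g. /products.html
            else:
                variant = url.rstrip("/") + "/" + suffix    # e.g. /products/index
            resp = requests.get(variant, allow_redirects=False, timeout=10)
            # Anything answering 200 instead of redirecting to the canonical
            # URL is a potential duplicate-content problem.
            if resp.status_code == 200:
                print("Possible duplicate:", variant)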

Sitemaps

In the Mozinar, I mentioned that we had multiple sitemaps in Google Webmaster Tools, and got a question about why we do it that way. Since that’s a decision that was implemented before I came on, I wanted to make sure I had the whole answer before I responded, but it was as I suspected. We have separate sitemaps for our blog, Community profiles, and YouMoz because those are three of the largest areas of our site. Since each sitemap can only contain 50,000 URLs, this multiple-map approach ensures we have plenty of room in each one for these prolific sections to keep growing. Kate Morris wrote a great post on using multiple sitemaps a couple of years ago; you can read it here.
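
For anyone setting up the same kind of split, here's a minimal sketch of writing one sitemap per section plus a sitemap index to submit; the section names, URLs, and file names are placeholders:

    # Write one sitemap per site section (each capped at 50,000 URLs)
    # and a sitemap index file that points at all of them.
    SECTIONS = {                              # hypothetical URL lists
        "blog": ["https://example.com/blog/post-1"],
        "profiles": ["https://example.com/community/users/1"],
    }
    MAX_URLS = 50000

    def write_sitemap(filename, urls):
        with open(filename, "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in urls[:MAX_URLS]:
                f.write(f"  <url><loc>{url}</loc></url>\n")
            f.write("</urlset>\n")

    sitemap_files = []
    for section, urls in SECTIONS.items():
        name = f"sitemap-{section}.xml"
        write_sitemap(name, urls)
        sitemap_files.append(name)

    # The index file is what you'd submit in Google Webmaster Tools.
    with open("sitemap-index.xml", "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for name in sitemap_files:
            f.write(f"  <sitemap><loc>https://example.com/{name}</loc></sitemap>\n")
        f.write("</sitemapindex>\n")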

Noise in the Signal

“This is great info, Ruth,” I can hear you saying, “but why did it take you a month to share it with us?” A lot of the reason has to do with noise in the signal.

In the days surrounding the launch, we had increased buzz from our PR efforts and excitement from our customers about the new site. We knew this would happen (and were happy about it!), and that this uptick wasn’t a good indicator of how the new site would perform in the long term.

I also wanted to wait until SEOmoz pages were no longer ranking (as I mentioned in the Mozinar, they’re still indexed but aren’t ranking for any of our target terms) and had been replaced with Moz.com URLs, to get a better sense of how our rankings were impacted before I shared the info. This kind of longer-term analysis is important in the wake of a migration; make sure you’re getting as accurate a picture as possible of your new metrics.

Thanks again to everyone who listened in on the Mozinar, and who sent your kind wishes and congratulations to the Moz team during this process. It was a huge effort by the whole company and we’re so happy to share it with you!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

The Right Keyword Data for the Right Job

Posted: 02 Jul 2013 05:40 AM PDT

Posted by russvirante

Keyword data sources have long been a key tool in the pockets of search engine optimizers. There is little argument that knowing what people search for, and how often, is and will continue to be an important knowledge set in nearly any SEO endeavor. However, like most things in SEO, the devil is in the data.

The problem

There are myriad keyword data sets available for consumption on the web. More often than not, we need keyword data and predicted search volumes in order to make decisions about content prioritization. The go-to product is normally Google's own Keyword Suggestion Tool, but it leaves much to be desired for those of us who need data accessible in a programmatic fashion. So, which keyword data sets help us the most, and how do they differ?

The providers

Virante, the company I work for, has used pretty much every keyword discovery tool or API out there. However, for our purposes here, we have to limit ourselves to providers that give Exact Match Local Search Volume data or estimates. This means ignoring one excellent keyword tool, Keyword Spy, and it also rules out popular tools like UberSuggest, which does not provide search volumes. Finally, I looked only at web services, not standalone keyword tools like MarketSamurai or Xedant. Whenever possible, we used "fresh" data rather than historical indexes.

Please bear in mind that I am just judging the data here. Each of these data sources has tools associated with it that make its data more valuable in different ways. I will touch on these differences in the conclusions, but understand that I am judging one feature of the overall offering, not the tools as a whole.

Earlier in May, I reached out to the community to ask for every online keyword data set out there that provided search volumes, and here is what I came up with:

SEMRush - http://www.semrush.com

This is an incredibly popular tool which I am quite familiar with. Virante has used their API now for quite some time. SEMRush presents search volumes as reported by Google.

Wordstream - http://www.wordstream.com

This data set is tied to a series of paid search tools that are excellent in their own right. Wordstream does not use Google's search volume data and instead provides their own relative number.

Keyword Discovery - http://www.keyworddiscovery.com

This huge data set has been a staple at Virante for some time.

GrepWords - http://www.grepwords.com

This is a newcomer. Someone reached out with beta access via a simple tweet from what appears to be an otherwise empty Twitter account. As of this writing, the tool still isn't available for purchase.

WordTracker - http://www.wordtracker.com

Perhaps the most well-known, WordTracker has a huge database of keywords and its own proprietary search volume data. As a paid user, you can also get Google search volume, powered by SEMRush.

Getting a baseline

The first thing I needed to do was to create a "source of truth" to compare against these data sets. Using the Google Keyword Suggestion Tool, I grabbed the top 100 keywords for each of the DMOZ categories. I then converted their local search volumes into an index from 0 to 100, where 100 is the highest-trafficked term in the list and 0 is the lowest-trafficked term. Finally, I took the LOG of each for visualization purposes. One quick caveat: I am making a big assumption here. Google may report very inaccurate numbers for search volumes. We certainly know they at least round these numbers. However, it is the best I've got for now.
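
In code, that baseline step is just a linear rescale to a 0 to 100 index followed by a log transform; here's a minimal sketch with made-up volumes (the +1 shift is only there to avoid log(0) and is an assumption, not part of the method described above):

    # Rescale raw search volumes to a 0-100 index, then take the log so head
    # and tail terms fit on the same chart.
    import math

    volumes = {                                # made-up example volumes
        "seo": 301000,
        "link building": 22200,
        "seo audit checklist": 390,
    }

    lo, hi = min(volumes.values()), max(volumes.values())
    indexed = {kw: 100 * (v - lo) / (hi - lo) for kw, v in volumes.items()}

    # +1 avoids log(0) for the lowest-volume term (an assumption, see above)
    log_indexed = {kw: math.log10(idx + 1) for kw, idx in indexed.items()}

    for kw in sorted(indexed, key=indexed.get, reverse=True):
        print(f"{kw:25s} index={indexed[kw]:6.1f}  log={log_indexed[kw]:.2f}")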

Method 1: Log of indexed search volumes

The most straightforward method of visualizing the differences in the data sets is to compare the log of indexed search volumes for each one. I looked up, either by API or by hand, the search volumes for every keyword in the Google Keyword Suggestion Tool baseline data. From left to right, the graph runs from the keywords with the highest search volume (according to Google) to those with the lowest.

Keyword Relative Search Volume

There were several key takeaways. First, both SEMRush and GrepWords returned a line nearly identical to that from Google. This was to be expected: unless their data was wildly out of date, it was likely that they would perform best on this type of metric.

A few more interesting takeaways:

  1. WordStream and Keyword Discovery both seemed to track closely with Google data for the top terms, but diverged thereafter.
  2. WordStream tended to over-report the relative traffic of mid- and long-tail keywords.
  3. Keyword Discovery had the trendline most similar to actual Google results among the providers that use their own data sets. However, they also had the lowest keyword coverage.
  4. WordTracker's trendline was nearly horizontal, indicating an under-reporting of head terms and over-reporting of tail terms.

Method 2: Average error

I began by putting each of the data sets onto the same 0 to 100 index, where 100 is the most popular keyword and 0 is the least popular. I then took the difference between each provider's index value and the corresponding Google Keyword Suggestion Tool index value. This resulted in the following:

Service Provider      Average Error
SEMRush               <0.5
WordStream            6.8
Keyword Discovery     3.5
GrepWords             <0.5
WordTracker           6.8

This doesn't really tell us much more about the performance; it simply confirms that SEMRush and GrepWords perform as one would expect, in line with Google's numbers, that Keyword Discovery trends closest to Google among the providers using their own data, and that the error rates for WordStream and WordTracker are fairly similar.
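
The error figure in the table above is essentially a mean absolute difference between each provider's 0 to 100 index and the Google baseline index; here's a minimal sketch of that calculation, skipping keywords a provider doesn't cover and using made-up numbers:

    # Mean absolute difference between a provider's 0-100 index and the Google
    # baseline index, skipping keywords the provider doesn't have at all.
    google_index = {"seo": 100.0, "link building": 7.3, "seo audit checklist": 0.1}  # made up
    provider_index = {"seo": 98.0, "link building": 12.0}                            # one term missing

    diffs = [abs(provider_index[kw] - google_index[kw])
             for kw in google_index if kw in provider_index]
    average_error = sum(diffs) / len(diffs)
    print(f"average error: {average_error:.1f}")    # average error for this toy data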

Method 3: Coverage rates

What percentage of keywords are actually found in each index? We know that some indexes are larger than others, but this doesn't necessarily mean that they match up with searches performed on Google. Below are the coverages for the head/mid-tail terms:

Service Provider      Coverage
SEMRush               >99%
WordStream            85%
Keyword Discovery     83%
GrepWords             >99%
WordTracker           95%

It is worth pointing out that even though Keyword Discovery had both a lower coverage rate and a lower average error, the average error statistic ignores words that are not present in an index, scoring them as null rather than 0. As expected, SEMRush and GrepWords get high accuracy rates for head and mid-tail keywords. Upon further examination, however, we can see that their indexes degrade in coverage as you move down the keyword search frequency scale.

Long-Tail Coverage for AdWords Data Aggregators

Category              SEMRush    GrepWords
Sports Long Tail      86%        60%
Finance Long Tail     90%        87%
Arts Long Tail        49%        68%

As you can see, there are great disparities in long-tail coverage for AdWords data aggregators like SEMRush and GrepWords. This is where services like Keyword Discovery, WordStream and WordTracker tend to shine. Because they get their data from sources other than the AdWords tool, they are able to pick up many more variations of keywords that might never show up in a Google Keyword Suggestion Tool query, even though the searches do actually occur on Google.
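
Coverage itself is simply the share of baseline keywords a provider returns any volume for; a minimal sketch with placeholder keyword sets:

    # Coverage = share of baseline keywords the provider returns any volume for.
    baseline = {"seo", "link building", "seo audit checklist", "sports gloves size 9"}  # placeholders
    provider = {"seo", "link building", "seo audit checklist"}

    coverage = len(baseline & provider) / len(baseline)
    print(f"coverage: {coverage:.0%}")    # 75%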

So which provider is right for which problem?

1. I want obscure, long-tail keywords that are less likely to be found by my competitors.
Keyword Discovery and WordTracker seem to reign supreme here. They have been industry mainstays for a while, but if you want real search and CPC numbers you will need to coordinate with GrepWords or SEMRush. WordTracker actually gives you access to SEMRush data for a limited number of keywords per month.

2. I want as valid of data as possible, so that I can easily compare with competitive metrics.
This is what makes SEMRush one of the most popular tools in the industry. They have a ton of great data.

3. I want data that can easily tie into PPC optimization.
WordStream is the clear winner here. Some of their related paid search tools are just killer.

4. I want data fast, accurate, and programmatic.
GrepWords appears to be the winner here. One of their API calls allows you to return search and CPC data on a thousand words at a time. This is particularly valuable if you are using a tool like Keyword Discovery to get the raw keywords, but want to quickly see if there is Google data to go along with them. Not to mention that their API allows regular expressions for finding related keywords. As of writing this post, they still weren't open for business, just beta access.

5. I want every possible keyword, period.
You need all of them. It really isn't that terrible of an investment when you are building an initial keyword universe on a large project. While this might mean only keeping accounts open for the first month, more is better. Right?


