Monday, February 6, 2012

Let's Move! is Turning Two

The White House

Your Daily Snapshot for
Monday, February 6, 2012

 

Let's Move! is Turning Two  

This week marks the second anniversary of Let’s Move!, the First Lady’s initiative to solve the problem of childhood obesity within a generation. Today at 2:30 p.m. EST, join Sam Kass, Assistant Chef and Senior Policy Advisor for Healthy Food Initiatives, for a special session of Let’s Move! Office Hours.

Want to ask Sam a question? Find out how to get involved.

Behind the Scenes: December 2011 Photos

Follow our Flickr Photostream to see some behind-the-scenes photos of the President as he welcomes home Iraq veterans, takes Bo for a walk, and does the coin toss for the Army-Navy game.

WH Behind the Scenes December 2011

President Barack Obama plays with Bo, the Obama family dog, aboard Air Force One during a flight to Hawaii, Dec. 23, 2011. (Official White House Photo by Pete Souza)

In Case You Missed It

Here are some of the top stories from the White House blog:

Weekly Address: It’s Time for Congress to Act to Help Responsible Homeowners
President Obama continues his call for a return to American values, including fairness and equality, as part of his blueprint for an economy built to last.

Weekly Wrap Up: Hanging Out with America
A glimpse at what happened this week at WhiteHouse.gov.
 
From the Archives: Startup America White Board
A White House White Board released for the launch of Startup America last year explains how the initiative will help entrepreneurs avoid the "valley of death" when starting new ventures.

Today's Schedule

All times are Eastern Standard Time (EST).

11:00 AM: The Vice President visits Florida State University WhiteHouse.gov/live

12:00 PM: The President receives the Presidential Daily Briefing

12:30 PM: Press Briefing by Press Secretary Jay Carney WhiteHouse.gov/live

2:30 PM: The President meets with senior advisors

2:45 PM: The Vice President attends a campaign event 
 
4:30 PM: The Vice President attends a campaign event

WhiteHouse.gov/live: indicates that the event will be live-streamed on WhiteHouse.gov/live


Please do not reply to this email. Contact the White House

The White House • 1600 Pennsylvania Ave NW • Washington, DC 20500 • 202-456-1111

 

Find Your Site's Biggest Technical Flaws in 60 Minutes


Posted: 05 Feb 2012 01:14 PM PST

Posted by Dave Sottimano

I've deliberately put myself in some hot water to demonstrate how I would do a technical SEO site audit in one hour to look for quick fixes (and I've actually timed myself, just to make it harder). For the pros out there, here's a look into a fellow SEO's workflow; for the aspiring, here's a base set of checks you can do quickly.

I've got some lovely volunteers who have kindly allowed me to audit their sites to show you what can be done in as little as 60 minutes.

I'm specifically going to look for crawling, indexing, and potential Panda-threatening issues like:

  1. Architecture (unnecessary redirection, orphaned pages, nofollow)
  2. Indexing & Crawling (canonical, noindex, follow, nofollow, redirects, robots.txt, server errors)
  3. Duplicate content & on-page SEO (repeated text, pagination, parameter-based duplication, duplicate/missing titles, H1s, etc.)

Don't worry if you're not technical; most of the tools and methods I'm going to use are very well documented around the web.

Let's meet our volunteers!

Here's what I'll be using to do this job:

  1. SEOmoz toolbar - Make sure "highlight nofollow links" is turned on, so you can visually diagnose crawl path restrictions
  2. Screaming Frog crawler - Full website crawl with Screaming Frog (user agent set to Googlebot) - Full user guide here
  3. Chrome and Firefox (FF with JavaScript and CSS disabled, and user agent set to Googlebot) - To look for usability problems caused by CSS or JavaScript
  4. Google search queries - To check the index for issues like content duplication, duplicate subdomains, penalties, etc.

Here are other checks I've done, but left out in the interest of keeping it short:

  1. Open Site Explorer - Download a backlink report to see if you're missing out on links pointing to orphaned, 302, or incorrect URLs on your site. If you find people linking incorrectly, add some 301 rules on your site to harness that link juice
  2. http://www.tomanthony.co.uk/tools/bulk-http-header-compare/ - Check whether the site is redirecting Googlebot specifically
  3. http://spyonweb.com/ - Any other connected domains you should know about? Mainly for duplicate content
  4. http://builtwith.com/ - Find out if the site is using Apache, IIS, or PHP, and you'll know which vulnerabilities to look for automatically
  5. Check for hidden text, CSS display:none funniness, robots.txt-blocked external JS files, and hacked / orphaned pages

My essential reports before I dive in:

  1. Full website crawl with Screaming Frog (User agent set to Googlebot)
  2. A report of everything in Google's index using the site: operator (only 1,000 results per query, unfortunately - this is how I do it)

Down to business...

Architecture Issues

1) Important broken links

We'll always have broken links here and there, and in an ideal world they would all work. Just make sure, for SEO and usability, that important links (like those on the homepage) are always in good shape. The following broken link is on Webrevolve's homepage and should point to their blog, but returns a 404. This is an important link because the blog is a great feature and I definitely do want to read more of their content.

   

Fix: Get in there and point that link to the correct page which is http://www.webrevolve.com/our-blog/

How did I find it: Screaming Frog > response codes report
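Screaming Frog does this for you, but if you want a standing check on a handful of critical links, a quick script works too. A minimal sketch using only the Python standard library (the function names here are my own, not from any tool mentioned in this post):

```python
import urllib.request
import urllib.error

def is_broken(status):
    """Treat any 4xx/5xx response (or no response at all) as broken."""
    return status is None or status >= 400

def check_links(urls):
    """Return {url: HTTP status code} for a list of important links."""
    results = {}
    for url in urls:
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "Mozilla/5.0 (link-check)"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                results[url] = resp.status
        except urllib.error.HTTPError as e:
            results[url] = e.code   # e.g. 404 for a broken blog link
        except (urllib.error.URLError, OSError):
            results[url] = None     # DNS failure, timeout, etc.
    return results
```

Run it against the few links you can't afford to break (homepage navigation, the blog link, checkout) rather than the whole site.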

2) Unnecessary Redirection

This happens a lot more than people like to believe. The problem is that when we 301 a page to a new home, we often forget to correct the internal links pointing to the old page (the one with the 301 redirect).

This page http://www.lexingtonlaw.com/credit-education/foreclosure.html 301 redirects to http://www.lexingtonlaw.com/credit-education/foreclosure-2.html

However, they still have internal links pointing to the old page.

  • http://www.lexingtonlaw.com/credit-education/bankruptcy.html?linkid=bankruptcy
  • http://www.lexingtonlaw.com/blog/category/credit-repair/page/10
  • http://www.lexingtonlaw.com/credit-education/bankruptcy.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-education/collections.html

Fix: Get in that CMS and change the internal links to point to http://www.lexingtonlaw.com/credit-education/foreclosure-2.html

How did I find it: Screaming Frog > response codes report
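If your crawler can export the internal link list, finding every link that needs updating is a simple join against the URLs you know redirect. A hypothetical sketch (the data shapes are assumptions for illustration, not Screaming Frog's actual export format):

```python
def links_needing_update(internal_links, redirect_map):
    """internal_links: (source_page, linked_url) pairs from a crawl export.
    redirect_map: {old_url: final_url} for URLs known to return a 301.
    Returns (source_page, old_link, new_target) rows to fix in the CMS."""
    return [(src, link, redirect_map[link])
            for src, link in internal_links
            if link in redirect_map]
```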

3) Multiple subdomains - Canonicalizing the www or non-www version

This is one of the first basic principles of SEO, yet there are still tons of legacy sites tragically splitting their link authority by not redirecting the www version to the non-www version, or vice versa.

Sorry to pick on you, CVCSports :S

  • http://cvcsports.com/
  • http://www.cvcsports.com/

Oh, and a few more copies have found their way into Google's index that you should remove too:

  • http://smtp.cvcsports.com/
  • http://pop.cvcsports.com/
  • http://mx1.cvcsports.com/
  • http://ww.cvcsports.com/
  • http://www.buildyourjacket.com/
  • http://buildyourjacket.com/

Basically, you have seven copies of your site in the index.

Fix: I recommend using www.cvcsports.com as the canonical version, and using your .htaccess file to create 301 redirects from all of these subdomains to the main www site.

How did I find it? The Google query "site:cvcsports.com -www" (I also set my results number to 100 to check through the index quicker)
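On an Apache server, that fix is a few lines of mod_rewrite in the .htaccess file. A sketch only, assuming mod_rewrite is enabled and that every stray hostname resolves to the same document root:

```apache
RewriteEngine On
# Send any hostname other than www.cvcsports.com to the canonical www host,
# preserving the path and query string, with a permanent (301) redirect.
RewriteCond %{HTTP_HOST} !^www\.cvcsports\.com$ [NC]
RewriteRule ^(.*)$ http://www.cvcsports.com/$1 [R=301,L]
```

The separate buildyourjacket.com domain would need the same rule in its own vhost, or a matching redirect at the DNS/hosting level.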

4) Keeping URL structure consistent 

It's important to note that this only becomes a problem when external links are pointing to the wrong URLs. *Almost* every backlink is precious, and we want to ensure that we get maximum value from each one. The catch is that we can't control how we get linked to: without www, with capitals, or with trailing slashes, for example. Short of contacting the webmaster to change a link, we can always employ 301 redirects to harness as much value as possible. The one place this shouldn't happen is on your own site.

We all know that www.example.com/CAPITALS is a different URL from www.example.com/capitals when it comes to external link juice. As good SEOs, we typically combat human error with permanent redirect rules that enforce only one version of each URL (e.g. forcing lowercase), which means any internal link that contradicts those rules causes an unnecessary redirect.

Here are some examples from our sites:

  • http://www.lexingtonlaw.com/credit-education/rebuild-credit 301's to trailing slash version
  • http://webrevolve.com/web-design-development/conversion-rate-optimisation/ Redirects to the www version

Fix: Decide on your URL structure: should URLs have trailing slashes, www, lowercase? Whatever you decide, be consistent and you can avoid future problems. Crawl your site and fix any internal links that contradict your rules.
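To illustrate, here's what one such policy (force www, lowercase, trailing slash) looks like as a small Python helper. This is an example of the idea, not code from the audit; adjust the rules to whatever policy you pick, and apply them in exactly one place:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Apply one consistent URL policy: force www, lowercase the host
    and path, and add a trailing slash."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    host = netloc.lower()
    if not host.startswith("www."):
        host = "www." + host
    path = path.lower()
    if not path.endswith("/"):
        path += "/"
    return urlunsplit((scheme, host, path, query, fragment))
```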

Indexing & Crawling

1) Check for Penalties

None of our volunteers have any immediately noticeable penalties, so we can just move on. This is a two-second check that you must do before trying to nitpick at other issues.

How did I do it? Google search queries for the exact homepage URL and the brand name. If the site doesn't show up, you'll have to investigate further.

2) Canonical, noindex, follow, nofollow, robots.txt

I always do this so I understand how clued up, SEO-wise, the developers are, and to gain more insight into the site. You wouldn't check for these tags in detail unless you had just cause (e.g. a page that should be ranking isn't).

I'm going to combine this section, as it requires much more than just a quick look, especially on bigger sites. First and foremost, check robots.txt and look through some of the blocked directories; try to determine why they are being blocked and which bots they are blocked from. Next, get Screaming Frog in the mix, as its internal crawl report will automatically check each URL for meta data (noindex, header-level nofollow & follow) and give you the canonical URL if there happens to be one.

If you're spot checking a site, the first thing you should do is understand what tags are in use and why they're using them.

Take Webrevolve for instance, they've chosen to NOINDEX,FOLLOW all of their blog author pages.

  • http://www.webrevolve.com/author/tom/ 
  • http://www.webrevolve.com/author/paul/

This is a guess, but I think these pages don't provide much value and are generally not worth seeing in search results. If these were valuable, traffic-driving pages, I would suggest they remove NOINDEX, but in this case I believe they've made the right choice.

They also implement self-serving canonical tags (yes, I just made that up): each page has a canonical tag that points to itself. I generally have no problem with this practice, as it usually makes things easier for developers.

Example: http://www.webrevolve.com/our-work/websites/ecommerce/

3) Number of pages vs. number of pages indexed by Google

What we really want to know here is how many pages Google has indexed. There are two ways of finding out. With Google Webmaster Tools, submit a sitemap and you'll get stats back on how many of its URLs are actually in the index.

Or you can do it without access to the site, though it's much less efficient. This is how I would check...

  1. Run a Screaming Frog Crawl (make sure you obey robots.txt)
  2. Do a site: query
  3. Get the *almost never accurate* results number and compare them to total pages in crawl

If the numbers aren't close, as with CVCSports (206 pages crawled vs. 469 in the index), you probably want to look into it further.

   

I can tell you right now that CVCSports has 206 pages (not counting those that have been blocked by robots.txt). Just by doing this quickly I can tell there's something funny going on and I need to look deeper.

Just to cut to the chase: CVCSports has multiple copies of the domain on subdomains, which is causing this.

Fix: It varies. You could have complicated problems, or it might be as easy as using canonical tags, noindex, or 301 redirects. Don't be tempted to block the unwanted pages with robots.txt; that will not remove pages from the index, it will only prevent them from being crawled.

Duplicate Content & On Page SEO

Google's Panda update was definitely a game changer, and it caused massive losses for some sites. One of the easiest ways of avoiding at least part of Panda's destructive path is to avoid all duplicate content on your site.

1) Parameter based duplication

URL parameters like search= or keyword= often cause duplication unintentionally. Here's some examples:

  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/mortgage-lenders-rejecting-more-applications.html
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/mortgage-lenders-rejecting-more-applications.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-repair-news/credit-report-news/california-ruling-sets-off-credit-fraud-concerns.html
  • http://www.lexingtonlaw.com/credit-repair-news/credit-report-news/california-ruling-sets-off-credit-fraud-concerns.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/one-third-dont-save-for-christmas.html
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/one-third-dont-save-for-christmas.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/financial-issues-driving-many-families-to-double-triple-up.html
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/financial-issues-driving-many-families-to-double-triple-up.html?select_state=1&linkid=selectstate

Fix: Again, it varies. If I were giving general advice, I would say use clean links in the first place. Depending on the complexity of the site, you might consider 301s, canonical tags, or even NOINDEX. Either way, just get rid of them!

How did I find it? Screaming Frog > Internal Crawl > Hash column

Basically, Screaming Frog creates a unique hexadecimal number based on each page's source code. If two URLs have matching hash values, they have identical source code (exact duplicate content). Once you have your crawl ready, use Excel to filter it out (complete instructions here).
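The hash check is easy to reproduce outside the tool. A rough Python equivalent (my own sketch, not Screaming Frog's implementation):

```python
import hashlib
from collections import defaultdict

def exact_duplicate_groups(pages):
    """pages: {url: html_source}. Returns groups of URLs whose source
    is byte-identical, i.e. exact duplicate content."""
    by_hash = defaultdict(list)
    for url, html in pages.items():
        digest = hashlib.md5(html.encode("utf-8")).hexdigest()
        by_hash[digest].append(url)
    return [sorted(urls) for urls in by_hash.values() if len(urls) > 1]
```

Note this only catches byte-for-byte duplicates (the same thing the hash column catches); near-duplicates with, say, a different timestamp in the footer need a fuzzier comparison.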

2) Duplicate Text content

Having the same text on multiple pages shouldn't be a crime, but post-Panda it's better to avoid it completely. I hate to disappoint here, but there's no exact science to finding duplicate text content.

Sorry CVCSports, you're up again ;)

http://www.copyscape.com/?q=http%3A%2F%2Fwww.cvcsports.com%2F

Don't worry, we've already addressed these issues above; just use 301 redirects to get rid of the copies.

Fix: Write unique content as much as possible. Or be cheap and stick it in an image; that works too.

How did I find it? I used http://www.copyscape.com, but you can also copy & paste text into Google search

3) Duplication caused by pagination

Page 1, Page 2, Page 3... You get the picture. Over time, sites can accumulate thousands if not millions of duplicate pages because of those nifty page links. I swear I've seen a site with 300 pages for one product page.

Our examples:

  • http://cvcsports.com/blog?page=1
  • http://cvcsports.com/blog?page=2

Are they being indexed? Yes.

Another example?

  • http://www.lexingtonlaw.com/blog/page/23
  • http://www.lexingtonlaw.com/blog/page/22

Are they being indexed? Yes.

Fix: The general advice is to use the NOINDEX, FOLLOW directive. (This tells Google not to add the page to the index, but still to crawl through its links.) An alternative might be the canonical tag, but it all depends on why the pagination exists. For example, if you had a story separated across 3 pages, you would definitely want them all indexed. However, these example pages are pretty thin and *could* be considered low quality by Google.
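Concretely, that directive is one meta tag in the head of every paginated page (a generic sketch of the pattern, not markup taken from these sites):

```html
<!-- On /blog?page=2, /blog?page=3, ...: keep the page out of the index,
     but let crawlers keep following the links on it -->
<meta name="robots" content="noindex, follow">
```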

How did I find it? Screaming Frog > Internal links > Check for pagination parameters

Open up the pages and you'll quickly determine whether they are auto-generated, thin pages. Once you know the pagination parameter or the structure of the URL, you can check Google's index like so: site:example.com inurl:page=


Time's up! There's so much more I wish I could do, but I was strict about the one-hour time limit. A big thank you to the brave volunteers who put their sites forward for this post. There was one site that just didn't make the cut, mainly because they've done a great job technically and, um, I couldn't find any technical faults.

Now it's time for the community to take some shots at me! 

  • How did I do?
  • What could I have done better? 
  • Any super awesome tools I forgot?
  • Any additional tips for the volunteer sites?

Thanks for reading. You can reach me on Twitter @dsottimano if you want to chat and share your secrets ;)


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Seth's Blog : Who is your customer?

Who is your customer?

Rule one: You can build a business on the foundation of great customer service.

Rule two: The only way to do great customer service is to treat different customers differently.

The question: Who is your customer?

It's not obvious.

Zappo's is a classic customer service company, and their customer is the person who buys the shoes.

Nike, on the other hand, doesn't care very much at all about the people who buy the shoes, or even the retailers. They care about the athletes (often famous) that wear the shoes, sometimes for money. They name buildings after these athletes, court them, erect statues...

Columbia Records has no idea who buys their music and never has. On the other hand, they understand that their customer is the musician, and they have an entire department devoted to keeping that 'customer' happy. (Their other customer was the program director at the radio station, but we know where that's going...)

Many manufacturers have retailers as their customer. If Wal-Mart is happy, they're happy.

Apple had just one customer. He passed away last year.

And some companies and politicians choose the media as their customer.

If you can only build one statue, who is it going to be a statue of?

 


 

Sunday, February 5, 2012

Mish's Global Economic Trend Analysis



Country Specific Blog Censorship by Google; Twitter Employs Censorship as Well; Echo Comments Not Working on Redirects

Posted: 05 Feb 2012 02:53 PM PST

Blog Redirects

Today I learned my blog is being redirected to another URL name in some countries. This is a new "feature" in Blogger that Google added beginning a few weeks back.

Instead of exposing a single Blogger URL to the world and then censoring it to meet the requirements of local governments, Google decided to mirror content onto country-specific domains and redirect users from foreign countries to the mirror associated with their country. If that country decides to censor something, it will somehow be noted on the page so the reader knows they're seeing a filtered view.

Readers can also try surfing to the original blog URL by appending /ncr (No Country Redirect) after the main name, such as

http://globaleconomicanalysis.blogspot.com/ncr

The above approach assumes a country doesn't filter on that pattern and block the request. For example: my blog is blocked in China so appending /ncr is unlikely to accomplish anything.

Google's Explanation

Why does my blog redirect to a country-specific URL?
Q: Why is this happening?
A: Migrating to localized domains will allow us to continue promoting free expression and responsible publishing while providing greater flexibility in complying with valid removal requests pursuant to local law. By utilizing ccTLDs, content removals can be managed on a per country basis, which will limit their impact to the smallest number of readers. Content removed due to a specific country's law will only be removed from the relevant ccTLD.

Q: How will this change affect my blog?
A: Blog owners should not see any visible differences to their blog other than the URL redirecting to a ccTLD. URLs of custom domains will be unaffected.

Q: Will this affect search engine optimization on my blog?
A: After this change, crawlers will find Blogspot content on many different domains. Hosting duplicate content on different domains can affect search results, but we are making every effort to minimize any negative consequences of hosting Blogspot content on multiple domains.

The majority of content hosted on different domains will be unaffected by content removals, and therefore identical. For all such content, we will specify the blogspot.com version as the canonical version using rel=canonical. This will let crawlers know that although the URLs are different, the content is the same. When a post or blog in a country is affected by a content removal, the canonical URL will be set to that country's ccTLD instead of the .com version. This will ensure that we aren't marking different content with the same canonical tag.
Echo Comments Not Working on Redirects

I was unaware this was happening until today when readers in New Zealand and Australia informed me that comments were no longer working.

Sites With Lost Functionality So Far


The "key" within Echo's database that associates comments to a blog entry is the full blog URL (site name + post permanent URL).

Filtering off the language code alone is insufficient because for some reason Google changed the suffix for New Zealand from ".com" to ".co".

I will get an email in to Google to see if they can implement a scheme that only adds a suffix. Then I still need to get Echo to do something, or alternatively write some JavaScript to strip off the language code.

Anyone using Echo with Blogger is going to have these same issues.

Twitter Employs "Filtering" as Well

Tech Week Europe reports Google To Censor Blogger Sites On Country-By-Country Basis
Google follows Twitter's lead and will use country-code top level domains to censor content as required.

Google has revealed that its Blogger service will now be able to block content on a country-by-country basis, just one week after Twitter announced that it would be implementing a similar filtering strategy.

"Migrating to localised domains will allow us to continue promoting free expression and responsible publishing while providing greater flexibility in complying with valid removal requests pursuant to local law," wrote Google on a help page. "By utilizing ccTLDs, content removals can be managed on a per country basis, which will limit their impact to the smallest number of readers. Content removed due to a specific country's law will only be removed from the relevant ccTLD."

Twitter's decision to introduce this selective censorship attracted criticism from users last week, with some suggesting that the site was effectively helping oppressive regimes squash freedom of speech. Google's implementation has been lower-key, and whilst critics will argue the same points, the company has emphasised that the measure will prevent blanket censorship of content whilst keeping it in line with the law.

The BBC reports that Google will initially roll out the changes to Australia, New Zealand and India, but plan to apply the measures globally.
More Tech Week "Tweet" Articles

Twitter Can Now Censor Tweets In Individual Countries

Twitter said that it must begin censoring tweets if the company is to continue its international expansion. Twitter has been blocked by a number of governments, including China and the former Egyptian regime, after it was used to ignite anti-government protests.

Twitter Faces Protest Over Censorship Move

Judge Rules Twitter Must Hand Over Account Data to Wikileaks Prosecutors

Big Brother is watching. However, it's too late to worry about 1984. The worry now is whether the next stop is the year 2525, where "Everything you think, do and say is in the pill you took today".

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


Postponed Till "Tomorrow"; Juncker Issues Ultimatum "Comply or Default"

Posted: 05 Feb 2012 12:02 PM PST

It's Groundhog Day once again as Greek crisis talks for debt deal pushed to Monday
Coalition backers held a five-hour meeting late Sunday with Prime Minister Lucas Papademos to hammer out a deal with debt inspectors representing eurozone countries and the International Monetary Fund — but again failed to reach an agreement.

Leaders of the parties supporting Greece's coalition government say crisis talks on the massive new debt deal will continue Monday.
Juncker Issues Ultimatum "Comply or Default"

The theater of the absurd continues for yet another day with Juncker's ultimatum: Comply or default
Jean-Claude Juncker, the head of the Eurogroup, warned Greece in an interview with a German magazine that it will either comply with its creditors' requirements or default, as it should not expect any additional support from its peers.

Earlier, the head of the ruling coalition's third partner, Giorgos Karatzaferis, stated in Thessaloniki that he would not tolerate any ultimatums.

"We need to examine whether the creditors' demands are in favor of growth for the sake of the Greek people, otherwise we will not get the support package. I am not going to sign up to that," said the leader of Popular Orthodox Rally (LAOS).
This is beyond ridiculous. The EU and IMF need to stand up and announce "Too late, the deal is OFF," then work with Greece to plan a return to the Drachma.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


How to Build an Advanced Keyword Analysis Report in Excel


Posted: 04 Feb 2012 12:04 AM PST

Posted by Dan Peskin

Analyzing keyword performance, discovering new keyword opportunities, and determining which keywords to focus efforts on can be painstaking when you have thousands of keywords to review. With keyword metrics coming from all over the place (Analytics, AdWords, Webmaster Tools, etc.), it's challenging to analyze all the data in one place regularly without a decent amount of manual data manipulation. In addition, depending on your site's business model, tying revenue metrics to keyword data is a whole other battle.

This post will walk you through a solution to these keyword analysis issues and provide some tips on how you can slice and dice your data in wonderful ways.

With Microsoft Excel, we can create a report with all the keyword data you will need, all in one place, that is fairly easy to update on a weekly or monthly basis. Then, with all this data, we can categorize segments of it to more quickly determine the better-performing sets of keywords.

What we will need to do is pull Google Analytics, Webmaster Tools, AdWords, ranking data, and revenue data into one Excel spreadsheet. Then we will put it all together into one master report and one categorized pivot-table report.

To start, you should be familiar with pivot tables, the Google AdWords API, the Google Analytics API, and, of course, keyword research. Utilizing these APIs and being consistent in the formatting of the data you put into your spreadsheet will make the report easy to update. If you aren't familiar with these tools, I have provided resources below and some steps for organizing this data.

Here are some resources for learning to use pivot tables in Excel:

Excel for SEO
Microsoft Pivot Table Overview

Now let’s go fetch that data.

I Got 99 Problems, But A Keyword Visit Ain't One

First off, we need to get our keyword traffic metrics through the Google Analytics API. I suggest using Mikael Thuneberg's GA Data Fetch spreadsheet. You can follow the instructions, read the how-to guide, and download the file here.

Make sure to build off the GA Data Fetch file or a copy of it, as it has the proper VBA functions (the Visual Basic code that makes the API calls work) installed. Once you have your API token and the spreadsheet set up, you can perform your first API call.

We will be using the more complex query to extract organic keyword visits for a specific date range, filtered by the number of visits. The query I use, for example, will output visits, average time on site, page views, and bounces for any keyword with 5 or more visits in the last 30 days. However, you can modify the parameters to your liking. To see what other metrics are available, check out the Analytics API documentation.

Your Analytics data should look something like this:

Analytics API Data

Google Analytics data called through the API in Excel.

Now select the whole keyword column and create a pivot table of the keyword list in another sheet. In the adjacent column, create a table whose cells equal the values in the pivot table column. Label this table "KeywordList" or whatever you like. We now have the keyword table to reference when extracting AdWords data.

Keyword Lists and Tables

Pivot tables don’t have the same referencing abilities as regular tables, so the table in column B is what you will reference in future steps.

To Be, Or Not To Be Searched, That Is The Question

Next up is pulling in search volumes for our keyword table. Thanks to the wonderful Richard Baxter, there are a couple of articles on using and installing the AdWords API plugin: one on SEOmoz and one on SEOgadget.

I know AdWords API access is a bit of an issue for some, so if you cannot use the API, use the Google AdWords Keyword Tool (gathering data from that tool will unfortunately require a lot more work).

In a new sheet, use the Adwords API array formula called “arrayGetAdWordsStats” to pull in the average and seasonal monthly search volumes for your keyword table. Your formula should look something like this:

=arrayGetAdWordsStats(KeywordList,"EXACT","US","WEB")

You should now have 12 months of historical search volumes and averages for all your keywords.

Adwords API Data

Results from an Adwords API call usually look like this.

Note: If your keyword list is longer than 800 keywords, you will have to break the list into a few separate tables and perform API calls for each. If this is the case, make sure to keep each array of search volumes aligned in the same columns.
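If you manage the keyword list in a script rather than in sheets, the same batching is a one-liner. A sketch, with the 800-keyword cap taken from the note above:

```python
def chunk(items, size=800):
    """Split a keyword list into batches small enough for one API call
    (800 here, matching the per-table limit mentioned above)."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```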

The Impression That I Get

No API required here; Google Webmaster Tools provides a pretty easy way to download its search query data. If you open the Search Queries report in Webmaster Tools, there is an option to "download the table" at the bottom. Download the table for the same date range you used earlier and drop it into a new sheet.

Webmaster Tools Keyword Data

The report downloaded from Webmaster Tools. Note that "-" is used for zero values; in the yellow columns I simply cleaned that up with an IF statement.

Impressions, CTR, and average rank can now be added to our metrics.

If You Ain't First Page, You're Last

Since we all know how accurate average rank is from Webmaster Tools, let's get some current rankings into this report. Grab your main keyword list from the spreadsheet and run rankings for it with your application of choice. I usually use Rank Tracker, but I'm sure everyone has their own preference. Once you have your rankings, drop them into a new sheet.

The More You Know

The number of metrics we can add to the report is limitless, but there comes a point where adding too many creates more work for updating the report, or leads to analysis paralysis. The only other metric I suggest adding is the SEOmoz Keyword Difficulty, if you have a PRO account. Again, this may be very time-consuming for large numbers of keywords; hopefully you have an intern for that.

Mo Money Mo Metrics

Revenue data may come from different places depending on how your business works, so I unfortunately don't have a one-stop solution for importing it. However, most applications allow you to download that data to CSV or Excel. If you have Ecommerce enabled in Google Analytics, you can use the API to pull in this data. As long as you have some metrics that relate to your keywords, such as Average Order Value or Conversion Rate, drop them in a new sheet and you will be good to go.

Some of you may be asking what to do if your revenue data does not tie back to the keyword visit. This is where keyword categorization plays an extremely important part in this report: we want to build a bridge between the revenue data and the keyword data by grouping keywords into categories that relate back to a field in your revenue data. For example, you might be able to associate keywords with product names or landing pages; those products or landing pages then become your categories. Once you have determined your categories, assign them in a new sheet that simply contains keywords in one column and the category tag in the other. You can learn more about keyword categorization here.
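With that two-column sheet in place (keywords in one column, category tags in the next, set up as a table named, say, "CategoryList"), the category for any keyword can be pulled into another sheet with a simple lookup; assuming the keyword sits in cell A2:

=VLOOKUP(A2,CategoryList,2,FALSE)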

Keyword Categorization

Categorizing the keywords above not only lets me group them to aggregate metrics for analysis, but it allows me to bridge the gap somewhat between the keywords and conversions in this example.

One Report To Rule Them All

Finally, we have all the data; we just have to put it together. Create a new sheet and pull in your master keyword list by using =NameOfTheTable; drag this down until you reach the last keyword on the list (paste values afterward if you want sorting capabilities). Now select your keywords and create a new table. In the columns next to the keywords, all you have to do is a VLOOKUP of each metric you would like to add to your report. Once you fill in the first cell of each column, the column should automatically be added to the table and the formula populated down the remaining cells. Repeat this process until all your metrics are in this table.
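Each metric column is just a lookup against the sheet that holds it. As a sketch, assuming the master table's keyword column is named "Keyword", the search volume data lives in a table named "VolumeData", and average volume is its second column, wrapping the lookup in IFERROR keeps keywords with no match from showing #N/A:

=IFERROR(VLOOKUP([@Keyword],VolumeData,2,FALSE),0)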

You will also need to calculate some metrics, such as Bounce Rate or Conversion Rate if you pulled in revenue data; those should be added in adjacent columns as well. Additionally, if you didn't categorize your keywords earlier, I suggest categorizing them now in an adjacent column. When complete, your master report should look something like this:

Master Report

The master report.
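The calculated columns are nothing fancy; for example, assuming the master table has columns named "Transactions" and "Visits", Conversion Rate might be:

=IFERROR([@Transactions]/[@Visits],0)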

Amazing. We have all the data in one place, in a simple-to-sort table! Just wait…it gets better.

Pivotal Success

Now you may be wondering how this report can get any better. Two words my friends: Pivot Tables.

Creating a pivot table of your master report will allow you to segment your data in ways that weren't possible before. In the Pivot Table Field List, the Row Labels, Column Labels, and Values define the layout of your report. First, drag and drop the Category and Keyword fields into Row Labels, in that order. This summarizes your top-level metrics at the Category level and lets you drill down into each category to see the associated keywords and their individual metrics.

Next you will want to start dragging your metrics into the Values section, which will automatically populate the Column Labels section with the Values field. As you add your metrics in, you can edit their names and the way they are aggregated. You will want to think carefully about how you will aggregate certain metrics so that viewing those summarized numbers at a Category level makes sense.

Pivot Table Fields

This shows how best to set up your pivot table fields and their value settings.

For instance, I might sum Impressions and Visits, but average CTR and Bounce Rate. Seeing the average CTR and Bounce Rate for a category lets me narrow down which sets of keywords are performing better than others. Then looking at the total Impressions and Visits for those well-performing categories shows me where there might be higher potential to increase traffic to my site. While this may not be an absolute rule for determining keyword focus, it is a good rule of thumb and a way to prioritize which keywords to work on.

Pivot table reports also let you add report filters, so you can filter the data by one metric or several. With these, you could analyze only the keywords that rank on the first page of SERPs, using current ranking as a filter. Hell, you could add a field to the master report that calculates the number of words in each keyword phrase, then filter by that and bounce rate to surface your well-performing long-tail keywords. Get creative, let loose, and play with the metrics; you will be surprised at the conclusions you can draw about your site's keyword traffic.
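That word-count field is a classic spreadsheet trick: compare the phrase's length with and without its spaces, then add one. Assuming the keyword sits in cell A2:

=LEN(TRIM(A2))-LEN(SUBSTITUTE(A2," ",""))+1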

Final Keyword Analysis Report

The final product.

Conclusion

Updating the report is simple. Rerun the API calls with the new date range, rerun your rankings for the new keyword list, and export the other reports you need for the new date range. As long as you kept your formatting and formulas the same, the rankings and other reports can be dropped into their respective sheets without changing anything. The master report should update automatically once you update the keyword column, and the pivot report will update once you hit Refresh under the pivot table menu. That's it!

Well, I should probably stop talking now and let you get to your hours upon hours of keyword analysis fun. Hopefully this was informative enough to make building a report like this fairly easy. I would love to hear your feedback and will gladly answer any questions or comments below. If you have issues later on, you can always contact me via Twitter.


Seth's Blog : In search of a timid trapeze artist


Good luck with that, there aren't any.

If you hesitate when leaping from one rope to the next, you're not going to last very long.

And this is at the heart of what makes innovation work in organizations, why industries die, and how painful it is to try to maintain the status quo while also participating in a revolution.

Gather up as much speed as you can, find a path and let go. You can't get to the next rope if you're still holding on to this one.
