Thursday, November 29, 2012

Should We Chase The Algorithm?


Posted: 28 Nov 2012 06:52 PM PST

Posted by Dr. Pete

You might not be surprised to hear that I’ve been a little obsessed with the Google algorithm this past year. While many SEOs and business owners share that obsession, others have asked questions like “Why don’t you stop chasing Google and focus on 'real' marketing?” It’s an honest question, and I think it’s a fair one. I’d like to try to answer it and to explain why I think understanding the algorithm is an essential part of a well-rounded online marketing campaign going forward into 2013.

Panda Pac Man

Businesses Are Still at Risk

In the past two years, the Panda and Penguin updates have hit hard. For some people, they hit in a very real and personal way. I’ve seen small business owners lose everything, including their homes. I’m not here to judge Google – I understand their reasoning and even support some of it. Maybe more than that, I try to be realistic about Google’s goals and motivations. If I have to pick a side, though, I’ve been in the trenches with small businesses too long to abandon them now. If information can save you from losing everything, then I want you to have that information.

Offense + Defense = Victory

There’s been a big push toward content marketing and the broader world of #RCS (“Real Company Sh*t”, as coined by Wil Reynolds). I am 100% in favor of this movement. I believe in content marketing and in building a real brand and a product people want. There’s an implication, though, that we have to pick one or the other – either we’re content marketers or we’re algo chasers. To me, that’s like saying your team can only play offense or defense. You can have the best rushing and passing stats in the league, but you’re going to get crushed if you leave the field empty when the other team has the ball.

I want you to diversify beyond Google – if you’ve got 60%+ of your customers coming from organic search on Google, you’re in real danger of losing everything. You need to think more broadly about marketing, but you also need to protect yourself. If #RCS is your offense, then understanding the algorithm is your defense. You can have both.

People Clearly Want to Know

When we started building the algorithm history, it was honestly out of curiosity more than anything. I knew people would be interested, but I was amazed at the response. Here’s a traffic graph (unique pageviews) for 2012 through the end of October:

Algo History Traffic (1/1/2012 - 10/31/2012)

Keep in mind that the page launched in 2011. That first spike is Penguin, but the interest and traffic not only haven’t let up – they’ve increased. The page passed the 250K unique pageview mark in October, and is still growing strong. We’re chasing the algorithm because every piece of data I have says that you want us to.

The Big Picture Matters

You know how we traditionally measure algorithm updates? We use a metric called Aggregate Panic. If enough webmasters wake up, see their rankings change, and panic, we know there’s probably been an update. Sadly, that’s not really a joke.

I’m learning that probably the toughest question in search is “What’s normal?” – if we can’t understand what a normal day looks like, we’ll never be able to pinpoint an unusual one. On an individual level, I think this question translates to “Was it me, or was it Google?” Search is a highly dynamic environment, and separating out the algorithm from targeted actions (e.g. penalties and filters), competitive changes, our own SEO efforts, and seasonality is incredibly tough.  The more we know about the algorithm, the better we understand how our own data fits into the big picture.
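One way to make "What's normal?" concrete is to track day-over-day rank changes across a fixed keyword set and compare each day's average movement to a trailing baseline. The sketch below is purely illustrative (the keyword data, window, and 2x threshold are all hypothetical, not any real methodology):

```python
# Illustrative sketch: flag "unusual" days by comparing average rank
# movement across a tracked keyword set to a trailing baseline.
# All data and thresholds here are hypothetical.

def daily_flux(rankings_yesterday, rankings_today):
    """Average absolute rank change across keywords tracked on both days."""
    changes = [abs(rankings_today[kw] - rankings_yesterday[kw])
               for kw in rankings_today if kw in rankings_yesterday]
    return sum(changes) / len(changes)

def is_unusual(today_flux, history, multiplier=2.0):
    """Call a day 'unusual' if its flux exceeds the trailing mean by a factor."""
    baseline = sum(history) / len(history)
    return today_flux > baseline * multiplier

# Hypothetical tracked rankings (keyword -> position)
yesterday = {"pizza nyc": 3, "seo tools": 7, "blue widgets": 12}
today = {"pizza nyc": 9, "seo tools": 2, "blue widgets": 25}

flux = daily_flux(yesterday, today)  # (6 + 5 + 13) / 3 = 8.0
print(flux)
print(is_unusual(flux, history=[2.1, 1.8, 2.5, 2.0]))  # True: far above baseline
```

The point isn't the specific numbers; it's that once you have a baseline, "Was it me, or was it Google?" becomes a question you can test instead of guess at.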

Speculation Runs Rampant

There’s a wrong way to chase the algorithm, and we see it every day. The wrong way is to notice something changed, panic, and start building a bomb shelter while Tweeting about how Google is going to harvest your kidneys while you sleep if you leave a Google+ Hangout open. In all seriousness, we all have our pet theories, but it’s rare that they get put to the test. I’d like to see us evolve from chasing the algorithm to stalking it, and that means being methodical, collecting data, and asking questions that can be answered with that data.

Transparent is The New Black

Transparency is fashionable, and Google has put out a lot more public information in 2012, from Tweets about query impact to their monthly search quality highlights. While I think these public statements cost Google real time and effort, and I don’t think they’re deliberately trying to mislead us, I do think we have to be careful how eagerly we accept these “gifts”. The monthly highlights are packed full of information, but it comes in the form of statements like:

#84394. [project “Page Quality”] This launch helped you find more high-quality content from trusted sources.

We know this update is important, because it has an ID number and a code name. Unfortunately, you could basically translate it to this:

#90210 [project “Turkey Giblets”] We made some stuff better.

…and you’d have learned just as much as you did from the original. I’m not bashing Google’s intent, because I honestly don’t know what their intent is. I’m worried, though. I’m worried that we’re so happy to have this information that we’re going to stop digging for our own data. If you want to listen to the wizard, that’s your business. I’d rather poke the curtain.

Google Controls Far Too Much

This one’s a little out there, and it goes well beyond SEO. Depending on who you ask, Google may control as much as 80% of the search market. Search isn’t just about finding a new pizza place or even customers finding your business.  Search is our portal to the largest archive of human knowledge we’ve ever had – the internet. No social site accesses a full crawl of the web. Only the major search engines do it, and Google may be getting 4 out of every 5 of those searches. Google is shaping how we work, how we play, and even how we think, and they’re making more than $40,000,000,000 a year doing it. I’m not a conspiracy theorist, but I think we need to fight for all the transparency we can get. Too much is at stake if we let the algorithm become a black box.

Let’s Be Careful Out There

There’s a fine line between healthy skepticism and paranoia. I hope that the lessons that a handful of us learn by chasing the algorithm let you sleep a little better at night and do the jobs you need to do. If you see me at a conference and say “I stopped chasing the algorithm and grew my business!”, I’ll say “Congratulations!” and buy you a beer. Until then, keep your eyes open and we’ll keep doing what we do.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Another November Index is Live!

Posted: 28 Nov 2012 05:56 AM PST

Posted by carinoverturf

This month we're bringing you a special holiday treat: two Mozscape indices in the month of November! We just released the latest index, and you can now find fresh Mozscape data in Open Site Explorer, the Mozbar, PRO campaigns, and the Mozscape API.

This index is similar in size to the previous Mozscape index, with about 76 billion URLs. The heavy-duty AWS compute machines we moved to in October, detailed in Anthony's blog post, have saved significant time in our processing schedule thanks to almost no machine failures.

This time saved means more time for the Mozscape engineers to work on exciting projects, like tuning the final configurations in our own private cloud! We've been running a similar-sized index in our private cloud in Virginia alongside the index released today. It's running a bit slower as we continue to tune and dial in the last pieces, but we hope to be running a hybrid processing solution early next year. Running an index in the cloud and an index in our own private cloud means fresher index data for you and our applications!

Here are the metrics for this latest index:

  • 76,668,945,929 (76 billion) URLs
  • 664,205,988 (664 million) Subdomains
  • 136,202,352 (136 million) Root Domains
  • 892,544,725,878 (892 billion) Links
  • Followed vs. Nofollowed
    • 2.31% of all links found were nofollowed
    • 56.61% of nofollowed links are internal
    • 43.39% are external
  • Rel Canonical - 13.91% of all pages now employ a rel=canonical tag
  • The average page has 73 links on it
    •  62.28 internal links on average
    •  10.54 external links on average
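As a quick sanity check, the per-page numbers above are internally consistent: internal plus external links sum to the stated per-page average, and the nofollow split sums to 100%.

```python
# Consistency check on the reported per-page link metrics (numbers
# taken directly from the list above).
internal_avg, external_avg = 62.28, 10.54
print(round(internal_avg + external_avg))  # 73, matching the stated average

nofollow_internal, nofollow_external = 56.61, 43.39
print(round(nofollow_internal + nofollow_external, 2))  # 100.0
```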

And the following correlations with Google's US search results:

  • Page Authority - 0.35
  • Domain Authority - 0.19
  • MozRank - 0.24
  • Linking Root Domains - 0.30
  • Total Links - 0.25
  • External Links - 0.29
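Correlations like these are conventionally computed as Spearman rank correlations between a metric and search position, averaged across many SERPs. Here's a minimal sketch of the core calculation on one toy SERP (the Page Authority values are hypothetical, and this simplified formula assumes no tied ranks):

```python
# Minimal Spearman rank correlation sketch (assumes no ties).
# The Page Authority scores below are hypothetical.

def rank(values):
    """Rank values from 1 (smallest) upward."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rho via 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical Page Authority scores for results ranked 1..5 on one SERP.
# Higher authority should go with a better (lower) position, so we
# correlate the metric against negated position.
page_authority = [68, 54, 60, 41, 33]
positions = [1, 2, 3, 4, 5]
print(round(spearman(page_authority, [-p for p in positions]), 2))  # 0.9
```

A rho of 0.35, like Page Authority's above, is modest in absolute terms but fairly strong for a single factor in something as multivariate as a ranking algorithm.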

This histogram shows the crawl date and freshness of results in this index:

Crawl histogram for the late November Mozscape index

As you can see from the histogram, this index has some pretty fresh data, mostly from October and the first week of November. The freshest data in this index is from 11/10, when we started processing, and a good percentage was crawled in late October and early November.

As always, we'd love to hear your feedback in the comments - the Big Data team will be reading and responding! And remember, if you're ever curious about when Mozscape is updating, you can check the calendar here. We also maintain a list of previous index updates with metrics here.

Happy data pulling, Mozzers! 

