Friday, 9 December 2011

I'm Being Outranked by a Spammer - Whiteboard Friday


Posted: 08 Dec 2011 01:14 PM PST

Posted by Kenny Martin

What do you do when you are being outranked by a spammer? It's one of the most frustrating things that an SEO can face, but before jumping to conclusions it's important to understand what exactly is happening.

In this week's Whiteboard Friday, Rand gives us helpful advice with many important steps to follow if you discover that a spammer is outranking you. Let us know your thoughts on this challenging problem in the comments below!



Video Transcription

Howdy SEOmoz fans! Welcome to another edition of Whiteboard Friday. This week we're talking about a very tough problem, being outranked by spammers.

What I mean very specifically here is link spammers, because it is rare in the SEO world that today you are seeing other sorts of spam. Cloaking, manipulative redirection, doorway pages, they happen a little bit, but they are much less common. The most common forms of spam and the thing that I see people complain about all the time, the thing I get emails about, I get tweets about, we get Q&A about is, "Hey, Rand, I am being outranked by these spammers. Can you send this over to the Google webspam team? Can you tell the Bing webspam team? Who should I email over there? I filed my webspam reports. Do you think I should try and get it published on YOUmoz? Should I try and write to The New York Times and have them write about it because it seems like Google kicks people out when they're written about in The New York Times, at least for a little while?"

These are not always great tactics unfortunately, but I do want to walk you through some things that you should be doing when you think you are being outranked by spammers.

The first part is make sure, make 100% sure, that what you are looking at is really a ranking that's been earned through link spam. To explain what I am talking about here, I will take you back. I will tell you a story from several years ago. This was probably, I am going to say 2007, and I was in the audience, I can't remember if I was on the panel or in the audience, and there was Google's head of webspam for the last decade or so, Matt Cutts, on the panel. Matt was looking at some links using his special Google software where he is investigating a link graph right on his laptop, and someone from the audience had said, "Hey, Matt, I am getting outranked by this particular spammer." He looked and said, "No, you know, we see a few thousand links to that site, but we're actually only counting a few hundred of them, and they're the ones that are making it rank there."

So, think about that. We're talking about thousands of links pointing to a site. Think of all the links that might be pointing to a site here. Here are five different links that are pointing to this particular page. What the webspam team at Google is essentially saying is, "Hey, you know what? We know that this and this and this and this are spam. The reason that this page is ranking is because they do have some good links that we are counting." Remember it is often the case that Google's webspam team and their algorithm will not make these links cause a penalty against you, because then you could just point them at somebody else's site or page and make them drop in the rankings. Instead, what they'll do is drop the value of those, so that essentially it is like having a no-followed link for those pages. Yes, it's a followed link, but they are going to essentially say, "Oh, you know what? Our algorithm has detected that those are manipulative links. We are going to remove the value that they pass."

A lot of the times when you look through a list of hundreds or thousands of links and you see a lot of spam and you think to yourself, "Hey, that's why that guy is ranking. It's because he is spamming." It might not be the case. It could even be the case that person didn't actually build those spammy links. They just came through, you know, crap, junk on the Web. Not all the time, and usually you can tell the difference, but this is really something to keep in mind as you're analyzing that stuff. When you are, think to yourself, "Hey, how did they get the best-looking links that they've got, and could those be the ones that are responsible for making them rank so well?" Because if that's the case, you need to revisit your thesis around I'm being outranked by a spammer and think I am being outranked by a guy who's done some good link building who also happens to have lots of spammy links pointing at him. That's a completely different problem, and you need to solve for that.

If you are sure, so let's say you've gone through step one, confirmed that, you know what, this is a crap link too that Google shouldn't be passing value, but somehow they are. I want you to ask two more critical questions.

The first one: Is focusing on someone else's spam that's outranking you the best possible use of your web marketing time? You've got a lot on your plate. You don't just have to worry about SEO, right? These days you're worrying about SEO; you're worrying about keyword research; you're worrying about link building; you're worrying about content marketing; you're worrying about blogs and blog commenting and RSS and the traffic rating through there. You're worried about social media - Facebook, Twitter, Google+, and LinkedIn - and the longtail of all these other sites - Quora and Pinterest, Reddit, StumbleUpon. You're worried about web analytics and analyzing your success and making sure things are going through. You have to worry about crawlers and XML site maps and robots.txt. Make sure that thinking about and spending time on trying to flag somebody else's spam or trying to get them penalized is absolutely the best possible thing that you can be doing with your day. If it's not, reprioritize and put something else up there.

The second question is, which we don't discuss at SEOmoz here because we really kind of hate this practice, but are you willing to and does your site have risk tolerance to go acquire spammy links? If you see someone's outranking you and you're like, well, I could get those kinds of links too or those exact links too, do you have the risk tolerance for it? If you believe that it is an ethical issue, do you have the moral flexibility for it? If you don't believe it is a moral question, do you have the budget for it? Is it the best use of your budget? Is it the best use of your time? I almost always believe the answer is no, with the possible exception of some super spammy fields, PPC (porn, pills, and casino), which I have never personally operated, and so I don't pretend to understand that world. But virtually every other form of legit business on the Web, I can't get behind this. But maybe you can. Maybe you want to. Decide if that's the route you want to take.

Once you answer those two, you can move on to step three, which is, should you report the spam? The problem here is, you are going to go and send it through, let's say, your Webmaster Tools report, and there are thousands of these filed every day. I'd put it somewhere between tens and hundreds of thousands of spam reports filed every day. Google says Webmaster Tools, when you are logged into your account, is the best place to report spam from other folks. Those reports go to a team of software engineers who work on Google's webspam and search quality teams. They've done a video where they sit around, prioritize all the day's projects and determine who is going to work on what and how much energy they're going to put into it. You can probably tell that over the last couple of years, maybe even three years, there has not been a ton of energy spent trying to devalue link spam. In fact, a lot of paid links are working these days, and it's sort of a sad reality. I think many people assume that Google is actually trying to move beyond linking signals, particularly toward social signals like Google+, and toward the signals of users and the usage data they're getting through Chrome, whose market share I think was recently reported by StatCounter as over 25% of all web browsers, which is very impressive.

So, I would say that this is a low-value activity as well. Not that you necessarily shouldn't do it. I mean, if you want to try and help Google make the Web a better place and you believe in their sort of mission and the quality of the people there, then by all means, spend two minutes, report them for webspam. It's not going to take a ton of your time. But please, don't think this is a solution. This will not solve any of the problems you're trying to solve. It might help Google in the long run to get better, to try and analyze some forms of webspam and link spam that they might not have otherwise caught if you hadn't told them. Is it going to help you rank better? Boy, probably not, and even if it is, not for a long time, because these algorithmic developments take a tremendous amount of time and energy to implement. Panda was years in the making. Most of the link spam devaluations that happened in '07 and '08 were years in the making. You could see patents that were filed two years, four years before those things actually came out. But reporting spam is an option.

Then I want you to move on to step four, which is can you - I think you almost always can - outmaneuver the spammers using their own tactics? What I mean by this is you might see where those links are coming from, but what's winning? Is it coming from high PageRank or high MozRank pages? Sort of home pages of domains? Is it coming from internal pages? Are they coming from directories? Are they coming from forums? Are they coming from blogs? Are they coming from .edu sites? Where are those links coming from, what are they pointing to, and what kind of anchor text are they using? Is it diverse anchor text? Is it all exact match anchor text? You want to find, you want to identify all the patterns. You're going to say, "Oh, this is anchor text pattern and this is the diversity of those patterns of where those links are coming from and this is a type of site it is coming from and this is the quantity or the number of sites I'm seeing and here's where the link target's pointing to." You look at all those things and then you find ways to do it inbound. Find ways to do it white hat. I promise you, you can. Think of one of the most common forms of spam, which is someone hijacking .edu webpages on student domains and then they essentially have all these anchor text links pointing to a specific page on their site from .edu pages that are buried deep in a site, but because it is an .edu it is a trusted domain. Usually there are only 50 or 100 of them, but they seem to be passing juice.

So how do you get 20 or 30 good links from .edu? I'll give you some great examples. Well, I'll give you one, and then you can figure out tons more on your own and certainly there is tons of link building content that you can look at on the SEOmoz site. But here's a great one. Go do a search like your keyword - whoa, that's a lot of smudging - your keyword + filetype:pdf (or xls or something like that) and site:.edu. What this is going to give you is essentially a bunch of research that has been done on .edu sites, that's been published, that's probably kind of buried. Now, I want you to go create some great blog posts, some great content, that references this stuff, that turns it into a graphic, that makes a clever video about it, and then I want you to email whoever was responsible for the research, and I guarantee half the time they are going to link to you from that website, from the .edu website. They're going to be like, "Oh, this is great. Someone turned my research into an infographic on a commercial site. Very cool. Great to see that application in the real world. Thank you. Here's a link . . ." from an .edu that's not spammy, that's completely inbound, white hat because it's making the Web a better place. There are ways to figure out all of this stuff.

Then I want you to take this last and final step, step five. Beat them by targeting the tactics, the channels, the people, and the keywords that they don't target. Remember what spam does. Spam tends to look at, if here is the keyword demand curve and we've got the head in here with all the popular keywords, that's where all the spam is. You very rarely, extremely rarely, see spam down in the tail. So if you can do things like user generated content, building a community, building tons of longtail great content, having a blog, having a forum, a place where real people participate and are creating a kind of Q&A site, you're going to target all that longtail. Remember 70% of the keyword volume is in here. Only 30% is up in the fat head and the chunky middle. Great. Fantastic way to work around them. Or think about the ways that they can't target, the channels that spammers, especially link spammers, never target - social media, forums, and communities. Rarely do they ever target blogs. Those people don't take those sites seriously. They don't take them authentically. Think about the branding elements you can build. You can have a better site design, a higher conversion rate, a way better funnel. People subscribe to your email, follow you, subscribe to your RSS feed. No spammer is ever going to get that, and those are customers that you can keep capturing again and again and again, because when you do inbound marketing, when you do white hat marketing, you don't have to just push your site up the rankings. You can approach it from a holistic point of view and win in all sorts of tactics and all sorts of channels. That's what I love about this field too.

All right everyone. I hope you've enjoyed this edition of Whiteboard Friday. I hope you'll feel maybe a little bit less stressed out about that nasty spammer who is ranking above you. I hope you'll see us again next week for another edition of Whiteboard Friday.

Video transcription by Speechpad.com



Search Engine Algorithm Basics

Posted: 08 Dec 2011 03:33 AM PST

Posted by rolfbroer

A good search engine does not attempt to return the pages that best match the input query; a good search engine tries to answer the underlying question. If you become aware of this, you'll understand why Google (and other search engines) use a complex algorithm to determine which results they should return. The factors in the algorithm consist of "hard factors" such as the number of backlinks to a page and perhaps some social recommendations through likes and +1s. These are usually external influences. You also have the factors on the page itself. Here the way a page is built and various page elements play a role in the algorithm. Only by analyzing both the on-site and off-site factors is it possible for Google to determine which pages answer the question behind the query. For that, Google has to analyze the text on a page.

In this article I will elaborate on the problems a search engine faces and some possible solutions. By the end of this article we won't have revealed Google's algorithm (unfortunately), but we'll be one step closer to understanding some of the advice we often give as SEOs. There will be some formulas, but do not panic: this article isn't just about those formulas. The article also contains an Excel file. Oh, and the best thing: I will use some Dutch delights to illustrate the problems.

Croquets and Bitterballen
Behold: croquets are the elongated ones and bitterballen are the round ones ;-)

True OR False
Search engines have evolved tremendously in recent years, but at first they could only deal with Boolean operators. In simple terms, a term was either included in a document or not: something was true or false, 1 or 0. Additionally you could use operators such as AND, OR and NOT to search for documents that contain multiple terms or to exclude terms. This sounds fairly simple, but it does have some problems. Suppose we have two documents, which consist of the following texts:

Doc1:
“And our restaurant in New York serves croquets and bitterballen.”

Doc2:
“In the Netherlands you retrieve croquets and frikandellen from the wall.”
 

Frikandellen
Oops, almost forgot to show you the frikandellen ;-)

If we were to build a search engine, the first step is tokenization of the text. We want to be able to quickly determine which documents contain a term. This is easier if we put all the tokens in a database. A token is any single term in a text, so how many tokens does Doc1 contain?

The moment you start answering this question for yourself, you probably think about the definition of a "term". Actually, in the example "New York" should be recognized as one term. How we can determine that those two individual words are actually one term is outside the scope of this article, so for the moment we treat each separate word as a separate token. So we have 10 tokens in Doc1 and 11 tokens in Doc2. To avoid duplication of information in our database, we will store types, not tokens.

Types are the unique tokens in a text. In the example, Doc1 contains the token "and" twice. Here I ignore the fact that "and" appears once with and once without a capital. As with the determination of a term, there are techniques to determine whether something actually needs to be capitalized. In this case, we assume that we can store it without a capital and that "And" and "and" are the same type.

By storing all the types in the database together with the documents in which we can find them, we're able to search within the database with the help of Booleans. The search "croquets" will return both Doc1 and Doc2. The search "croquets AND bitterballen" will only return Doc1. The problem with this method is that you are likely to get too many or too few results. In addition, it lacks the ability to organize the results. If we want to improve our method, we have to determine what we can use other than the presence or absence of a term in a document. Which on-page factors would you use to organize the results if you were Google?
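To make the Boolean approach a bit more tangible, here is a minimal Python sketch of a toy inverted index with Boolean AND retrieval over the two example documents. Everything here (the naive tokenizer, the data structures) is my own simplification for illustration, not how a real search engine stores its index.

# Toy inverted index + Boolean AND retrieval (illustrative sketch only)
docs = {
    "Doc1": "And our restaurant in New York serves croquets and bitterballen.",
    "Doc2": "In the Netherlands you retrieve croquets and frikandellen from the wall.",
}

def tokenize(text):
    # Naive tokenization: lowercase, strip the full stop, split on whitespace.
    return text.lower().replace(".", "").split()

index = {}                                    # type -> set of documents containing it
for doc_id, text in docs.items():
    for token in set(tokenize(text)):         # set() keeps only the unique types
        index.setdefault(token, set()).add(doc_id)

def boolean_and(*terms):
    # Intersect the posting sets of all query terms.
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

print(boolean_and("croquets"))                  # {'Doc1', 'Doc2'}
print(boolean_and("croquets", "bitterballen"))  # {'Doc1'}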

Zone Indexes
A relatively simple method is to use zone indexes. A web page can be divided into different zones: think of the title, description, author and body. By adding a weight to each zone in a document, we're able to calculate a simple score for each document. This was one of the first on-page methods search engines used to determine the subject of a page. Scoring with zone indexes works as follows:

Suppose we add the following weights to each zone:

Zone        | Weight
title       | 0.4
description | 0.1
content     | 0.5

We perform the following search query:
“croquets AND bitterballen”

And we have a document with the following zones:

Zone        | Content                                                     | Boolean | Score
title       | New York Café                                               | 0       | 0
description | Café with delicious croquets and bitterballen               | 1       | 0.1
content     | Our restaurant in New York serves croquets and bitterballen | 1       | 0.5
Total: 0.6
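A quick Python sketch of this weighted zone scoring, using the zone weights and the document above. Again, this is only an illustration of the idea, not Google's actual implementation.

# Weighted zone scoring (sketch): Boolean match per zone, multiplied by the zone weight
zone_weights = {"title": 0.4, "description": 0.1, "content": 0.5}

doc = {
    "title": "New York Café",
    "description": "Café with delicious croquets and bitterballen",
    "content": "Our restaurant in New York serves croquets and bitterballen",
}

def zone_score(document, query_terms):
    score = 0.0
    for zone, weight in zone_weights.items():
        words = document.get(zone, "").lower().split()
        # Boolean AND: the zone only counts if it contains every query term.
        if all(term in words for term in query_terms):
            score += weight
    return score

print(zone_score(doc, ["croquets", "bitterballen"]))   # 0.1 + 0.5 = 0.6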

Because at some point everyone started abusing the weights assigned to, for example, the description, it became more important for Google to split the body into different zones and assign a different weight to each individual zone in the body.

This is quite difficult, because the web contains a variety of documents with different structures. The interpretation of an XML document by a machine is quite simple; interpreting an HTML document is harder. The structure and tags are much more limited, which makes the analysis more difficult. Of course there will be HTML5 in the near future and Google supports microformats, but these still have their limitations. For example, if you knew that Google assigns more weight to content within the <content> tag and less to content in the <footer> tag, you'd never use the <footer> tag.

To determine the context of a page, Google has to divide a web page into blocks. This way Google can judge which blocks on a page are important and which are not. One of the methods that can be used is the text-to-code ratio. A block on a page that contains much more text than HTML code probably contains the main content of the page. A block that contains many links, much HTML code and little content is probably the menu. This is why choosing the right WYSIWYG editor is very important: some of these editors use a lot of unnecessary HTML code.

The use of text / code ratio is just one of the methods which a search engine can use to divide a page into blocks. Bill Slawski talked about identifying blocks earlier this year.

The advantage of the zone indexes method is that you can calculate a score for each document quite simply. A disadvantage, of course, is that many documents can end up with the same score.

Term frequency
When I asked you to think of on-page factors you would use to determine the relevance of a document, you probably thought about the frequency of the query terms. It is a logical step to give more weight to documents that use the search terms more often.

Some SEO agencies stick to the story of using keywords at a certain percentage of the text. We all know that isn't true, but let me show you why. I'll try to explain it on the basis of the following examples. Some formulas will emerge here, but as I said, it is the outline of the story that matters.

The numbers in the table below are the number of occurrences of a word in the document (also called the term frequency, or tf). So which document has a better score for the query "croquets and bitterballen"?

      | croquets | and | café | bitterballen | Amsterdam | ...
Doc1  | 8        | 10  | 3    | 2            | 0         |
Doc2  | 1        | 20  | 3    | 9            | 2         |
DocN  | ...      | ... | ...  | ...          | ...       |
Query | 1        | 1   | 0    | 1            | 0         |

The score for both documents would be as follows:
score(“croquets and bitterballen”, Doc1) = 8 + 10 + 2 = 20
score(“croquets and bitterballen”, Doc2) = 1 + 20 + 9 = 30

Document 2 is in this case more closely related to the query. In this example the term "and" gains the most weight, but is that fair? It is a stop word, and we'd like to give it only a little value. We can achieve this by using the inverse document frequency (idf), which is the opposite of the document frequency (df). The document frequency is the number of documents in which a term occurs; the inverse document frequency is, well, the opposite. As the number of documents in which a term appears grows, its idf shrinks.

You can calculate idf by dividing the total number of documents you have in your corpus by the number of documents containing the term and then take the logarithm of that quotient.

Suppose that the IDF of our query terms are as follows:
Idf(croquets)            = 5
Idf(and)                   = 0.01
Idf(bitterballen)         = 2

Then you get the following scores:
score(“croquets and bitterballen”, Doc1) = 8*5  + 10*0.01 + 2*2 = 44.1
score(“croquets and bitterballen”, Doc2) = 1*5 + 20*0.01 + 9*2 = 23.2

Now Doc1 has a better score. But we still don't take the document length into account. One document can contain much more content than another document without being more relevant. A long document gains a higher score quite easily with this method.
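Here is the tf-idf scoring above as a small Python sketch. The term frequencies, df and idf numbers are the made-up example values from this section, not real data, and the idf helper just illustrates the "divide and take the logarithm" step described earlier (I use a base-10 log; any base works, it only rescales the values).

import math

def idf(total_docs, doc_freq):
    # idf = log(N / df): divide the corpus size by the document frequency, then take the log.
    return math.log10(total_docs / doc_freq)

print(idf(10_000_000, 1_000))       # a term in 1,000 of 10,000,000 docs: log10(10,000) = 4.0

# Term frequencies from the table above; idf values as assumed in the example.
tf = {
    "Doc1": {"croquets": 8, "and": 10, "café": 3, "bitterballen": 2, "Amsterdam": 0},
    "Doc2": {"croquets": 1, "and": 20, "café": 3, "bitterballen": 9, "Amsterdam": 2},
}
idf_values = {"croquets": 5, "and": 0.01, "bitterballen": 2}

def tfidf_score(doc_id, query_terms):
    # Sum of tf * idf over the query terms.
    return sum(tf[doc_id].get(t, 0) * idf_values.get(t, 0) for t in query_terms)

query = ["croquets", "and", "bitterballen"]
print(tfidf_score("Doc1", query))   # 8*5 + 10*0.01 + 2*2 = 44.1
print(tfidf_score("Doc2", query))   # 1*5 + 20*0.01 + 9*2 = 23.2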

Vector model
We can solve this by looking at the cosine similarity of a document. An exact explanation of the theory behind this method is outside the scope of this article, but you can think of it as a kind of harmonic mean between the query terms in the document. I made an Excel file so you can play with it yourself; there is an explanation in the file itself. You need the following metrics:

  • Query terms - each separate term in the query.
  • Document frequency - how many documents does Google know containing that term?
  • Term frequency - the frequency for each separate query term in the document (add this Focus Keyword widget made by Sander Tamaëla to your bookmarks, very helpful for this part)

Here's an example where I actually used the model. The website had a page that was designed to rank for "fiets kopen" which is Dutch for “buying bikes”. The problem was that the wrong page (the homepage) was ranking for the query.

For the formula, we include the previously mentioned inverse document frequency (idf). For that we need the total number of documents in Google's index; we assume N = 10.4 billion.

An explanation of the table below:

  • tf = term frequency
  • df = document frequency
  • idf = inverse document frequency
  • Wt,q = weight for term in query
  • Wt,d = weight for term in document
  • Product = Wt,q * Wt,d
  • Score = Sum of the products

The main page, which was ranking: http://www.fietsentoko.nl/

term  | Query: tf | df          | idf         | Wt,q        | Doc: tf | Wf (tf²) | Wt,d    | Product
Fiets | 1         | 25.500.000  | 3.610493159 | 3.610493159 | 21      | 441      | 0.70711 | 2.55302
Kopen | 1         | 118.000.000 | 2.945151332 | 2.9452      | 21      | 441      | 0.70711 | 2.08258
Score: 4.6356

The page I wanted to rank: http://www.fietsentoko.nl/fietsen/

term  | Query: tf | df          | idf         | Wt,q        | Doc: tf | Wf (tf²) | Wt,d    | Product
Fiets | 1         | 25.500.000  | 3.610493159 | 3.610493159 | 22      | 484      | 0.61782 | 2.23063
Kopen | 1         | 118.000.000 | 2.945151332 | 2.945151332 | 28      | 784      | 0.78631 | 2.31584
Score: 4.54647

Although the second document contains the query terms more often, its score for the query was lower (higher is better). This was because of the lack of balance between the query terms. Following this calculation, I changed the text on the page: I increased the use of the term "fietsen" and decreased the use of "kopen", which is a more generic term in the search engine and therefore carries less weight. This changed the score as follows:

term  | Query: tf | df          | idf         | Wt,q        | Doc: tf | Wf (tf²) | Wt,d    | Product
Fiets | 1         | 25.500.000  | 3.610493159 | 3.610493159 | 28      | 784      | 0.78631 | 2.83897
Kopen | 1         | 118.000.000 | 2.945151332 | 2.945151332 | 22      | 484      | 0.61782 | 1.81960
Score: 4.6586

After a few days, Google crawled the page again and the document I changed started to rank for the term. We can conclude that the number of times you use a term is not necessarily important; it is important to find the right balance for the terms you want to rank for.
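If you want to reproduce the tables above, here is a small Python sketch of the scoring they use: the query weight of a term is simply its idf, the document weight is the length-normalised term frequency, and the score is the sum of the products. This is just my reconstruction of the spreadsheet calculation, not Google's actual formula, and I set N to the value that makes the idf numbers in the tables come out.

import math

N = 104_000_000_000            # corpus size chosen so the idf values match the tables above

def idf(df):
    return math.log10(N / df)

def score(query_df, doc_tf):
    # Wt,q = idf(term); Wt,d = tf / sqrt(sum of tf^2) over the terms tracked here
    # (cosine length normalisation); score = sum of Wt,q * Wt,d.
    norm = math.sqrt(sum(f * f for f in doc_tf.values()))
    return sum(idf(query_df[t]) * (doc_tf.get(t, 0) / norm) for t in query_df)

query_df = {"fiets": 25_500_000, "kopen": 118_000_000}

homepage   = {"fiets": 21, "kopen": 21}    # http://www.fietsentoko.nl/
target_old = {"fiets": 22, "kopen": 28}    # /fietsen/ before the rewrite
target_new = {"fiets": 28, "kopen": 22}    # /fietsen/ after the rewrite

for name, doc_tf in [("homepage", homepage), ("old /fietsen/", target_old), ("new /fietsen/", target_new)]:
    print(name, round(score(query_df, doc_tf), 3))   # 4.636, 4.546, 4.659 (matches the tables up to rounding)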

Speed up the process
Performing this calculation for each document that matches the search query costs a lot of processing power. You can fix this by using some static values to determine for which documents you want to calculate the score at all. PageRank, for example, is a good static value. If you first calculate the score for the pages that match the query and have a high PageRank, you have a good chance of finding documents that would end up in the top 10 of the results anyway.

Another possibility is the use of champion lists. For each term, take only the top N documents with the best score for that term. For a multi-term query, you can then intersect those lists to find documents that contain all the query terms and probably have a high score. Only if there are too few documents containing all the terms do you need to search the full index. So you're not going to rank just by having the best vector score; you have to have your static scores right as well.
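A tiny Python sketch of the champion-list idea, with hypothetical document ids, just to show the intersection-then-fallback step:

# Champion lists: per term, keep only the top documents by some static score (e.g. PageRank).
champion_lists = {
    "croquets":     {"doc7", "doc2", "doc9"},    # hypothetical ids, best documents for each term
    "bitterballen": {"doc2", "doc5", "doc7"},
}

def candidate_docs(query_terms, full_index=None):
    lists = [champion_lists.get(t, set()) for t in query_terms]
    shared = set.intersection(*lists) if lists else set()
    if shared or full_index is None:
        return shared                # score only these candidates with the expensive vector model
    # Too few champions contain all the terms: fall back to the full postings lists.
    return set.intersection(*(full_index.get(t, set()) for t in query_terms))

print(candidate_docs(["croquets", "bitterballen"]))   # {'doc2', 'doc7'}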

Relevance feedback
Relevance feedback is assigning more or less value to a term in a query, based on the relevance of a document. Using relevance feedback, a search engine can change the user query without telling the user.

The first step here is to determine whether a document is relevant or not. Although there are search engines where you can specify whether a result or a document is relevant, Google went a long time without such a function. Their first attempt was adding the favorite star to the search results; now they are trying it with the Google+ button. If enough people push the button on a certain result, Google will start considering the document relevant for that query.

Another method is to look at the pages that currently rank well; these will be considered relevant. The danger of this method is topic drift. If you're looking for bitterballen and croquettes, and the best-ranking pages are all snack bars in Amsterdam, the danger is that you will assign value to Amsterdam and end up with nothing but Amsterdam snack bars in the results.

Another option for Google is simply to use data mining. They can look at the CTR of different pages. Pages whose CTR is higher and whose bounce rate is lower than average can be considered relevant; pages with a very high bounce rate can be considered irrelevant.

An example of how we can use this data to adjust the query term weights is Rocchio's feedback formula. It comes down to adjusting the value of each term in the query and possibly adding additional query terms. Per term, the new query weight is:

new weight = alpha * (weight in the original query) + beta * (average weight in the relevant documents) - gamma * (average weight in the irrelevant documents)

The table below is a visual representation of this formula. Suppose we apply the following values:
Query terms: +1 (alpha)
Relevant terms: +1 (beta)
Irrelevant terms: -0.5 (gamma)

We have the following query:
“croquets and bitterballen”

The relevance of the following documents is as follows:
Doc1   : relevant
Doc2   : relevant
Doc3   : not relevant

Terms        | Q | Doc1 | Doc2 | Doc3 | Weight in new query
croquets     | 1 | 1    | 1    | 0    | 1 + 1 - 0     = 2
and          | 1 | 1    | 0    | 1    | 1 + 0.5 - 0.5 = 1
bitterballen | 1 | 0    | 0    | 0    | 1 + 0 - 0     = 1
café         | 0 | 0    | 1    | 0    | 0 + 0.5 - 0   = 0.5
Amsterdam    | 0 | 0    | 0    | 1    | 0 + 0 - 0.5   = -0.5 → 0

The new query is as follows:
croquets(2) and(1) bitterballen(1) cafe(0.5)

The value behind each term is the weight it gets in the new query; we can use those weights in our vector calculations. Although the term Amsterdam was given a score of -0.5, negative values are adjusted back to 0. This way we do not exclude terms from the search results. And although café did not appear in the original query, it was added and given a weight in the new query.
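Here is a short Python sketch of this Rocchio-style adjustment that reproduces the table above. The document vectors are just the 0/1 term presences from the table; this is an illustration of the formula, not a claim about how Google applies it.

# Rocchio-style feedback: new weight = alpha*query + beta*centroid(relevant) - gamma*centroid(irrelevant),
# with negative weights adjusted back to 0.
alpha, beta, gamma = 1.0, 1.0, 0.5

query      = {"croquets": 1, "and": 1, "bitterballen": 1}
relevant   = [{"croquets": 1, "and": 1},          # Doc1
              {"croquets": 1, "café": 1}]         # Doc2
irrelevant = [{"and": 1, "Amsterdam": 1}]         # Doc3

def centroid(docs):
    terms = {t for d in docs for t in d}
    return {t: sum(d.get(t, 0) for d in docs) / len(docs) for t in terms}

def rocchio(query, relevant, irrelevant):
    rel, irr = centroid(relevant), centroid(irrelevant)
    terms = set(query) | set(rel) | set(irr)
    weights = {t: alpha * query.get(t, 0) + beta * rel.get(t, 0) - gamma * irr.get(t, 0) for t in terms}
    return {t: w for t, w in weights.items() if w > 0}   # negative values go back to 0 and drop out

print(rocchio(query, relevant, irrelevant))
# croquets: 2.0, and: 1.0, bitterballen: 1.0, café: 0.5  (Amsterdam ends up at 0)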

Suppose Google uses this kind of relevance feedback; then you could look at the pages that already rank for a particular query. By using the same vocabulary, you can ensure that you get the most out of this form of relevance feedback.

Takeaways
In short, we've considered one of the options for assigning a value to a document based on the content of the page. Although the vector method is fairly accurate, it is certainly not the only method of calculating relevance. There are many adjustments to the model, and it remains only a part of the complete algorithm of search engines like Google. We have taken a look at relevance feedback as well. *cough* Panda *cough*. I hope I've given you some insight into the methods a search engine can use other than external factors. Now it's time to discuss this and to go play with the Excel file :-)



West Wing Week or "The Obamas of Osawatomie"

The White House
Your Daily Snapshot for Friday, Dec. 9, 2011
 

West Wing Week or "The Obamas of Osawatomie"

This week, the President gave a major address on the defining issue of our time, restoring economic security to the middle class. He also hosted former President Clinton, the Canadian Prime Minister and Startup America, honored five giants from the art world, and urged Congress to extend the payroll tax cut.

Watch the video:

West Wing Week

In Case You Missed It

Here are some of the top stories from the White House blog:

Vice President Biden Welcomes Home the USS Gettysburg
Vice President Biden delivered a brief welcome message before personally thanking every sailor for their service as they disembarked the ship to reunite with their loved ones.

By the Numbers: 782 Percent
The Department of Defense found that the APR on installment loans marketed exclusively to the military was often as high as 782 percent for a two-week installment loan.

President Obama Discusses Richard Cordray and the Payroll Tax Cut
President Obama spoke from the briefing room at the White House to address a vote by Senate Republicans to block Richard Cordray -- and the refusal of Congress to extend the payroll tax cut.

Today's Schedule

All times are Eastern Standard Time (EST).

10:00 AM: The President receives the Presidential Daily Briefing

1:30 PM: Briefing by Press Secretary Jay Carney WhiteHouse.gov/live

1:00 PM: The Vice President attends a campaign event

7:00 PM: The Vice President delivers remarks at an event honoring Ernie Davis, the first African-American to win the Heisman Trophy WhiteHouse.gov/live

WhiteHouse.gov/live Indicates that the event will be live-streamed on WhiteHouse.gov/Live


 

 

SEOptimise


SEOptimise’s 58 most awesome blog posts of 2011

Posted: 08 Dec 2011 04:32 AM PST

Typing cat*
2011 has been another very busy year on the SEOptimise blog, with nearly 400 posts generating over 400,000 visits and well in excess of half a million pageviews (oh and one best blog award).

With the year drawing to a close, and Christmas just round the corner, I thought it would be a great opportunity to try and summarise the best and most popular posts of the year, and hopefully give you a few early SEO Christmas gifts. While I am personally not a fan of list posts, judging by the most popular posts a lot of you are. So ever eager to please, here is a list of the 58 best/most popular posts of the year.

Our 10 most popular posts

What better place to start than the most popular posts of the year? Between them, the posts below generated almost 100,000 pageviews. So, working on the basis that 100,000 people (ok, maybe not people) can't be wrong, there must be some awesome SEO gems contained within them.

  1. 30 Web Trends You Have to Know About in 2011 – the first post of the year is also the most popular, with Tad's post about what was going to happen to search and social in 2011 receiving over 17,000 pageviews. As you would expect, there were a few predictions that didn't come true, but a fair few that did.
  2. 30 Ways to Use Social Media for Business People – with businesses seemingly falling over themselves to become more social, it's good to know what you can use it for, and this post rounds it up.
  3. 30 (New) Google Ranking Factors You May Over- or Underestimate – a one-stop guide to some of the more misunderstood Google ranking factors: link decay anyone?
  4. What Happens When You Build 10,000 Dodgy Links to a New Domain in 24 Hours? – this post got a fair few mentions at conferences, on other blog posts and even in the mainstream media. While the answer may seem obvious, you never know until you try, and Marcus did.
  5. 30 (New) SEO Terms You Have to Know in 2011 – this was the most shared post on our blog in 2011 and, amongst other things, told everyone what a content farm is shortly before Google tried to destroy them all. Still, there are always the 29 other terms.
  6. High Risk SEO: 33 Ways to Get Penalised by Google – a post that summarises ways to get penalised by Google. Best used as a "how not to guide", but I'm fairly sure people will still try a few and cross their fingers.
  7. 36 Must-Read Local SEO/Google Places Resources from 2010/2011 – with Local and Places results becoming ever more prominent in the SERPs, it's best to know how to master them. This post did the hard work and aggregated lots of the best sources for you.
  8. 30 Web Trends for 2012: How SEO, Search, Social Media, Blogging, Web Design & Analytics Will Change – despite only being published in late November this still made the top 10 list. Judging by the accuracy of the one Tad did for 2011, it's probably worth a read.
  9. Can You Get a New Domain Ranking Using Just Facebook Likes & Tweets? – there was a lot of talk earlier in 2011 about the potential use of social signals in Google's algorithm, so Marcus tried to find out if there was anything to back up the chat by experimenting.
  10. 30 Link Building/Link Baiting Techniques That Work in 2011 – With the Panda update (potentially) devaluing a lot of low quality links, Tad refreshed our memory about some of the great link building tactics that are still available.

The 20 posts that got you sharing

The latter half of the year has all been about one thing – social media, largely fuelled by Google's latest attempt at social media, Google+. Bearing this in mind, we had to check what got our readers sharing during 2011.

  1. 30 New SEO Terms You Have To Know in 2011 – shared 1,510 times
  2. 30 Ways To Use Social Media for Business People – shared 1,284 times
  3. 30 Web Trends For 2012: How SEO Search Social Media Blogging Web Design Analytics Will Change – shared 890 times
  4. 30 New Google Ranking Factors You May Over or Underestimate – shared 678 times
  5. How To Write a Social Media Audit – shared 599 times
  6. Can You Get a New Domain Ranking Using Just Facebook Likes Tweets – shared 597 times
  7. High Risk SEO 33 – Ways To Get Penalised by Google – shared 582 times
  8. 30 Link Building & Link Baiting Techniques That Work in 2011 – shared 406 times
  9. What Makes a Real SEO Expert – shared 404 times
  10. 30 Google Quality/Panda Update Resources for Content Farmers and SEO Practitioners – shared 397 times
  11. 30 Efficient Web Tools That Save Time and Make Money For Power Users – shared 369 times
  12. 30 Google SERP Changes that Impact Your SEO Strategy – shared 357 times
  13. 157 Awesome PubCon 2011 Takeaways – shared 346 times
  14. What Happens When You Build 10,000 Dodgy Links to a New Domain in 24 Hours – shared 322 times
  15. 30 Web Trends You Have To Know About in 2011 – shared 285 times
  16. Google Dropping Analytics Keyword Data: What Does This Mean – shared 273 times
  17. 30 SEO Myths That Are True or False Depending on Who You Ask – shared 247 times
  18. 30 Ways To Optimise Your Site for Speed – shared 235 times
  19. Experiment Do Google+1S Impact Your Rankings – shared 229 times
  20. Social Media Why an SEO Background is Better Than PR – shared 229 times

The three best experiments of the year

Due to the secrecy behind Google's algorithm, a lot of popular SEO theory is based on anecdotal evidence and guesswork. But every so often Marcus, our resident SEO mad scientist, likes to don his lab coat shark outfit, lock himself away and get experimenting. Below are three of his best experiments from 2011 (no animals were harmed during the making of these posts – but a few sites were).

  1. What Happens When You Build 10,000 Dodgy Links to a New Domain in 24 Hours
  2. Experiment – Do Google+1s Impact Your Rankings
  3. Can You Get a New Domain Ranking Using Just Facebook Likes Tweets

The 10 greatest SEO tool posts

The one thing common to all SEOs is that we like our tools, whether it's to crawl a site or just optimise it. In the SEOptimise office we like to try everything out, and the good news for you is that we love to write about them too. So here are the 10 greatest posts about SEO tools we wrote this year.

  1. Top 20 WordPress Plugins 2011 Edition
  2. 30 Efficient Web Tools That Save Time and Make Money for Power Users
  3. 10 New Google Tools Products and Services Every Business Person Has To Know About
  4. Five Low Profile SEO Tools
  5. 30 Very Useful Twitter Tools You Must Be Aware of
  6. Google Webmaster Tools: A Beginners' Guide to Installation
  7. More Than 30 Google Tools, Extensions, Tutorials and Other Resources
  8. Simple Goal Conversion Tracking With Piwik, the Open Source Google Analytics Alternative
  9. 30 Social Search Tools SEO Resources For Power Users
  10. Salespeople The Free SEO Tool Every Agency Has

10 awesome SEO conference posts

As an agency we like to go to and talk at as many conferences as we can (especially if they are in Vegas). But we also know that not everyone can go to them all, and even if you do there is a lot to remember, so we always try our best to post a round-up of the stuff we hear. So below I have summarised our 10 most viewed conference-related posts.

  1. 157 Awesome PubCon 2011 Takeaways
  2. Using Social Media for SEO Benefit: Travel Presentation SAScon 2011
  3. How Important Are Facebook Likes for Search Presentation From SMX London
  4. BrightonSEO 2011 Roundup Who Said What and Why
  5. Post Panda Affiliates' Guide to Surviving Google, a4uexpo London 2011
  6. Keyword Research SMX Advanced London 2011 Presentation by Kevin Gibbons
  7. Top 65 Takeaways From a4uexpo London 2011
  8. SearchLove 2011: Top Trends
  9. 28 Top Takeaway Tweets From SAScon Mini 2011
  10. London SES Day 1

Five wicked polls

We love a good bit of crowdsourcing in the SEOptimise office, and we are very lucky that we have followers who like to get involved and share their opinions. Here are five of the best and most interesting polls we ran this year.

  1. 74% of SEOs Buy or Would Buy Links
  2. Google Dropping Analytics Keyword Data: What Does This Mean
  3. Quick Poll Is Google AdWords Remarketing (A) Great for ROI or (B) Annoying
  4. Poll What Are The Most Important Factors That Make a Blog Post Go Viral
  5. Think Visibility Voted 1 UK Search Conference by SEOs

So these are our greatest hits of the year, hope you all found some cool stuff. Next year there will be more of the same, plus hopefully a few more video blogs as well. If there are any blogs that we haven't included but you think deserve a mention, let me know in the comments section.

Finally, we are always happy to take requests, so if you want us to blog about something in particular you can add those to the comments too.

*Image credit: negatendo on Flickr (I had to use a cat picture once this year).

© SEOptimise - Download our free business guide to blogging whitepaper and sign-up for the SEOptimise monthly newsletter.

Related posts:

  1. 154 Awesome Pubcon 2011 Takeaways, Tips & Tweets
  2. Top SEOptimise posts in September…
  3. 30 Web Trends You Have to Know About in 2011

Seth's Blog : Well rounded (and the other)

Well rounded (and the other)

Well rounded is like a resilient ball, rolling about, likely to be pleasing to most, and built to last.

The opposite?

Sharp.

Sharp is often what we want. We don't want a surgeon or an accountant or even a tour guide to be well rounded. We have a lot of choices, and it's unlikely we're looking for a utility player.

Well rounded gives you plenty of opportunities to shore up mediocrity with multiple options. Sharp is more frightening, because it's this or nothing.

Either can work, but it's very difficult to be both.

 
