Tuesday, November 13, 2012

A Week in the Life of 3 Keywords

Posted: 12 Nov 2012 06:57 PM PST

Posted by Dr. Pete

Like it or not, rank-tracking is still a big part of most SEOs' lives. Unfortunately, while many of us have a lot of data, sorting out what’s important ends up being more art (and borderline sorcery) than science. We’re happy and eager to take credit when keywords move up, and sad and quick to hunt for blame when they move down. The problem is that we often have no idea what “normal” movement looks like – up is good, down is bad, and meaning is in the eye of the beholder.

What’s A Normal Day?

Our work with MozCast has led me to an unpleasant realization – however unpredictable you think rankings are, it’s actually much worse. For example, in the 30 days prior to writing this post (10/11-11/9), just over 80% of SERPs we tracked changed, on average, every day. Now, some of those changes were small (maybe one URL shifted one spot in the top 10), and some were large, but the fact that 4 of 5 SERPs experienced some change every 24 hours shows you just how dynamic the ranking game has become in 2012.

[Figure: graph of daily SERP changes over the 30-day window (80.2% overall)]
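For the curious, the day-over-day check itself is simple. Here's a minimal sketch of that calculation – the example SERPs and the helper names are illustrative stand-ins, not MozCast's actual pipeline:

```python
# Minimal sketch of the day-over-day change check described above.
# The example SERPs are made up; MozCast's real pipeline is more involved.

def serp_changed(yesterday, today):
    """True if the ordered top-10 URL list differs in any way."""
    return yesterday != today

def percent_changed(serp_history):
    """serp_history maps keyword -> (yesterday_top10, today_top10)."""
    changed = sum(serp_changed(prev, curr)
                  for prev, curr in serp_history.values())
    return 100.0 * changed / len(serp_history)

history = {
    "kw1": (["a.com", "b.com"], ["b.com", "a.com"]),  # reordered
    "kw2": (["c.com", "d.com"], ["c.com", "d.com"]),  # unchanged
    "kw3": (["e.com", "f.com"], ["e.com", "g.com"]),  # new URL
}
print(f"{percent_changed(history):.1f}% of SERPs changed")  # 66.7%
```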

Compare these numbers to Google’s statements about updates like Panda – for example, for Panda #21, Google said that 1.2% of queries were “noticeably affected”. An algorithm update (granted, Panda #21 was probably data-only) impacted 1.2% of queries, but the baseline is something near 80%. How can we possibly separate the signal from the noise?

Is Google Messing With Us?

We all think it from time to time. Maybe Google is shuffling rankings on purpose, semi-randomly, just to keep SEOs guessing. On my saner days, I realize that this is unlikely from a search quality and tracking perspective (it would make their job a lot messier), but with average flux being so high, it’s hard to imagine that websites are really changing that fast.

While we do try to minimize noise by taking precautions like tracking keywords via the same IP, at roughly the same time of day, with settings delocalized and depersonalized, it is possible that the noise is an artifact of how the system works. For example, Google uses highly distributed data – even if I hit the same regional data center most days, it could be that the data itself is in flux as new information propagates and centers update themselves. In other words, even if the algorithm doesn’t change and the websites don’t change, the very nature of Google’s complexity could create a perpetual state of change.

How Do We Sort It Out?

I decided to try a little experiment. If Google is really just adding noise to the system – shuffling rankings slightly to keep SEOs guessing – then we’d expect to see a fairly similar baseline pattern regardless of the keyword. We also might see different patterns over time – while MozCast is based on 24-hour intervals, there’s no reason we can’t check in more often.

So, I ran a 7-day crawl for just three keywords, checking each of them every 10 minutes, resulting in 1,008 data points per keyword (6 measurements per hour × 24 hours × 7 days). For simplicity, I chose the keyword with the highest flux over the previous 30 days, the lowest flux, and one right in the middle (the median, in this case). Here are the three keywords and their MozCast temperatures for the 30 days in question:

  1. “new xbox” – 176°F
  2. “blood pressure chart” – 67°F
  3. “fun games for girls” – 12°F

Xbox queries run pretty hot, to put it mildly. The 7-day data was collected in late September and early October. Like the core MozCast engine, the Top 10 SERPs were crawled and recorded, but unlike MozCast, the crawler fired every 10 minutes.
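Conceptually, the crawl schedule looks something like the sketch below; fetch_top10() is a hypothetical placeholder for the actual crawler, which I'm not reproducing here:

```python
import time

# 6 snapshots/hour x 24 hours x 7 days = 1,008 measurements per keyword
KEYWORDS = ["new xbox", "blood pressure chart", "fun games for girls"]
INTERVAL_SECONDS = 10 * 60
SNAPSHOTS = 6 * 24 * 7

def fetch_top10(keyword):
    """Hypothetical placeholder: return the ordered top-10 result URLs."""
    raise NotImplementedError

def crawl():
    # Record a top-10 snapshot for every keyword, every 10 minutes
    log = {kw: [] for kw in KEYWORDS}
    for _ in range(SNAPSHOTS):
        for kw in KEYWORDS:
            log[kw].append(fetch_top10(kw))
        time.sleep(INTERVAL_SECONDS)
    return log
```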

Experiment #1: 10-minute Flux

Let’s get the big question out of the way first – was the rate of change for these keywords similar or different? You might expect (1) “new xbox” to show higher flux when it changes, but if Google were injecting randomness, then all three should change roughly as often, in theory. Over the 1,008 measurements for each keyword, here’s how often they changed:

  1. 555 – “new xbox”
  2. 124 – “blood pressure chart”
  3. 40 – “fun games for girls”

While three keywords isn’t enough data to do compelling statistics, the results are striking. The highest flux keyword changed 55% of the times we measured it, or roughly every 20 minutes. Either Google is taking into account new data that’s rapidly changing (content, links, SEO tweaks), or high-flux keywords are just inherently different beasts. The simple “random injection” model just doesn’t hold up, though. The lowest flux keyword only changed 4% of the times we measured it. If Google were moving the football every time we tried to kick it, we’d expect to see a much more consistent rate of change.

If we look at the temperature (a la MozCast) for “new xbox” across these micro-fluxes (only counting intervals where something changed), it averaged about 93°F, high but considerably less than the average 24-hour flux. This could be evidence that something about the sites themselves is changing at a steady rate (the more time passes, the more they change).
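MozCast's exact temperature formula isn't spelled out in this post, so treat the sketch below as my simplification in the same spirit – a rank-displacement score plus the change-rate calculation used above – not the real metric:

```python
def rank_flux(prev_top10, curr_top10):
    """Sum of absolute rank shifts between two top-10 snapshots; a URL
    that drops out entirely counts as a 10-spot move. A simplified
    stand-in for MozCast's temperature, not the actual formula."""
    total = 0
    for old_rank, url in enumerate(prev_top10):
        if url in curr_top10:
            total += abs(old_rank - curr_top10.index(url))
        else:
            total += 10
    return total

def change_rate(snapshots):
    """Fraction of consecutive 10-minute snapshots showing any change,
    e.g. roughly 555 changes across 1,008 snapshots for "new xbox"."""
    changed = sum(a != b for a, b in zip(snapshots, snapshots[1:]))
    return changed / (len(snapshots) - 1)
```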

Keep in mind that “new xbox” almost definitely has QDF (query deserves freshness) in play, as the Top 10 is occupied by major players with constantly updated content – including Forbes, CS Monitor, PC World, Gamespot, and IGN. This is a naturally dynamic query.

Experiment #2: Data Center Flux

Experiment #1 maintained consistency by checking each keyword from the same IP address (to avoid the additional noise of changing data centers). While it seems unlikely that the three keywords would vary so much simply because of data center differences, I decided to run a follow-up test measuring just “new xbox” every 10 minutes for a single day (144 data points) across two different data centers.

Across the two data centers, the rate of change was similar but even higher than the original experiment: (1) 98 changes in 144 measurements = 68% and (2) 104 changes = 72%. This may have just been an unusually high-flux day. We’re mostly interested in the differences across these two data sets. Average temperature for recorded changes was (1) 121°F and (2) 118°F, both higher than experiment #1 but roughly comparable.

What if we compared each measurement directly across data centers? In other words, we typically measure flux over time, but what if we measured flux between the two sets of data at the same moment in time? This turned out to be feasible, if a bit tricky.

Out of 144 measurements, the two data centers were out of sync 140 times (97%). As we data scientists like to say: Yikes!  The average temperature for those mismatched measurements was 138°F, also higher than the 10-minute flux measurements. Keep in mind that these measurements were nearly simultaneous (within 1 second, generally) and that the results were delocalized and depersonalized. Typically, “new xbox” isn’t a heavily local query to begin with. So, this appears to be almost entirely a byproduct of the data center itself (not its location).
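The comparison itself is straightforward once the snapshots are aligned by timestamp. A sketch, with dc1 and dc2 standing in for the two data centers' 144 aligned top-10 lists:

```python
def cross_dc_mismatch(dc1, dc2):
    """dc1, dc2: lists of top-10 URL lists, aligned by measurement index.
    Returns the mismatch count and the mismatch percentage."""
    mismatches = sum(a != b for a, b in zip(dc1, dc2))
    return mismatches, 100.0 * mismatches / len(dc1)

# With the numbers reported above: 140 mismatches out of 144 pairs ~ 97%
```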

So, What Does It All Mean?

We can never conclusively prove what’s happening inside a black box, but I feel comfortable saying that Google isn’t simply injecting noise into the system every time we run a query. The large variations across the three keywords suggest that it’s the inherent nature of the queries themselves that matters. Google isn’t moving the target so much as the entire world is moving around the target.

The data center question is much more difficult. It’s possible that the two data centers were just a few minutes out of sync, but there’s no clear evidence of that in the data (there are significant differences across hours). So, I’m left to conclude that the large amount of flux we see is a byproduct of two things: the nature of the keywords themselves and the data centers. Worse yet, it’s not just a matter of the data centers being static but different – they’re all changing constantly within their own universe of data.

The broader lesson is clear – don’t over-interpret one change in one ranking over one time period. Change is the norm, and may indicate nothing at all about your success. We have to look at consistent patterns of change over time, especially across broad sets of keywords and secondary indicators (like organic traffic). Rankings are still important, but they live in a world that is constantly in motion, and none of us can afford to stand still.



When Is My Tweet's Prime of Life? (A brief statistical interlude.)

Posted: 12 Nov 2012 03:47 AM PST

Posted by followerwonk

Life's but a walking shadow, a poor player,
That struts and frets his hour upon the stage,
And then is heard no more. -- Macbeth

So, little tweet, how long do you have before you must exit stage left? A minute? An hour?

If you're Obama, well, your record-breaking tweet is probably going to live as long as school kids hate Shakespeare. But, for the rest of us, our tweets probably have the life of a very short-lived fruit fly, right? Right?

Perhaps not!

How can we measure the lifespan of a tweet?

We can't ever really know when a tweet has been "consumed" (to use Twitter's artless term) without access logs. However, we do have a few pieces of information that may help. Notably, retweets.

Retweets are the currency of Twitter. They're a transaction; I've consumed this tweet and find it valuable enough to pass along to others. And, as retweet-like functionality spreads to Facebook and other networks, they're increasingly becoming the currency of the Internet.

We can look at when retweets occur relative to the underlying tweet as a pretty good signal of how long a tweet "lives." Ultimately, we're not necessarily interested in the old age of a tweet. Rather, with retweets, we can find a tweet's prime of life, when it's youthful and has many courters. Gather ye rosebuds...

Are there any issues using retweets as a prime-of-life metric?

Yep!

First, a retweet probably "kicks the can down the road." When a RT happens, more people may read and retweet the underlying tweet. Therefore, any conclusions we come to regarding the lifespans of tweets apply only to those with RTs. Tweets without RTs have their own hidden lifecycle. While that lifecycle probably correlates strongly, a retweet re-energizes a tweet and probably means that RTed tweets have slightly longer lives.

Second, not everyone gets retweeted.

As we see, as you approach a million followers, pretty much all of your tweets get at least 1 retweet. Those with fewer than 100 followers have virtually no tweets retweeted.

Why might this be a problem?

RTed tweets from small follower count users may have characteristics that set them apart from their other tweets. Something drove readers to applaud them compared to the many other tweets of theirs that get nary a clap. By comparison, high follower count users could tweet a single word and still count on lots of retweets.

So, bottom line, this analysis looks at the gems of low follower count users. And, since we're limited to looking at just 100 RTs per tweet, we also rely on the forgettable tweets of high follower count users. Ultimately I don't think this is an issue, but I want to mention it up front.

Drumroll...

Ready for it? The magic number?

Eighteen minutes.

Yep, for half of the users sampled, 18 minutes or less was the time it took for half of their tweets' RTs to occur.

My suspicion was that tweets survived for a minute or so, never to be heard from again after that. Indeed, even following a few hundred users, it's hard to keep up with the tweets that come at you. But generally, tweets live longer than I had imagined. (This 18 minute figure keeps coming up again and again, no matter how you slice the data.)
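The numbers above suggest a calculation along these lines – this is my reading of the estimator, not Followerwonk's actual code, with retweet times expressed as minutes since the original tweet:

```python
from statistics import median

def tweet_half_life(rt_minutes):
    """Minutes until half of a tweet's retweets had occurred."""
    rt_minutes = sorted(rt_minutes)
    return rt_minutes[(len(rt_minutes) - 1) // 2]

def user_half_life(tweets):
    """Median half-life across one user's retweeted tweets."""
    return median(tweet_half_life(t) for t in tweets)

# A tweet whose 4 RTs arrive at 2, 9, 40, and 300 minutes has a
# half-life of 9 minutes (its second RT marks the halfway point).
print(tweet_half_life([2, 9, 40, 300]))  # 9
```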

What's to blame?

Well, nothing! That's just the way it is.

But just as nicotine may knock years off your life, a few things may change your tweets' lifespans significantly. As you'd expect, the number of people who follow you changes things up a bit.

Here, I plot the average retweet time for all users against their follower counts. (Parenthetically, you can see the stratification I did for follower count... yes, we randomly sample, but first we put all Twitter users into buckets by how many follow them. This ensures we also sample from the relatively few high follower count users out there.)
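A sketch of that stratified sampling, for those who want to see the mechanics – the bucket edges here are illustrative, not the ones Followerwonk actually used:

```python
import random
from bisect import bisect_right

BUCKET_EDGES = [100, 1000, 10000, 100000, 1000000]  # illustrative edges

def stratified_sample(users, per_bucket, seed=42):
    """users: (user_id, follower_count) pairs. Sample each follower-count
    bucket separately so rare high follower count accounts show up."""
    rng = random.Random(seed)
    buckets = {i: [] for i in range(len(BUCKET_EDGES) + 1)}
    for user_id, followers in users:
        buckets[bisect_right(BUCKET_EDGES, followers)].append(user_id)
    sample = []
    for members in buckets.values():
        sample.extend(rng.sample(members, min(per_bucket, len(members))))
    return sample
```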

Tweets from high follower count users have a longer life than those from low follower count users. Okay, not that surprising. (By the way, it's kind of in vogue to dismiss follower count, but generally it's the most informative and productive metric there is.)

Look a bit closer at the above graph. Note the dispersion on the left, for lower follower count users, is greater than on the right, for higher follower count users. If we plot the standard deviations (and I'll save you from that nerdiness), the dispersion is very tight on the right and really dispersed on the left. This indicates that there are some low follower count users who hit the ball out of the park. We'll dig more into this in a future post, to find out what it is about some of their tweets that allows them to go big.

Here's something else that seems to impact how long tweets live.

Here, we compare the average lifespan of a tweet to the time it took that user to make 200 tweets. This positive correlation kinda makes sense, too: tweets cannibalize each other. Presumably, the longer a tweet sits at the top of your page, the longer its life. The more you tweet, the shorter the lifespan of each individual tweet.

Can we say this is a causal relationship? No... but it probably is. (If anything, this relationship is probably even stronger.)

You might say to yourself, "Ah, just because tweeting a lot means you drive down the life expectancy of any individual tweet, it doesn't mean that your overall retweet rate will be lower." Perhaps tweeting lots and lots rapidly will garner more overall RTs? The data doesn't bear that out:

Here we see that there's no correlation between how fast you tweet and the total RTs that you get.
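For the stats-minded, both relationships boil down to simple correlations over per-user aggregates. This sketch assumes the per-user field names shown, which are my invention, not Followerwonk's schema:

```python
def pearson(xs, ys):
    """Plain Pearson correlation; no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def tweet_rate_correlations(users):
    """users: dicts with time_to_200_tweets, avg_lifespan, total_rts."""
    pace = [u["time_to_200_tweets"] for u in users]
    return {
        # positive in the data above: slower tweeting, longer lifespan
        "pace_vs_lifespan": pearson(pace, [u["avg_lifespan"] for u in users]),
        # ~zero in the data above: pace doesn't buy total RTs
        "pace_vs_total_rts": pearson(pace, [u["total_rts"] for u in users]),
    }
```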

Okay, so what else can we investigate?

You know how occasionally those studies come out saying people who use Internet Explorer are less intelligent than Chromers? Okay, well, that was just plain silliness. But perhaps we can learn something by looking at the "source" of the tweet: namely, what client made it. (This is something that Twitter now hides from users, but it is still available via API calls.)
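At the time of writing, the v1.1 API returned each status's source as a small HTML anchor for most clients (or a bare string like "web" for the website). Extracting the client name from a returned status object is a short sketch:

```python
import re

def client_name(status):
    """Pull the human-readable client name from a status's "source" field.
    The field is an HTML anchor for most clients, or a bare string
    like "web" for the Twitter website."""
    source = status.get("source", "")
    match = re.search(r">([^<]+)</a>", source)
    return match.group(1) if match else (source or "unknown")

status = {"source": '<a href="http://twitter.com/download/ipad">'
                    'Twitter for iPad</a>'}
print(client_name(status))  # Twitter for iPad
```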

So what do we find?

Yeah, so I suppose this isn't that surprising. Almost 90% of all RTs come from official Twitter clients. So much for the salad days of yore, when every developer and his mother published a little Twitter client. We all know that Twitter has, ahem, discouraged that. And it has apparently worked.

Above is the detail on top RT clients. Here, I looked at 1.3 million retweets. (While the sampling isn't perfect due to the sheer volume of calls that would be required to get a truly random sample, I don't expect that anything would change much with better sampling.)

I also include the median time that users of that client made a retweet. A few things stand out. First, desktop clients are speedy little suckers: note that Tweetdeck and Twitter for Mac both have really fast times. Second, note that Flipboard, a sort of tweet curation service, has a slow response time, which makes sense given that it exposes tweets in a report-like "what you missed" format.

I looked at how Flipboard differed for RTs of tweets by high follower count versus low follower count users. It definitely had more of a presence in high follower count tweets. Similarly, the fast-on-the-draw automatic retweet clients (like RoundTeam) seem to have more of a presence on low follower count tweets (perhaps folks trying to bulk up their influence scores, or engage in other RT exchange programs). Ultimately, though, these clients are so relatively scarce that it's perhaps not worth reading too much into these observations.

Bottom line?

In the next installment, I will dig deeper into both the types of users and tweets that get more retweets. (Please make sure to Like this blog post to encourage me to get started on the next one!)

Here, I wanted to lay the groundwork a bit by looking at retweets as a measure of a tweet's life.

We learned that 18 minutes is an important number: it's the median lifespan of a tweet. Sure, tweets can have an extended old age, with a couple of people continually zapping the tweet back to life. To that end, when you tweet becomes critically important. I recommend that people understand when their audience is online so they can best time their tweets.

In Followerwonk, you can analyze your followers (or those of any other user) and see a chart of when they're most likely to be online. This'll help you find optimal times to tweet.

Parenthetically, it remains an open question, in my ever contrarian mind, whether the best time to tweet is when most of your followers are online. It almost certainly is. But until tested, I can also make a case that the best time to tweet is when the fewest of your followers are online. Why? Because it's kinda like watching TV at 3 am versus 9 pm. At 3 am you find yourself watching infomercials because there is nothing else on. So, perhaps tweeting at 3 am, when few of your own timezone followers are online, will more likely catch those night owls' attention, versus tweeting in the middle of the day when your audience has many other tweeters drawing their attention.

Finally, I think that we've also uncovered a bit of dirt in regards to tweet volume. I don't want to get all correlation versus causation on you, but it seems that the faster you tweet, the less life your tweets get. Since it's kinda sad to stamp out the life of a tweet too early, you might consider re-holstering your tweet finger now and again to ensure that you're tweeting quality content at a reasonable rate.

This is a preliminary and brief exploration of Twitter data.  Next time, I'll get even nuttier with data.  So please "like" this post if you'd like to see more!  And don't forget to follow me on Twitter!



President Obama Lays a Wreath for Those Who Served

The White House: Your Daily Snapshot for Tuesday, November 13, 2012
President Obama Lays a Wreath for Those Who Served

On Veterans Day, President Obama, Vice President Biden, First Lady Michelle Obama, and Dr. Jill Biden traveled to Arlington National Cemetery to honor our nation’s fallen warriors, veterans, and military families.

Before President Obama gave remarks, they laid a wreath to “remember every service member who has ever worn our nation’s uniform.” The President discussed our sacred duty to care for our men and women in uniform and their families, even after their military service has concluded.

Check out more about the President's Veterans Day commemoration.

President Barack Obama places a wreath at the Tomb of the Unknowns at Arlington National Cemetery in Arlington, Va., in honor of Veterans Day, Nov. 11, 2012. (Official White House Photo by Pete Souza)

In Case You Missed It

Here are some of the top stories from the White House blog:

Saying Thanks to our Troops on Veterans Day
Honoring Veterans Day, Captain Todd Veazie introduces himself as the new Executive Director of Joining Forces. As an active duty Naval Special Warfare officer with 26 years of service, Captain Veazie will join First Lady Michelle Obama and Dr. Jill Biden in this national initiative to provide support to service members and their families.

Thank an American Hero
We hope you'll take a moment to join First Lady Michelle Obama and thank an American hero for Veterans Day. You can send a note on JoiningForces.gov, and your card will be delivered to service members and veterans throughout the holiday season.

Weekly Address: Extending Middle Class Tax Cuts to Grow the Economy
President Obama says that it’s time for Congress to pass the middle class tax cuts for 98 percent of all Americans. Both parties agree that this will give 98 percent of families and 97 percent of small businesses the certainty that will lead to growth.

Today's Schedule

All times are Eastern Standard Time (EST). 

10:45 AM: The President and the Vice President receive the Presidential Daily Briefing

11:30 AM: The President and the Vice President attend a meeting with leaders from the labor community and other progressive leaders to discuss the actions we need to take to keep our economy growing and find a balanced approach to reduce our deficit

12:30 PM: Briefing by Press Secretary Jay Carney WhiteHouse.gov/live

(WhiteHouse.gov/live indicates that the event will be live-streamed on WhiteHouse.gov/live.)


Seth's Blog : Effortless

 

Effortless

When John Coltrane plays the melody early in the track Harmonique, you can hear some of the notes crack.

Of course, Coltrane was completely capable of playing these notes correctly. And yet he didn't. 

It's this effort and humanity that touches us about his solo, not just the melody.

Sometimes, "never let them see you sweat," is truly bad advice. The work of an individual who cares often exposes the grit and determination and effort that it takes to be present.

Perfecting your talk, refining your essay and polishing your service until all elements of you disappear might be obvious tactics, but they remove the thing we were looking for: you.




Monday, November 12, 2012

Mish's Global Economic Trend Analysis


Greece Allegedly Gets Time, Not Money; Mish Says Time Is Money

Posted: 12 Nov 2012 02:37 PM PST

After Greece was forced into more austerity measures, the next tranche of emergency loans to Greece (most of which will ultimately make a round trip back to Brussels) has been delayed because the path Greece is on is still not sustainable.

Greece still requires additional funding of around €32 billion. Germany has said no and the ECB has said no.

Please consider Greece to get more time but no immediate aid.
Euro zone governments will not agree to disburse more money to debt-ravaged Greece on Monday, despite the country approving a tough 2013 budget, because there is not yet a consensus on how to make its debts sustainable into the next decade.

Finance ministers gathered in Brussels should, however, give Athens two more years to make the budget deficit cuts demanded of it, a concession that will require funding of around 32 billion euros, according to a draft document prepared for the meeting.

The Greek parliament passed an austerity budget for 2013 late on Sunday and a structural reform package last Wednesday, meeting the conditions for the release of the next tranche of 31.5 billion euros of emergency loans from the euro zone.

But officials said the money would not be released since ministers are waiting for the European Commission, the IMF and the European Central Bank, together known as the troika, to present their 'debt sustainability analysis'.

A compliance report by the troika calculated that if ministers agree to give Greece two more years to meet its targets, extra funding of around 15 billion euros would be needed up to 2014 and another 17.6 billion for 2015/16 -- amounting to a 32.6 billion euros funding hole to be filled.

Discussion on how to close that gap will be top of the ministers' agenda on Monday.

Officials hope granting Greece two more years to meet its goals will allow the economy to start growing again, otherwise it would never produce enough for the country to repay its debt.

A target was set in March for Greece to achieve a primary surplus of 4.5 percent of GDP in 2014 and while there is no final decision yet, officials say it is likely to be moved to 2016 because of delays with reforms and a deeper than expected recession.

The two extra years would also mean that the targeted Greek debt-to-GDP ratio, would be shifted to 2022, officials said.
Time is Money

One amusing thing about this ridiculous result is that time is clearly money. Given more time, Greece needs to come up with another €32.6 billion.

One way or another, creditors will have to take another haircut.

No amount of hoping, wishful thinking, or delays can change one simple fact: Virtually none of these loans will be paid back. The Troika may as well shift the date to the 12th of never as to 2022.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com


New Loans and Money Supply in China "Lower Than Expected"; Hopes of China Soft Landing Too High

Posted: 12 Nov 2012 09:39 AM PST

The Chinese service sector expanded last month, causing many to believe the worst news regarding China was over. I think it has barely started.

Damping the Pollyanna view comes news China New Yuan Loans, Money Supply Lower Than Expected in October
The value of loans issued by banks in China in October was less than expected, whereas the amount of money in the banking system grew at a slower pace, which suggests the authorities may be reining in liquidity toward the end of the year.

Chinese financial institutions issued a lower-than-expected 505.2 billion yuan ($80.83 billion) worth of new yuan loans in October, data from the central bank showed Monday.

October's new-loan tally was down from CNY623.2 billion in September and also below the median CNY590 billion of new lending forecast by 11 economists polled earlier by the Dow Jones Newswires.

China's broadest measure of money supply, M2, was up 14.1% at the end of October compared with a year earlier, lower than the 14.8% rise at the end of September, and below the median 14.5% increase forecast by economists.

Societe Generale economist Yao Wei said October's total new-loan tally shows that the People's Bank of China appears to be zeroing in on the CNY8.5 trillion lending target that many analysts and economists believe the central bank has set for the year.

Given the recent signs of stabilizing economic growth, it is unlikely that Beijing will further loosen its monetary policy in the coming months, said HSBC economist Ma Xiaoping.
Lending in a Command Economy vs. Lending in US

I see no reason to change my long-held belief that surprises in China will generally be to the downside, and probably severely so.

I say that even though there is one huge difference in bank lending between China and the US in terms of the government coaxing banks to lend.

When the Chinese Central Bank suggests banks should lend, they do. In the US money piles up as excess reserves if banks are reluctant to lend (as they are now).

US banks lend on two conditions, both of which need to be true:

  1. Banks are not capital impaired.
  2. Banks believe they have credit-worthy borrowers.

For a discussion please see Economist Fired for Expressing Opinions on Max Keiser Show; Errors in Observation.

Hopes of China Soft Landing Too High

In spite of command-economy lending prospects, hopes of a "soft landing" in China are misguided. China is far too dependent on housing, infrastructure, and State-Owned-Enterprises (SOEs).

Infrastructure Malinvestment

China is home to the world's largest shopping mall and it sits empty. For a discussion and video, please see How Will China Handle The Yuan?

Also recall that China is home to numerous vacant cities. For a discussion, please see World's Biggest Property Bubble: China's Ghost Cities Revisited; 64 Million Vacant Properties

The ghost cities video is a must-see eye-opener for those overly bullish on China.

Either now or later China will pay the price, and the sooner the better.

China needs to rebalance, and will rebalance. Propping up the economy with more infrastructure projects and easy money will only cause the imbalances to grow larger. A regime change in China is underway, and the new regime will have to address the issue.

Economist Michael Pettis describes the setup perfectly in The Dating Game: Michael Pettis Challenges The Economist to a Bet on China

Implications

For implications on the upcoming China slowdown, please see


Two of the world's foremost experts on China (Michael Pettis and Jim Chanos) will be speakers at my economic conference in Sonoma, California on April 5, 2013.

For details please see Wine Country Conference April 5, 2013 or click on the image below.



Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com