Tuesday, September 7, 2010

SEOmoz Daily SEO Blog


Latent Dirichlet Allocation (LDA) and Google's Rankings are Remarkably Well Correlated

Posted: 06 Sep 2010 11:42 AM PDT

Posted by randfish

Last week at our annual mozinar, Ben Hendrickson gave a talk on a unique methodology for improving SEO. The reception was overwhelming - I've never previously been part of a professional event where thunderous applause broke out not once but multiple times in the midst of a speaker's remarks.

Ben Hendrickson speaking last fall at the Distilled/SEOmoz PRO Training in London
(he'll be returning this year)

I doubt I can recreate the energy and excitement of that 320-person room, but my goal in this post is to help explain the concepts of topic modeling and vector space models as they relate to information retrieval, along with the work we've done on LDA (Latent Dirichlet Allocation). I'll also try to explain the relationship and potential applications to the practice of SEO.

A Request: Curiously, prior to the public release of this post and our research, there have been a number of negative remarks and criticisms from several folks in the search community suggesting that LDA (or topic modeling in general) is definitively not used by the search engines. We think there's a lot of evidence to suggest the engines do use these techniques, but we'd be excited to see contradicting evidence presented. If you have such work, please do publish!

The Search Rankings Pie Chart

Many of us are likely familiar with the ranking factors survey SEOmoz conducts every two years (we'll have another one next year and I expect some exciting/interesting differences). Of course, we know that this aggregation of opinion is likely missing many factors and may over- or under-emphasize the ones it does show.

Here's an illustration I created for a presentation recently to help illustrate the major categories in the overall results:

Illustration of Ranking Factors Survey Data

This suggests that many SEOs don't ascribe much weight to on-page optimization

I myself have often felt, from all the metrics, tests and observations of Google's ranking results, that the importance of on-page factors like keyword usage or TF*IDF (explained below) is fairly small. Certainly, I've not observed many results, even in low-competition spaces, where one can simply add in a few more repetitions of the keyword, maybe toss in a few synonyms or "related searches," and improve rankings. This experience, which many SEOs I've talked to share, has led me to believe that linking signals make up an overwhelming majority of how the engines order results.

But, I love to be wrong.

Some of the work we've been doing around topic modeling, specifically using a process called LDA (Latent Dirichlet Allocation), has shown some surprisingly strong results. This has made me (and I think a lot of the folks who attended Ben's talk last Tuesday) question whether it was simply a naive application of the concept of "relevancy" or "keyword usage" that gave us this biased perspective.

Why Search Engines Need Topic Modeling

Some queries are very simple - a search for "wikipedia" is unambiguous, straightforward and can be effectively returned by even a very basic web search engine. Other searches aren't nearly as simple. Let's look at how engines might order two results - a problem that's simple most of the time but can become complex depending on the situation.

(Screenshots: example queries for "Batman," "Chief Wiggum," "Superman," and "Pianist")

For complex queries, or when ranking large quantities of results with lots of content-related signals, search engines need ways to determine the intent of a particular page. The fact that a page mentions the keyword 4 or 5 times in prominent places, or even mentions similar phrases/synonyms, won't necessarily mean that it's truly relevant to the searcher's query.

Historically, lots of SEOs have put effort into this process, so what we're doing here isn't revolutionary, and topic models, LDA included, have been around for a long time. However, no one in the field, to our knowledge, has made a topic modeling system public or compared its output with Google rankings (to help see how potentially influential these signals might be). The work Ben presented, and the really exciting bit (IMO), is in those numbers.

Term Vector Spaces & Topic Modeling

Term vector spaces, topic modeling and cosine similarity sound like tough concepts, and when Ben first mentioned them on stage, a lot of the attendees (myself included) felt a bit lost. However, Ben (along with Will Critchlow, whose Cambridge mathematics degree came in handy) helped explain these to me, and I'll do my best to replicate that here:

Simplistic Term Vector Model

In this imaginary example, every word in the English language is related to either "cat" or "dog," the only topics available. To measure whether a word is more closely related to "dog" or to "cat," we use a vector space model that expresses those relationships mathematically. The illustration above does a reasonable job of showing our simplistic world. Words like "bigfoot" sit perfectly in the middle, no closer to "cat" than to "dog." But words like "canine" and "feline" are clearly closer to one than the other, and the angle in the vector model illustrates this (and gives us a number).

BTW - in an LDA vector space model, topics wouldn't have exact label associations like "dog" and "cat" but would instead be things like "the vector around the topic of dogs."

Unfortunately, I can't really visualize beyond this step, as it relies on taking the simple model above and scaling it to thousands or millions of topics, each of which would have its own dimension (and anyone who's tried knows that drawing more than 3 dimensions in a blog post is pretty hard). Using this construct, the model can compute the similarity between any word or group of words and the topics it's created. You can learn more about this from Stanford University's posting of Introduction to Information Retrieval, which has a specific section on Vector Space Models.
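To make the geometry concrete, here's the toy two-topic "dog/cat" world sketched in a few lines of plain Python. The vectors are invented purely for illustration; a real model would learn them from a corpus:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy two-topic space: each word is (affinity to "dog", affinity to "cat").
# These numbers are made up for illustration.
canine = (0.9, 0.1)
feline = (0.1, 0.9)
bigfoot = (0.5, 0.5)

# "canine" is much closer to "bigfoot" than to "feline";
# "bigfoot" is equidistant from the two topic axes.
sim_canine_bigfoot = cosine_similarity(canine, bigfoot)
sim_canine_feline = cosine_similarity(canine, feline)
```

In a real system the tuples would have thousands or millions of components (one per topic), but the cosine computation is exactly the same.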

Correlation of our LDA Results w/ Google.com Rankings

Over the last 10 months, Ben (with help from other SEOmoz team members) has put together a topic modeling system based on a relatively simple implementation of LDA. While it's certainly challenging to do this work, we doubt we're the first SEO-focused organization to do so, though possibly the first to make it publicly available.
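For readers curious what "a relatively simple implementation of LDA" can look like, here is a minimal collapsed Gibbs sampler - one common way to fit an LDA model. This is purely an illustrative sketch, not the code Ben wrote; the toy documents and hyperparameters are invented:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Fit a tiny LDA model by collapsed Gibbs sampling.
    docs: list of token lists. Returns (word-topic counts, doc-topic counts)."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})        # vocabulary size
    nd = [[0] * n_topics for _ in docs]          # doc -> topic counts
    nw = defaultdict(lambda: [0] * n_topics)     # word -> topic counts
    nt = [0] * n_topics                          # tokens assigned to each topic
    z = []                                       # topic assignment per token
    for di, d in enumerate(docs):                # random initialization
        zs = []
        for w in d:
            t = rng.randrange(n_topics)
            zs.append(t)
            nd[di][t] += 1; nw[w][t] += 1; nt[t] += 1
        z.append(zs)
    for _ in range(iters):                       # resample each token's topic
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                nd[di][t] -= 1; nw[w][t] -= 1; nt[t] -= 1
                # conditional p(topic k | all other assignments), up to a constant
                weights = [(nd[di][k] + alpha) * (nw[w][k] + beta) / (nt[k] + V * beta)
                           for k in range(n_topics)]
                r = rng.random() * sum(weights)
                for k, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        t = k
                        break
                z[di][wi] = t
                nd[di][t] += 1; nw[w][t] += 1; nt[t] += 1
    return dict(nw), nd

# Hypothetical toy corpus: documents about dogs and cats
docs = [["dog", "puppy", "dog"], ["cat", "kitten", "cat"], ["dog", "cat"]]
word_topics, doc_topics = lda_gibbs(docs, n_topics=2)
```

After fitting, the normalized count vectors give per-word and per-document topic distributions, which is what a cosine-similarity comparison between a page and a query would be computed over.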

When we first started this research, we didn't know what kind of influence LDA/topic modeling might have on search engine rankings. Thus, on completion, we were pretty excited (maybe even ecstatic) to see the following results:

 

Correlation Between Google.com Rankings and Various Single Metrics
Spearman Correlation of LDA, Linking IPs and TF*IDF

 

(the vertical blue bars indicate standard error in the diagram, which is relatively low thanks to the large sample set)

Using the same process we did for our release of Google vs. Bing correlation/ranking data at SMX Advanced (we posted much more detail on the process here), we've shown the Spearman correlations for a set of metrics familiar to most SEOs against some of the LDA results, including:

  • TF*IDF - the classic term weighting formula, TF*IDF measures keyword usage in a more accurate way than a more primitive metric like keyword density. In this case, we simply took the TF*IDF score of the page content that appeared in Google's rankings.
  • Followed IPs - this is our highest-correlated single link-based metric, and shows the number of unique IP addresses hosting a website that contains a followed link to the URL. As we've shown in the past, with metrics like Page Authority (which uses machine learning to build more complex ranking models) we can do even better, but it's valuable in this context to simply compare raw link numbers.
  • LDA Cosine - this is the score produced from the new LDA labs tool. It measures the cosine similarity of topics between a given page or content block and the topics produced by the query.
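As a concrete illustration of the TF*IDF metric described above, here is the classic formula in a few lines of Python. The three-snippet corpus is made up solely to show the mechanics:

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, corpus):
    """Classic TF*IDF: term frequency in the document times the log
    inverse document frequency of the term across the corpus."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(1 for d in corpus if term in d)       # documents containing the term
    return tf * math.log(len(corpus) / df) if df else 0.0

# Invented mini-corpus for illustration
corpus = [
    "the rolling stones tour dates".split(),
    "mick jagger rolling stones".split(),
    "gemstones rubies emeralds".split(),
]
score = tf_idf("stones", corpus[0], corpus)
```

A term that appears in every document gets an IDF of log(1) = 0, which is why TF*IDF discounts ubiquitous words in a way raw keyword density cannot.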

The correlation of the LDA scores with rankings is uncanny. Certainly, it's not a perfect correlation, but that shouldn't be expected given the supposed complexity of Google's ranking algorithm and the many factors therein. But seeing LDA scores show this dramatic a result made us seriously question whether there was causation at work here (and we hope to do additional research via our ranking models to attempt to show that impact). Perhaps good links are more likely to point to pages that are more "relevant" via a topic model, or some other aspect of Google's algorithm that we don't yet understand naturally biases toward these.
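For reference, the Spearman correlation reported in the chart is simply a Pearson correlation computed on ranks rather than raw values. A minimal, tie-free sketch, with invented LDA scores for a hypothetical five-result SERP:

```python
def spearman(x, y):
    """Spearman rank correlation (assumes no ties, for simplicity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical LDA scores for the pages at Google positions 1-5
positions = [1, 2, 3, 4, 5]
lda_scores = [0.81, 0.84, 0.66, 0.52, 0.43]
rho = spearman(positions, lda_scores)
```

Here rho comes out strongly negative, meaning higher scores cluster at better (lower-numbered) positions; a value of -1.0 would mean the scores perfectly predict the ordering.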

However, given that many SEO best practices (e.g., keywords in title tags and static URLs) have dramatically lower correlations and the same difficulties proving causation, we suspect a lot of SEO professionals will be deeply interested in trying this approach.

The LDA Labs Tool Now Available; Some Recommendations for Testing & Use

We've just recently made the LDA Labs tool available. You can use this to input a word, phrase, chunk of text or an entire page's content (via the URL input box) along with a desired query (the keyword term/phrase you want to rank for) and the tool will give back a score that represents the cosine similarity in a percentage form (100% = perfect, 0% = no relationship).

LDA Topics Tool

When you use the tool, be aware of a few issues:

  • Scores Change Slightly with Each Run
    This is because, like a pollster interviewing 100 voters in a city to get a sense of the local electorate, we check a sample of the topics a content+query combo could fit with (checking every possibility would take an exceptionally long time). You can, therefore, expect the percentage output to fluctuate by 1-5% each time you check a page/content block against a query.
  • Scores are for English Only
    Unfortunately, because our topics are built from a corpus of English language documents, we can't currently provide scores for non-English queries.
  • LDA isn't the Whole Picture
    Remember that while the average correlation is in the 0.33 range, we shouldn't expect scores for any given set of search results to go in precisely descending order (a correlation of 1.0 would suggest that behavior).
  • The Tool Currently Runs Against Google.com in the US only
    You should be able to see the same results the tool extracts by using a personalization-agnostic search string like http://www.google.com/xhtml?q=my+search&pws=0
  • Using Synonyms, "Related Searches" or Wonder Wheel Suggestions May Not Help
    Term vector models are more sophisticated representations of "concepts" and "topics." Many SEOs have long recommended using synonyms or adding "related searches" as keywords on their pages, and others have stressed the importance of "topically relevant content," but there haven't been great ways to measure these approaches or show their correlation with rankings. The scores you see from the tool are based on a much less naive interpretation of the connections between words than these classic approaches.
  • Scores are Relative (20% might not be bad)
    Don't presume that getting a 15% or a 20% is always a terrible result. If the folks ranking in the top 10 all have LDA scores in the 10-20% range, you're likely doing a reasonable job. Some queries simply won't produce results that fit remarkably well with given topics (which could be a weakness of our model or a weirdness about the query itself).
  • Our Topic Models Don't Currently Use Phrases
    Right now, the topics we construct are around single word concepts. We imagine that the search engines have probably gone above and beyond this into topic modeling that leverages multi-word phrases, too, and we hope to get there someday ourselves.
  • Keyword Spamming Might Improve Your LDA Score, But Probably Not Your Rankings
    Like anything else in the SEO world, manipulatively applying the process is probably a terrible idea. Even if this tool worked perfectly to measure keyword relevance and topic modeling in Google, it would be unwise to simply stuff 50 words over and over on your page to get the highest LDA score you could. Quality content that real people actually want to find should be the goal of SEO, and Google is almost certainly sophisticated enough to determine the difference between junk content that matches topic models and real content that real users will like (even if the tool's scoring can't do that).
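The first caveat above - scores shifting between runs because only a sample of topics is checked - is the same phenomenon as polling error, and can be illustrated with an entirely hypothetical toy sampler:

```python
import random

def sampled_similarity(true_score, n_checked, rng):
    """Estimate a hypothetical 'true' similarity by checking only a random
    sample of topics; each run gives a slightly different answer, like a poll."""
    hits = sum(1 for _ in range(n_checked) if rng.random() < true_score)
    return hits / n_checked

rng = random.Random(42)
# Three runs against the same page/query: each estimate lands near the
# true value (0.40 here) but varies run to run, shrinking as n_checked grows.
runs = [sampled_similarity(0.40, 500, rng) for _ in range(3)]
```

Nothing here models the real tool's internals; it only shows why a sampling-based score naturally wobbles a few percentage points per run.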

If you're trying to do serious SEO analysis and improvement, my suggested methodology is to build a chart something like this:

Analysis of "SEO" SERPs in Google
SERPs analysis of "SEO" in Google.com w/ Linkscape Metrics + LDA (click for larger)

Right now, you can use Keyword Difficulty's export function and then add in some of these metrics manually (though in the future, we're working towards building this type of analysis right into the web app beta).

Once you've got a chart like this, you can get a better sense of what's propping up your competitors' rankings - anchor text, domain authority, or maybe something related to topic modeling relevancy (which the LDA tool could help with).

Undoubtedly, Google's More Sophisticated than This

While the correlations are high, and the excitement around the tool both inside SEOmoz and from a lot of our members and community is equally high, this is not us "reversing the algorithm." We may have built a great tool for improving the relevancy of your pages and helping to judge whether topic modeling is another component in the rankings, but it remains to be seen if we can simply improve scores on pages and see them rise in the results.

What's exciting to us isn't that we've found a secret formula (LDA has been written about for years and vector space models have been around for decades), but that we're making a potentially valuable addition to the parts of SEO we've traditionally had little measurement around.

BTW - Thanks to Michael Cottam, who suggested the reference of research work by a number of Googlers on pLDA. There are hundreds of papers from Google and Microsoft (Bing) researchers around LDA-related topics, too, for those interested. Reading through some of these, you can see that major search engines have almost certainly built more advanced models to handle this problem. Our correlation and testing of the tool's usefulness will show whether a naive implementation can still provide value for optimizing pages.

For those who'd like to investigate more, we've made all of our raw data available here (in XLS format, though you'll need a more sophisticated model to do LDA). If you have interest in digging into this, feel free to email Ben at SEOmoz dot org.

How Do I Explain this to the Boss/Client?

The simplest method I've found is to use an analogy like:

If we want to rank well for "the rolling stones" it's probably a really good idea to use words like "Mick Jagger," "Keith Richards," and "tour dates." It's also probably not super smart to use words like "rubies," "emeralds," "gemstones," or the phrase "gathers no moss," as these might confuse search engines (and visitors) as to the topic we're covering.

This tool tries to give a best guess number about how well we're doing on this front vs. other people on the web (or sample blocks of words or content we might want to try). Hopefully, it can help us figure out when we've done something like writing about the Stones but forgetting to mention Keith Richards.

As always, we're looking forward to your feedback and results. We've already had some folks write in to us saying they used the tool to optimize the contents of some pages and saw dramatic ranking boosts. As we know, that might not mean anything about the tool itself or the process, but it certainly has us hoping for great things.

p.s. The next step, obviously, is to produce a tool that can make recommendations on words to add or remove to help improve this score. That's certainly something we're looking into.

p.p.s. We're leaving the Labs LDA tool free for anyone to use for a while, as we'd love to hear what the community thinks of the process and want to get as broad input as possible. Future iterations may be PRO-only.



Daily Snapshot: Renewing and Expanding America's Roads, Rails and Runways

The White House Your Daily Snapshot for
Tuesday, September 7, 2010
 

Photo of the Day

Photo of the Day - September 2, 2010

President Barack Obama waves to the crowd after speaking at the Milwaukee Laborfest in Milwaukee, Wisc., Sept. 6, 2010. (Official White House Photo by Pete Souza)

View more photos.

Today's Schedule

All times are Eastern Daylight Time

10:00 AM: The President and the Vice President receive the Presidential Daily Briefing

10:30 AM: The President and the Vice President receive the Economic Daily Briefing

11:10 AM: The President and the Vice President meet with Secretary Clinton

11:50 AM: The President welcomes NATO Secretary General Rasmussen

12:00 PM: Briefing by Press Secretary Robert Gibbs WhiteHouse.gov/live

12:45 PM: The President and the Vice President have lunch

1:15 PM: The President meets with senior advisors

4:30 PM: The President and the Vice President meet with Secretary Clinton

6:30 PM: The Vice President and Dr. Jill Biden host a Rosh Hashanah reception

(Events marked WhiteHouse.gov/live will be livestreamed on WhiteHouse.gov/live.)

In Case You Missed It

Here are some of the top stories from the White House blog

President Obama on Labor Day: The Fight for America's Workers Continues
President Obama attends the Milwaukee Laborfest and announces a new plan for rebuilding and modernizing America’s roads, rails and runways for the long term.

The Right Comparison Between Recoveries
The Wall Street Journal ran a graph claiming, “The private sector is adding jobs … but the recovery is slower than in past cycles.” In fact, even though it is not fast enough, the rate of job growth is actually faster now than was the case at comparable points of the past two recoveries.

Let's Stop Torturing Facts and Start Working Together
Jared Bernstein looks at the latest misleading attack on the Recovery Act and calls for cooperation on further measures to boost the economy.


Seth's Blog : If you want to learn to do marketing...


If you want to learn to do marketing...

then do marketing.

You can learn finance and accounting and media buying from a book. But the best way to truly learn how to do marketing is to market.

You don't have to quit your job and you don't need your boss's permission. There are plenty of ways to get started.

If you see a band you like coming to town, figure out how to promote them and sell some tickets (posters? google ads? PR?). Don't ask, just do it.

If you find a book you truly love, buy 30 and figure out how to sell them all (to strangers).

If you're 12, go door to door selling fresh fruit--and figure out what stories work and which don't.

Set up an online business. Get a candidate you believe in elected to the school board.

The best way to learn marketing is to do it.

[And Chris Guillebeau's new book turns this simple idea into a plan for life].


Monday, September 6, 2010

Mish's Global Economic Trend Analysis



Labor Day Insanity from Clinton's Secretary of Labor

Posted: 06 Sep 2010 11:20 AM PDT

It's Labor Day. The markets are closed. Those working for government, banks, schools, etc. have the day off. All told, 17.3 million citizens do not have a job today, nor a job they can return to on Tuesday. Another 8.9 million will not work as many hours as they would like this week, next week, or the week after that.

How NOT to End the Great Recession

In a New York Times op-ed, Robert B. Reich, secretary of labor in the Clinton administration and professor of public policy at the University of California, Berkeley, comes to all the wrong conclusions about where we are, how we got here, and what to do about it.

Please consider How to End the Great Recession
Reich: THIS promises to be the worst Labor Day in the memory of most Americans. Organized labor is down to about 7 percent of the private work force. Members of non-organized labor — most of the rest of us — are unemployed, underemployed or underwater.
Mish Comment: When organized labor is at 0%, both public and private, we will be on our way to prosperity. Organized labor in conjunction with piss poor management bankrupted GM and countless other manufacturing companies. Now, public unions, in cooperation with corrupt politicians have bankrupted countless cities and states.
Reich: The Labor Department reported on Friday that just 67,000 new private-sector jobs were created in August, while at least 125,000 are needed to keep up with the growth of the potential work force.

The national economy isn't escaping the gravitational pull of the Great Recession. None of the standard booster rockets are working: near-zero short-term interest rates from the Fed, almost record-low borrowing costs in the bond market, a giant stimulus package and tax credits for small businesses that hire the long-term unemployed have all failed to do enough.

That's because the real problem has to do with the structure of the economy, not the business cycle. No booster rocket can work unless consumers are able, at some point, to keep the economy moving on their own. But consumers no longer have the purchasing power to buy the goods and services they produce as workers; for some time now, their means haven't kept up with what the growing economy could and should have been able to provide them.
Mish Comment: Consumers no longer have the purchasing power because of a number of factors.

1. Loose monetary policies at the Fed that encouraged asset speculation, including housing.
2. Rampant property price escalation (until the crash) and rampant property tax increases even though wages did not keep up.
3. A sinking dollar because of inane amounts of government spending. The US has troops in 140 countries around the globe, and a military budget nearly as big as the rest of the world's combined.

Quite literally we are spending ourselves to death, with absolutely nothing to show for it.



The above chart from The FY 2009 Pentagon Spending Request - Global Military Spending

It's not what one makes that matters, it's how far the dollar goes. Our policies ensure the dollar does not go very far.
Reich: This crisis began decades ago when a new wave of technology — things like satellite communications, container ships, computers and eventually the Internet — made it cheaper for American employers to use low-wage labor abroad or labor-replacing software here at home than to continue paying the typical worker a middle-class wage. Even though the American economy kept growing, hourly wages flattened. The median male worker earns less today, adjusted for inflation, than he did 30 years ago.
Mish Comment: The crisis started when Congress perpetually spent more money than it took in, when social engineering and regulation made it undesirable to do business in the United States, when tax policy encouraged flight of jobs and capital. The internet was an enabler, it is not to blame.
Reich: Eventually, of course, the debt bubble burst — and with it, the last coping mechanism. Now we're left to deal with the underlying problem that we've avoided for decades. Even if nearly everyone was employed, the vast middle class still wouldn't have enough money to buy what the economy is capable of producing.
Mish Comment: The underlying problems still remain. Unfortunately Robert Reich is clueless about what the underlying problems are.
Reich: THE Great Depression and its aftermath demonstrate that there is only one way back to full recovery: through more widely shared prosperity. In the 1930s, the American economy was completely restructured. New Deal measures — Social Security, a 40-hour work week with time-and-a-half overtime, unemployment insurance, the right to form unions and bargain collectively, the minimum wage — leveled the playing field.
Mish Comment: Payments for the absurd policies of FDR are now coming due. Social Security is broke, there is no "lock box," demographics are unfavorable, and acts like Davis-Bacon and collective bargaining have wrecked many cities and states.

When it comes to jobs creation, we need to get the most done for the cheapest amount and the way to do that is scrap the Davis-Bacon act. Please see Thoughts on the Davis Bacon Act for details.

Socialists like Robert Reich point out alleged benefits of FDR's policies. The fact of the matter is FDR's policies were extremely destructive.

The baby boom following WWII is what got the economy humming, not inept policies or unions. We recovered in spite of piss poor policies, not because of them. Indeed, unions sowed the seeds of their own destruction, which is exactly why only 7 percent of the private work force is unionized. We need to celebrate this fact, not bemoan it.
Reich: In the decades after World War II, legislation like the G.I. Bill, a vast expansion of public higher education and civil rights and voting rights laws further reduced economic inequality. Much of this was paid for with a 70 percent to 90 percent marginal income tax on the highest incomes. And as America's middle class shared more of the economy's gains, it was able to buy more of the goods and services the economy could provide. The result: rapid growth and more jobs.

By contrast, little has been done since 2008 to widen the circle of prosperity. Health-care reform is an important step forward but it's not nearly enough.
Mish Comment: Once again Reich does not understand what it takes to create jobs in the real world. Reich lives in academia, insulated in his womb of academic theory, theories that anyone living in the real world can easily see are fatally flawed in today's world.

It would behoove Reich to read Small Business Trends - Yet Another Disaster

From the NFIB ...

The expiration of the Bush tax program and the implementation of the health care bill represent the two largest tax increases in modern history. Add to that serious talk of a VAT and passing cap and trade. Nothing here to create optimism about the future for business owners or consumers. Top that off with government borrowing of $1.8 trillion last year and $1.5 trillion this year and on into the future, it is no surprise that owners are fearful and pessimistic.

What's missing from the "debate" is logic. Policies should not violate common sense and logic, if they do, they are misleading and disguising a hidden agenda. Arguing that more government spending and taxes are needed to re-establish optimism, confidence and growth doesn't meet the common sense test. Saving bankrupt companies to preserve union jobs doesn't make sense either. The list of these "policy inconsistencies" is long.

Bottom line, owners remain pessimistic and nothing is happening in Washington to provide encouragement. Confidence is lost.


Plight of Small Businesses

I have written extensively about the plight of small businesses. Here are some examples Reich needs to consider.
Obama's healthcare "solution" is a huge gripe of numerous small business owners.
Reich: What else could be done to raise wages and thereby spur the economy? We might consider, for example, extending the earned income tax credit all the way up through the middle class, and paying for it with a tax on carbon. Or exempting the first $20,000 of income from payroll taxes and paying for it with a payroll tax on incomes over $250,000.

In the longer term, Americans must be better prepared to succeed in the global, high-tech economy. Early childhood education should be more widely available, paid for by a small 0.5 percent fee on all financial transactions. Public universities should be free; in return, graduates would then be required to pay back 10 percent of their first 10 years of full-time income.
Mish Comment: Small business owners and entrepreneurs are scared to death of the lunacy of Cap-and-Trade. It gives existing businesses the right to sell energy credits they "earned" because they are currently a polluter. New businesses will pay the price.

Cap-and-Trade also opens up ridiculous financial trading of these credits and their derivatives for the benefit of Goldman Sachs and the other broker-dealers.

Cap-and-Trade is preposterous, not only in theory but actual practice. For example, please consider Cap-and-Trade Carbon Credit Extortion Scam In Full Swing.

Here is another example of the stupidity of Cap-and-Trade: Walmart, Costco, US Bank Profit From Energy Credits in US; Carbon Tax Thrown Out By French Court
Reich: Another step: workers who lose their jobs and have to settle for positions that pay less could qualify for "earnings insurance" that would pay half the salary difference for two years; such a program would probably prove less expensive than extended unemployment benefits.

These measures would not enlarge the budget deficit because they would be paid for. In fact, such moves would help reduce the long-term deficits by getting more Americans back to work and the economy growing again.

Policies that generate more widely shared prosperity lead to stronger and more sustainable economic growth — and that's good for everyone. The rich are better off with a smaller percentage of a fast-growing economy than a larger share of an economy that's barely moving. That's the Labor Day lesson we learned decades ago; until we remember it again, we'll be stuck in the Great Recession.
Conclusion

Reich is blinded by academic theory. He does not understand business in the real world. He cannot distinguish between the problem and the solution. He never once discusses how the "haves" (overpaid unionized public workers) are destroying ordinary private-sector taxpayers.

Reich wants socialistic policies that will provide further incentives for businesses to move overseas.

The one book Reich desperately needs to read is "Plunder" by Steve Greenhut. For a book review, please see Five Thumbs Up for Steve Greenhut's Plunder!

There cannot be, nor will there be, any recovery until the wage discrepancies and pension benefits of the public sector are brought in line with those of the private sector - not by increasing private sector wages, but by reducing the insane benefits of the public sector. In addition, we need better tax policies and we need to rein in absurd military spending.

Finally I would be remiss if I failed to point out the self-serving educational proposals of Reich. Education costs are soaring right now and there are cries for more to do it "for the kids".

We are not doing it for kids, we do it for teachers and administrators. Wages and benefits are preposterous enough already. The irony is most professors seldom teach. Instead students are taught by substitutes while the professors entertain self-serving research projects to justify their inflated salaries and egos.

Thus, Reich could not possibly be further off in his solutions to the crisis. Such is to be expected from socialist academics living in self-serving academia instead of the real world.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


"Destitute Index"

Posted: 05 Sep 2010 11:23 PM PDT

In response to Reflections on the "Recovery" reader "Thomas" has an interesting question regarding U6 unemployment that I would like to share.

Thomas writes ...
Dear Mish,

Always look forward to your analysis. One small question. How many living, breathing, frustrated, suffering, hopeless, and poverty stricken actual human beings does the U-6 number 17.6% translate to? Thank you. Best Regards,

Thomas
Here is the chart in question once again.



One year ago the official unemployment rate was 9.7%. Today it is 9.6%. One year ago U-6 unemployment was 16.8%. Today U-6 is 16.7%.

For links to the actual numbers behind the percentages, please see Jobs Decrease by 54,000, Rise by 60,000 Excluding Census; Unemployment Rises Slightly to 9.6%; A Look Beneath the Surface.

Essential Math

Officially unemployed - 14.9 million
Marginally Attached Workers - 2.4 million
Part Time For Economic Reasons - 8.9 million

The total is 26.2 million, but not all of the above are destitute or in poverty, even though the vast majority of them are suffering in some way.
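The tally above can be sketched as a quick sum (using the rounded headline figures quoted in this post, so the total is approximate):

```python
# Broad-unemployment tally from the figures quoted above, in millions.
# These are rounded headline numbers, so the sum is approximate.
officially_unemployed = 14.9
marginally_attached = 2.4
part_time_economic = 8.9

total_millions = officially_unemployed + marginally_attached + part_time_economic
print(round(total_millions, 1))  # 26.2
```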

Unfortunately, the total does not stop there, because it does not include children or the elderly. Both children and the elderly have been affected by the economic downturn, but neither group is reflected in the unemployment stats.

Food Stamps

Most of the destitute are on food stamps (now called SNAP - Supplemental Nutrition Assistance Program to destigmatize the name).

According to SNAP, there are 41,275,411 people on food stamps. However, that total is understated because it does not include the homeless. In addition, one must factor in AFDC (Aid to Families with Dependent Children), Head Start, and numerous other state programs. One cannot simply add them all up because of obvious overlap.

Homeless

The Coalition for the Homeless addresses the question How Many People Experience Homelessness?
There are several national estimates of homelessness. Many are dated, or based on dated information. For all of the reasons discussed above, none of these estimates is the definitive representation of "how many people are homeless." In a recent approximation, USA Today estimated that 1.6 million unduplicated persons used transitional housing or emergency shelters. Of these people, approximately 1/3 are members of households with children, a nine percent increase since 2007. Another approximation is from a study done by the National Law Center on Homelessness and Poverty which states that approximately 3.5 million people, 1.35 million of them children, are likely to experience homelessness in a given year (National Law Center on Homelessness and Poverty, 2007).

These numbers, based on findings from the National Law Center on Homelessness and Poverty, Urban Institute and specifically the National Survey of Homeless Assistance Providers, draw their estimates from a study of service providers across the country at two different times of the year in 1996. They found that, on a given night in October, 444,000 people (in 346,000 households) experienced homelessness – which translates to 6.3% of the population of people living in poverty. On a given night in February, 842,000 (in 637,000 households) experienced homelessness – which translates to almost 10% of the population of people living in poverty. Converting these estimates into an annual projection, the numbers that emerge are 2.3 million people (based on the October estimate) and 3.5 million people (based on the February estimate). This translates to approximately 1% of the U.S. population experiencing homelessness each year, 38% (October) to 39% (February) of them being children (Urban Institute 2000).

It is also important to note that this study was based on a national survey of service providers. Since not all people experiencing homelessness utilize service providers, the actual numbers of people experiencing homelessness are likely higher than those found in the study. Thus, we are estimating on the high end of the study's numbers: 3.5 million people, 39% of whom are children (Urban Institute 2000).
That was written in July of 2009. It is safe to assume the number is higher now. For the sake of argument let's assume the count is 3.5 million.

Dynamics of Poverty

Here are a few snips from Income, Poverty, and Health Insurance Coverage in the United States: 2008
Approximately 31.0 percent of the population had at least one spell of poverty lasting 2 or more months during the 4-year period from 2004 to 2007.

Income in the United States

Real median household income declined by 3.6 percent between 2007 and 2008, from $52,163 to $50,303, following 3 years of annual income increases. The decline in income coincides with the recession that started in December 2007.

Real median income declined for both family (3.3 percent) and nonfamily households (4.0 percent) between 2007 and 2008.

Real median earnings of both men and women who worked full-time, year-round declined in 2008, following increases in 2007. Men's earnings declined by 1.0 percent to $46,367 and women's declined by 1.9 percent to $35,745. The 2008 female-to-male earnings ratio, 0.77, was lower than the 2007 ratio of 0.78.
Median Household Income



Median household income was back at 1996-1997 levels for all but Asians. Bear in mind these numbers are for 2008! It is worse now.

I find the snip above astonishing: approximately 31.0 percent of the population had at least one spell of poverty lasting 2 or more months during the 4-year period from 2004 to 2007.

Politics and the Upcoming Election

All of the above stats are worse now than a couple years ago. Is it any wonder the population is madder than a hornet at politicians and the protected class of government workers and public unions, whose salaries and benefits have done nothing but go up, and up, and up, since 1996?

Please consider Voters Strongly Favor Non-Incumbent GOP Newcomers in Midterm Elections
The public is fed up with how beholden Obama is to unions. They are fed up with sacrifices they have to make that government workers don't. They are fed up with how well the political class has fared in this "recovery" vs. how well they have fared in this "recovery". They are fed up with never-ending wars.

It's not that people prefer Republicans by some huge margin. They don't. They specifically prefer non-incumbent Republicans hoping for a Change. Obama promised "Change you could believe in", but where is it? We are still bogged down in Afghanistan, Obama did not get us out of Guantanamo Bay as promised, but most importantly he did continue the same bailout strategies and surrounded himself with the same economic philosophy and same Wall Street advisors as Bush.

The public is fed up and rightfully so. The anti-union vote is going to be huge, and deservedly so.

I am increasingly confident that Republicans are going to take the House. So be prepared to kiss Nancy Pelosi goodbye and to welcome John Boehner as the new House speaker. Perhaps we can get some real change. If not, gridlock is better than what we have seen under Obama.
Returning to the original question, U6 does not directly translate to determining the number of frustrated, suffering, hopeless, and poverty stricken human beings affected by this economic slump. One must add up various numbers, taking care to not double count. One must also consider the fact that people go into and out of poverty, while others straddle the line, just beyond the threshold of being counted.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


SEOmoz Daily SEO Blog


Two Quick, Simple Social Media Tips

Posted: 05 Sep 2010 04:31 PM PDT

Posted by RobOusbey

Today, I want to share two pieces of advice that are particularly useful to certain types of business - and will be exceptionally quick to implement. I've also created a free download that might help some people implement one of these ideas even more quickly.

About two years ago, I made a recommendation to a client in the UK, and I've just seen it used by a hotel in the USA. If your business offers public computers with internet access - such as those in hotel lobbies, libraries, etc - this is for you:

Tip 1: Put up a sign, next to your public computers, with a call to action; typically this could be something like 'Find us on Facebook' or 'Follow us on Twitter'.

Here's such a poster in use, at the Ledgestone Hotel in Yakima. (Click the image to embiggen.)

Sadly, it doesn't look like the Ledgestone is doing much with their Twitter account; this probably disappoints people who go to their page, and so they don't end up with as many followers as they could. Remember - getting people to your Twitter page (or Facebook page, or wherever else you're sending them) is only the first stage - there has to be something there for them when they arrive.

The second tip is more for people who offer wi-fi - this could be all manner of hotels, conference venues, airports, aeroplanes, train stations, coffee shops, etc. For places that offer free wi-fi, this can work even better:

Tip 2: You control the first page visitors see after logging on to your wi-fi. Don't waste this with a dull message; make the page interesting, and put some calls to action on there.

People have probably logged on to do something - but many will welcome a distraction, particularly if you keep the request brief. Create a nicely styled but simple page, and add a couple of messages on there. Some examples could include:

  • Follow us on Twitter / Like us on Facebook: you could incentivize this, for example: if you're a coffee shop, then offer a free latte to new followers
  • Sign up to our email newsletter: this will only take them a second if you make sure the form is right there on the page, and again this can be incentivized
  • Don't forget to check in on foursquare: ideal for almost any location, and this is as good a time as any to remind them to check in
  • If you're enjoying your stay, please review us: particularly useful for hotels, where online reviews can increase visibility; I'll go into a little more detail about this below.

There can be some issues with sites noticing that a lot of people from the same IP are visiting, particularly when it comes to review services. Local search expert David Mihm advised me that he's heard Yelp in particular does try to filter out multiple reviews from the same IP, and that TripAdvisor's fraud rules include clauses that might get you into trouble (for example, offering incentives for people to write reviews is not permitted).

I'd recommend two steps to work around this type of issue:

  1. Try to appeal for reviews only from people who already have accounts on those sites (e.g.: "If you're a Yelp member, please review us here..." or "If you have a Google account, please leave a review here...")
  2. Make this 'post-wifi-login' page available on the public internet; review sites should be able to recognize that lots of people are being referred to your page from the same URL - if it's public then they'll be able to visit that page, and should figure out what is going on.
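As an illustrative sketch of the 'post-wifi-login' page idea above (the venue name and URLs here are placeholders I've invented, not examples from the post, and a real page would use the downloadable template), the calls to action could be rendered like this:

```python
# Hypothetical sketch of a post-wifi-login CTA page.
# The venue name and links are placeholders, not real accounts.
def render_cta_page(venue, twitter_url, facebook_url, review_url):
    """Build a simple HTML landing page with social calls to action."""
    return f"""<!DOCTYPE html>
<html>
<head><title>Welcome to {venue}</title></head>
<body>
  <h1>You're online. Enjoy your visit to {venue}!</h1>
  <ul>
    <li><a href="{twitter_url}">Follow us on Twitter</a></li>
    <li><a href="{facebook_url}">Like us on Facebook</a></li>
    <li><a href="{review_url}">If you're enjoying your stay, please review us</a></li>
  </ul>
</body>
</html>"""

page = render_cta_page(
    "Example Hotel",
    "https://twitter.com/example",
    "https://facebook.com/example",
    "https://reviews.example/example-hotel",
)
print("Follow us on Twitter" in page)  # True
```

Publishing a page like this at a public URL (step 2 above) means review sites can see where their referral traffic is coming from.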

I've built a quick, free template for you to download as a starting point. You can view the file, or download it, by clicking this link: free wifi login CTA page.

(That was created based on a template from LayoutGala; I'm not going to add any licence to it, other than: use it however you want. You should, at the very least, change the images that are in it to be local files.)

Honestly, it doesn't take long to print off a couple of small posters (or even to publish a nice wifi login page), so I hope to see social-media CTAs cropping up all over the place soon. :)



Seth's Blog : Design with intent

Design with intent

Neat idea, free PDF... will definitely make you think. HT to Lucas.
