Friday, November 25, 2011

Seth's Blog : Pre digital

Pre digital

A brief visit to the emergency room last month reminded me of what an organization that's pre-digital is like. Six people doing bureaucratic tasks and screening that are artifacts of a paper universe, all in the service of one doctor (and the need to get paid and not get sued). A 90-minute experience so we could see a doctor for ninety seconds.

Wasteful and even dangerous.

Imagine what this is like in a fully digital environment instead. Of course, they'd know everything about your medical history and payment ability from a quick ID scan at the entrance. And you'd know the doctor's availability before you even walked in, and you would have been shuttled to the urgent care center down the street if there was an uneven load this early in the morning. No questions to guess at the answer (last tetanus shot? Allergies to medications?) because the answers would be known. The drive to the pharmacy might be eliminated, or perhaps the waiting time would be shortened. If this accident or illness is trending, affecting more of the population, we'd know that right away and be able to prevent more of it... Triage would be more efficient as well. The entire process might take ten minutes, with a far better outcome.

School is pre-digital. Elections. Most of what you do in your job. Even shopping. The vestiges of a reliance on geography, lack of information, poor interpersonal connections and group connection (all hallmarks of the pre-digital age) are everywhere.

Perhaps the most critical thing you can say of a typical institution: "That place is pre-digital."

All a way of saying that this is just the beginning, the very beginning, of the transformation of our lives.

 


Thursday, November 24, 2011

Mish's Global Economic Trend Analysis


EFSF "Grand Leverage Plan" All But Dead

Posted: 24 Nov 2011 09:45 PM PST

All the schemes and maneuverings by eurocrat clowns and misguided economists hoping to get 4-1 or even 8-1 leverage on the EFSF bailout fund while keeping the fund's AAA rating intact have officially died on the vine. Even the EFSF committee admits as much.

The Financial Times reports Euro rescue fund's impact in doubt
European leaders hailed a scheme to offer insurance on losses for investors buying troubled eurozone bonds as a means of leveraging the €250bn spare capacity of the rescue fund four or five fold, to more than €1,000bn.

But the dramatic spike in borrowing costs for Italy since the summit is likely to force the European Financial Stability Facility to sweeten the deal offered to investors, which will limit the number of bonds the insurance would cover.

Klaus Regling, head of the EFSF, earlier this month said that overcoming investor concerns with improved guarantees would mean the fund was likely to have only three to four times the firepower – an admission that underlined the challenge European leaders face in steadying sovereign debt markets.

But three senior eurozone officials said even this lower target may be difficult to reach, and expect the eventual firepower to be between two and three times the remaining buying capacity of the fund. "It is falling well short of its billing," said one. Concerns over leverage will be a key item on the agenda of eurozone finance ministers meeting on Tuesday.
Please recall the huge number of economic illiterates who protested that 4-1 leverage was "not enough". Also note that the market has emphatically stated that any leverage may be too much.

Ironically, the "grand plan" for the EFSF hatched by German Chancellor Angela Merkel and French President Nicolas Sarkozy last month may die before terms are even set for its use.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


Only True Role of Central Banks is to Print Money – Why Else Have It?

Posted: 24 Nov 2011 12:24 PM PST

Steen Jakobsen, chief economist at Saxo Bank in Denmark, has a few thoughts via email that I would like to share.

Steen writes ...
The German "near miss" failed auction yesterday has set off several new developments and pressures – this is clearly a major thing, and some media even speculate it could change the mighty Bundesbank's perception of reality.

Close to home, Germany now trades ABOVE Denmark on 10 year government debt by 10 basis points! However, there is reason to believe this is more a function of illiquid markets than an endorsement of the Danish economy, where the fiscal imbalances continue to expand negatively (unlike Sweden!).

[Chart: Spread difference, 10-year Denmark minus 10-year Germany]
It has also increased the pressure from France to get the ECB more involved – I fail to see the link between the German auction and this, but nevertheless the French political machine is always firmly behind the concept of "dirigisme", and in today's meeting between Merkel, Sarkozy and Monti the topic will surely be touched on again.

It's important in historical context to remember that the only true role a central bank has is to … print money – why else have it?

I remain committed to my Chapter 11 concept as one of the few ways to break this deadlock.

Safe travels,

Steen
For a discussion of the "Chapter 11" concept, please see Perfect Storm the Most Likely Scenario; Is Europe Set to Declare a Chapter 11 in Early 2012?

Steen is correct regarding the only true role of central banks. It is precisely why they should be eliminated. Far from being "inflation fighters", they are the very source of inflation.

More correctly: Fractional Reserve Lending and Central Bank Printing do not "cause" inflation, they "are" inflation.

Deflation is the destruction of credit and debt from the preceding boom.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


Run on the Eurozone has Started; Horrendous Idea that Will Not Die; Hitler Enters the Equation; Merkel Reiterates the Obvious; No Hope or Future for Eurobonds

Posted: 24 Nov 2011 09:22 AM PST

I remain in "awe" of the amazing arrogance of politicians and news writers who simply cannot take "no" for an answer no matter how many times it is spelled out. The horrendous "eurobond" idea simply will not go away, even though Merkel and the German supreme court buried it long ago.

Even if Merkel were willing to give in on the idea, Finland and Austria wouldn't, and more importantly neither would the German supreme court.

Yet people keep wasting time debating the merits of it. It's like debating the merits of perpetual motion. No matter what the merits might be, perpetual motion is not going to happen, so there is no point in debating it.

Run on the Eurozone has Started

Eurointelligence proclaims Run on the Eurozone has Started
Jose Manuel Barroso warned yesterday the euro would be "difficult or impossible" to sustain without further economic integration. German newspapers this morning produced a whole string of poisonous comments about the European Commission's proposals for Eurobonds. The eurozone is now in a position where crisis resolution requires a much firmer political commitment than member states had expected to provide.

Germany could accept Eurobonds under certain conditions
 
Taken at face value the German opposition against Eurobonds seems to be as strong as ever and most German papers such as Süddeutsche Zeitung and Handelsblatt report on the topic in these terms. Nevertheless, the resistance against the Commission is less categorical than it appears, Financial Times Deutschland writes. Angela Merkel did not rule out Eurobonds but rather said the timing of José Manuel Barroso's proposal was "inappropriate". Among the conditions she enumerated were changes to the EU treaties and a much stronger commitment by member states to consolidate. Norbert Barthle, the budgetary spokesperson of Merkel's CDU/CSU parliamentary group, told FTD: "We never say never. All we say is: no Eurobonds under current conditions". As a result there is scope for a deal at the EU summit on December 9.
The idea is preposterous. There is no scope for a deal and no time for a deal even if there were scope. Moreover, even if there were scope and time, it would require a German referendum and treaty changes by all 17 Eurozone countries.

Facts do not stop politicians or writers.

Hitler Enters the Equation

Writer Mark Schieritz in Nazi Adolf, inflation and the euro crisis blames the rise of Hitler on the gold standard and deflation.
The hyperinflation of the twenties led the Weimar Republic to subscribe entirely to a hard-currency strategy – regardless of the losses. Others were wiser:

After leaving the gold standard, the UK saw its unemployment rate decline by about a third from 1931 to 1933, while Germany's rose significantly over the same period. If Germany had been willing to follow the UK in inflating, and its unemployment rate had followed a similar trajectory, it would have stood at 17% rather than 33%.

In other words, perhaps the greatest catastrophe in human history could have been prevented if the Germans had allowed a little more inflation.

The euro in its present form is in many ways comparable to the gold standard.
The cause of the Great Depression was the run-up in credit that preceded it. Blaming gold for the rise of Hitler or for the Great Depression is preposterous. Begging for inflation is equally preposterous.

Economies go through these massive boom-bust cycles because of inflation, fractional reserve lending, and rampant credit expansion. The cure cannot be the same as the disease no matter how one tries to distort the facts with untenable correlations.

Central banks, governments, fiat currencies, and fractional reserve lending are responsible for every major economic bust in history and fools come back begging for more.

Enough! Eurobonds are not going to happen (nor should they happen).

For a detailed discussion of why Eurobonds and ECB printing are piss poor ideas, please see Understanding the Problem, Understanding the Solution, and Understanding Who is to Blame are Three Different Things.

I wrote that last evening but failed to post it. At the time US futures were up over 1%. Now I see they are flat. The reason? I presume this:

Merkel Reiterates the Obvious

Bloomberg reports European Stocks, Euro Fall on Merkel Comments
The euro weakened, Italian bonds declined and the cost of insuring European government debt against default rose to a record after German Chancellor Angela Merkel ruled out joint euro-area borrowing. European stocks fluctuated.

Euro bonds are "not needed and not appropriate," Merkel said at a press conference with Italian Prime Minister Mario Monti and French President Nicolas Sarkozy in Strasbourg, France.

"The market sees a 'no' and reacts to it," said Martin Huefner, chief economist at Assenagon GmbH in Munich, which manages more than $4.7 billion of client assets. "We're going to see a deterioration of equity markets in the coming months to the point where something will have to be done. The market would be euphoric to get euro bonds. Apparently the pressure is not big enough yet."
No Hope or Future for Eurobonds

Huefner has it backwards. The market hears "yes" and reacts to it, even when it is damn obvious the answer is no.

There is no hope or future for Eurobonds and there never was.

No matter how many times this is explained or reiterated, some eurocratic fool or some fool writer finds some lame excuse to attempt to revive the dead. The latest (yesterday) was a preposterous analysis by the Financial Times suggesting Merkel did not "really" mean no. This was followed up with the ludicrous idea by Mark Schieritz, who blamed gold and a lack of inflation for the rise of Hitler.

Sheeesh.

Now that Eurobonds are finally dead (hopefully), can we please start a rational discussion as to how best to break up the Eurozone?

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


Damn Cool Pics


Star Wars Engagement Photo Shoot

Posted: 23 Nov 2011 12:30 PM PST

Photographer Michael James took these awesome Star Wars-themed engagement photos of a Bay Area couple who really, really like Star Wars. Featuring a pair sporting Sith and Obi-Wan Kenobi garb, James shot these two in various Lucasian action positions in parts of the forest near the UC Santa Cruz campus in the Santa Cruz Mountains.

Introducing SEOmoz's Updated Page Authority and Domain Authority


Introducing SEOmoz's Updated Page Authority and Domain Authority

Posted: 23 Nov 2011 06:39 AM PST

Posted by Matt Peters

Here at Moz, we take metrics and analytics seriously and work hard to ensure that our metrics are first rate. Among our most important link metrics are Page Authority and Domain Authority. Accordingly, we have been working to improve these so that they more accurately reflect a given page or domain's ability to rank in search results. This blog entry provides an overview of these metrics and introduces our new Authority models with a deep technical dive.

What are Page and Domain Authority?

Page and Domain Authority are machine learning ranking models that predict how likely a single page or domain is to rank in search results, regardless of page content. Their input is the 41 link metrics available in our Linkscape URL Metrics API call and their output is a score on a scale from 1 to 100. They are keyword agnostic because they do not use any information about the page content.
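
To make this concrete, here is a minimal sketch of what a keyword-agnostic, Authority-style model could look like, assuming scikit-learn, a gradient-boosted regressor, and a simple squash onto the 1-100 scale; the actual model family, feature handling, and rescaling are not published, so everything below is illustrative only.

# Illustrative sketch only; the real Authority models are not public.
# Assumes X is an (n_pages, 41) array of Linkscape link metrics and
# y is a relevance target derived from observed SERP positions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

N_LINK_METRICS = 41  # inputs from the Linkscape URL Metrics API call

def train_authority_model(X, y):
    """Fit a keyword-agnostic ranking model on link metrics alone."""
    assert X.shape[1] == N_LINK_METRICS
    model = GradientBoostingRegressor(n_estimators=300, max_depth=4)
    model.fit(X, y)
    return model

def authority_score(model, link_metrics):
    """Map the raw model output onto a 1-100 scale (assumed rescaling)."""
    raw = model.predict(np.atleast_2d(link_metrics))
    return np.clip(np.round(100.0 / (1.0 + np.exp(-raw))), 1, 100)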

Why are Page and Domain Authority being updated?

Since these models predict search engine position, it is important to update them periodically to capture changes in the search engines' ranking algorithms. In addition, this update includes some changes to the underlying models resulting in increased accuracy. Our favorite measure of accuracy is the mean Spearman Correlation over a collection of SERPs. The next chart compares the correlations on several previous indices and the next index release (Index 47).

The new model outperforms the old model on the same data using the top 30 search results, and performs even better when more results are used (top 50). Note that these are out-of-sample predictions.
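
For readers who want to compute this kind of accuracy measure on their own data, here is a hedged sketch using scipy; the per-keyword data layout (serps as an iterable of metric-matrix/position pairs) is a hypothetical stand-in, not the evaluation code used for the charts above.

# Mean Spearman correlation over a collection of SERPs.
# serps: iterable of (link_metric_matrix, observed_positions) pairs,
# one per keyword; score_fn maps a metric matrix to predicted scores.
import numpy as np
from scipy.stats import spearmanr

def mean_serp_spearman(serps, score_fn):
    correlations = []
    for metrics, positions in serps:
        scores = score_fn(metrics)
        # Negate positions so a higher score lines up with a better
        # (lower-numbered) ranking.
        rho, _ = spearmanr(scores, -np.asarray(positions))
        correlations.append(rho)
    return float(np.mean(correlations))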

When will the models change? Will this affect my scores?

The models will be updated when we roll out the next Linkscape index update, sometime during the week of November 28. Your scores will likely change a little, and could shift by 20 points or more. I'll present some data later in this post that shows most PRO and Free Trial members with campaigns will see a slight increase in their Page Authority.

What does this mean if I use Page Authority and Domain Authority data?

First, the metrics will be better at predicting search position, and Page Authority will remain the single highest correlated metric with search position that we have seen (including mozRank and the other 100+ metrics we examined in our Search Engine Ranking Factors study). However, since we don't yet have a good web spam scoring system, sites that manipulate search engines will slip by us (and look like outliers), so a human review is still wise.

Before presenting some details of the models, I'd like to illustrate what we mean by a "machine learning ranking model." The table below shows the top 26 results for the keyword "pumpkin recipes" with a few of our Linkscape metrics (Google-US search engine; this is from an older data set and older index, but serves as a good illustration).

[Table: "pumpkin recipes" SERP results with selected Linkscape metrics]

As you can see, there is quite a spread among the different metrics illustrated, with some of the pages having a few links and others 1,000+ links. The Linking Root Domains are also spread from only 46 Linking Root Domains to 200,000+. The Page Authority model takes these link metrics as input (plus 36 other link metrics not shown) and predicts the SERP ordering. Since it only takes into account link metrics (and explicitly ignores any page or keyword content), but search engines take many ranking factors into consideration, the model cannot be 100% accurate. Indeed, in this SERP, the top result benefits from an exact domain match to the keyword, which helps explain its #1 position despite its relatively low link metrics. However, since Page Authority only takes link metrics as input, it is a single aggregate score that explains how likely a page is to rank in search based only on links. Domain Authority is similar for domain-wide ranking. The models are trained on a large collection of Google-US SERP results.
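
As a rough sketch of how such a training set could be assembled from collected SERP data (the record layout here is a hypothetical stand-in, not the actual SEOmoz pipeline):

# Sketch: turn per-keyword SERP results into (features, target) pairs.
# Each record is assumed to look like:
#   {"keyword": ..., "position": int, "link_metrics": [41 floats]}
import numpy as np

def build_training_set(serp_records, top_n=30):
    X, y = [], []
    for rec in serp_records:
        if rec["position"] > top_n:
            continue
        X.append(rec["link_metrics"])
        # Higher target for better (lower-numbered) positions.
        y.append(float(top_n + 1 - rec["position"]))
    return np.asarray(X), np.asarray(y)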

Despite restricting to only link metrics, the new Page and Domain Authority models do a good job of predicting SERP ordering and improve substantially over the existing models. This increased accuracy is due in part to the new model's ability to better separate pages with moderate Page Authority values into higher and lower scores.

This chart shows the distribution of the Page Authority values for the new and old models over a data set generated from 10,000+ SERPs that includes 200,000+ unique pages (similar to the one used in our Search Engine Ranking Factors). As you can see, the new model has "fatter tails" and moves some of the pages with moderate scores to higher and lower values resulting in better discriminating power. The average Page Authority for both sets is about the same, but the new model has a higher standard deviation, consistent with a larger spread. In addition to the smaller SERP data set, this larger spread is also present in our entire 40+ billion page index (plotted with the logarithm of page/domain count to see the details in the tails):

One interesting comparison is the change in Page Authority for the domains, subdomains and sub-folders that PRO and Free Trial members are tracking in our campaign-based tools.

The top left panel in the chart shows that the new model shifts the distribution of Page Authority for the active domains, subdomains and sub-folders to the right. The distribution of the change in Page Authority is included in the top right panel, and shows that most of the campaigns have a small increase in their scores (average increase is 3.7), with some sites increasing by 20 points or more. A scatter plot of the individual campaign changes is illustrated in the bottom panel, and shows that 82% of the active domains, subdomains and sub-folders will see an increase in their Page Authority (these are the dots above the gray line). It should be noted that these comparisons are based solely on changes in the model, and any additional links that these campaigns have acquired since the last index update will act to increase the scores (and conversely, any links that have been dropped will act to decrease scores).

The remainder of this post provides more detail about these metrics. To sum up this first part, the models underlying the Page and Domain Authority metrics will be updated with the next Linkscape index update. This will improve their ability to predict search position, due in part to the new model's better ability to separate pages based on their link profiles. Page Authority will remain the single highest correlated metric with search position that we have seen.

 


The rest of the post provides a deeper look at these models, and a lot of what follows is quite technical. Fortunately, none of this information is needed to actually use these Authority scores (just as understanding the details of Google's search algorithm is not necessary to use it). However, if you are curious about some of the details then read on.

The previous discussion has centered around distributions of Page Authority across a set of pages. To gain a better understanding of the models' characteristics, we need to explore their behavior as a function of the inputs. However, the inputs form a 41 dimensional space and it's impossible (for me at least!) to visualize anything in 41 dimensions. As an alternative, we can attempt to reduce the dimensionality to something more manageable. The intuition here is that pages that have a lot of links probably have a lot of external links, followed links, a high mozRank, etc. Domains that have a lot of linking root domains probably have a lot of linking IPs, linking subdomains, a high domain mozRank, etc. One approach we could take is simply to select a subset of metrics (like the table in the "pumpkin recipes" SERP above) and examine those. However, this throws away the information from the other metrics and will inherently be noisier than something that uses all of them. Principal Component Analysis (PCA) is an alternate approach that uses all of the data. Before diving into the PCA decomposition of the data, I'll take a step back and explain what PCA is with an example.

Principal Component Analysis is a technique that reduces dimensionality by projecting the data onto Principal Components (PCs) that explain most of the variability in the original data. This figure illustrates PCA on a small two-dimensional data set:

This sample data looks roughly like an ellipse. PCA computes two principal components illustrated by the red lines and labeled in the graph that roughly align with the axes of the ellipse. One representation of the data is the familiar (x, y) coordinates. A second, equivalent representation is the projection of this data onto the principal components illustrated by the labeled points. Take the upper point (7.5, 6). Given these two values, it's hard to determine where it is in the ellipse. However, if we project it onto the PCs we get (4.5, 1.2) which tells us that it is far to the right of the center along the main axis (the 4.5 value) and a little up along the second axis (the 1.2 value).
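
Here is a small sketch of the same two-dimensional exercise using scikit-learn; the sample ellipse is synthetic, and only the projection idea comes from the figure above.

# Toy PCA example: project 2-D points onto the two principal axes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Correlated 2-D data that forms a tilted ellipse (synthetic stand-in).
xy = rng.multivariate_normal(mean=[5.0, 5.0], cov=[[4.0, 2.0], [2.0, 2.0]], size=500)

pca = PCA(n_components=2).fit(xy)
point = np.array([[7.5, 6.0]])        # the "upper point" from the example
print(pca.explained_variance_ratio_)  # share of variance along each PC
print(pca.transform(point))           # coordinates along PC1 and PC2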

We can do the same thing with the link metrics, only instead of using two inputs we use all 41 inputs. After doing so, something remarkable happens:

Two principal components naturally emerge that collectively explain 88% of the variance in the original data! Put another way, almost all of the data lies in some sort of strange ellipse in our 41 dimensional space. Moreover, these PCs have a very natural link to our intuition. The first PC, which I'll call the Domain/Subdomain PC, projects strongly onto the domain and subdomain related metrics (upper panel, blue and red lines), and has a very small projection onto the page metrics (upper panel, green lines). The second PC has the opposite property and projects strongly onto page related metrics with a small projection onto Domain/Subdomain metrics.
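
A sketch of the same decomposition applied to the full metric set: link_metrics is a hypothetical (n_pages, 41) array and metric_names a matching list of names, and the 88% figure is the result reported above, not something this snippet guarantees.

# Sketch: reduce the 41 link metrics to two aggregate factors with PCA.
from sklearn.decomposition import PCA

def aggregate_link_factors(link_metrics, metric_names):
    pca = PCA(n_components=2).fit(link_metrics)
    print("variance explained:", pca.explained_variance_ratio_.sum())
    # Inspect the loadings: the first component should weight the
    # domain/subdomain metrics heavily, the second the page-level metrics.
    for name, w1, w2 in zip(metric_names, pca.components_[0], pca.components_[1]):
        print(f"{name:40s} PC1={w1:+.3f} PC2={w2:+.3f}")
    # Two aggregate coordinates per page: (domain factor, page factor).
    return pca.transform(link_metrics)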

Don't worry if you didn't follow all of that technical mumbo jumbo in the last few paragraphs. Here's the key point: instead of talking about number of links, followed external links to domains, linking root domains, etc. we can talk about just two things, an aggregate domain/subdomain link metric and an aggregate page link metric, and recover most of the information in the original 41 metrics.

Armed with this new knowledge, we can revisit the 10K SERP data and analyze it with these aggregate metrics.

This chart shows the joint distribution of the 10K SERP data projected onto these PCs, along with the marginal distribution of each on the top and right hand side. At the bottom left side of the chart are pages with low values for each PC, signifying that these pages don't have many links and sit on domains without many links. There aren't many of these in the SERP data since such pages are unlikely to rank in search results. In the upper right are heavily linked-to pages on heavily linked-to domains, the most popular pages on the internet. Again, there aren't many of these pages in the SERP data because there aren't many of them on the internet (e.g. twitter.com, google.com, etc.). Interestingly, most of the SERP data falls into one of two distinct clusters. By examining the following figure we can identify these clusters:

This chart shows the average folder depth of each search result, where folder depth is defined as the number of slashes (/) after the home page (with 1 defined to be the home page). By comparing with the previous chart, we can identify the two distinct clusters as home pages and pages deep within heavily linked-to domains.
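
A quick sketch of that folder-depth measure, under one reading of the definition (count the slashes in the URL path, with the home page fixed at depth 1); the example URLs are made up.

# Folder depth: slashes in the URL path, with the home page as depth 1.
from urllib.parse import urlparse

def folder_depth(url):
    path = urlparse(url).path or "/"
    return max(path.count("/"), 1)

print(folder_depth("http://example.com/"))                 # 1 (home page)
print(folder_depth("http://example.com/recipes/pumpkin"))  # 2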

To circle back to search position, we can plot the average search position:

We see a general trend toward higher search position as the aggregate page and domain metrics increase. This data set only collected the top 30 results for each keyword, so values of average search position greater than 16 are in the bottom half of our data set. Finally, we can visually confirm that our Page and Domain Authority models capture this behavior and gain further insight into the new vs old model differences:

This is a dense figure, but here are the most important pieces. First, Page Authority captures the overall behavior seen in the Average Search position plot, with higher scores for pages that rank higher and lower scores for pages that rank lower (top left). Second, comparing the old vs new models, we see that the new model predicts higher scores for the most heavily linked to pages and lower scores for the least heavily linked to pages, consistent with our previous observation that the new model does a better job discriminating among pages.

