luni, 23 decembrie 2013

Mish's Global Economic Trend Analysis


Obamacare Support at Record Low 35%; Obamacare Named "Lie of the Year"; Bait and Switch; Obamacare Roundup; Meltdown Coming?

Posted: 23 Dec 2013 07:14 PM PST

It's been a while since I commented on Obamacare. Given that news comes out every day, nearly all of it negative, I have shown restraint. Tonight, here's a recap of what you may have missed.

Obamacare Support at Record Low 35%

Today, December 23, a CNN poll finds Obamacare support at an all-time low.

Poll Details

  • 35% support the healthcare law, a 5 percentage point drop in less than a month
  • 62% oppose the law, up 4 percentage points from November
  • 60% of women oppose Obamacare, up 6 percentage points from November
  • 43% oppose Obamacare because it is too liberal
  • 15% oppose Obamacare because it isn't liberal enough
  • 63% believe the new law will increase what they will have to pay for medical care
  • 42% believe they will personally be worse off under Obamacare

I have gathered a huge number of Obamacare links in the past few weeks that I did not have time to comment on. Here are a few of them. This is by no means a complete list.

Obamacare Roundup, Last Two Weeks


The final link above is interesting. The URL title is "obama-administration-secretly-extends-health-care-enrollment-deadline".

I guess it's not much of a secret.

Please note the political spectrum in the above links encompasses everything between the Huffington Post and Fox News. That's quite an accomplishment! 

Meltdown Coming?

Let's finish up with a highlight from the Washington Post:

Sen. Joe Manchin (D-W.Va.) says Obamacare could suffer 'complete meltdown'
Sen. Joe Manchin (D-W.Va.) said Sunday that Obamacare could be headed for a "complete meltdown" if costs rise too fast and people are unhappy with their coverage.

"If it's so much more expensive than what we anticipated, and if the coverage is not as good as what we've had, you've got a complete meltdown at that time," Manchin said on CNN's "State of the Union."

The senator said such a situation would result in the law collapsing under "its own weight."

Manchin has been pushing for a one-year delay in the individual mandate -- the requirement that people carry health insurance or pay a penalty. He said that delay would allow the product some time to work its way into the market.
Even Democrats are distancing themselves from this disaster.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Wall Street Is My Landlord; Blackstone's Home Rental Bonds Yet Another Sign of Renewed Credit Bubble

Posted: 23 Dec 2013 10:11 AM PST

Blackstone Group LP, the world's largest private equity firm, became the largest owner of rental homes in the U.S., acquiring 41,000 homes in the past two years. In October, Blackstone offered the first-ever "rental-home-backed" security on Wall Street. The bond is backed by just a fraction — 3,207 — of the rental properties owned by Blackstone. Monthly rent checks from the properties will be used to service the $479.1 million security.

See Bloomberg's Blackstone's Big Bet on Rental Homes for a giant infographic.

Inquiring minds may also be interested in a related Bloomberg article, Wall Street Is My Landlord.

Let's focus on the credit aspect of what's in the Blackstone/Deutsche "Invitation Homes", first-ever rent-based bond offering. Here is a snip from the gigantic infographic in the first link above.

Invitation Homes Bond Offering



What do investors get for their money?

A Bond Credit Rating Table courtesy of Wikipedia will help explain.

(LT = long-term, ST = short-term)

Moody's          S&P              Fitch            Rating Description
LT       ST      LT       ST      LT       ST
Aaa      P-1     AAA      A-1+    AAA      F1+     Prime
Aa1              AA+              AA+              High grade
Aa2              AA               AA
Aa3              AA-              AA-
A1               A+       A-1     A+       F1      Upper medium grade
A2               A                A
A3       P-2     A-       A-2     A-       F2
Baa1             BBB+             BBB+             Lower medium grade
Baa2     P-3     BBB      A-3     BBB      F3
Baa3             BBB-             BBB-
Ba1      Not     BB+      B       BB+      B       Non-investment grade speculative
Ba2      prime   BB               BB
Ba3              BB-              BB-
B1               B+               B+               Highly speculative
B2               B                B
B3               B-               B-
Caa1             CCC+     C       CCC      C       Substantial risks
Caa2             CCC                               Extremely speculative
Caa3             CCC-                              Default imminent with little prospect for recovery
Ca               CC
                 C
C                D                DDD              In default
                                  DD
                                  D

Whether or not one believes the Aaa tranche truly deserves that rating, all those investors get is a coupon rate of 1.314%.

Those buying the class "D" offering, rated Baa2, get a "lower medium grade" bond, two steps above junk (again assuming the class really deserves that rating). Those investors get a 2.314% return.

Classes E and F, both "unrated," are highly likely to be pure garbage in my estimation.

Blackstone Lures Investors to Home-Rental Bonds

Feel warm and fuzzy with these bonds?

If so please consider Blackstone Lures Investors to Home-Rental Bonds.
Investors in Blackstone (BX) Group LP's debut sale of bonds backed by U.S. rental homes are agreeing to accept more risk than in traditional mortgage deals by at least two measures -- along with an unproven business.

Blackstone's Invitation Homes borrowed more through yesterday's deal relative to the value of the houses serving as collateral for the bonds than recent residential-mortgage securities, according to data from ratings companies. The cushion to cover interest payments is also smaller than in deals tied to apartment complexes. Monthly rent checks from 3,207 properties will be used to service the $479.1 million of debt.

"Given the rapid increase in prices in the areas in which the properties are concentrated, the amount of protection for bond investors doesn't make one feel all that warm and fuzzy," David Liu, co-manager of a $340 million securitized-asset fund run by New York-based TIG Advisors LLC, said last week, adding that he expected the offering to find buyers easily given the momentum in housing and other features of the debt.

Fitch Ratings disagreed with rivals that any rental-home securities should get top ratings now, citing in part the "limited track record" of big institutions in the business and incomplete historical data on how rents, vacancies and other considerations can vary over economic cycles.

Along with potential default risks, investors need to grapple with the borrower being able to repay the debt after 12 months or extend the maturity for a year as many as three times under certain conditions, said Bryan Whalen, a managing director at TCW Group Inc., which oversees about $130 billion from Los Angeles.

"That's a lot of optionality you're giving to Blackstone" and if bondholders can't be paid off through a refinancing or home sales at the end of five years, they may need to wait even longer for the return of their principal, Whalen said.

The debt's balance represents about 75 percent of the estimated value of the homes in the deal. The ratio is based on the opinions of real estate brokers, rather than the licensed appraisers who are more reliable and traditionally used for residential mortgage-backed securities deals, according to Moody's.

One of the key risks of the Blackstone deal is its "heavy geographic concentration," Bank of America Corp. analysts led by Chris Flanagan in New York wrote in a Nov. 1 report.

The breakdown includes 34 percent from the Phoenix area, 17 percent from the Riverside-San Bernardino-Ontario region in southern California and the rest from other parts of the state, as well as Florida, Georgia and Illinois.

When it's time to repay the debt, rental-home owners may be unable to refinance or sell the bond collateral in bulk and flood the markets, Fitch analysts Suzanne Mistretta, Dan Chambers and Rui Pereira in New York wrote in a statement last month.

The transactions can also be "highly vulnerable to unknown variables" including property taxes, restrictions from homeowner associations and actions by local governments, they said.

Deal Covenants

The deal's covenants can allow individual properties to be sold, for between 105 percent and 120 percent of their allocated share of the total loan, according to information in the Kroll report -- a release of liens that may leave the worst properties backing the remaining securities.

The largest danger may be that Blackstone will be allowed to sell the Invitation Homes business or take it public before the securities mature, said TCW's Whalen, who described the deal as "well-structured and really well thought out in terms of the way they put it together" in general.
13-Point Deal Summary

  1. Moody's rates 42% of the deal as Aaa prime; Fitch rates none of it AAA prime. Whom do you believe? I believe Fitch.
  2. 18.27 percent of the deal is not rated at all, and is highly likely pure garbage.
  3. The cushion to cover interest payments is smaller than in deals tied to apartment complexes.
  4. Borrowing relative to the value of the collateral is higher than in recent residential-mortgage securities (rough numbers after this list).
  5. There is no track record for these securities.
  6. Regional risk - properties concentrated in Phoenix and California.
  7. Transactions are "highly vulnerable to unknown variables" including property taxes, restrictions from homeowner associations and actions by local governments.
  8. When it's time to repay the debt, rental-home owners may be unable to refinance or sell the bond collateral.
  9. The offering is loaded with default risk, normal repair risk and renter damage risk.
  10. The estimated current value of the homes includes rapid price appreciation (brought on by Blackstone snapping up foreclosed houses en masse).
  11. Home value estimates were made by real estate brokers, not licensed appraisers.
  12. Individual properties can be sold off, leaving garbage in the loan portfolio.
  13. Blackstone is allowed to sell the Invitation Homes business or take it public before the securities mature.
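
As a rough sanity check on the figures quoted above, here is a back-of-the-envelope calculation. It assumes the 75 percent figure attributed to Moody's is the deal's loan-to-value ratio, and the per-home numbers are simple averages of my own, not anything from the offering documents.

```python
# Back-of-the-envelope figures for the Invitation Homes deal (my own arithmetic,
# assuming 75% is the loan-to-value ratio; not taken from the offering documents).
debt = 479.1e6   # total bond balance
homes = 3207     # properties backing the deal
ltv = 0.75       # debt as a share of broker-estimated home value

implied_value = debt / ltv
print(f"Implied collateral value: ${implied_value / 1e6:.1f} million")  # ~$638.8 million
print(f"Average debt per home:    ${debt / homes:,.0f}")                # ~$149,000
print(f"Average value per home:   ${implied_value / homes:,.0f}")       # ~$199,000
```
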
Biggest Risk

What is the biggest risk? Whalen says "The largest danger may be that Blackstone will be allowed to sell the Invitation Homes business or take it public before the securities mature".

I disagree. I think the biggest deal-based risk is that individual properties can be sold over time, leaving increasing amounts of garbage in the loan portfolio. That thought alone makes me suspicious as to how Blackstone picked properties to go into this pool in the first place.

Final Thoughts

Without a doubt, Blackstone picked individual properties carefully, and every property placed in the pool was chosen to maximize value for Blackstone, not investors. That is to be expected, but Blackstone went steps further, nearly to the point of detailing how investors are likely to be gored by this deal.

Investors chasing this deal for paltry returns are picking up pennies in front of a steamroller. This is the way it is at the peak of every credit bubble.

The only surprising thing is how quickly investors were willing to repeat their last mistake.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

China Interest Rate Crisis Continues: 7-Day Interest Rate Doubles to 10% in One Week; China Bans Words "Cash Crunch"

Posted: 23 Dec 2013 01:21 AM PST

A "cash crunch" is on in China. But don't call it that, because China banned use of the term last week.

The New York Times reports China Rates Approach Crisis Levels Despite Central Bank Measures.
An exceptional bid by China's central bank to curb soaring interest rates and relieve pressure on the financial system appeared to have come up short on Monday, as Chinese money market rates shrugged off the measure and continued to approach the crisis levels seen in June.

The central bank, the People's Bank of China, said late Friday that it had provided more than 300 billion renminbi, or about $50 billion, in short-term funds to selected banks over a three-day period that week.

Rates continued to surge on Monday, however, in China's money markets — a key source of short-term funding for commercial banks and also for financial institutions engaged in risky, off-balance-sheet shadow lending.

One key rate, the seven-day repurchase rate, rose as high as 10 percent on Monday. That was double the rate of a week earlier and the highest level since June, when the People's Bank of China allowed rates to surge in an effort to curb speculative investment in the country's sprawling shadow banking sector. 

China's banks are scrambling for short-term cash to meet month-, quarter- and year-end regulatory requirements. At the same time, demand for cash is high among Chinese companies seeking to meet year-end payments.
China Bans Words "Cash Crunch"

Last Friday, the Financial Times reported China presses media to tone down cash crunch story.
Chinese propaganda officials have ordered financial journalists and some media outlets to tone down their coverage of a liquidity crunch in the interbank market, in a sign of how worried Beijing is that the turmoil will continue when markets reopen on Monday.

Money market rates surged again on Friday, even after China's central bank announced on Thursday evening that it had carried out "short-term liquidity operations" to alleviate the problem.

The benchmark Shanghai Composite Index fell 2 per cent on Friday, its ninth consecutive day of losses and its longest losing streak in 19 years.

In response Chinese censors have warned financial reporters not to "hype" the story of problems in the interbank market, and in some cases have forbidden them from using the Chinese words for "cash crunch" in their stories, according to two people with direct knowledge of the matter who asked not to be named.

Don't worry. There isn't a cash crunch because China says so. Heck, China even banned the words. That should be proof enough. Besides, a quick check shows the Shanghai composite index is up a bit today at the time of this writing.

Clearly, central bankers everywhere have everything perfectly under control. For now.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Machine Learning for SEOs


Posted: 22 Dec 2013 03:16 PM PST

Posted by Tom-Anthony

Since the Panda and Penguin updates, the SEO community has been talking more and more about machine learning, and yet often the term still isn't well understood. We know that it is the "magic" behind Panda and Penguin, but how does it work? Why didn't they use it earlier? What does it have to do with the periodic "data refreshes" we see for both of these algorithms?

I think that machine learning is going to be playing a bigger and bigger role in SEO, and so I think it is important that we have a basic understanding of how it works.

Disclaimer: Firstly, I'm no expert on machine learning. Secondly, I'm going to intentionally simplify aspects in places and brush over certain details that I don't feel are necessary. The goal of this post is not to give you a full or detailed understanding of machine learning, but instead to give you a high-level understanding that allows you to answer the questions in my opening paragraph should a client ask you about them. Lastly, Google is a black box, so obviously it is impossible to know for sure exactly how they are going about things, but this is my interpretation of the clues the SEO community has stitched together over time.

Watermelon farming

Machine learning is appropriate to use when there is a problem that does not have an exact answer (i.e. there isn't a right or wrong answer) and/or one that does not have a method of solution that we can fully describe.

Examples where machine learning is not appropriate would be a computer program that counts the words in a document, simply adds some numbers together, or counts the hyperlinks on a page.

Examples where machine learning would be appropriate are optical character recognition, determining whether an email is spam, or identifying a face in a photo. In all of these cases it is almost impossible for a human (who is most likely extremely good at these tasks) to write an exact set of rules for how to go about doing these things that they can feed into a computer program. Furthermore, there isn't always a right answer; one man's spam is another man's informative newsletter.

Explaining Machine Learning with Will Critchlow at SearchLove 2013 in London. I like watermelons.

The example I am going to use in this post is that of picking watermelons. Watermelons do not continue to ripen once they are picked, so it is important to pick them when they are perfectly ripe. Anyone who has been picking watermelons for years can look at a watermelon, give it a feel with their hands, and from its size, colour and how firm it feels determine whether it is under-ripe, over-ripe or just right. They can do this with a high degree of accuracy. However, if you asked them to write down a list of rules or a flow chart that you or I could use to determine whether a specific watermelon was ripe, then they would almost certainly fail - the problem doesn't have a clear-cut answer you can write into rules. Also note that there isn't necessarily a right or wrong answer - there may even be disagreement among the farmers.

You can imagine that the same is true about how to identify whether a webpage is spammy or not; it is hard or impossible to write an exact set of rules that work well, and there is room for disagreement.

Robo-farmers

However, this doesn't mean that it is impossible to teach a computer to find ripe watermelons; it is absolutely possible. We simply need a method that is more akin to how humans would learn this skill: learning by experience. This is where machine learning comes in.

Supervised learning

We can set up a computer (there are various methods; we don't need to know the details at this point, but the method you've likely heard of is artificial neural networks) such that we can feed it information about one melon after another (size, firmness, color, etc.), and we also tell the computer whether that melon is ripe or not. This collection of melons is our "training set," and depending on the complexity of what is being learnt it needs to have a lot of "melons" (or webpages or whatever) in it.

Over time, the computer will begin to construct a model of how it thinks the various attributes of the melon play into it being ripe or not. Machine learning can handle situations where these interactions could be relatively complex (e.g. the firmness of a ripe melon may change depending on the melon's color and the ambient temperature). We show each melon in the training set many times in a round-robin fashion (imagine this was you: now that you've noticed something you didn't before, you can go back to previous melons and learn even more from them).

Once we're feeling confident that the computer is getting the hang of it, then we can give it a test by showing it melons from another collection it has not yet seen (we call this set of melons the "validation set"), but we don't share whether these melons are ripe or not. Now the computer tries to apply what it has learnt and predict whether the melons are ripe or not (or even how ripe they may or may not be). We can see from how many melons the computer accurately identifies how well it has learnt. If it didn't learn well we may need to show it more melons or we may need to tweak the algorithm (the "brain") behind the scenes and start again.

This type of approach is called supervised learning, where we supply the learning algorithm with the details about whether the original melons are ripe or not. There do exist alternative methods, but supervised learning is the best starting point and likely covers a fair bit of what Google is doing.
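
To make the training/validation split concrete, here is a minimal sketch of supervised learning using scikit-learn. The melon measurements, labels and choice of a random-forest model are all invented for illustration; the point is only the workflow of fitting on a training set and checking predictions against a held-back validation set.

```python
# Minimal supervised-learning sketch (illustrative only; the melon data is invented).
# Requires scikit-learn: pip install scikit-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each melon: [circumference_cm, firmness_score, colour_darkness]
melons = [
    [60, 0.9, 0.8], [55, 0.7, 0.6], [70, 0.4, 0.9], [58, 0.8, 0.7],
    [65, 0.3, 0.5], [52, 0.9, 0.4], [68, 0.6, 0.9], [61, 0.5, 0.6],
]
ripe = [1, 0, 1, 1, 0, 0, 1, 0]  # labels supplied by the "farmer"

# Hold some melons back as a validation set the model never trains on.
X_train, X_val, y_train, y_val = train_test_split(
    melons, ripe, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)       # learn from the training set
print(model.predict(X_val))       # predictions for melons it has never seen
print(model.score(X_val, y_val))  # fraction identified correctly
```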

One thing to note here is that even after you've trained the computer to identify ripe melons well, it cannot write that exhaustive set of rules we wanted from the farmer any more than the farmer could.

Caffeine infrastructure update

So how does all this fit with search?

First we need to rewind to 2010 and the rollout of the Caffeine infrastructure update. Little did we know it at the time, but Caffeine was the forefather of Panda and Penguin. It was Caffeine that allowed Panda and Penguin to come into existence.

Caffeine allowed Google to update its index far faster than ever before, and update PageRank for parts of the web's link graph independently of the rest of the graph. Previously, you had to recalculate PageRank for all pages on the web at once; you couldn't do just one webpage. With Caffeine, we believe that changed and they could estimate, with good accuracy, updated PageRank for parts of the web (sub-graphs) to account for new (or removed) links.

This meant a "live index" that is constantly updating, rather than having periodic updates.
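
For context, here is a toy version of the PageRank calculation referred to above: a simple power iteration over a tiny, made-up link graph. Google's actual Caffeine infrastructure is not public; the sketch only illustrates why, before sub-graph updates were possible, any change to the links traditionally meant re-running this kind of loop over the whole graph.

```python
# Toy PageRank via power iteration (illustrative; Google's Caffeine internals are not public).
damping = 0.85
links = {            # hypothetical four-page web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Adding or removing a single link changes the whole calculation,
# which is why incremental sub-graph updates were such a big deal.
print(rank)
```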

So, how does this tie in with machine learning, and how does it set the stage for Panda and Penguin? Let's put it all together...

Panda and Penguin

Caffeine allowed Google to update PageRank extremely quickly, far faster than ever before, and this is likely the step that allowed them to finally apply machine learning at scale as a major part of the algorithm.

The problem that Panda set out to solve is very similar to the problem of determining whether a watermelon is ripe. Anyone reading this blog post could take a short look at a webpage, and in most cases tell me how spammy that page is with a high degree of accuracy. However, very few people could write me an exact list of rules to judge that characteristic for pages you've not yet seen ("if there are more than x links, and there are y ads taking up z% of the screen above the fold..."). You could give some broad rules, but nothing that would be effective for all the pages where it matters. Consider also that if you (or Google) could construct such a list of strict rules, it would become easier to circumvent them.

So, Google couldn't write specific sets of rules to judge these spammy pages, which is why for years many of us would groan when we looked at a page that was clearly (in our minds) spammy but which was ranking well in the Google SERPs.

The exact same logic applies for Penguin.

The problems Google was facing were similar to the problem of watermelon farming. So why weren't they using machine learning from day one?

Training

Google likely created a training set by having their teams of human quality assessors give webpages a score for how spammy that page was. They would have had hundreds or thousands of assessors all review hundreds or thousands of pages to produce a huge list of webpages with associated spam scores (averaged from multiple assessors). I'm not 100% sure on exactly what format this process would have taken, but we can get a general understanding using the above explanation.

Now, recall that to learn how ripe the watermelons are we have to have a lot of melons and we have to look at each of them multiple times. This is a lot of work and takes time, especially given that we have to learn and update our understanding (we call that the "model") of how to determine ripeness. After that step we need to try our model out on the validation set (the melons we've not seen before) to assess whether it is working well or not.

In Google's case, this process is taking place across its whole index of the web. I'm not clear on the exact approach they would be using here, of course, but it seems clear that applying the above "learn and test" approach across the whole index is immensely resource intensive. The types of breakthroughs that Caffeine enabled with a live index and faster computation on just parts of the graph are what made Machine Learning finally viable. You can imagine that previously if it took hours (or even minutes) to recompute values (be it PageRank or a spam metric) then doing this the thousands of times necessary to apply Machine Learning simply was not possible. Once Caffeine allowed them to begin, the timeline to Panda and subsequently Penguin was pretty quick, demonstrating that once they were able they were keen to utilise machine learning as part of the algorithm (and it is clear why).
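
Sketching that in code, a Panda-style training step might look something like the following. The page features, rater scores and choice of a ridge-regression model are assumptions for illustration only, not Google's actual signals or methods; the idea is simply that human-assigned spam scores play the role the farmer's labels did for the melons.

```python
# Sketch of training a model on human-rated pages (purely illustrative;
# the features, scores and model here are assumptions, not Google's system).
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Each page: [ads_above_fold_pct, outbound_links, words_on_page, duplicate_content_pct]
pages = [
    [0.60, 120, 300, 0.70], [0.05, 12, 1800, 0.02], [0.40, 80, 450, 0.50],
    [0.10, 20, 1200, 0.05], [0.75, 200, 150, 0.90], [0.15, 30, 900, 0.10],
]
# Averaged human-rater spam scores, 0 (clean) to 10 (pure spam)
spam_scores = [8.5, 1.0, 6.0, 2.0, 9.5, 2.5]

X_train, X_val, y_train, y_val = train_test_split(
    pages, spam_scores, test_size=0.33, random_state=1
)

model = Ridge()
model.fit(X_train, y_train)  # learn the relationship between features and rater scores
print(model.predict(X_val))  # predicted spam scores for pages the model has never seen
```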

What next?

Each "roll out" of subsequent Panda and Penguin updates was when a new (and presumably improved) model had been calculated, tested, and could now be applied as a signal to the live index. Then, earlier this year, it was announced that Panda would be continuously updating and rolling out over periods of around 10 days, so the signs indicate that they are improving the speed and efficiency with which they can apply Machine Learning to the index.

Hummingbird seems to be setting the stage for additional updates.

I fully expect we will see more machine learning being applied to all areas of Google over the coming year. In fact, I think we are already seeing the next iterations of it with Hummingbird, and at Distilled we are viewing the Hummingbird update in a similar fashion to Caffeine. Whilst Hummingbird was an algorithm update rather than an infrastructure update, we can't shake the feeling that it is setting the foundations for something yet to come.

Wrap-up

I'm excited by the possibilities of machine learning being applied at this sort of scale, and I think we're going to see a lot more of it. This post set out to give a basic understanding of what is involved, but I'm afraid to tell you I'm not sure the watermelon science is 100% accurate. However, I think understanding the concept of Machine Learning can really help when trying to comprehend algorithms such as Panda and Penguin.



Seth's Blog : "In an abundance of caution"

 

"In an abundance of caution"

Do we have a caution shortage?

Is it necessary to have caution in abundance?

When a lawyer or a doctor tells you to do something in an abundance of caution, what they're actually doing is playing on your fear. Perhaps we could instead opt for an abundance of joy or an abundance of artistic risk or an abundance of connection. Those are far more productive (and fun).

Also: The things we have the most abundance of caution about are rarely the things that are actual risks. They merely feel like risks.
