Monday, October 21, 2013

More than just a website

The White House | Monday, October 21, 2013

On October 1st, even as the federal government was shutting down, HealthCare.gov -- the new Health Insurance Marketplace where people can find and apply for affordable health insurance plans as part of Obamacare -- opened for business nationwide.

As you may have heard, the website launch hasn’t gone nearly as smoothly as it should have. And with nearly 20 million visits to the site in the first several weeks, the problems were aggravated by the website's popularity.

But even in spite of those problems, thousands of Americans are currently signing up and saving money. Nearly half a million Americans have already applied for health insurance through the federal and state Marketplaces. It's not hard to see why: Once people get through the door, they overwhelmingly like the benefits and cost of the products available to them -- especially when compared with the discriminatory private market that was previously the only game in town.

Some of the best and brightest are now doing everything they can to fix the site as quickly as possible. People are working overtime, 24/7, and we've brought in some of the best IT experts from across the country to join the team at the federal agency responsible for running the Marketplace. And as President Obama said today, "Nobody's madder than me about the fact that the website isn't working as well as it should, which means it's going to get fixed."

It's also important to remember that the website is not the only way to enroll. You can dial the call center at 1-800-318-2596 and apply directly over the phone. Typically, it takes about 25 minutes for an individual or about 45 for a family. You can also find local "navigators" who can help you enroll in person at LocalHelp.HealthCare.gov.

Learn more about what we're doing to make the site better and enrollment easier.

The President is the first to admit that the website's problems are unacceptable. But Obamacare is more than just a website. Most Americans already have health insurance through their employer, Medicare, or Medicaid, and they don't need to go to the Marketplace for new coverage. Instead, the health care law provides new benefits and increased protection: Young people can stay on their parents' insurance plans until they are 26, seniors are receiving more affordable prescription medicine, and preventive care like mammograms is free.

So while there are major improvements to make on the new website, millions of Americans are already benefiting from the health care law. And we're making sure that those who don't have access to affordable health insurance today can sign up -- online, on the phone, and even in person.

Here are some of the things we've done in the meantime to make the process easier. Take a look, and pass this message along to those you know who are trying to sign up for health care:

  • You can now preview plans and prices available in your area without filling out the online application.
  • You can find out, with an improved calculator, whether your income and household size may qualify you for lower costs on your monthly premiums and out-of-pocket costs.
  • You can apply for coverage 4 ways: by phone, online, by mail with a paper application, or with the help of an in-person assister.

Thanks, and stay tuned for more updates.


Making HealthCare.gov Better

Here's What's Happening Here at the White House

  Featured

On October 1st, HealthCare.gov opened for business, offering Americans a new way to compare health insurance plans and enroll in coverage. The site has received nearly 20 million visits, and Americans all over the country are signing up for affordable, high-quality health insurance. But, as President Obama explained today, the website hasn't worked as well as it's supposed to.

See what improvements are being made to HealthCare.gov.


  Top Stories

Weekly Address: Working Together on Behalf of the American People

In this week's address, President Obama said that now that the federal government is reopened and the threat of default is lifted from the economy, there are three places Washington can take action to serve the American people.

READ MORE

White House Announces Public Tours

The White House is pleased to announce the resumption of East Wing and Executive Residence tours, beginning on November 5th.

READ MORE

Here's What You Missed During the Shutdown

For more than two weeks, President Obama worked diligently to reopen the government and remove the dangers of default from our economy. He met with members of Congress from both parties, talked with Americans impacted by the shutdown, and spoke to the press. Thanks to those efforts and the work of Democrats and responsible Republicans in Congress, the President signed legislation late Wednesday night to end the shutdown.

READ MORE

 
 
  Today's Schedule

All times are Eastern Time (ET)

10:30 AM: The President and Vice President receive the Presidential Daily Briefing

11:25 AM: The President delivers remarks on the Affordable Care Act

12:30 PM: Press Briefing by Press Secretary Jay Carney



It's Penguin-Hunting Season: How to Be the Predator and Not the Prey

Posted: 20 Oct 2013 04:12 PM PDT

Posted by russvirante

Penguin changed everything. Most search engine optimizers like me, especially those of us who operate in the gray areas of optimization, had long grown comfortable using "ratios" and "percentages" as simple litmus tests to protect ourselves against the wrath of Google. I can't tell you how many times I both asked and was asked what our current "anchor text ratio" was. Many of you probably remember having the same types of discussions back in the keyword-stuffing days.

We now know unequivocally that Google has used and continues to use statistical tools far more advanced than simply looking at where an individual ranking factor sits on a dial. (We certainly have more than enough Remove'em users to prove that.) My understanding of Penguin and its content-focused predecessor Panda is that Google now employs machine-learning techniques across large data sets to uncover patterns of over-optimization that aren't easily discerned by the human eye or the crude algorithms of the past. It is with this understanding that my company, Virante, Inc., and I undertook the Open Penguin Data project and ultimately formed our Penguin Vulnerability Score.

The Open Penguin Data Project

Matt Cutts occasionally gives us a heads-up about future updates, and in the spring of 2013 we were informed that Penguin 2.0 would roll out within a few weeks. I remember exactly when the idea hit me. I was reading "How is Big Data Different from Previous Data" by Bryan Eisenberg, and it occurred to me that the kind of stuff we were doing at Remove'em to detect bad links just couldn't pass muster against the sophistication of the "big data" analysis Google was using at the time. So Virante went to work. We started monitoring a huge number of keywords so that when Penguin 2.0 hit, we could catch the winners and losers. In the end, we used data from three different awesome providers: Authority Labs (for the initial data set), Stat Search Analytics (for cross-validation), and SerpMetrics (for determining that we weren't just picking up manual penalties). We identified around 600 losing URL/keyword pairs and matched them with competitors who did not lose rankings.

We then opened the data up to the community at the Open Penguin Data project website and asked members of the community to contribute their ideas for factors that might influence the Penguin algorithm. You can go there right now and download the latest data set, although at present I know there is a bug in the mozRank and mozTrust columns that needs to be fixed. We have identified over 70 factors that may influence Penguin and are still building on them, with the latest variable update being October 14th. Unfortunately, only certain variables can be added at this point, as freshly gathered data won't reflect the link profiles as they were when Penguin hit. The data behind the factors came from a large number of sources, beginning with Moz of course, and including Majestic SEO, Ahrefs, GrepWords, and Archive.org.

We then began to analyze the data in a number of ways. The first was through standard correlation coefficients to help determine direction of influence (assuming there was any influence at all). It is important that I deal with the issue of correlation vs. causation here, because I am sure one of you will bring it up.

Correlation vs. causation

The purpose of the Open Penguin Data Project was not and is not to determine which factors cause a Penguin penalty. Rather, we want to determine which factors predict a Penguin penalty so that we can build a reasonable model of vulnerability. Once we know a website's vulnerability to Penguin, we can start applying different techniques, ones that fall closer to the realm of causal factors, to lower that vulnerability.

For example, we will talk about the difference between mozTrust and mozRank as being a fairly good predictor of Penguin. No one in their right mind believes that Google consumes Moz's data to determine whom to penalize and whom not to. However, once we know that a site is likely to be penalized (because we know the mozTrust and mozRank differential), we can start to apply tactics that will likely counter Penguin, such as using the disavow tool or removing spammy links. We aren't talking about causation; we are talking about prediction.

The analysis of the risk factors

We then began analyzing the data using a couple of methods. First, we used standard mean Spearman correlations to get a sense of the lay of the land. This also allowed us to build a crude regression model that actually works quite well without much tweaking: the model essentially adds up the correlation coefficients of the factors each site trips. Obviously, more sophisticated modeling beats this, but for a crude overview it works nicely and can be computed on the fly. The real magic happens, though, when we apply to the data set the same sorts of machine-learning techniques that Google uses in building models like Penguin.
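For the curious, here is a minimal sketch of that crude additive model before we get to the machine learning. It assumes a CSV with one row per URL/keyword pair, binary factor columns, and a binary "penalized" label; the file and column names are hypothetical, not Virante's actual code.

```python
# A minimal sketch of the crude additive model described above.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("open_penguin_data.csv")  # hypothetical file name
factors = [c for c in df.columns if c != "penalized"]

# Spearman correlation of each factor against the penalty label.
coefs = {f: spearmanr(df[f], df["penalized"]).correlation for f in factors}

# Crude score: for each site, add up the coefficients of the factors
# it trips. Higher totals mean higher predicted vulnerability.
df["crude_score"] = sum(df[f] * coefs[f] for f in factors)
```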

Let me be clear: I do not presume to know what statistical techniques Google used to build its model. However, certain types of techniques are regularly used to answer these kinds of multivariate classification problems, and I chose to use them. In particular, I chose a gradient boosting algorithm. You can read up on the methodology or the specific implementation we used via scikit-learn, but I'll save you the headache and tell you what you need to know.
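Continuing the hypothetical data frame above, a minimal sketch of the gradient boosting step might look like this in modern scikit-learn (the 2013 API used slightly different module paths):

```python
# Train a gradient boosting classifier on the factor columns and use
# the predicted probability of a penalty as a vulnerability score.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    df[factors], df["penalized"], test_size=0.25, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

vulnerability = model.predict_proba(X_test)[:, 1]
print("holdout accuracy:", model.score(X_test, y_test))
```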

Most of us think about statistical analysis as putting some variables in Excel and making a nice graph with a linear regression that shows an upward or downward trend. You can see this below. Unfortunately, this grossly over-simplifies complex problems and often produces a crude result in which everything above the line is treated as different from everything below it, when clearly it is not that simple. As you see in the example graph below, plenty of penalized sites get missed by falling below the line, and perfectly decent sites above the line get hit.

Classification systems work differently. We aren't necessarily concerned with higher or lower numbers; we are concerned with patterns that might predict something. In this case, we know which sites were hit by Penguin, so we take a whole bunch of factors and see how the patterns among them might accurately predict those hits. We don't need to draw an arbitrary line; we can analyze the points individually using machine learning, as you see in the example graph below.

The hard part is that machine learning tells us a lot about prediction, but not a lot about how we came to that prediction. That is where some extra work comes into play. With the Open Penguin Data project, we grouped some of the factors by common characteristics and measured the effectiveness of their predictions in isolation from the other factors. For example, we grouped trust metrics together and anchor text metrics together. We then grouped them in combinations as well. This then gave us a model we could use to determine not only increased Penguin vulnerability, but also what factors contributed to that vulnerability and to what degree.
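Under the same assumptions as the sketches above, measuring a factor group in isolation can be as simple as cross-validating a model trained only on that group's columns. The group names and membership here are hypothetical, not the real data set's:

```python
# Measure each factor group's predictive power in isolation.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

groups = {
    "trust": ["mozrank_gt_moztrust", "citation_gt_trust_flow"],
    "anchor_text": ["exact_match_any", "phrase_match_any", "commercial_any"],
    "link_sources": ["pct_directory_links", "pct_forum_links"],
}

for name, cols in groups.items():
    # Train on only this group's columns and score via 5-fold CV.
    scores = cross_val_score(
        GradientBoostingClassifier(), df[cols], df["penalized"], cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```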

So, let's talk through some of them here.

Anchor text

By now, everyone and their paid search guy knows that manipulated commercial anchor text is a risk factor for both algorithmic and manual penalties. So, of course, we looked at this closely from the start. We actually broke down the anchor text into three subcategories: exact-match anchor text (meaning the keyword is exactly the keyword for which you would like to rank), phrase-match anchor text (meaning the keyword for which you would like to rank occurs somewhere within the anchor text) and commercial anchor text (the anchor text has a high CPC value).

Exact-match anchor text

We broke exact-match anchor text down into a couple of metrics:

  1. The most common anchor to the page is exact match
  2. The highest mozRank passed anchor to the page is exact match
  3. There is at least one exact match anchor to the page
  4. The most common anchor to the domain is exact match
  5. The highest mozRank passed anchor to the domain is exact match
  6. There is at least one exact match anchor to the domain

Across the board, every single metric related to exact-match anchor text provided some positive predictive power except for the highest-mozRank-passed anchor to the domain. Importantly, no single factor had a particularly strong mean Spearman correlation coefficient. For example, the highest was merely whether the domain had a single link with exact-match anchor text (a .11 correlation coefficient). This is a very weak signal, but our analysis looks for patterns across these weak signals, so we are not necessarily hindered by the fact that no single measurement is sufficiently predictive on its own.
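As an illustration, the page-level flags (1-3) might be derived from a raw link list like the sketch below; the domain-level flags (4-6) are the same logic run over every link to the domain. The field names are hypothetical, not an actual Moz API shape:

```python
# Derive the page-level exact-match anchor flags from a link list.
from collections import Counter

def exact_match_features(links, keyword):
    """links: non-empty list of dicts with 'anchor' and 'mozrank' keys."""
    kw = keyword.lower()
    anchors = [link["anchor"].lower() for link in links]
    most_common = Counter(anchors).most_common(1)[0][0]
    top_by_mozrank = max(links, key=lambda l: l["mozrank"])["anchor"].lower()
    return {
        "most_common_anchor_is_exact": most_common == kw,
        "top_mozrank_anchor_is_exact": top_by_mozrank == kw,
        "has_exact_match_anchor": kw in anchors,
    }
```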

For the biggest victims of Penguin, we often see that exact-match anchor text is the second- or third-largest predictor. For example, the below webmaster's predictive vulnerability score could be lowered by 50% simply by addressing exact-match anchor text links. For this particular webmaster, the anchor text tripped most of the positive signals we measure regarding anchor text.

Now let me say it one more time: I am not saying that Google is using anchor text to determine who to penalize, rather that it is a strong predictor. Prediction is not causation. However, we can say that the groupings of exact-match anchor text metrics allow us to detect Penguin vulnerability quite well.

Phrase-match anchor text

We broke down phrase-match anchor text in the exact same fashion. This was one of the more surprising features we noticed. In many cases, phrase-match anchor text metrics appeared to be more predictive than exact-match anchor text. Many SEOs, myself included, have long depended on what we call "brand blend" to protect against over-optimization penalties. Instead of just building links for the keyword "SEO", we might build links for "Virante SEO" or "SEO by Virante". This may have insulated us against manual anchor text over-optimization penalties, but it does not appear to be the case with Penguin.

In the example I mentioned above, the webmaster hit nearly every exact-match anchor text metric. They also hit every phrase-match metric as well. The combination of these factors increased their predicted likelihood of being impacted by Penguin by a full 100%.

Shoving your high-value keywords inside other phrases doesn't guarantee you any protection. Now, there are a lot of potential takeaways from this. It could be an artifact of merely doubling the exact-match influence (i.e., if you score high on exact match, you will also score high on phrase match). We do see some of this occurring, but it doesn't appear to explain all of the additional predictive power. It could be that these sites are targeting other related keywords, thereby increasing their exposure to other parts of the Penguin algorithm. All we know, though, is that the predictive power of the model increases greatly when we take phrase-match anchor text into account. Nothing more, nothing less.

Commercial anchor text

This is my favorite measure of all, as it shows how Google can use one of its most powerful ancillary data sets, bid prices for keywords, to detect manipulation of the link graph. We built four metrics around commercial anchor text:

  1. The page has a high-value anchor in a single link
  2. The majority of the anchors are valuable
  3. The majority of links are very high-value anchors
  4. Has a high CPC site-wide

Both having high-value anchors and very high-value anchors showed strong predictive value for Penguin vulnerability. In keeping with the example we have been using so far, you can see that removing commercial anchor text would have a profound impact on our prediction of whether or not the site will be impacted by Penguin.
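Here is a minimal sketch of how those four flags might be computed, assuming a lookup table of estimated CPC values per anchor. The dollar thresholds are illustrative, not the cutoffs we actually used:

```python
# Compute the four commercial-anchor flags from a link list and a
# hypothetical CPC lookup table (e.g. keyword bid-price data).
HIGH_CPC = 1.00       # "valuable" anchor, dollars per click (illustrative)
VERY_HIGH_CPC = 5.00  # "very high-value" anchor (illustrative)

def commercial_anchor_features(links, cpc):
    """cpc: dict mapping anchor text -> estimated cost per click."""
    values = [cpc.get(link["anchor"].lower(), 0.0) for link in links]
    n = max(len(values), 1)  # guard against an empty link list
    return {
        "has_high_value_anchor": any(v >= HIGH_CPC for v in values),
        "majority_valuable_anchors": sum(v >= HIGH_CPC for v in values) > n / 2,
        "majority_very_high_value": sum(v >= VERY_HIGH_CPC for v in values) > n / 2,
        "high_cpc_sitewide_avg": sum(values) / n >= HIGH_CPC,
    }
```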

If you've been paying close attention, you may have noticed that a lot of these are related. Having exact-match and phrase-match anchor text likely means you have highly commercial anchors. All of these metrics are related to one another and it is their combined weak signals that make it easier to detect Penguin vulnerability.

Link sources

The next issue we tried to target was the quality of link sources. The most obvious step was trying to detect commonly spammed link sources: directories, forums, guestbooks, press releases, articles, and comments. Using a set of footprints to identify these types of links and spidering all of the backlinks of the training set, we were able to build a few metrics identifying sites that either simply had such links or drew a preponderance of their profile from them.

First, it was interesting that every type of link was positively correlated, but only very weakly. You can't just look at a bunch of article directory submissions and assume that is the cause of a Penguin penalty. However, the combination (that is, a site that relies on four or five of these types of techniques for nearly all of its PageRank) would appear to carry a greater risk factor.
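To make "footprints" concrete, a minimal sketch of URL-based link source classification might look like the following. These patterns are illustrative stand-ins; real footprints also inspect the linking page's HTML, not just its URL:

```python
# Classify a backlink URL into spam-prone source types via footprints.
import re

FOOTPRINTS = {
    "directory": re.compile(r"/directory/|/listings?/"),
    "forum": re.compile(r"/forum|/viewtopic|/showthread"),
    "guestbook": re.compile(r"guestbook"),
    "press_release": re.compile(r"press-?release"),
    "article": re.compile(r"article-?director(y|ies)|/articles?/"),
    "comment": re.compile(r"#comment|/comments?/"),
}

def classify_link_source(url):
    """Return every source type whose footprint matches the URL."""
    url = url.lower()
    return [name for name, pattern in FOOTPRINTS.items() if pattern.search(url)]
```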

At this point, I want to stop and draw attention to something: Each of these groupings of factors appears to have some good predictive value, but none of them comes even close to explaining the whole vulnerability. Fixing your exact-match anchor text links, or phrase-match links, or commercial anchor links, or poor link sources by themselves will not insulate you from detection. It is the combination of these factors that appears to increase vulnerability to Penguin. Most sites that we see hit by Penguin have vulnerability scores of 250% or more, although in Penguin 2.1 we saw them as low as 150%. To get to these levels you have to trip a wide variety of factors, but you don't have to be egregiously abusing any one single SEO tactic.

Site-wides

This was one of the most disappointing features we used. I was certain, as were many, that site-wide links would be the nail in the coffin. Clearly site-wide links are the culprit behind the Penguin penalty, right? Well, the data just doesn't bear that out.

Site-wides are just too common. The best sites on the web enjoy tons of site-wide links, often in the form of blogrolls. In fact, high site-wide rates correlate negatively with Penguin penalties. Certainly this doesn't mean you should run out and try to get a bunch of site-wide links, but it does raise the question: Are site-wides really all that bad?

Here is where we find the real difference: anchor text. Commercial anchor text site-wides positively correlate with Penguin penalties. While we cannot say they cause them, there is definitely a predictive leap between just any old site-wide link and a site-wide link with specific, commercially valuable anchor text.

This also helps illustrate another issue we SEOs often run into: anecdotal evidence. It is really easy to look at a link profile, see that site-wide, and immediately assume it is the culprit. It is then seemingly reinforced when we scratch the surface with too simple an analysis like looking at the preponderance of that feature among sites that are penalized. It can and does often lead us down the wrong path.

Trust, trust, trust

Of all the eye-opening, mind-blowing discoveries revealed by the Open Penguin Data project, this one was the biggest. At minimum, we all need to tip our hats to the folks at Moz and Majestic for providing us with great link statistics. Two of the strongest metrics we found for predicting Penguin vulnerability were mozRank greater than mozTrust (Moz) and Domain Citation Flow greater than Domain Trust Flow (Majestic).

Both Moz and Majestic give us statistics that mimic, to a certain degree, the raw flow of PageRank (mozRank and Citation Flow) and an alternative often referred to as TrustRank (mozTrust and Trust Flow). They are essentially the same thing, except that the trust metrics start with a seed set of trusted URLs like .govs and .edus and give extra value to sites that get links from these trusted sources. These metrics by themselves, while useful in other endeavors, don't really give us much information about Penguin.

However, if we flag URLs and domains where the trust metrics are lower than the raw link metrics, we score some of the highest correlations of all factors tested. Even cruder metrics, like whether or not the domain has a single .gov link, help predict Penguin vulnerability. While it would be insane to conclude that Google has a subscription to Moz and Majestic and uses them to build its Penguin algorithm, this much appears to be true: In the aggregate, cheap, low-quality links are a Penguin risk factor.
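A minimal sketch of these trust-differential flags, assuming the metric values have already been fetched from the Moz and Majestic APIs (the function and flag names are hypothetical):

```python
# Flag domains whose raw link metrics outrun their trust metrics.
def trust_gap_features(mozrank, moztrust, citation_flow, trust_flow,
                       has_gov_link):
    return {
        # Raw link equity exceeding trust was among the strongest predictors.
        "mozrank_exceeds_moztrust": mozrank > moztrust,
        "citation_exceeds_trust_flow": citation_flow > trust_flow,
        # Even this crude flag added predictive power in the data set.
        "no_gov_link": not has_gov_link,
    }
```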

What we should learn

There are some really amazing takeaways we can build from this kind of analysis: the kind of takeaways that, for those of you who are not yet seasoned professionals, should change your understanding of Penguin and Google's algorithm. So let's dive in...

Penguin isn't spam detection, it's you detection

Try this fact on for size: If you hit every anchor text trigger in the Open Penguin data set, our predictive model actually DROPS in effectiveness. At first glance this seems counter-intuitive. Certainly Google should catch these extreme spammers. The reality, though, is that cruder algorithms generally clear out this type of search spam. If you have done any traditional off-site SEO in the last three years, it will probably create additional Penguin vulnerability. The Penguin update is targeted at catching patterns of optimization that aren't so easily detected; the most egregious offenders are more likely to be caught by other algorithms than by Penguin. So when the next Penguin update comes out and you hear people complain about how some spam site wasn't affected, you can be confident that this isn't a flaw in Penguin, but rather a deliberate choice on Google's part to create separate algorithms targeting different types of over-optimization.

The rise of the link assassin

It was Ian Curl, a former Virante employee and now head of Link Assassins, who first pointed out to me the clear future of SEO: pruning the link graph. Google has essentially given us the tools, via Google Webmaster Tools, to both view our links and disavow them. A new class of link removal and disavow professionals has grown over the last year: SEOs who can spot a toxic link and guide you through the process of not just cleaning up a penalty but proactively managing your link profile to avoid penalties in the first place. These "link assassins" will play a vital role in the future of SEO, in just the same way one would expect a professional gardener to prune back excessive growth.

The demise of cheap, scalable white-hat link building

Let me be clear: If it works, Google wants to stop it. We have already heard shots across the bow from Matt Cutts for lily-white link building techniques like guest posting. Right now, the only hold-out I see left is broken link building, which is only scalable under certain circumstances. Google is doing its best to identify the exact same footprints you use to link-build and to add them to its own link pattern detection. It isn't an easy task, which is why Penguin only rolls out every few months, but it appears to be one to which Google is committed.

The growth of integrated SEO

There is no way around it. If you are interested in long-term, effective, white-hat SEO, you are going to have to build integrated campaigns, largely focused around content marketing, that include multiple forms of advertising. There is a great write-up on this by Chris Boggs over at Internet Marketing Ninjas on Integrating Content Marketing into Traditional Advertising Campaigns. As Google continues to get better at detecting unnatural patterns, it will be harder and harder to get away with simply turning one dial at a time.

Next steps

The average webmaster or SEO needs to step back and take honest stock of their current SEO footprint. I don't mean to fear-monger; only a fraction of a percent of all websites will ever get hit by Penguin. But 75% of adult males who smoke a pack a day will never get lung cancer, and that doesn't mean you should keep on smoking because the odds are in your favor. While the odds are greatly in your favor that Penguin will never strike your site, there is no reason not to take simple precautions to determine whether your tactics are putting your site at risk.



Seth's Blog: The problem with "just"

 

The problem with "just"

A few people have dropped me notes referring to the notion that I encourage people to just ship it.

Ship it, certainly. If you don't meet the market, if you don't open yourself to the input and reaction of those you seek to serve and influence, you've done nothing much.

But, "just"?

Not going to let you off the hook with that. The "just" implies a throwaway. The "just" has a "what the hell" element to it. With "just" in the mix, the alternatives seem to be: polish, improve, focus on quality OR just throw it out there.

Nope.

You ship. You ship your best work, when it's ready. Not after it's ready, not when it's too late to make a difference, and yes, of course, not when it's sloppy or unformed.

But you ship. You're on the hook, you made this, it's ready. Ship. Without excuses.


 

Sunday, October 20, 2013

Mish's Global Economic Trend Analysis


"Bubblicious" High End Flipping Up 350%, Overall Flipping Down 13%

Posted: 20 Oct 2013 09:10 AM PDT

Flipping houses went out of fashion around 2008, along with flip phones and sub-prime mortgages. But real estate data shows the practice is on the upswing — among million-dollar homes.

At the low end, where flipping might actually make some economic sense, flipping is down.

MarketWatch reports House Flipping Makes a Comeback:
For the market as a whole, flips of single family homes fell 13% in the third quarter, according to new research from RealtyTrac, with investors earning a gross profit of nearly $55,000 on each property. But at the higher end of the market, homes seem to flip as quickly as a griddle full of hamburgers. Flipping increased 34% among homes worth $750,000 and over, 42% among $1 to $2 million houses, and 350% for properties worth between $2 and $5 million.

Flipping tends to be most common in cities with a large supply of expensive homes. (To qualify as a flip, a home must be purchased and subsequently sold again within six months.) In fact, more than three-quarters of all high-end flipping took place in five markets: The New York metro area and Los Angeles, San Francisco, San Jose and San Diego.

What's behind these quick turnarounds? "Flipping happens when prices are rising rapidly even if price levels are low," says Jed Kolko, chief economist for real-estate firm Trulia.

It's possible to double the value of a home, says Jeff Salgado, a San Francisco-based realtor. Investors could buy a dilapidated home for $1.2 million, invest $600,000 and sell it for $2.4 million, he says. "Our buying community is driven by the biotech and high tech sectors. These people are brilliant at what they do, but a good portion of them don't know the difference between a screwdriver and a hammer."
Bubblicious

When asked to comment, Bernanke replied "It's not a bubble, it's bubblicious". Well ... not quite.

But signs of a bubble are all over the place. And just as in 2000 with the dotcom bust and 2007 with the real estate bust, the Fed is 100% oblivious to the massive bubbles now.

Alternatively, the Fed harbors a "We Just Don't Care" attitude, hoping employment picks up before the stock market, corporate bond market, and housing bubbles burst.

Regardless, bubbles never end well (except for bank executives and corporate insiders who cash out stock options and sell every share on the way up).

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Seth's Blog: The complaining customer doesn't want a refund

 

The complaining customer doesn't want a refund

He wants a connection, an apology and some understanding. He wants to know why you made him feel stupid or ripped off or disrespected, and why it's not going to happen again.

If you have a department that sends out form letters and refund coupons, what you've done is built the ability, at scale, to get rid of people who are giving you a second chance.

When the refund for the broken M&M's or the artificially flavored nuts that should have been delicious, or the $20 inconvenience fee in exchange for the torture you put a frequent flyer through arrives, you've basically sent a form letter that says, "goodbye."

Which is your choice, of course, but if you think that this expression of goodwill is going to be seen as goodwill, you're wrong.

Try candor or inviting them to an online focus group. Perhaps try being human. Try giving them a chance to be a voice of the concerned, energetic customer, a voice that needs to be heard by people who actually make decisions.


 

Saturday, October 19, 2013

Mish's Global Economic Trend Analysis


J.P. Morgan Reaches $13 Billion Deal with Justice Department; Is This a Fair Deal?

Posted: 19 Oct 2013 06:21 PM PDT

The Wall Street Journal today reported J.P. Morgan Reaches $13 Billion Tentative Deal with Justice Department. However, a criminal investigation is not yet closed.
J.P. Morgan Chase & Co. has reached a tentative deal to pay a record $13 billion to the Justice Department to settle a number of outstanding probes of its residential mortgage-backed securities business, according to a person familiar with the decision.

The deal, which was struck Friday night, doesn't resolve a continuing criminal probe of the bank's conduct, which could result in charges against individuals or the bank itself and possibly increase the penalty tab. The two sides continued to disagree over an admission of wrongdoing that would end the criminal probe and decided instead to resolve the civil allegations related to the mortgage securities.

The deal includes $4 billion to settle claims by the Federal Housing Finance Agency that J.P. Morgan misled Fannie Mae and Freddie Mac about the quality of loans it sold them in the run-up to the 2008 financial crisis, another $4 billion in consumer relief, and $5 billion in penalties paid by the bank, according to a second person close to the talks. How the consumer relief and penalties get dispersed and distributed is largely up to the government, and those details are still unclear, this person said.

The tentative settlement comes as J.P. Morgan tries to put as many legal woes behind it as possible. Earlier this week, J.P. Morgan agreed to pay $100 million and acknowledge wrongdoing to settle allegations by the Commodity Futures Trading Commission related to its botched "London whale" trades. Last month, the bank agreed to pay $920 million to settle similar charges with U.S. and U.K. regulators related to that 2012 trade.

The task force issued a series of subpoenas to various financial companies, seeking internal documents. Those documents held a number of promising leads, one of which was assigned to federal prosecutors in Sacramento.

Investigators in that case discovered an email by a bank employee, warning her higher-ups that the bank was vastly overstating the value of the mortgages being securitized, according to people familiar with the probe. That employee, who has since left the company, has been cooperating with federal prosecutors, who expect to call her as a witness if the case ever goes to trial, according to people familiar with the case.

While the Justice Department considers the evidence in that case to be strong, officials at the bank strongly disagree, according to people familiar with the negotiations.

In late September, as the Justice Department neared its own deadline to file a civil lawsuit in the case, the bank offered $3 billion to settle the case. Attorney General Eric Holder rejected that offer, and government lawyers prepared to file the suit. The bank then offered billions more, if the government was willing to throw into the settlement separate cases, raising the total price and resolving more of the bank's legal headaches.

As the negotiations intensified in September, Mr. Dimon sought a face-to-face meeting with Mr. Holder to try to resolve the remaining sticking points. The two met Sept. 26 at the Justice Department, but the meeting failed to settle the outstanding issues. As talks continued over the remaining weeks, the size of the deal swelled, but the two sides continued to disagree over an admission of wrongdoing that would end the criminal probe.

On Friday night, Mr. Dimon and Mr. Holder decided they were just not going to come to terms on the criminal issue, and to take the deal on the terms where they did agree.
Is This a Fair Deal?

For starters, I am astonished at the massive settlement. $13 billion sounds huge (and it is compared to the typical whitewashing affairs we see).

However, things could have been much worse.

CNN Money notes JPM held "$23 billion in reserves for potential litigation expenses. In a footnote to its SEC filing, the bank said legal costs could be nearly $6 billion above that figure in a worst case scenario."

Perhaps $23 billion, $40 billion, or any amount that wipes out J.P. Morgan's litigation reserves is "fair".

Clearly "fair" is in the eyes of the beholder. I will consider it "fair" if executives of the largest banks are tried and convicted in criminal court. Don't count on it. As astonished as I was about the amount of the settlement, I will be even more astonished if any bank executives are criminally convicted.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com