Monday, February 6, 2012

Damn Cool Pics



Freaky FFXIII Shiva Sisters Cosplay

Posted: 06 Feb 2012 03:22 PM PST

Video game designers constantly dream up characters that seem impossible to duplicate in real life. And every time, creative cosplayers find a way to do it anyway. Take a look at the Shiva Sisters, Styria and Nix, from the game Final Fantasy XIII. That's impressive, to be sure. It's the work of "the-mirror-melts".










Via: unrealitymag


The Wild, Wild Web: Wrestling Online Privacy [infographic]

Posted: 06 Feb 2012 01:26 PM PST



According to Carnegie Mellon researchers, information listed on social media may be enough to guess a Social Security number, the key to identity theft. And with mobile banking apps, more and more people are logging sensitive information from their smartphones. Add confusing Terms of Service agreements into the mix (they take an average of 10 minutes each to read!), and it's easy to see why online privacy can feel mystifying.

The following infographic helps explain some of the biggest issues in web safety and gives tips on how to keep yourself protected, from passwords to privacy policies. With a few steps, you can be confident that you control what you share online.


Via: frugaldad


Let's Move! is Turning Two

The White House

Your Daily Snapshot for
Monday, February 6, 2012

 


This week marks the second anniversary of Let’s Move!, the First Lady’s initiative to solve the problem of childhood obesity within a generation. Today at 2:30 p.m. EST, join Sam Kass, Assistant Chef and Senior Policy Advisor for Healthy Food Initiatives, for a special session of Let’s Move! Office Hours.

Want to ask Sam a question? Find out how to get involved.

Behind the Scenes: December 2011 Photos

Follow our Flickr Photostream to see some behind-the-scenes photos of the President as he welcomes home Iraq veterans, takes Bo for a walk, and does the coin toss for the Army-Navy game.


President Barack Obama plays with Bo, the Obama family dog, aboard Air Force One during a flight to Hawaii, Dec. 23, 2011. (Official White House Photo by Pete Souza)

In Case You Missed It

Here are some of the top stories from the White House blog:

Weekly Address: It’s Time for Congress to Act to Help Responsible Homeowners
President Obama continues his call for a return to American values, including fairness and equality, as part of his blueprint for an economy built to last.

Weekly Wrap Up: Hanging Out with America
A glimpse at what happened this week at WhiteHouse.gov.
 
From the Archives: Startup America White Board
A White House White Board released for the launch of Startup America last year explains how the initiative will help entrepreneurs avoid the "valley of death" when starting new ventures.

Today's Schedule

All times are Eastern Standard Time (EST).

11:00 AM: The Vice President visits Florida State University WhiteHouse.gov/live

12:00 PM: The President receives the Presidential Daily Briefing

12:30 PM: Press Briefing by Press Secretary Jay Carney WhiteHouse.gov/live

2:30 PM: The President meets with senior advisors

2:45 PM: The Vice President attends a campaign event 
 
4:30 PM: The Vice President attends a campaign event

WhiteHouse.gov/live indicates that the event will be live-streamed on WhiteHouse.gov/live.


 

Find Your Site's Biggest Technical Flaws in 60 Minutes



Find Your Site's Biggest Technical Flaws in 60 Minutes

Posted: 05 Feb 2012 01:14 PM PST

Posted by Dave Sottimano

I've deliberately put myself in some hot water to demonstrate how I would do a technical SEO site audit in 1 hour to look for quick fixes (and I've actually timed myself just to make it harder). For the pros out there, here's a look into a fellow SEO's workflow; for the aspiring, here's a base set of checks you can do quickly.

I've got some lovely volunteers who have kindly allowed me to audit their sites to show you what can be done in as little as 60 minutes.

I'm specifically going to look for crawling, indexing and potential Panda-threatening issues like:

  1. Architecture (unnecessary redirection, orphaned pages, nofollow)
  2. Indexing & Crawling (canonical, noindex, follow, nofollow, redirects, robots.txt, server errors)
  3. Duplicate content & on-page SEO (repeated text, pagination, parameter-based duplication, dupe/missing titles, H1s, etc.)

Don't worry if you're not technical; most of the tools and methods I'm going to use are very well documented around the web.

Let's meet our volunteers!

Here's what I'll be using to do this job:

  1. SEOmoz toolbar - Make sure "highlight nofollow links" is turned on, so you can visibly diagnose crawl path restrictions
  2. Screaming Frog crawler - Full website crawl with Screaming Frog (user agent set to Googlebot) - Full user guide here
  3. Chrome and Firefox (FF will have JavaScript and CSS disabled, and the user agent set to Googlebot) - To look for usability problems caused by CSS or JavaScript
  4. Google search queries - To check the index for issues like content duplication, dupe subdomains, penalties, etc.

Here are other checks I've done, but left out in the interest of keeping it short:

  1. Open Site Explorer - Download a backlink report to see if you're missing out on links pointing to orphaned, 302 or incorrect URLs on your site. If you find people linking incorrectly, add some 301 rules on your site to harness that link juice
  2. http://www.tomanthony.co.uk/tools/bulk-http-header-compare/ - Check if the site is redirecting Googlebot specifically
  3. http://spyonweb.com/ - Any other connected domains you should know about? Mainly for duplicate content
  4. http://builtwith.com/ - Find out whether the site is using Apache, IIS or PHP, so you'll know which vulnerabilities to look for
  5. Check for hidden text, CSS display:none funniness, robots.txt-blocked external JS files, hacked / orphaned pages

My essential reports before I dive in:

  1. Full website crawl with Screaming Frog (User agent set to Googlebot)
  2. A report of everything in Google's index using the site: operator (only 1,000 results per query, unfortunately - this is how I do it)

Down to business...

Architecture Issues

1) Important broken links

We'll always have broken links here and there, and in an ideal world they would all work. Just make sure, for SEO & usability, that important links (like those on the homepage) are always in good shape. The following broken link is on the Webrevolve homepage and should point to their blog, but returns a 404. This is an important link because the blog is a great feature and I definitely do want to read more of their content.

   

Fix: Get in there and point that link to the correct page, which is http://www.webrevolve.com/our-blog/

How did I find it: Screaming Frog > response codes report

2) Unnecessary Redirection

This happens a lot more than people like to believe. The problem is that when we 301 a page to a new home we often forget to correct the internal links pointing to the old page (the one with the 301 redirect). 

This page http://www.lexingtonlaw.com/credit-education/foreclosure.html 301 redirects to http://www.lexingtonlaw.com/credit-education/foreclosure-2.html

However, they still have internal links pointing to the old page.

  • http://www.lexingtonlaw.com/credit-education/bankruptcy.html?linkid=bankruptcy
  • http://www.lexingtonlaw.com/blog/category/credit-repair/page/10
  • http://www.lexingtonlaw.com/credit-education/bankruptcy.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-education/collections.html

Fix: Get in that CMS and change the internal links to point to http://www.lexingtonlaw.com/credit-education/foreclosure-2.html

How did I find it: Screaming Frog > response codes report
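You can catch these without a full crawler, too. Here's a minimal sketch in Python, assuming the requests and beautifulsoup4 packages are installed (the start URL is a placeholder, not one of the volunteer sites): it pulls the internal links from a page and flags any that respond with a redirect, so you know which links to update.

    # A minimal sketch: flag internal links that point at redirecting URLs,
    # so they can be updated to link straight to the final destination.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin, urlparse

    START = "http://www.example.com/"  # placeholder URL
    host = urlparse(START).netloc

    html = requests.get(START).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        url = urljoin(START, a["href"])
        if urlparse(url).netloc != host:
            continue  # external link, not our problem here
        r = requests.head(url, allow_redirects=False)
        if r.status_code in (301, 302):
            print(url, "->", r.headers.get("Location"), r.status_code)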

3) Multiple subdomains - Canonicalizing the www or non-www version

This is one of the first basic principles of SEO, and yet there are still tons of legacy sites tragically splitting their link authority by not redirecting the www version to the non-www version, or vice versa.

Sorry to pick on you, CVCSports :S

  • http://cvcsports.com/
  • http://www.cvcsports.com/

Oh, and a couple more have found their way into Google's index that you should remove too:

  • http://smtp.cvcsports.com/
  • http://pop.cvcsports.com/
  • http://mx1.cvcsports.com/
  • http://ww.cvcsports.com/
  • http://www.buildyourjacket.com/
  • http://buildyourjacket.com/

Basically, you have 7 copies of your site in the index.

Fix: I recommend using www.cvcsports.com as the main domain, and using your .htaccess file to create 301 redirects from all of these subdomains to the main www site.

How did I find it? Google query "site:cvcsports.com -www" (I also set my results number to 100 to check through the index quicker)
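Once the .htaccess rules are in place, you can verify that each variant actually 301s to the canonical host. A rough sketch in Python (hostnames taken from the list above; the requests package is assumed installed):

    # Confirm every hostname variant 301-redirects to the canonical www host.
    import requests

    CANONICAL = "http://www.cvcsports.com/"
    variants = [
        "http://cvcsports.com/",
        "http://smtp.cvcsports.com/",
        "http://pop.cvcsports.com/",
        "http://mx1.cvcsports.com/",
        "http://ww.cvcsports.com/",
    ]

    for url in variants:
        r = requests.head(url, allow_redirects=False, timeout=10)
        location = r.headers.get("Location", "-")
        ok = r.status_code == 301 and location.startswith(CANONICAL)
        print("OK " if ok else "FIX", url, "->", r.status_code, location)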

4) Keeping URL structure consistent 

It's important to note that this only becomes a problem when external links are pointing to the wrong URLs. *Almost* every backlink is precious, and we want to ensure that we get maximum value from each one. The trouble is that we can't control how people link to us; without www, with capitals, or with trailing slashes, for example. Short of contacting the webmaster to change a link, we can always employ 301 redirects to harness as much value as possible. The one place this shouldn't happen is on your own site.

We all know that www.example.com/CAPITALS is different from www.example.com/capitals when it comes to external link juice. As good SEOs we typically combat human error with permanent redirect rules that enforce only one version of a URL (e.g. forcing lowercase), but those same rules cause unnecessary internal redirects when our own links contradict them.

Here are some examples from our sites:

  • http://www.lexingtonlaw.com/credit-education/rebuild-credit 301s to the trailing slash version
  • http://webrevolve.com/web-design-development/conversion-rate-optimisation/ redirects to the www version

Fix: Decide on your URL structure: should URLs have trailing slashes, www, lowercase? Whatever you decide, be consistent and you can avoid future problems. Crawl your site and fix any internal links that don't match.
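To make "be consistent" concrete, here's one possible policy sketched in Python: always www, lowercase paths, always a trailing slash (example.com is a placeholder). The same rules would drive both your redirect logic and your internal linking.

    # One possible URL policy: www, lowercase path, trailing slash.
    from urllib.parse import urlsplit, urlunsplit

    def canonicalize(url):
        scheme, netloc, path, query, frag = urlsplit(url)
        if not netloc.startswith("www."):
            netloc = "www." + netloc
        path = path.lower()
        if not path.endswith("/"):
            path += "/"
        return urlunsplit((scheme, netloc, path, query, frag))

    assert canonicalize("http://example.com/CAPITALS") == "http://www.example.com/capitals/"

If a requested URL differs from its canonicalize() output, 301 to the canonical form; that's the whole policy.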

Indexing & Crawling

1) Check for Penalties

None of our volunteers have any immediately noticeable penalties, so we can just move on. This is a two-second check that you must do before trying to nitpick at other issues.

How did I do it? Google search queries for the exact homepage URL and the brand name. If the site doesn't show up, you'll have to investigate further.

2) Canonical, noindex, follow, nofollow, robots.txt

I always do this so I understand how clued-up the developers are SEO-wise, and to gain more insight into the site. You wouldn't check for these tags in detail unless you had just cause (e.g. a page that should be ranking isn't).

I'm going to combine this section as it requires much more than just a quick look, especially on bigger sites. First and foremost, check robots.txt and look through some of the blocked directories; try to determine why they are being blocked and which bots they are blocked from. Next, get Screaming Frog in the mix, as its internal crawl report will automatically check each URL for meta data (noindex, header-level nofollow & follow) and give you the canonical URL if there happens to be one.

If you're spot checking a site, the first thing you should do is understand what tags are in use and why they're using them.

Take Webrevolve, for instance: they've chosen to NOINDEX,FOLLOW all of their blog author pages.

  • http://www.webrevolve.com/author/tom/ 
  • http://www.webrevolve.com/author/paul/

This is a guess, but I think these pages don't provide much value and are generally not worth seeing in search results. If these were valuable, traffic-driving pages, I would suggest they remove the NOINDEX, but in this case I believe they've made the right choice.

They also implement self-serving canonical tags (yes, I just made that term up): each page has a canonical tag that points to itself. I generally have no problem with this practice, as it usually makes things easier for developers.

Example: http://www.webrevolve.com/our-work/websites/ecommerce/
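If you're spot-checking a single URL, the same check is easy to script. A minimal sketch (requests and beautifulsoup4 assumed installed; the URL is the example above):

    # Print the robots meta tag and canonical URL for a single page.
    import requests
    from bs4 import BeautifulSoup

    url = "http://www.webrevolve.com/our-work/websites/ecommerce/"
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    robots = soup.find("meta", attrs={"name": "robots"})
    canonical = soup.find("link", attrs={"rel": "canonical"})
    print("robots:", robots["content"] if robots else "(none)")
    print("canonical:", canonical["href"] if canonical else "(none)")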

3) Number of pages vs. number of pages indexed by Google

What we really want to know here is how many pages Google has indexed. There are two ways of finding out. With access, submit a sitemap in Google Webmaster Tools and you'll get stats back on how many URLs are actually in the index.

Or you can do it without access, though it's much less efficient. This is how I would check...

  1. Run a Screaming Frog Crawl (make sure you obey robots.txt)
  2. Do a site: query
  3. Get the *almost never accurate* results number and compare it to the total pages in the crawl

If the numbers aren't close, as with CVCSports (206 pages vs. 469 in the index), you probably want to look into it further.

   

I can tell you right now that CVCSports has 206 pages (not counting those blocked by robots.txt). Just by doing this quickly I can tell there's something funny going on and I need to look deeper.

Just to cut to the chase: CVCSports has multiple copies of the domain on subdomains, which is causing this.

Fix: It varies. You could have complicated problems, or it might be as easy as using canonical tags, noindex, or 301 redirects. Don't be tempted to block the unwanted pages with robots.txt: that will not remove pages from the index, it will only prevent them from being crawled.
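If you want the crawl-side number without eyeballing Screaming Frog, export the internal HTML report and count the pages that are actually indexable. A minimal sketch; the filename and column names here are assumptions, so adjust them to match your export:

    # Count indexable HTML pages in a Screaming Frog CSV export,
    # to compare against the site: result count.
    import csv

    count = 0
    with open("internal_html.csv", newline="") as f:
        for row in csv.DictReader(f):
            status = row.get("Status Code", "")
            robots = row.get("Meta Robots 1", "")
            if status == "200" and "noindex" not in robots.lower():
                count += 1
    print(count, "crawlable, indexable pages")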

Duplicate Content & On Page SEO

Google's Panda update was definitely a game changer, and it caused massive losses for some sites. One of the easiest ways of avoiding at least part of Panda's destructive path is to avoid all duplicate content on your site.

1) Parameter based duplication

URL parameters like search= or keyword= often cause duplication unintentionally. Here are some examples:

  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/mortgage-lenders-rejecting-more-applications.html
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/mortgage-lenders-rejecting-more-applications.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-repair-news/credit-report-news/california-ruling-sets-off-credit-fraud-concerns.html
  • http://www.lexingtonlaw.com/credit-repair-news/credit-report-news/california-ruling-sets-off-credit-fraud-concerns.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/one-third-dont-save-for-christmas.html
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/one-third-dont-save-for-christmas.html?select_state=1&linkid=selectstate
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/financial-issues-driving-many-families-to-double-triple-up.html
  • http://www.lexingtonlaw.com/credit-repair-news/economic-and-credit-trends/financial-issues-driving-many-families-to-double-triple-up.html?select_state=1&linkid=selectstate

Fix: Again, it varies. If I were giving general advice I would say use clean links in the first place; depending on the complexity of the site, you might consider 301s, canonical tags or even NOINDEX. Either way, just get rid of them!

How did I find it? Screaming Frog > Internal Crawl > Hash column

Basically, Screaming Frog creates a unique hexadecimal hash of each page's source code. If two URLs have matching hash values, you have duplicate source code (exact dupe content). Once you have your crawl ready, use Excel to filter it out (complete instructions here).
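If you'd rather check a handful of suspect URLs by hand, the same idea is a few lines of Python: hash each page's source and group matching digests. The URLs below are placeholders, and requests is assumed installed.

    # Hash each page's raw source; matching digests = exact duplicate content.
    import hashlib
    from collections import defaultdict
    import requests

    urls = [
        "http://www.example.com/page.html",
        "http://www.example.com/page.html?select_state=1",
    ]

    groups = defaultdict(list)
    for url in urls:
        digest = hashlib.md5(requests.get(url).content).hexdigest()
        groups[digest].append(url)

    for digest, dupes in groups.items():
        if len(dupes) > 1:
            print("Exact duplicates", digest, dupes)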

2) Duplicate Text content

Having the same text on multiple pages shouldn't be a crime, but post-Panda it's better to avoid it completely. I hate to disappoint here, but there's no exact science to finding duplicate text content.

Sorry CVCSports, you're up again ;)

http://www.copyscape.com/?q=http%3A%2F%2Fwww.cvcsports.com%2F

Don't worry, we've already addressed your issues above; just use 301 redirects to get rid of these copies.

Fix: Write unique content as much as possible. Or be cheap and stick it in an image; that works too.

How did I find it? I used http://www.copyscape.com, but you can also copy & paste text into Google search

3) Duplication caused by pagination

Page 1, Page 2, Page 3... you get the picture. Over time, sites can accumulate thousands, if not millions, of duplicate pages because of those nifty page links. I swear I've seen a site with 300 paginated versions of one product page.

Our examples:

  • http://cvcsports.com/blog?page=1
  • http://cvcsports.com/blog?page=2

Are they being indexed? Yes.

Another example?

  • http://www.lexingtonlaw.com/blog/page/23
  • http://www.lexingtonlaw.com/blog/page/22

Are they being indexed? Yes.

Fix: General advice is to use the NOINDEX, FOLLOW directive. (This tells Google not to add the page to the index, but to keep crawling through it.) An alternative might be to use the canonical tag, but it all depends on why the pagination exists. For example, if you had a story that was separated across 3 pages, you would definitely want them all indexed. However, these example pages are pretty thin and *could* be considered low quality by Google.

How did I find it? Screaming Frog > Internal links > Check for pagination parameters 

Open up the pages and you'll quickly determine whether they are auto-generated, thin pages. Once you know the pagination parameter or the structure of the URL, you can check Google's index like so: site:example.com inurl:page=
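And if you want to confirm whether the paginated pages already carry a robots directive before recommending one, a quick spot check (URLs from the examples above; requests and beautifulsoup4 assumed installed):

    # Report the robots meta tag (if any) on each paginated URL.
    import requests
    from bs4 import BeautifulSoup

    pages = [
        "http://cvcsports.com/blog?page=1",
        "http://cvcsports.com/blog?page=2",
    ]

    for url in pages:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        tag = soup.find("meta", attrs={"name": "robots"})
        print(url, "->", tag["content"] if tag else "(no robots meta tag)")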


Time's up! There's so much more I wish I could do, but I was strict about the 1-hour time limit. A big thank you to the brave volunteers who put their sites forward for this post. There was one site that just didn't make the cut, mainly because they've done a great job technically, and, um, I couldn't find any technical faults.

Now it's time for the community to take some shots at me! 

  • How did I do?
  • What could I have done better? 
  • Any super awesome tools I forgot?
  • Any additional tips for the volunteer sites?

Thanks for reading. You can reach me on Twitter @dsottimano if you want to chat and share your secrets ;)



Seth's Blog : Who is your customer?

Who is your customer?

Rule one: You can build a business on the foundation of great customer service.

Rule two: The only way to do great customer service is to treat different customers differently.

The question: Who is your customer?

It's not obvious.

Zappos is a classic customer service company, and their customer is the person who buys the shoes.

Nike, on the other hand, doesn't care very much at all about the people who buy the shoes, or even the retailers. They care about the athletes (often famous) who wear the shoes, sometimes for money. They name buildings after these athletes, court them, erect statues...

Columbia Records has no idea who buys their music and never has. On the other hand, they understand that their customer is the musician, and they have an entire department devoted to keeping that 'customer' happy. (Their other customer was the program director at the radio station, but we know where that's going...)

Many manufacturers have retailers as their customer. If Wal-Mart is happy, they're happy.

Apple had just one customer. He passed away last year.

And some companies and politicians choose the media as their customer.

If you can only build one statue, who is it going to be a statue of?

 


Sunday, February 5, 2012

Mish's Global Economic Trend Analysis



Country-Specific Blog Censorship by Google; Twitter Employs Censorship as Well; Echo Comments Not Working on Redirects

Posted: 05 Feb 2012 02:53 PM PST

Blog Redirects

Today I learned that my blog is being redirected to another URL in some countries. This is a new "feature" in Blogger that Google began rolling out a few weeks back.

Instead of exposing a single Blogger domain to the world and then censoring it to meet the requirements of local governments, Google decided to mirror content onto country-specific domains, then redirect users from foreign countries to the mirror associated with their country. If that country decides to censor something, the removal will somehow be noted on the page so the reader knows they're seeing a filtered view.

Readers can also try surfing to the original blog URL by appending /ncr (No Country Redirect) after the main name, such as

http://globaleconomicanalysis.blogspot.com/ncr

The above approach assumes a country doesn't filter on that pattern and block the request. For example, my blog is blocked in China, so appending /ncr is unlikely to accomplish anything.

Google's Explanation

Why does my blog redirect to a country-specific URL?
Q: Why is this happening?
A: Migrating to localized domains will allow us to continue promoting free expression and responsible publishing while providing greater flexibility in complying with valid removal requests pursuant to local law. By utilizing ccTLDs, content removals can be managed on a per country basis, which will limit their impact to the smallest number of readers. Content removed due to a specific country's law will only be removed from the relevant ccTLD.

Q: How will this change affect my blog?
A: Blog owners should not see any visible differences to their blog other than the URL redirecting to a ccTLD. URLs of custom domains will be unaffected.

Q: Will this affect search engine optimization on my blog?
A: After this change, crawlers will find Blogspot content on many different domains. Hosting duplicate content on different domains can affect search results, but we are making every effort to minimize any negative consequences of hosting Blogspot content on multiple domains.

The majority of content hosted on different domains will be unaffected by content removals, and therefore identical. For all such content, we will specify the blogspot.com version as the canonical version using rel=canonical. This will let crawlers know that although the URLs are different, the content is the same. When a post or blog in a country is affected by a content removal, the canonical URL will be set to that country's ccTLD instead of the .com version. This will ensure that we aren't marking different content with the same canonical tag.

Echo Comments Not Working on Redirects

I was unaware this was happening until today when readers in New Zealand and Australia informed me that comments were no longer working.

Sites With Lost Functionality So Far


The "key" within Echo's database that associates comments to a blog entry is the full blog URL (site name + post permanent URL).

Filtering off the language code alone is insufficient because, for some reason, Google changed the suffix for New Zealand from ".com" to ".co".

I will get an email in to Google to see if they can implement a scheme that only adds a suffix. Then I still need to get Echo to do something, or alternatively write a JavaScript to strip off the language code.

Anyone using Echo with Blogger is going to have these same issues.
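The normalization itself is simple string work. Here is the logic sketched in Python (on a live Blogger page the fix would have to run as client-side JavaScript, but the rule is the same; the post path below is a made-up example): map any Blogspot ccTLD back to the .com hostname so the key Echo looks up matches the original post URL.

    # Strip a Blogger country redirect: .blogspot.<cc> back to .blogspot.com,
    # so the Echo comment key (the full post URL) matches the original.
    import re

    def strip_country_redirect(url):
        return re.sub(r"\.blogspot\.[a-z.]+/", ".blogspot.com/", url, count=1)

    assert (strip_country_redirect("http://globaleconomicanalysis.blogspot.co.nz/2012/02/example-post.html")
            == "http://globaleconomicanalysis.blogspot.com/2012/02/example-post.html")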

Twitter Employs "Filtering" as Well

Tech Week Europe reports Google To Censor Blogger Sites On Country-By-Country Basis

Google follows Twitter's lead and will use country-code top-level domains to censor content as required.

Google has revealed that its Blogger service will now be able to block content on a country-by-country basis, just one week after Twitter announced that it would be implementing a similar filtering strategy.

"Migrating to localised domains will allow us to continue promoting free expression and responsible publishing while providing greater flexibility in complying with valid removal requests pursuant to local law," wrote Google on a help page. "By utilizing ccTLDs, content removals can be managed on a per country basis, which will limit their impact to the smallest number of readers. Content removed due to a specific country's law will only be removed from the relevant ccTLD."

Twitter's decision to introduce this selective censorship attracted criticism from users last week, with some suggesting that the site was effectively helping oppressive regimes squash freedom of speech. Google's implementation has been lower key, and whilst critics will argue the same points, the company has emphasised that the measure will prevent blanket censorship of content whilst keeping it in line with the law.

The BBC reports that Google will initially roll out the changes to Australia, New Zealand and India, but plans to apply the measures globally.

More Tech Week "Tweet" Articles

Twitter Can Now Censor Tweets In Individual Countries

Twitter said that it must begin censoring tweets if the company is going to continue its international expansion. Twitter has been blocked by a number of governments, including China and the former Egyptian regime, after it was used to ignite anti-government protests.

Twitter Faces Protest Over Censorship Move

Judge Rules Twitter Must Hand Over Account Data to Wikileaks Prosecutors

Big Brother is watching. However, it's too late to worry about 1984. The worry now is whether the next stop is the year 2525, where "Everything you think, do and say is in the pill you took today".

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List


Postponed Till "Tomorrow"; Juncker Issues Ultimatum: "Comply or Default"

Posted: 05 Feb 2012 12:02 PM PST

It's Groundhog Day once again, as Greek crisis talks for a debt deal are pushed to Monday

Coalition backers held a five-hour meeting late Sunday with Prime Minister Lucas Papademos to hammer out a deal with debt inspectors representing eurozone countries and the International Monetary Fund — but again failed to reach an agreement.

Leaders of the parties supporting Greece's coalition government say crisis talks on the massive new debt deal will continue Monday.

Juncker Issues Ultimatum: "Comply or Default"

The theater of the absurd continues for yet another day with Juncker's ultimatum: comply or default.

Jean-Claude Juncker, the head of the Eurogroup, warned Greece in an interview with a German magazine that it will either comply with its creditors' requirements or default, as it should not expect any additional support from its peers.

Earlier, the head of the ruling coalition's third partner, Giorgos Karatzaferis, stated in Thessaloniki that he would not tolerate any ultimatums.

"We need to examine whether the creditors' demands are in favor of growth for the sake of the Greek people, otherwise we will not get the support package. I am not going to sign up to that,» said the leader of Popular Orthodox Rally (LAOS).
This is beyond ridiculous. The EU and IMF need to stand up, announce that the "too late" deal is OFF, and then work with Greece to plan a return to the drachma.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Click Here To Scroll Thru My Recent Post List