marți, 31 martie 2015

Damn Cool Pics



Meet Johnny Depp's Daughter Lily Rose Melody Depp

Posted: 31 Mar 2015 01:19 PM PDT

Johnny Depp's daughter Lily Rose Melody Depp is quickly building a fanbase of her own. She's only a teenager, but she's already starting to get everyone's attention. Could there be another star in the making in the Depp family?

via facebook

The 30 Most Terrifying Movie Villains To Ever Appear On Screen

Posted: 31 Mar 2015 12:18 PM PDT

Because it's always fun to root for the bad guy.

Darth Vader – Star Wars series



Gary Oldman as Norman Stansfield – Leon: The Professional (1994)



Sam Rockwell as Wild Bill Wharton – The Green Mile (1999)



Jack Nicholson as The Joker – Batman (1989)



Joe Pesci as Tommy DeVito – Goodfellas (1990)



Michael Myers – Halloween series



Daniel Day-Lewis as Bill the Butcher – Gangs of New York (2002)



Alan Rickman as Hans Gruber – Die Hard (1988)



Michael Douglas as Gordon Gekko – Wall Street (1987)



Robert Patrick as T-1000 – Terminator 2: Judgment Day (1991)



Ji-tae Yu as Woo-jin Lee – Oldboy (2003)



Jason Voorhees – Friday the 13th series



Kevin Spacey as Verbal Kint – The Usual Suspects (1995)



Hugo Weaving as Agent Smith – The Matrix trilogy



Al Pacino as Tony Montana – Scarface (1983)



Dolph Lundgren as Ivan Drago – Rocky IV (1985)



Louise Fletcher as Nurse Ratched – One Flew Over the Cuckoo's Nest (1975)



Freddy Krueger – A Nightmare on Elm Street series



Lionel Barrymore as Mr. Potter – It's a Wonderful Life (1946)



Ralph Fiennes as Amon Goeth – Schindler's List (1993)



Dennis Hopper as Frank Booth – Blue Velvet (1986)



Kathy Bates as Annie Wilkes – Misery (1990)



Malcolm McDowell as Alex DeLarge – A Clockwork Orange (1971)



Kevin Spacey as John Doe – Se7en (1995)



Jack Nicholson as Jack Torrance – The Shining (1980)



Christoph Waltz as Col. Hans Landa – Inglourious Basterds (2009)



Javier Bardem as Anton Chigurh – No Country for Old Men (2007)



Anthony Perkins as Norman Bates – Psycho (1960)



Heath Ledger as The Joker – The Dark Knight (2008)



Anthony Hopkins as Dr. Hannibal Lecter – The Silence of the Lambs (1991)

Seth's Blog : Different kinds of magic

Different kinds of magic

A stunning video about what school can mean.

A beautiful book about art and meaning.

A different kind of management tome.

Rethinking your career. Or this way.

And worth thinking hard about: two brilliant social histories by David Graeber. Debt and Bureaucracy.

 

       


Spam Score: Moz's New Metric to Measure Penalization Risk - Moz Blog



Posted on: Monday 30 March 2015 — 13:00

Posted by randfish

Today, I'm very excited to announce that Moz's Spam Score, an R&D project we've worked on for nearly a year, is finally going live. In this post, you can learn more about how we're calculating spam score, what it means, and how you can potentially use it in your SEO work.

How does Spam Score work?

Over the last year, our data science team, led by Dr. Matt Peters, examined a large number of potential factors that predicted whether a site might be penalized or banned by Google. We found strong correlations with 17 unique factors we call "spam flags," and turned them into a score.

Almost every subdomain in Mozscape (our web index) now has a Spam Score attached to it, and this score is viewable inside Open Site Explorer (and soon, the MozBar and other tools). The score is simple; it just records the quantity of spam flags the subdomain triggers. Our correlations showed that no particular flag was more likely than others to mean a domain was penalized/banned in Google, but firing many flags had a very strong correlation (you can see the math below).

Spam Score currently operates only on the subdomain level—we don't have it for pages or root domains. It's been my experience, and the experience of many other SEOs in the field, that a great deal of link spam is tied to the subdomain level. There are plenty of exceptions—manipulative links can and do live on plenty of high-quality sites—but in our testing, we found that subdomain-level Spam Score was the best solution we could create at web scale. It does a solid job with the most obvious, nastiest spam, and a decent job highlighting risk in other areas, too.

How to access Spam Score

Right now, you can find Spam Score inside Open Site Explorer, both in the top metrics (just below domain/page authority) and in its own tab labeled "Spam Analysis." Spam Score is only available for Pro subscribers right now, though in the future, we may make the score in the metrics section available to everyone (if you're not a subscriber, you can check it out with a free trial).

The current Spam Analysis page includes a list of subdomains or pages linking to your site. You can toggle the target to look at all links to a given subdomain on your site, given pages, or the entire root domain. You can further toggle source tier to look at the Spam Score for incoming linking pages or subdomains (but in the case of pages, we're still showing the Spam Score for the subdomain on which that page is hosted).

You can click on any Spam Score row and see the details about which flags were triggered. We'll bring you to a page like this:

Back on the original Spam Analysis page, at the very bottom of the rows, you'll find an option to export a disavow file, which is compatible with Google Webmaster Tools. You can choose to filter the file to contain only those sites with a given spam flag count or higher:

Disavow exports usually take less than 3 hours to finish. We can send you an email when it's ready, too.

WARNING: Please do not export this file and simply upload it to Google! You can really, really hurt your site's ranking and there may be no way to recover. Instead, carefully sort through the links therein and make sure you really do want to disavow what's in there. You can easily remove/edit the file to take out links you feel are not spam. When Moz's Cyrus Shepard disavowed every link to his own site, it took more than a year for his rankings to return!

We've actually made the file not-wholly-ready for upload to Google in order to be sure folks aren't too cavalier with this particular step. You'll need to open it up and make some edits (specifically to lines at the top of the file) in order to ready it for Webmaster Tools.
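
For illustration, here's one way you might tidy such an export before upload. Google's disavow format is plain text with one entry per line: "#" comments, "domain:" entries, or full URLs. The header lines being stripped here are assumed, not documented in this post, and the helper itself is a sketch rather than anything Moz ships:

```python
# Sketch: tidy a disavow export before upload. Assumes a plain-text file;
# the non-disavow header lines Moz prepends are hypothetical here.
def prepare_disavow(lines, keep=None):
    """Keep only valid disavow entries: '#' comments, 'domain:' lines, or URLs.

    `keep` is an optional set of entries you've reviewed and decided NOT
    to disavow; those are dropped from the output.
    """
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):  # comments are allowed in disavow files
            out.append(line)
        elif line.startswith("domain:") or line.startswith("http"):
            if keep is None or line not in keep:
                out.append(line)
        # anything else (e.g. export headers) is dropped
    return out
```

The point is the review step in the middle: build the `keep` set by hand before anything goes near Google.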

In the near future, we hope to have Spam Score in the Mozbar as well, which might look like this: 

Sweet, right? :-)

Potential use cases for Spam Analysis

This list probably isn't exhaustive, but these are a few of the ways we've been playing around with the data:

  1. Checking for spammy links to your own site: Almost every site has at least a few bad links pointing to it, but until now it's been hard to know how many potentially harmful links you might have. Run a quick spam analysis and see if there's enough there to cause concern.
  2. Evaluating potential links: This is a big one where we think Spam Score can be helpful. It's not going to catch every potentially bad link, and you should certainly still use your brain for evaluation too, but as you're scanning a list of link opportunities or surfing to various sites, having the ability to see if they fire a lot of flags is a great warning sign.
  3. Link cleanup: Link cleanup projects can be messy, involved, precarious, and massively tedious. Spam Score might not catch everything, but sorting links by it can be hugely helpful in identifying potentially nasty stuff, and filtering out the links that are probably clean.
  4. Disavow Files: Again, because Spam Score won't perfectly catch everything, you will likely need to do some additional work here (especially if the site you're working on has done some link buying on more generally trustworthy domains), but it can save you a heap of time evaluating and listing the worst and most obvious junk.

Over time, we're also excited about using Spam Score to help improve the PA and DA calculations (it's not currently in there), as well as adding it to other tools and data sources. We'd love your feedback and insight about where you'd most want to see Spam Score get involved.

Details about Spam Score's calculation

This section comes courtesy of Moz's head of data science, Dr. Matt Peters, who created the metric and deserves (at least in my humble opinion) a big round of applause. - Rand

Definition of "spam"

Before diving into the details of the individual spam flags and their calculation, it's important to first describe our data gathering process and "spam" definition.

For our purposes, we followed Google's definition of spam and gathered labels for a large number of sites as follows.

  • First, we randomly selected a large number of subdomains from the Mozscape index stratified by mozRank.
  • Then we crawled the subdomains and threw out any that didn't return a "200 OK" (redirects, errors, etc).
  • Finally, we collected the top 10 de-personalized, geo-agnostic Google-US search results using the full subdomain name as the keyword and checked whether any of those results matched the original keyword. If they did not, we called the subdomain "spam," otherwise we called it "ham."

We performed the most recent data collection in November 2014 (after the Penguin 3.0 update) for about 500,000 subdomains.
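
The labeling rule from the last bullet can be sketched as follows. The top-10 result fetching is assumed to happen elsewhere; only the matching logic is shown, and the hostname comparison stands in for Moz's exact matching rule:

```python
from urllib.parse import urlparse

# Sketch of the "spam"/"ham" labeling step described above.
def label_subdomain(subdomain, top10_urls):
    """Label 'ham' if any top-10 Google result is on the subdomain itself."""
    for url in top10_urls:
        if urlparse(url).hostname == subdomain:
            return "ham"   # the subdomain ranks for its own name
    return "spam"          # penalized/banned sites don't rank for their own name
```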

Relationship between number of flags and spam

The overall Spam Score is currently an aggregate of 17 different "flags." You can think of each flag as a potential "warning sign" that signals that a site may be spammy. The overall likelihood of spam increases as a site accumulates more and more flags, so that the total number of flags is a strong predictor of spam. Accordingly, the flags are designed to be used together—no single flag, or even a few flags, is cause for concern (and indeed most sites will trigger at least a few flags).

The following table shows the relationship between the number of flags and percent of sites with those flags that we found Google had penalized or banned:

ABOVE: The overall probability of spam vs. the number of spam flags. Data collected in Nov. 2014 for approximately 500K subdomains. The table also highlights the three overall danger levels: low/green (<10%), moderate/yellow (10-50%), and high/red (>50%).

The overall spam percent, averaged across a large number of sites, increases in lockstep with the number of flags; however, there are outliers in every category. For example, there are a small number of sites with very few flags that are tagged as spam by Google and, conversely, a small number of sites with many flags that are not spam.
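
As a toy illustration of the banding in the table, using the quoted cut-offs (the exact thresholds Moz applies in-product may differ):

```python
# Map an estimated spam probability to the danger levels quoted above.
# Thresholds are the ones stated in the table caption, assumed inclusive
# at the boundaries shown.
def danger_level(spam_probability):
    if spam_probability < 0.10:
        return "low"       # green
    elif spam_probability <= 0.50:
        return "moderate"  # yellow
    return "high"          # red
```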

Spam flag details

The individual spam flags capture a wide range of spam signals: link profiles, anchor text, on-page signals, and properties of the domain name. At a high level, the process to determine the spam flags for each subdomain is:

  • Collect link metrics from Mozscape (mozRank, mozTrust, number of linking domains, etc).
  • Collect anchor text metrics from Mozscape (top anchor text phrases sorted by number of links)
  • Collect the top five pages by Page Authority on the subdomain from Mozscape
  • Crawl the top five pages plus the home page and process to extract on page signals
  • Provide the output for Mozscape to include in the next index release cycle

Since the spam flags are incorporated into the Mozscape index, fresh data is released with each new index. Right now, we crawl and process the spam flags for each subdomain every two to three months, although this may change in the future.

Link flags

The following table lists the link and anchor text related flags with the odds ratio for each flag. For each flag, we can compute two percentages: the percent of sites with that flag that are penalized by Google, and the percent of sites with that flag that were not penalized. The odds ratio is the ratio of these percentages and gives the increase in likelihood that a site is spam if it has the flag. For example, the first row says that a site with this flag is 12.4 times more likely to be spam than one without the flag.

ABOVE: Description and odds ratio of link and anchor text related spam flags. In addition to a description, it lists the odds ratio for each flag, which gives the overall increase in spam likelihood if the flag is present.
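
In code, the arithmetic described above looks like this. The counts are hypothetical; since both percentages share the same denominator (sites with the flag), the ratio reduces to a ratio of counts:

```python
# Sketch of the odds-ratio calculation described above, from counts
# in a labeled sample. Example numbers are made up for illustration.
def flag_odds_ratio(penalized_with_flag, clean_with_flag):
    """Ratio of the penalized percentage to the clean percentage
    among sites carrying a given flag."""
    total = penalized_with_flag + clean_with_flag
    pct_penalized = penalized_with_flag / total
    pct_clean = clean_with_flag / total
    return pct_penalized / pct_clean
```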

Working down the table, the flags are:

  • Low mozTrust to mozRank ratio: Sites with low mozTrust compared to mozRank are likely to be spam.
  • Large site with few links: Large sites with many pages tend to also have many links and large sites without a corresponding large number of links are likely to be spam.
  • Site link diversity is low: If a large percentage of links to a site are from a few domains it is likely to be spam.
  • Ratio of followed to nofollowed subdomains/domains (two separate flags): Sites with a large number of followed links relative to nofollowed are likely to be spam.
  • Small proportion of branded links (anchor text): Organically occurring links tend to contain a disproportionate amount of branded keywords. If a site does not have a lot of branded anchor text, it's a signal the links are not organic.

On-page flags

Similar to the link flags, the following table lists the on page and domain name related flags:

ABOVE: Description and odds ratio of on-page and domain name related spam flags. In addition to a description, it lists the odds ratio for each flag, which gives the overall increase in spam likelihood if the flag is present.

  • Thin content: If a site has a relatively small ratio of content to navigation chrome, it's likely to be spam.
  • Site mark-up is abnormally small: Non-spam sites tend to invest in rich user experiences with CSS, JavaScript and extensive mark-up. Accordingly, a large ratio of text to mark-up is a spam signal.
  • Large number of external links: A site with a large number of external links may look spammy.
  • Low number of internal links: Real sites tend to link heavily to themselves via internal navigation and a relative lack of internal links is a spam signal.
  • Anchor text-heavy page: Sites with a lot of anchor text are more likely to be spam than those with more content and fewer links.
  • External links in navigation: Spam sites may hide external links in the sidebar or footer.
  • No contact info: Real sites prominently display their social and other contact information.
  • Low number of pages found: A site with only one or a few pages is more likely to be spam than one with many pages.
  • TLD correlated with spam domains: Certain TLDs are more spammy than others (e.g. pw).
  • Domain name length: A long subdomain name like "bycheapviagra.freeshipping.onlinepharmacy.com" may indicate keyword stuffing.
  • Domain name contains numerals: Domain names with numerals may be automatically generated and therefore spam.

If you'd like some more details on the technical aspects of the spam score, check out the video of Matt's 2012 MozCon talk about Algorithmic Spam Detection or the slides (many of the details have evolved, but the overall ideas are the same):

We'd love your feedback

As with all metrics, Spam Score won't be perfect. We'd love to hear your feedback and ideas for improving the score, as well as what you'd like to see from its in-product application in the future. Feel free to leave comments on this post, or to email Matt (matt at moz dot com) and me (rand at moz dot com) privately with any suggestions.

Good luck cleaning up and preventing link spam!


Not a Pro Subscriber? No problem!



Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!


So What Makes A Good PPC Account Structure?


Posted: 25 Mar 2015 01:30 AM PDT

I’m often asked, ‘What makes a good PPC account structure?’ Well, in truth, it varies from account to account. There is no single champion account structure as such. Instead, what I want to share with you is the backbone of an account structure which should typically bode well for any account. Following the simple steps below cuts out a large amount of time otherwise wasted on overcomplicated accounts.

Alpha/Beta PPC Account Structure

Firstly, if you have a good website structure, use this as a template (N.B. If you don’t have a good website structure/navigation then perhaps look at that first; there is nothing more frustrating to a potential customer than a badly navigable website!). There are many reasons for doing this; the two main ones are that it aligns campaigns/ad groups to landing pages, and that it makes reporting easier.

The main purpose of the Alpha/Beta PPC account structure is to maximise quality score and conversion rate. As quality scores are also calculated at campaign level, it makes sense for the campaign structure to reflect and take advantage of this. The price of a click depends on a metric known as 'quality score', which denotes how relevant Google thinks your combination of ad copy, keyword and landing page is to a searcher’s search query. Increasing quality score reduces the price of clicks (and therefore the price of conversions) or improves ad rank (that is, the position the ad appears on the search results page). As we know, maximising quality score translates to higher ad positions and lower costs per click. By separating the keywords with the highest quality scores into separate 'cloned' Beta ad groups, the campaign's quality score is further maximised and a higher ROI will be observed.
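
The mechanics being described are, roughly, the classic AdWords second-price auction. A simplified sketch (Google's current calculation includes further signals, so treat this as illustrative arithmetic only):

```python
# Simplified sketch of the classic AdWords auction arithmetic implied
# above. Real ad rank includes additional factors beyond bid and QS.
def ad_rank(max_cpc, quality_score):
    """Position in the auction: bid weighted by quality score."""
    return max_cpc * quality_score

def actual_cpc(ad_rank_below, quality_score):
    """Pay just enough to beat the next ad down, plus $0.01."""
    return ad_rank_below / quality_score + 0.01
```

Note what the second function implies: doubling your quality score roughly halves the CPC needed to hold the same position, which is exactly why the Beta ad groups chase quality score so aggressively.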

 

In A Nutshell

Let's call the bare-bone ad groups our 'Alpha' ad groups. To implement this strategy, we would build out 'Beta' ad groups, containing the most successful keywords alongside the most successful ads and landing pages. The ad group should be a complete clone of the original Alpha ad group but it should only contain the winning criterion. With this likely single-keyword ad group, we can tailor the ad creative to be hyper relevant to the search term and user, and additionally can test new hyper relevant variations of these best performing ads.

Read my blog post on Considering Ad Copy As A Group Of Swappable Elements for how to variant-test ad copy.

The exact match keyword should then be added as a negative to the corresponding Alpha ad group, to ensure that the exact search query triggers the ad in the Beta ad group only.

Here’s A Visual Of What This Structure Looks Like

ppc account structure

This Alpha/Beta PPC account structure allows us to focus on the most valuable keywords, the ones that provide the most ROI. It allows us to carefully monitor and manage the budgets so that spend goes where it will give the biggest ROI. Additionally, it provides a sound platform on which we can carefully create and test hyper-relevant ad creatives, to further optimise the campaigns and overall account.

Use A Practical Naming Convention

Naming your campaigns and ad groups in a practical way from the start will mean ease of navigation down the line. I see so many accounts where the campaigns are ‘campaign 1′ or ‘Sarah campaign’… they do not mean anything to anyone (OK, well maybe to Sarah) but you get my point, they do not instantly identify the theme. If you want to assign campaigns to certain staff then consider using labels.

Consider a naming convention that includes the targeted network, theme of the ad groups and the keyword match type:

Search – Theme 1 – BMM

Search – Theme 1 – Exact

Search – Theme 2 – BMM

Search – Theme 2 – Exact

Display – Theme 1 – Image

Display – Theme 1 – Text

Re-targeting – Theme 1 – RLSA

Re-targeting – Theme 2 – RLSA

The size of the account can get a bit out of hand with the addition of further campaigns, ad groups, intent etc., especially when you factor in all the different types of campaigns you may want to be running:

  • Shopping
  • Search Text Ads
  • Display (Topics, Interests, Placements, etc)
  • Remarketing
  • Dynamic Remarketing
  • Remarketing Lists for Search Ads (RLSA)
  • Dynamic Search Ads (DSA)
  • Remarketing for Dynamic Search Ads (RDSA)

The trick here is with naming conventions and to label everything. The more clearly defined and separated the above campaign types are, the easier the account will be to navigate and manage.

So, we’ve looked at the structure, but I want to just jump back slightly here and touch on what the difference is between a keyword and a search query, and what match types are available along with their uses.

Keywords vs. Search Queries

We hear a lot of talk around keywords and search queries, but these are sometimes either misunderstood or misrepresented. So let’s clear this up:

  • Keywords are words or phrases that a marketer buys on Google
  • Search Queries are words or phrases that a user (potential customer) types into the Google search bar

Match Type

Next I want to run through what match-types are available, and what they entail:

 

Broad Match

This match type hands the matching of keywords to queries over to Google.
e.g. Formal Shoes also matches to Formal Footwear, Evening Footwear, and Men's Dress Wingtips, etc.
This form of match type gives Google almost total discretion.

 

Broad Match Modified

This match type allows us to specify words that must appear in a search, while capturing misspellings and different orderings of the words.
e.g. +Formal +Shoes also matches Formal Shoes, Formal Evening Shoes, Formal Black Dress Shoes.
BMM match type prevents synonyms.

 

Phrase Match

This match type tightens control over how aggressively Google matches keywords to queries.
e.g. "Formal Shoes" also matches to Black Formal Shoes, Formal Shoes for Men, Formal Shoes for Women.
Phrase match type requires the complete phrase to appear in the query.

 

Exact Match

This match type restricts Google to showing our ads for the exact keyword only.

e.g. [Formal Shoes] matches to Formal Shoes only.
Exact match type will only show the exact phrase.

 

Negative Match

This match type allows us to stop Google matching our keywords to certain queries
e.g. -Formal -Shoes matches to Men's Trainers, Men's Flip-Flops, Ladies High-Heels
Negative match type excludes any word or phrase.

 

I recommend using Broad Match Modified (BMM) to discover new and potentially profitable search queries, Exact Match to isolate the top-performing queries and bid on them accordingly, and Negative Match to exclude the unprofitable queries. Generally I do not recommend the use of Broad Match or Phrase Match: Broad Match gives the least control and so potentially allows ads to appear on irrelevant searches (especially if you do not have a tightly optimised negative keyword list applied); Phrase Match would capture fewer searches than BMM as it requires precise spelling and so is less useful for exploration.
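
The syntaxes above can be summarised in a small helper. This is purely illustrative (in practice negatives are added per ad group or via shared lists, not generated like this):

```python
# Render one keyword in each AdWords match-type syntax shown above.
def match_variants(keyword):
    words = keyword.split()
    return {
        "broad": keyword,                              # Formal Shoes
        "bmm": " ".join("+" + w for w in words),       # +Formal +Shoes
        "phrase": '"' + keyword + '"',                 # "Formal Shoes"
        "exact": "[" + keyword + "]",                  # [Formal Shoes]
        "negative": " ".join("-" + w for w in words),  # -Formal -Shoes
    }
```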

I have seen many an account where the PPC manager has used the same keyword on Broad, Phrase and Exact match, and I think to myself, you’ve wasted so much of your time by doing that. Also, I’ve seen a lot of ad groups full of long-tail keywords (5+ words each) with little to no data attributed. Why? Because the manager had jumped the gun and second-guessed what the customer would be searching for, rather than running BMM ad groups and analysing the search query reports (the queries that potential customers are actually using).

By using Broad Match Modified and Exact match types only, we are able to data-mine for new and relevant queries that we want to bid on, and identify those that we don’t. Using Phrase and Broad match types only complicates things.

Creating & Analysing Alpha/Beta Campaigns

So to recap, below is a step-by-step process, which if followed correctly, will produce your A/B structure:

  1. Create the Alpha ad group with all keywords on BMM.
  2. Review Search Query Reports on the Alpha ad groups.
  3. Identify the performing and ill-performing queries via Search Query Reports.
  4. Create the Beta ad group and move the performing queries into Single Keyword ad groups.
  5. Ensure all Beta ad group queries are on Exact Match.
  6. Create targeted ad creatives and landing pages for the Single Keyword ad groups.
  7. Add all ill-performing queries to the corresponding Alpha ad groups as Exact Match Negatives.
  8. Add all Beta queries to the corresponding Alpha ad groups as Exact Match Negatives – This prevents Google from matching an Alpha keyword to a performing Beta query.
  9. A schedule is then created to re-run through steps 2-8 for continuous optimisation of the account.
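
Steps 7-8 amount to a simple set operation; a sketch (the keyword lists are illustrative):

```python
# Sketch of steps 7-8 above: every query promoted to a Beta ad group,
# plus every ill-performing query, becomes an exact-match negative on
# the corresponding Alpha ad group.
def alpha_exact_negatives(beta_keywords, ill_performing):
    """Return the exact-match negatives to add to the Alpha ad group."""
    negatives = sorted(set(beta_keywords) | set(ill_performing))
    return ["[" + kw + "]" for kw in negatives]
```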

 

By implementing the Alpha/Beta structure we can ensure that the Exact Match keyword is triggered by the user’s search query, resulting in a tailored and specifically related ad creative being entered into the auction and shown on the SERP (the search engine results page, the page the user is directed to after clicking search). This ensures that the user’s search query is matched to the keyword and ad creative most likely to provide higher engagement and potential conversion.

 

  1. This should result in a higher Quality Score and Ad Rank, and thus a lower CPC than that of the same keyword using a different match type.
  2. This structure ensures that highly relevant keywords, queries and ad creative are married up.
  3. This structure should also lower bounce rate and increase user engagement (Pages/Visits).

 

So What Makes A Perfect PPC Account Structure?

I am not saying the above is the answer to all our structural prayers; instead, it gives you a bare-bones idea of how to restructure or build out your account to maximise efficiency in more than one way. Getting the structure right means you have more time to concentrate on optimisation and variant testing. Take a look at your account. How does it look? From campaign level, can you clearly identify what each campaign entails? No? Well, if you can’t then no-one else can. Have you segmented your campaigns/ad groups? If not, ask yourself why not.

I hope you have found this useful. I’d love to hear your feedback, and your experiences. What structure works well for you?

The post So What Makes A Good PPC Account Structure? appeared first on White.net.