Saturday, 13 June 2015

SEO Myths Busted – One week on

Posted: 11 Jun 2015 04:40 AM PDT

Myths. They're everywhere, and they range from those that crop up in everyday life (cracking your knuckles gives you arthritis) to the downright odd (Tom Jones insured his chest hair for $7 million). Try as we may, we can't escape these falsehoods, and unfortunately it's no different in the world of digital marketing.

myth
1. A widely held but false belief or idea:
"keyword research is all about choosing big volume keywords"

Last week we launched our new resource, 'SEO Myths Busted by the Experts & You!', aimed at creating a space where we can collate all the industry myths we discover, and attempt to debunk them once and for all.


So, one week on, I wanted to explain the reasons behind the piece, as well as its hopefully exciting future!

The inspiration

We had the idea of creating content around SEO myths a while ago, and the inspiration for the piece originated from the website uxmyths.com, which presents 34 myths, along with explanations for why each statement is in fact just a myth. The site does a great job of presenting these myths simply to the user, and I personally found them to be a great learning resource. This got me thinking.

As the saying goes, "practice what you preach", so instead of creating just another blog post or putting together another ebook, we decided to approach the task of myth busting in a whole new format. Rather than relying on our own knowledge alone, we got in contact with a number of industry experts and asked them for their own SEO myths. Who better to ask than our experienced peers?

Inspiration for the design came from the posters designed and created for the UX Myths project. Their main poster presented each myth in a different sized box, and each myth also had its own poster with explanatory copy. We took the core poster as our starting point and applied it to our design, working in our own interactive features along the way: the more popular a myth, the bigger its box, plus each contributor's Twitter handle, image, and total share count.

Of course, the overarching aim of this whole project is to share this curated expert knowledge with the rest of the industry, in the hope that we can start putting these myths to bed, or at least educate those just starting out in SEO.

As for the future, we don't want to give away too much, but we're certain you'll be seeing more of our SEO myths project. We'll be releasing more myths on the page soon, hopefully on a fairly regular basis, along with the ability to download each myth as an awesome wallpaper for your computer, or even your office wall, so keep your eyes peeled!

Fancy seeing your own myth on our board?


If you feel you have a myth that you want to share with the community, please don’t hesitate to drop me an email on bobby [at] white.net. As you may have seen from the piece, we’re looking for roughly 120 words to help put your myth to bed. We can’t guarantee that every myth will make it up to our board, but if we like it we’ll get in touch with you.

Come and join the conversation over on Twitter with @whitedotnet, or myself, @bobbyjmcgill. Alternatively, leave a comment in the box below; we'd love to hear what you think of our SEO myths project, or about a myth that grinds your gears!

The post SEO Myths Busted – One week on appeared first on White.net.

7 Ways You Might Have Botched Your Rel=Canonical Implementation

Posted: 28 May 2015 08:13 AM PDT

I confess: when I’m carrying out a technical audit on a website I basically act like I’m running a police investigation. I know that there will be mysteries to solve, and it’s my job to find the clues that will lead me in the right direction.

And with the right tools in hand, I’ll usually sniff something out when I get to the strange occurrences of rel=canonical.

What is rel=canonical?

In a nutshell, rel=canonical is a way to clean up duplicate URLs on a website. I know what you're thinking: it would be much easier if duplicate content just didn't exist at all. This would make my job all sunshine, rainbows and flowers rather than the sweat and tears it generally involves, but this is the real world and duplicate content is sometimes unavoidable.

This is especially true when it comes to ecommerce sites which pose some of the most complex mysteries for SEO forces all across the nation. The way that many of these sites present information or products to users means that some pretty wacky things can happen to the URL – all designed to provide the most relevant results to users through the use of parameters.

Guides for beginners

Moz has a great guide on canonicalisation which I’d urge you to read if you’re new to the concept, as the purpose of this blog post is to guide you with proper implementation rather than a full explanation of what it is.

Alternatively you could shimmy on over to the blog of Matt Cutts; he wrote a post in 2009 called “Learn about the Canonical Link Element in 5 minutes” which is just as relevant today as it was back then.

Make sure to revisit this post when you’re familiar with the topic as you’ll find it much more valuable then!

Rel=canonical: The good, the bad and the ugly

If I’ve captured the attention of your inner geek, sit back as I share some of my recommendations for rel=canonical best practice. The reality is that I’ve seen lots of cases recently where issues have gone undetected for far too long, and I want you to be able to check that you’re not being taken for a ride by your own website.

The source of duplicate content

The first thing you’re going to need to do is identify the culprits that are causing duplicate content. My preferred sidekick for this job is the ever-dependable Screaming Frog SEO Spider.

Once you have performed a crawl, you should be able to use the overview report on the right-hand side of the tool to get a quick insight into where issues might be occurring. Is it showing results for duplicate page titles, URIs or meta descriptions? If so, these may point to duplicate pages that share the same content and metadata. Use this as a starting point for deeper investigations by manually visiting each version and checking out its source code.

For more quick hints, scroll down to the 'Directives' folder to see what the tool has picked up in terms of canonicalisation. It's from the main 'Directives' tab in the top navigation, though, that you can really start drilling down into individual issues. At this point you may start to spot strange occurrences that require a bit of manual investigation. Or a lot.
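
If the manual source-code checks get tedious, they can be scripted. Here's a minimal sketch in Python (assuming the requests and beautifulsoup4 packages are installed; the URL list is hypothetical) that prints the canonical target each suspected duplicate declares:

import requests
from bs4 import BeautifulSoup

# Hypothetical list of suspected duplicate URLs from the crawl
urls = [
    "http://www.example.com/red-dresses",
    "http://www.example.com/red-dresses?sessionid=123",
    "https://www.example.com/red-dresses",
]

for url in urls:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    # Each duplicate should declare exactly one preferred URL
    targets = [link.get("href") for link in soup.find_all("link", rel="canonical")]
    print(url, "->", targets or "no canonical found")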

But then it does help to know what you’re actually looking for. Here are the common causes for why multiple URLs can load the same content:

  1. A product has dynamic URLs as a result of user search preference or user session
  2. Your blog automatically saves multiple URLs when you publish the same post in multiple sections
  3. Your server is configured to serve the same content for the www / non-www subdomain or the http/s protocol

Example 1 – a product has dynamic URLs as a result of user search preference or user session

[Image: canonicalisation of URLs]

Example 2 – the blog automatically saves multiple URLs when you publish the same post in multiple sections

[Image: blog post category canonical issues]

Example 3 – the server is configured to serve the same content for the www / non-www subdomain or the http/s protocol

[Image: HTTP protocol causing duplicate content]

Overcoming duplicate content issues

When these issues occur, it’s important to choose a preferred URL for indexation by search engines. This is where the rel=canonical link comes in.

As a side note, there are other ways you can do this, including using 301 redirects, indicating how search engines should handle dynamic parameters, etc., but this deserves a post of its own, something I'll come back to in the near future.

The Google Webmaster Central blog has a great summary of rel=canonical:

“Including a rel=canonical link in your webpage is a strong hint to search engines about your preferred version to index among duplicate pages on the web. It's supported by several search engines, including Yahoo!, Bing, and Google. The rel=canonical link consolidates indexing properties from the duplicates, like their inbound links, as well as specifies which URL you'd like displayed in search results.”

The whole purpose of indicating a preferred URL with the rel=canonical link element is so that search engines are more likely to show users your chosen URL structure as opposed to any duplicates. It is important to remember that rel=canonical elements can be ignored, especially when there are conflicting instructions, making accurate implementation all the more important.

Implementation

Check out this example from the Google Webmaster Central blog; it sums up correct implementation pretty well:

Suppose you want https://blog.example.com/dresses/green-dresses-are-awesome/ to be the preferred URL, even though a variety of URLs can access this content. You can indicate this to search engines as follows:

Mark up the canonical page and any other variants with a rel="canonical" link element.

Add a <link> element with the attribute rel="canonical" to the <head> section of these pages:

<link rel="canonical" href="https://blog.example.com/dresses/green-dresses-are-awesome/" />
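
As a footnote for non-HTML content such as PDF downloads, where there is no <head> to put the element in, search engines also accept the same hint as an HTTP response header (the file path here is just for illustration):

Link: <https://blog.example.com/downloads/dress-guide.pdf>; rel="canonical"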

Have you got rel=canonical implementation right?

Whilst the concept of rel=canonical is easy enough to understand, it’s the implementation that can cause strange occurrences that require investigation (and probably a headache or two along the way).

There are some common mistakes that webmasters and SEOs make when it comes to rel=canonical, although there are some excellent blog posts and guides out there already which may prove immensely helpful for you. Start off with 5 common mistakes with rel=canonical from the Webmaster Central Blog, and then read through Yoast’s rel=canonical: what it is and how (not) to use it.

To help you avoid the common mistakes, I’ve put together a helpful list of 7 things you should remember when implementing rel=canonical. You can refer back to this blog post, or grab the PDF version here: PDF of rel=canonical guide

7 Things To Remember When Implementing Rel=Canonical

rel=canonical recommendations

Why are these considerations important?

  • Specify only one rel=canonical link per URL

When more than one is specified, all rel=canonicals will be ignored! This can occur with some SEO plugins that insert a default rel=canonical link, so be sure to understand what plugins you have installed and how they behave.

  • Use an absolute URL

It's possible to insert a relative URL into the <link> tag, but this almost certainly won't do what you want it to, because a relative URL is resolved "relative" to the current page. So spell out the lot, including http:// (or https://) and the domain.

  • Don’t canonicalise a paginated archive to page one

You risk some content not being indexed if you specify page one as the preference. Put it this way: are the other pages duplicates of page one? It's highly unlikely.

  • Add rel=canonical link to the <head> of the HTML document

Rel=canonical designations in the <body> are disregarded, so it’s best to include the tag as early as possible in the <head>.

  • Watch out for self-referencing conflicts

If your site can load on both http and https versions, check that you don’t have an automatically generated self-referencing rel=canonical. This could mean that both https://www.example.com/red-dresses and http://www.example.com/red-dresses are denoted as the preference.

  • The URL specified in rel=canonical should work, so no 404s!

It’s fairly obvious that you want the search engines to index URLs that provide actual value and a positive experience to users…

  • Use trailing slash/non trailing slash preference consistently

Pick one preference and use it consistently across the site so the same URL is never referenced both ways; make sure your chosen form appears in all internal links and within the rel=canonical link element.

  • Bonus: Twitter and Facebook honour your rel=canonical links

This is something I learned from the Yoast blog post referenced above. Its author puts it quite eloquently, so I've included the passage here for your reference:

“If you share a URL on Facebook that has a canonical pointing elsewhere, Facebook will share the details from the canonical URL. In fact, if you add a like button on a page that has a canonical pointing elsewhere, it will show the like count for the canonical URL, not for the current URL. Twitter works in the same way.”
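
To make several of these checks repeatable, here's a minimal audit sketch in Python (assuming the requests and beautifulsoup4 packages; the URL is hypothetical). It flags multiple canonicals, relative canonical URLs, and canonical targets that 404:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def check_canonical(url):
    """Run a few of the rel=canonical checks from the list above."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    links = soup.head.find_all("link", rel="canonical") if soup.head else []

    if not links:
        return "no rel=canonical in <head>; any tag in <body> is disregarded"
    if len(links) > 1:
        return "multiple rel=canonical links: search engines ignore them all"

    target = links[0].get("href", "")
    if not urlparse(target).scheme:
        return "relative canonical URL; use an absolute URL instead"
    if requests.get(target).status_code == 404:
        return "canonical target returns a 404"
    return "canonical looks OK: " + target

# Hypothetical page to audit
print(check_canonical("http://www.example.com/red-dresses"))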

Now it’s your turn to get on the case and investigate whether your own site has any of these issues with rel=canonical. I’d love to hear if you uncover any hidden culprits, and I’m also happy to put on my investigator hat to answer any questions you may have on the topic too – please leave me a comment below or get in touch through Twitter.

Hopefully we can then utter a collective “case closed”, and move our focus to other technical issues instead!

The post 7 Ways You Might Have Botched Your Rel=Canonical Implementation appeared first on White.net.

Seth's Blog : Every marketing challenge revolves around these questions

WHO are you trying to reach? (If the answer is 'everyone', start over.)

HOW will they become aware of what you have to offer?

WHAT story are you telling/living/spreading?

DOES that story resonate with the worldview these people already have? (What do they believe? What do they want?)

WHERE is the fear that prevents action?

WHEN do you expect people to take action? If the answer is 'now', what keeps people from saying, 'later'? It's safer that way.

WHY? What will these people tell their friends?

       


Friday, 12 June 2015

Mish's Global Economic Trend Analysis

More Obamacare Sticker Shock: HMO Rates Up 20%, EPO Up 18%, 12% Overall; Death Spiral for Insurers?

Posted: 12 Jun 2015 02:19 PM PDT

If you did not have insurance before Obamacare, but do now, or if you are heavily subsidized, you may consider Obamacare a blessing.

If you are not in those select groups, then you are highly likely to be paying more for insurance now than before. And it's going to get worse. Some plan types really take a premium hit.

Premiums Jump 12% On Average

Health Pocket reports Obamacare Insurers Propose 12% Higher Premiums for 2016.
Rates Up 20% for Health Maintenance Organizations and 18% for Exclusive Provider Organizations.



[Mish Comment: You can save a lot of money on a bronze plan, but you had better not get seriously sick or have a major accident. As anyone with any bit of common sense expected, more people need more services, so rates are going up rapidly.]

On average the proposed premiums for 2016 Obamacare plans were 12% higher than the 2015 premiums. Silver and gold plans had the greatest average rate increases of 14% and 16% respectively, while bronze rates increased 9% and platinum rates increased 6%.

[Chart: Average Rates]

Plan Types

  • Bronze pays 60% of covered costs 
  • Silver pays 70% of covered costs
  • Gold  pays 80% of covered costs
  • Platinum pays 90% of covered costs

Network Types

  • Health maintenance organizations (HMOs)
  • Exclusive Provider Organizations (EPOs)
  • Point of Service (POS) plans
  • Preferred Provider Organizations (PPOs)

PPOs and POS plans cover out-of-network care, while HMOs and EPOs do not. EPO and PPO plans do not require referrals from primary care doctors to see specialists, but HMO and POS plans require referrals.

Sticker Shock Coming Up

67% have silver plans. The rates above are for a single 40-year-old nonsmoker. Those 67% will see premium hikes ranging from 11% to 20% depending on the network type.

Bronze plans constitute another 22% of all plans. Those in Bronze network plans other than PPOs will see rates go up 15% to 20%.

Bronze PPO plans go up the least, only 4%, but PPO premiums started higher than the others.

There are many additional charts in the above link.

Plan B? States Have None

Later this month the Supreme Court will rule on King v. Burwell. The issue is whether tax credit subsidies purchased through federal government healthcare exchanges are legal.

The SCOTUS Blog assessed the odds in plain English on March 4, in Will concern for states' rights win out in subsidies battle? The key point involves a death spiral.

Death Spiral for Insurers?
What may eventually prove to be the key line of questioning may have been kicked off by Justice Sonia Sotomayor, who expressed concern about the consequences of a ruling for the challengers.  If a state's residents don't receive subsidies, she told Carvin, it will lead to a "death spiral": because a large group of people in those states will no longer be required to buy health insurance, but insurers will still be required to offer insurance to everyone, only sick people will buy health insurance. And that will cause everyone's insurance costs to rise, leading more people to drop out of the insurance market. States will then feel like they have no choice other than to establish their own exchanges to ward off the "death spiral" – a scenario that is so coercive that it violates the Constitution.

Perhaps critically for the government, Justice Anthony Kennedy – who is often regarded as a strong supporter of states' rights – also expressed concern about the possibly coercive effect of a ruling for Carvin's clients. There is, he told Carvin, "something very powerful to the point" that if the challengers prevail, the states have to choose between the death spiral and creating an exchange. "There's a serious constitutional problem," he concluded. (Carvin tried to downplay this concern by telling Kennedy that the government had not raised this issue, but Kennedy quickly retorted that "we sometimes think of things the government doesn't argue.")
One More Vote Needed

The blog points out there are four solid votes for the government: Justices Elena Kagan, Sonia Sotomayor, Stephen Breyer, and Ruth Bader Ginsburg. There are two likely challenger votes: Justices Antonin Scalia and Samuel Alito, so team Obama needs one more vote out of three (Chief Justice John Roberts, Justice Anthony Kennedy, Justice Clarence Thomas).

The above death spiral scenario seems to make it likely, yet the blog concludes it is by no means a given.
Between the near-complete radio silence from the Chief Justice and the sometimes conflicting questions from Justice Kennedy, the case is a tough call. Overall, the government can probably be cautiously optimistic (but only cautiously), because on net Kennedy's concerns about the potentially coercive effect of the challengers' rule seemed to outweigh his qualms about the government's reading of the statute. And even if Kennedy does not swing his support to the government in the end, the Chief Justice might remain in play, as he was during the 2012 battle over the individual mandate. But we probably won't know until the Court issues its decision later this year; when it does, we'll be back to explain it all in Plain English.
No Plan B - 6 Million at Risk on Subsidies

The Washington Post reports States have 'No B plan' if Supreme Court Scraps Health-Care Subsidies.
Any day now, the Supreme Court will announce its decision in King v. Burwell, the latest high-stakes fight over the Affordable Care Act. If the government loses, more than 6 million residents of the 34 states that declined to establish their own health-care exchanges could lose subsidies that help them purchase insurance.

In principle, those 34 states could restore subsidies by creating their own insurance exchanges. Political leaders will certainly come under intense pressure to do so, although time is short to get an exchange up and running for 2016.

To investigate these questions, we undertook, with financial support from the Commonwealth Fund, a study of five states that could lose tax credits: Florida, Michigan, New Hampshire, North Carolina and Utah.

What we found was both striking and worrisome. Dozens of interviews conducted by our research team with political leaders, agency officials and advocacy organizations in those states indicate that the states are almost completely underprepared for the Supreme Court's decision in King. As North Carolina Gov. Pat McCrory (R) said in March, "There's no B plan."

Policymakers also expressed frustration with the Obama administration's silence about its plans. Most states expect the administration to make it easier for them to transition to state exchanges. But they are reluctant to make concrete plans when they don't know what the federal government expects of them.

In the states that have failed to lay the groundwork, it will probably be impossible to set up an exchange in time for 2016.

Compounding the timing challenge, only one legislature of the five states we studied (Michigan), and eight of the 34 affected nationally, will be in session after the court's ruling. Although a special session appears likely in Utah, creating an exchange may not be on the agenda. It's far from clear that special sessions will be called elsewhere.

But the bottom line is grim. The states aren't prepared for King, and any debates over whether to create state exchanges will be turbulent and difficult. In the meantime, millions of people stand to lose their health insurance.
I suspect Obamacare proponents will scrape up a vote, but if not, states will act to avoid the alleged "death spiral".

We find out soon.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Final Forced Exchange Rate: 175 Quadrillion Zimbabwean Dollars (175,000 Trillion) = $5.00

Posted: 12 Jun 2015 11:21 AM PDT

For bank account holders in Zimbabwe, the government will do a forced exchange of Zimbabwean dollars to US dollars at a rate of 175 quadrillion Zimbabwean dollars per $5.00.
The Zimbabwean dollar will be taken from circulation, formalizing a multi-currency system introduced in 2009 to help stem inflation and stabilize the economy.

The central bank will offer $5 for every 175 quadrillion, or 175,000 trillion, Zimbabwean dollars, Governor John Mangudya said in an e-mailed statement from the capital, Harare. While it marks the official dropping of the currency, transactions in the southern African nation have been made using mainly the U.S. dollar and rand of neighboring South Africa for six years.

The economy plunged into crisis after the government started a campaign in 2000 of violent seizures of white-owned commercial farms to distribute to black subsistence growers, slashing exports of tobacco and other crops. Inflation surged to 500 billion percent and the economy shrank during a near decade-long recession that ended in 2009. Under policies implemented by a coalition government, the economy began expanding and the recognition of foreign currencies as legal tender helped tame inflation. Consumer prices fell an annual 2.7 percent in April, according to the statistics agency.

Zimbabweans can convert their local dollars between June 15 and Sept. 30 at commercial banks, building societies and postal agencies, Mangudya said.

Savers with Zimbabwe dollars in their bank accounts will get a flat $5 for anything up to 175 quadrillion Zimbabwean dollars. They can convert any cash they have "on a no questions asked basis" at a rate of $1 to 250 trillion Zimbabwe dollars for notes printed before 2009, Mangudya said.
[Image: 100 trillion Zimbabwean dollar note]



At the rate of 175,000 trillion per $5.00, the 100 trillion Zimbabwean note (the highest denomination bill) is worth about 0.286 US cents, slightly more than a quarter of a penny.
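
To double-check that arithmetic, here's a quick sketch in Python:

# Official conversion for bank balances: 175 quadrillion Z$ buys $5.00
account_rate_zwd = 175e15            # 175 quadrillion = 175,000 trillion
note = 100e12                        # face value of the 100 trillion Z$ note

usd_value = note / account_rate_zwd * 5.00
print(round(usd_value * 100, 3), "US cents")   # ~0.286 cents

# Cash printed before 2009 converts at 250 trillion Z$ per $1,
# so the same note handed over as cash is worth rather more
cash_rate_zwd = 250e12
print(note / cash_rate_zwd, "dollars")         # 0.4 dollars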

I suspect the ink and paper cost more than the note is worth.

That is hyperinflation.

Nut cases have been predicting similar results for the US for years.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Why We Can't Do Keyword Research Like It's 2010 - Whiteboard Friday - Moz Blog

Posted by randfish

Keyword Research is a very different field than it was just five years ago, and if we don't keep up with the times we might end up doing more harm than good. From the research itself to the selection and targeting process, in today's Whiteboard Friday Rand explains what has changed and what we all need to do to conduct effective keyword research today.

[Image: a still of this week's whiteboard]

What do we need to change to keep up with the changing world of keyword research?

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat a little bit about keyword research, why it's changed from the last five, six years and what we need to do differently now that things have changed. So I want to talk about changing up not just the research but also the selection and targeting process.

There are three big areas that I'll cover here. There's lots more in-depth stuff, but I think we should start with these three.

1) The AdWords keyword tool hides data!

This is where almost all of us in the SEO world start and oftentimes end with our keyword research. We go to the AdWords Keyword Planner, what used to be the external Keyword Tool and now lives inside AdWords. We go inside that tool, and we look at the volume that's reported, and we sort of record that as, well, it's not good, but it's the best we're going to do.

However, I think there are a few things to consider here. First off, that tool is hiding data. What I mean by that is not that they're not telling the truth, but they're not telling the whole truth. They're not telling nothing but the truth, because those rounded off numbers that you always see, you know that those are inaccurate. Anytime you've bought keywords, you've seen that the impression count never matches the count that you see in the AdWords tool. It's not usually massively off, but it's often off by a good degree, and the only thing it's great for is telling relative volume from one term to another.

But because AdWords hides data essentially by saying like, "Hey, you're going to type in . . ." Let's say I'm going to type in "college tuition," and Google knows that a lot of people search for how to reduce college tuition, but that doesn't come up in the suggestions because it's not a commercial term, or they don't think that an advertiser who bids on that is going to do particularly well and so they don't show it in there. I'm giving an example. They might indeed show that one.

But because that data is hidden, we need to go deeper. We need to go beyond and look at things like Google Suggest and related searches, which are down at the bottom. We need to start conducting customer interviews and staff interviews, which hopefully has always been part of your brainstorming process but really needs to be now. Then you can apply that to AdWords. You can apply that to suggest and related.

The beautiful thing is once you gather these terms from places like forums, communities and discussion boards, seeing what terms and phrases people are using, you can collect all this stuff up, plug it back into AdWords, and now they will tell you how much volume they've got. So you take that "how to lower college tuition" term, you plug it into AdWords, and they will show you a number, a non-zero number. They were just hiding it in the suggestions because they thought, "Hey, you probably don't want to bid on that. That won't bring you a good ROI." So you've got to be careful with that, especially when it comes to SEO kinds of keyword research.
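
One way to mine Google Suggest at scale is its autocomplete endpoint. It's unofficial and undocumented, so treat this Python sketch as illustrative only:

import json
import requests

def google_suggest(seed):
    """Fetch autocomplete suggestions for a seed term (unofficial endpoint)."""
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": seed},
    )
    # Response shape: [seed, [suggestion, suggestion, ...]]
    return json.loads(resp.text)[1]

for phrase in google_suggest("college tuition"):
    print(phrase)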

2) Building separate pages for each term or phrase doesn't make sense

It used to be the case that we built separate pages for every single term and phrase that was in there, because we wanted to have the maximum keyword targeting that we could. So it didn't matter to us that college scholarship and university scholarships were essentially people looking for exactly the same thing, just using different terminology. We would make one page for one and one page for the other. That's not the case anymore.

Today, we need to group by the same searcher intent. If two searchers are searching for two different terms or phrases but both of them have exactly the same intent, they want the same information, they're looking for the same answers, their query is going to be resolved by the same content, we want one page to serve those, and that's changed up a little bit of how we've done keyword research and how we do selection and targeting as well.

3) Build your keyword consideration and prioritization spreadsheet with the right metrics

Everybody's got an Excel version of this, because I think there's just no awesome tool out there that everyone loves yet that kind of solves this problem for us, and Excel is very, very flexible. So we go into Excel, we put in our keyword, the volume, and then a lot of times we almost stop there. We'd record keyword volume, maybe value to the business, and then we'd prioritize.

What are all these new columns you're showing me, Rand? Well, here's what I'm seeing sophisticated, modern SEOs in the more advanced agencies and among the more advanced in-house practitioners add to the keyword process.

Difficulty

A lot of folks have done this, but difficulty helps us say, "Hey, this has a lot of volume, but it's going to be tremendously hard to rank."

The difficulty score that Moz uses and attempts to calculate is a weighted average of the top 10 domain authorities. It also uses page authority, so it's kind of a weighted stack out of the two. If you're seeing very, very challenging pages, very challenging domains to get in there, it's going to be super hard to rank against them. The difficulty is high. For all of these ones it's going to be high because college and university terms are just incredibly lucrative.

That difficulty can help bias you against chasing after terms and phrases you are very unlikely to rank for, at least early on. If you feel like, "Hey, I already have a powerful domain. I can rank for everything I want. I am the thousand pound gorilla in my space," great. Go after the difficulty of your choice, but this helps prioritize.
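
Moz doesn't publish the exact formula, but a rough Python sketch of a difficulty-style score as a weighted blend of the top 10 results' domain and page authorities might look like this (the 60/40 weighting and the sample SERP are assumptions for illustration):

def difficulty(results, domain_weight=0.6):
    """Rough keyword-difficulty sketch: a weighted average of domain
    and page authority across the current top 10 results."""
    top10 = results[:10]
    avg_da = sum(r["domain_authority"] for r in top10) / len(top10)
    avg_pa = sum(r["page_authority"] for r in top10) / len(top10)
    return domain_weight * avg_da + (1 - domain_weight) * avg_pa

# Hypothetical SERP: authority scores out of 100
serp = [{"domain_authority": 85, "page_authority": 70} for _ in range(10)]
print(difficulty(serp))   # 79.0 -> a very hard keyword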

Opportunity

This is actually very rarely used, but I think sophisticated marketers are using it extremely intelligently. Essentially what they're saying is, "Hey, if you look at a set of search results, sometimes there are two or three ads at the top instead of just the ones on the sidebar, and that's biasing some of the click-through rate curve." Sometimes there's an instant answer or a Knowledge Graph or a news box or images or video, or all these kinds of things that search results can be marked up with, that are not just the classic 10 web results. Unfortunately, if you're building a spreadsheet like this and treating every single search result like it's just 10 blue links, well you're going to lose out. You're missing the potential opportunity and the opportunity cost that comes with ads at the top or all of these kinds of features that will bias the click-through rate curve.

So what I've seen some really smart marketers do is essentially build some kind of a framework to say, "Hey, you know what? When we see that there's a top ad and an instant answer, we're saying the opportunity if I was ranking number 1 is not 10 out of 10. I don't expect to get whatever the average traffic for the number 1 position is. I expect to get something considerably less than that. Maybe something around 60% of that, because of this instant answer and these top ads." So I'm going to mark this opportunity as a 6 out of 10.

There are 2 top ads here, so I'm giving this a 7 out of 10. This has two top ads and then it has a news block below the first position. So again, I'm going to reduce that click-through rate. I think that's going down to a 6 out of 10.

You can get more or less scientific and specific with this. Click-through rate curves are imperfect by nature because we truly can't measure exactly how those things change. However, I think smart marketers can make some good assumptions from general click-through rate data (there are several resources out there on that) to build a model like this and then include it in their keyword research.
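
As one concrete version of that framework, here's a small Python sketch that discounts the 10-out-of-10 baseline for each SERP feature present. The specific discounts are assumptions, not measured values:

# Illustrative discounts for SERP features that pull clicks away
# from the classic organic results; the numbers are assumptions.
FEATURE_DISCOUNTS = {
    "top_ads": 0.2,          # block of ads above the organic results
    "instant_answer": 0.2,
    "knowledge_graph": 0.15,
    "news_box": 0.1,
    "image_or_video_block": 0.1,
}

def opportunity(features):
    """Score out of 10 for how much organic click-through is left."""
    share = 1.0
    for feature in features:
        share -= FEATURE_DISCOUNTS.get(feature, 0)
    return round(max(share, 0) * 10)

print(opportunity(["top_ads", "instant_answer"]))   # 6 out of 10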

This does mean that you have to run a query for every keyword you're thinking about, but you should be doing that anyway. You want to get a good look at who's ranking in those search results and what kind of content they're building. If you're running a keyword difficulty tool, you are already getting something like that.

Business value

This is a classic one. Business value is essentially saying, "What's it worth to us if visitors come through with this search term?" You can get that from bidding through AdWords. That's the most sort of scientific, mathematically sound way to get it. Then, of course, you can also get it through your own intuition. It's better to start with your intuition than nothing if you don't already have AdWords data or you haven't started bidding, and then you can refine your sort of estimate over time as you see search visitors visit the pages that are ranking, as you potentially buy those ads, and those kinds of things.

You can get more sophisticated around this. I think a 10 point scale is just fine. You could also use a one, two, or three there, that's also fine.

Requirements or Options

Then I don't exactly know what to call this column. I can't remember who showed me the spreadsheet that had it in there. I think they called it Optional Data or Additional SERPs Data, but I'm going to call it Requirements or Options. Requirements because this is essentially saying, "Hey, if I want to rank in these search results, am I seeing that the top two or three are all video? Oh, they're all video. They're all coming from YouTube. If I want to be in there, I've got to be video."

Or something like, "Hey, I'm seeing that most of the top results have been produced or updated in the last six months. Google appears to be biasing to very fresh information here." So, for example, if I were searching for "university scholarships Cambridge 2015," well, guess what? Google probably wants to bias to show results that are either from the official page on Cambridge's website or articles from this year about getting into that university and the scholarships that are available or offered. I saw that in two of these searches; both the college and university scholarships queries had a significant number of SERPs where a freshness bump appeared to be required. You can spot this a lot because the date will be shown ahead of the description, and the date will be very fresh, sometime in the last six months or a year.

Prioritization

Then finally I can build my prioritization. So based on all the data I had here, I essentially said, "Hey, you know what? These are not 1 and 2. This is actually 1A and 1B, because these are the same concepts. I'm going to build a single page to target both of those keyword phrases." I think that makes good sense. Someone who is looking for college scholarships, university scholarships, same intent.

I am giving it a slight prioritization, 1A versus 1B, and the reason I do this is because I always have one keyword phrase that I'm leaning on a little more heavily. Because Google isn't perfect around this, the search results will be a little different. I want to bias to one versus the other. In this case, since I'm targeting university more than college, my title tag might say something like "college and university scholarships" so that "university" and "scholarships" sit nicely together, near the front of the title, that kind of thing. Then 1B, 2, 3.
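
If you'd rather compute that prioritization than eyeball it, here's a minimal Python sketch. The weights and the sample numbers are made up for illustration; swap in your own metrics and scoring formula:

# Keyword rows: volume is monthly searches; other scores are out of 10.
# All sample values are invented for illustration.
keywords = [
    {"kw": "university scholarships", "volume": 5400,
     "difficulty": 8, "opportunity": 6, "value": 9},
    {"kw": "college scholarships", "volume": 6600,
     "difficulty": 8, "opportunity": 7, "value": 9},
    {"kw": "how to lower college tuition", "volume": 320,
     "difficulty": 4, "opportunity": 9, "value": 6},
]

def priority(row):
    """Illustrative score: reward volume, opportunity and business
    value; penalise difficulty."""
    return (row["volume"] * row["opportunity"] * row["value"]
            / (row["difficulty"] + 1))

for row in sorted(keywords, key=priority, reverse=True):
    print(row["kw"], round(priority(row)))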

This is kind of the way that modern SEOs are building a more sophisticated process with better data, more inclusive data that helps them select the right kinds of keywords and prioritize to the right ones. I'm sure you guys have built some awesome stuff. The Moz community is filled with very advanced marketers, probably plenty of you who've done even more than this.

I look forward to hearing from you in the comments. I would love to chat more about this topic, and we'll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Your Daily SEO Fix: Week 4

Posted by Trevor-Klein

This week, we've got the fourth (and second-to-last) installment of our short (< 2-minute) video tutorials that help you all get the most out of Moz's tools. They're each designed to solve a use case that we regularly hear about from Moz community members.

Here's a quick recap of the previous round-ups in case you missed them:

  • Week 1: Reclaim links using Open Site Explorer, build links using Fresh Web Explorer, and find the best time to tweet using Followerwonk.
  • Week 2: Analyze SERPs using new MozBar features, boost your rankings through on-page optimization, check your anchor text using Open Site Explorer, do keyword research with OSE and the keyword difficulty tool, and discover keyword opportunities in Moz Analytics.
  • Week 3: Compare link metrics in Open Site Explorer, find tweet topics with Followerwonk, create custom reports in Moz Analytics, use Spam Score to identify high-risk links, and get link building opportunities delivered to your inbox.

In this installment, we've got five brand new tutorials:

  • How to Use Fresh Web Explorer to Build Links
  • How to Analyze Rank Progress for a Given Keyword
  • How to Use the MozBar to Analyze Your Competitors' Site Markup
  • How to Use the Top Pages Report to Find Content Ideas
  • How to Find On-Site Errors with Crawl Test

Hope you enjoy them!

Fix 1: How to Use Fresh Web Explorer to Build Links

If you have unique data or a particularly excellent resource on your site, that content can be a great link magnet. In this Daily SEO Fix, Felicia shows you how to set up alerts in Fresh Web Explorer to track mentions of relevant keyword phrases, find link opportunities, and build links to your content.


Fix 2: How to Analyze Rank Progress for a Given Keyword

Moz's Rank Tracker tool retrieves search engine rankings for pages and keywords, storing them for easy comparison later. In this fix, James shows you how to use this helpful tool to track keywords, save time, and improve your rankings.


Fix 3: How to Use the MozBar to Analyze Your Competitors' Site Markup

Schema markup helps search engines better identify what your (and your competitors') website pages are all about and as a result can lead to a boost to rankings. In this Daily SEO Fix, Jordan shows you how to use the MozBar to analyze the schema markup of the competition and optimize your own site and pages for rich snippets.


Fix 4: How to Use the Top Pages Report to Find Content Ideas

With Moz's Top Pages report in Open Site Explorer, you can see the pages on your site (and your competitors' sites!) that are top performers. In this fix, Nick shows you how to use the report to analyze your competitors' content marketing efforts and to inform your own.


Fix 5: How to Find On-Site Errors with Crawl Test

Identifying and understanding any potential errors on your site is crucial to the life of any SEO. In this Daily SEO Fix, Sean shows you how to use the Crawl Test tool in Moz Analytics to pull reports and identify any errors on your site.


Looking for more?

We've got more videos in the previous three weeks' round-ups!

Your Daily SEO Fix: Week 1

Your Daily SEO Fix: Week 2

Your Daily SEO Fix: Week 3


Don't have a Pro subscription? No problem. Everything we cover in these Daily SEO Fix videos is available with a free 30-day trial.




Seth's Blog : Overcoming the extraction mindset

For generations, places with significant oil production have developed a different culture than other places. This extraction mindset occurs in environments where profits are taken from a captive resource. It doesn't matter if it's coal, tickets or tuition, the mindset is the same.

It's not about oil, it's about the expectation.

They're not making any more oil, of course, and the race is on to get it all. Get it now, or someone else will take it. Take it all, because there's no reason to leave it there. Make sure others don't take it, because what they take isn't something you can take. And when the reserve is exhausted, move on. To the next field, to the next market. 

Not everyone in any given community has an extraction mindset, but the worldview is: Anything that slows down, impedes or interferes with more extraction is nothing but a challenge to be overcome.

Debt amplifies this urgency. And so some industrial farmers race to dig deeper wells to take the last remaining water because if they don't, the mortgage due on their farm might wipe them out. And so public companies race to maximize their short-term profits (and CEO bonuses), even if it comes at the cost of the long term.

Thirty years ago, I asked the fabled rock promoter Bill Graham a question that I thought was brilliant, but he pwned me in his response. "Bill, given how fast a Bruce Springsteen concert sells out, why don't you charge $100 a seat and keep all the upside?" (In those days, $100 was considered a ridiculous sum for a concert ticket).

"Well, I could do that, but the thing is, I'm here all year round, and my kids only have a limited budget to spend on concerts. If I charged that much for one concert, they wouldn't be able to come to the other shows I book..."

Bill wasn't just spreading the money out over time. He was investing in a community that could develop a habit of music going, a community that would define itself around what he was building.

Joel Salatin is a farmer, but he doesn't seek to extract first; instead, he's building a network, creating a long-term, sustainable culture that feeds itself as it benefits him.

In the words of Kevin Kelly: Feed the network first.

The network, of course, doesn't always want to do what you want it to do, as fast as you want it to happen.

This chasm between the mindset of extracting and the alternative of feeding becomes more urgent as networks (online ones, environmental ones, tribal ones) become ever more powerful.  

The chasm is so deep, people on each side of it have trouble imagining what the other side is thinking. Some people show up in your email box or social network intent on taking what they can get (can I have a guest post? wanna fund my project? made you look...) while others are patiently weaving together a cohort of meaning.

It's expensive and time-consuming to choose a path that doesn't deliver maximum value today. Unless you do the math on what happens tomorrow. Tomorrow, the network is either more productive or less. Tomorrow, the network is either trusting or suspicious. Tomorrow, the network is either healthier or sick.

The promise of our connected economy was that it would reward the good guys, the long-term players, the people who cared enough to contribute. The paradox is that this very same economy has become filled with people who are easily distracted, addicted to shiny objects and too often swayed by the short-term sensation or by short-term profit.

The extraction mindset leads to intelligent short-term decisions. If it costs too much to exploit a resource, move on. The network mindset values the long-term impacts of co-creation.

The network (that would be us) then needs to decide if it will continue to reward short-term thinking in order to enhance extraction, or if we care enough about the long-term that we'll act up in favor of sustainability, raising the costs of short-term (selfish) action so it becomes ever more profitable to focus on the long-term instead.

       
