Friday, 20 December 2013

Building SEO-Focused Pages to Serve Topics & People Rather than Keywords & Rankings - Whiteboard Friday

Posted: 19 Dec 2013 03:15 PM PST

Posted by randfish

With updates like Hummingbird, Google is getting better and better at determining what's relevant to you and what you're looking for. This can actually help our work in SEO, as it means we don't have to focus quite so intently on specific keywords.

In today's Whiteboard Friday, Rand explains how focusing on specific kinds of people and the topics they're interested in can be even more effective in driving valuable traffic than ranking for specific keywords.

For reference, here's a still of this week's whiteboard:

Video Transcription

Howdy, Moz fans and welcome to another edition of "Whiteboard Friday." This week, I want to talk to you a little bit about the classic technique of building SEO pages for keywords and rankings versus the more modern technique of trying to do this with people and topics in mind. So, let me walk you through the classic model and show you why we've needed to evolve.

So, historically, SEO has really been about keyword rankings. It's "I want to rank well for this keyword because that particular keyword sends me traffic that is of high quality. The value of the people visiting my site from that is high." The problem is, this doesn't account for other types of traffic, channels, and sources, right? We're just focused on SEO.

This can be a little bit problematic because it can mean that we ignore things like social and content marketing opportunities and email marketing opportunities. But, okay. Let's stick with it. In order to do this, we do some keyword research. We figure out which terms and phrases are popular, which ones are high and low competition, which ones we expect to drive high-quality traffic.

We create landing pages for each of those terms and phrases, we get links, and we optimize that content so that hopefully it performs well in the search engines. Then we measure the success of this process based on the ranking itself, on the keywords that drive traffic to those pages, and on whether the people who visit from those keywords are high-quality visitors.

And then we decide "Yeah, I'm not ranking so well for this keyword. But gosh, it's sending great traffic. Let me focus more on this one." Or "Oh, I am ranking well for this. But the keyword is not sending me high-quality traffic. So, it doesn't matter that much. I'm going to ignore it because of the problems."

So, a lot of times, creating these landing pages with each particular term and phrase is doing a lot of unnecessary overlapping work, right? Even if you're not doing this sort of hyper, slight modifications of each phrase. "Brown bowling shoes," "red bowling shoes," "blue bowling shoes." Maybe you could just have a bowling shoes page and then have a list of colors to choose from. Okay.

But even still, you might have "bowling shoes," "shoes for going bowling," and "shoes for indoor sports," all of these different kinds of things that could have a considerable amount of overlap. And many different topic areas do this.

The problem with getting links and optimizing these individual pages is that you're only getting a page to rank for one particular term or maybe a couple of different terms, versus a group of keywords in a topic that might all be very well-served by the same content, by the same landing page.

And by the way, because you're doing this, you're not putting the same level of effort, energy, and quality into making this content better and better, right? You're just trying to churn out landing page after landing page.

And then, if you're measuring success based on the traffic that the keyword is sending, this isn't even possible anymore. Because Google has taken away keyword referral data and given us (not provided) instead.

And this is why we're seeing this big shift to this new model, this more modern model, where SEO is really about the broad performance of search traffic across a website, and about the broad performance of the pages receiving search visits. So, this means that I look at a given set of pages, I look at a section of my site, I look at content areas that I'm investing in, and I say "Gosh, the visits that come from Google, that come from Bing, that come from Image Search, whatever they are, these are performing at a high quality, therefore, I want to invest more in SEO." Not necessarily "Oh, look. This keyword sent me this good traffic."

I'm still doing keyword research. I'm still using that same process, right? Where I go and I try to figure out "Okay, how many people are searching for this term? Do I think they're going to be high-quality visitors? And is the competition low enough to where I think my website can compete?"

I'm going to then define groups of terms and phrases that can be well-served by that content. This is very different. Instead of saying "Blue bowling shoes" and "Brown bowling shoes," I'm saying, "I think I can have one great page around bowling shoes, in general, that's going to serve me really well. I'm going to have all different kinds, custom bowling shoes and all these different things."

And maybe some of them deserve their own individual landing pages, but together, this group of keywords can be served by this page. And then these individual ones have their own targeted pages.

From there, I'm going to optimize for two things that are a little bit different from what I've done in the past: keyword targeting and the ability to earn some links, but also the opportunity for amplification.

That amplification can come from links. It could come from email marketing, it could come from social media. It could come from word-of-mouth. But, regardless, this is the new fantastic way to earn those signals that seem to correlate with things ranking well.

Links are certainly one of them. But we don't need the same types of direct anchor text that we used to need. Broad links to a website can now help increase our domain authority, meaning that all of our content ranks well.

Google certainly seems to be getting very good at recognizing the relevancy of particular websites around topic areas. Meaning that if I've done a good job in the past of showing Google that I'm relevant for a particular topic like bowling shoes, then when I put together a custom, graphic-printed, leather bowling shoes page, that page might rank right away, even if I haven't done very much work to specifically earn links to it and get anchor text and those kinds of things, because of the relevancy signals I've built up in the past. And that's what this process does.

And now, I can measure success based on how the search traffic to given landing pages is performing. Let me show you an example of this.

And here, I've got my example. So, I'm focusing beyond bowling shoes. I'm going to go with "Comparing mobile phone plans," right? So, let's say that you're putting together a site and you want to try and help consumers who are looking at different mobile phone plans, figure out which one they should go with, great.

So, "Compare mobile phone plans" is where you're starting. And you're also thinking about 'Well, okay. Let me expand beyond that. I want to get broad performance." And so, I'm trying to get this broad audience to target. Everyone who is interested in this topic. All these consumers.

And so, what are the things that they also might be interested in? I'll do some keyword research and some subject matter research. Maybe I'll talk to some experts, and I'll talk to some consumers. And I'll see that they're looking at different phone providers. They might use synonyms of these different terms. There might be some concept expansion that I go through as I'm doing my keyword research.

Maybe I'm looking for queries that people search for before and after. So, after they make the determination if they like this particular provider, then they go look at phones. Or after they determine they like this phone, they want to see which provider offers that phone. Fine, fair.

So, now, I'm going to do this definition of the groups of keywords that I care about. I have a comparison of providers: Verizon, T-Mobile, Sprint, AT&T. A comparison of phones (the Galaxy, iPhone, Nexus) by price or features. What about people who are really heavy into international calling, or family plans, or who travel a lot? Who need data-heavy stuff or do lots of tethering to their laptops?

So, this type of thing is what's defining the pages that I might build by the searcher's intent. When they search for keywords around these topics, I'm not necessarily sure that I'm going to be able to capture all of the keywords that they might search for and that's okay.

I'm going to take these specific phrases that I do find in my keyword research, and then I'm going to expand out to, "All right, I want to try and have a page that reaches all the people who are looking for stuff like this." And Google's actually really helping you here with search algorithms like Hummingbird, where they're expanding the definition of what keyword relevancy and keyword matching really mean.

So, now, I'm going to go and I'm going to try and build out these pages. So, I've got my phone plans compared. Verizon versus T-Mobile versus AT&T versus Sprint. The showdown.

And that page is going to feature things like "I want to show the price of the services over time. I want to show which phones they have available." And maybe pull in some expert ratings and reviews for those particular phones. Maybe I'll toss in CNET's rating on each of the phones and link over to that.

What add-ons do they have? What included services? Do I maybe want to link out to some expert reviews? Can I have sorting so that I can say "Oh, I only want this particular phone. So, show me only the providers that have got that phone" or those types of things.

And then, I'm going to take this and I'm going to launch it. All this stuff, all these features, are not just there to help the page be relevant to the search query. They're there to help the searcher and to make this worthy of amplification.

And then, I can use the performance of all the search traffic that lands on any version of this page. So, this page might have lots of different URLs based on the sorting or what features I select or whatever that is. Maybe I rel canonical them or maybe I don't, because I think it can be expanded out and serve a lot of these different needs. And that's fine, too.
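As a rough illustration (the URLs below are invented for the example, not taken from the transcript), consolidating those sorted and filtered variants back to the main comparison page with a canonical tag might look like this on each variant URL:

  <!-- Hypothetical variant URL: http://www.example.com/compare-phone-plans?sort=price&phone=iphone -->
  <head>
    <!-- Tell search engines that the main comparison page is the canonical version -->
    <link rel="canonical" href="http://www.example.com/compare-phone-plans">
  </head>

Whether to consolidate like this or let some variants stand on their own is exactly the judgment call described above.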

But this, this is a great way to effectively determine the ROI that I've gotten from producing this content, targeting these searchers. And then, I can look at the value from other channels in how search impacts social and social impacts search by looking at multi-channel and multi-touch. It's really, really cool.

So, yes. SEO has gotten more complex. It's gotten harder. There's a little bit of disassociation away from just the keyword and the ranking. But this process still really works and it's still very powerful. And I think SEOs are going to be using this for a long time to come. We just have to have a switch in our mentality.

All right, everyone. I look forward to the comments. And we'll see you again next week for another edition of "Whiteboard Friday." Take care.

Video transcription by Speechpad.com



I Am an Entity: Hacking the Knowledge Graph

Posted: 19 Dec 2013 02:30 AM PST

Posted by Andrew_Isidoro

For a long time Google has algorithmically led users towards web pages based on search strings, yet over the past few years, we've seen many changes which are leading to a more data-driven model of semantic search.

In 2010, Google hit a milestone with its acquisition of Metaweb and its semantic database, now known as Freebase. This database helps to make up the Knowledge Graph: an archive of over 570 million of the most searched-for people, places and things (entities), including around 18 billion cross-references. It is a truly impressive demonstration of what a semantic search engine with structured data can bring to the everyday user.

What has changed?

The surge of Knowledge Graph entries picked up by Dr Pete a few weeks ago indicates a huge change in the algorithm. For some time, Google has been attempting to establish a deep associative context around entities so that it can understand the query rather than just regurgitate what it believes is the closest result, but this has been focused on a very tight dataset reserved for high-profile people, places and things.

It seems that has changed.

Over the past few weeks, while looking into how the Knowledge Graph pulls data for certain sources, I have made a few general observations and have been tracking what, if any, impact certain practices have on the display of information panels.

If I'm being brutally honest, this experiment was to scratch a personal "itch." I was interested in the constructs of the Knowledge Graph over anything else, which is why I was so surprised that a few weeks ago I began to see this:

Google Search for "Andrew Isidoro's Age"

It seems that anyone wishing to find out "Andrew Isidoro's Age" could now be greeted with not only my age but also my date of birth in an information panel. After a few well-planned boasts to my girlfriend about my newfound fame (all of which were dismissed as "slightly sad and geeky"), I began to probe further and found that this was by no means the only piece of information that Google could supply users about me.

It also displayed data such as my place of birth and my job. It could even answer natural language queries and connect me to other entities, as in queries such as: "Where did Andrew Isidoro go to school?"

and somewhat creepily, "Who are Andrew Isidoro's parents?".

Many of you may now be a little scared about your own personal privacy, but I have a confession to make. Though I am by no means a celebrity, I do have a Freebase profile. The information that I have entered into it is now available for all to see as part of Google's search product.

I've already written about the implications of privacy so I'll gloss over the ethics for a moment and get right into the mechanics.

How are entities born?

Disclaimer: I'm a long-time user of and contributor to Freebase, I've written about its potential uses in search many times and the below represents my opinion based on externally-visible interactions with Freebase and other Google products.

After taking some time to study the subject, there seems to be a structure around how entities are initiated within the Knowledge Graph:

Affinity

As anyone who works with external data will tell you, one of the most challenging tasks is identifying the levels of trust within a dataset. Google is no different here; to be able to offer a definitive answer to a query, they must be confident of its reliability.

After a few experiments with Freebase data, it seems clear that Google are pretty damn sure the string "Andrew Isidoro" is me. There are a few potential reasons for this:

  • Provenance

To take a definition from W3C:

"Provenance is information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness."

In summary, provenance is the 'who'. It's about finding the original author, editor and maintainer of data; and through that information Google can begin to make judgements about their data's credibility.

Google has been very smart with the structuring of Freebase user accounts. To log in to your account, you are asked to sign in via Google, which of course gives the search giant access to your personal details and may offer a source of data provenance from a user's Google+ profile.

Freebase Topic pages also allow us to link a Freebase user profile through the "Users Who Say They Are This Person" property. This begins to add provenance to the inputted data and, depending on the source, could add further trust.

  • External structured data

Recently, structured data has been an area of tremendous growth for SEOs. Understanding the schema.org vocabulary has become a big part of our roles within search, but there is still much that isn't being experimented with.

Once Google crawls web pages with structured markup, it can easily extract and understand structured data based on the markup tags and add it to the Knowledge Graph.

No property has been more overlooked in the last few months than the sameAs relationship. Google has long used two-way verification to authenticate web properties, and even explicitly recommends using sameAs with Freebase within its documentation; so why wouldn't I try and link my personal webpage (complete with person and location markup) to my Freebase profile? I used a simple itemprop to exhibit the relationship on my personal blog:

  <a itemprop="sameAs" href="http://www.freebase.com/m/0py84hb">Andrew Isidoro</a>
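For context, here is a rough sketch of how that sameAs link might sit inside fuller schema.org Person markup on a personal page; the property names are standard schema.org vocabulary, but the bracketed values are placeholders rather than details taken from this post:

  <!-- Hypothetical sketch: schema.org Person markup wrapping the sameAs link -->
  <div itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Andrew Isidoro</span>,
    <span itemprop="jobTitle">[job title]</span> based in
    <span itemprop="homeLocation" itemscope itemtype="http://schema.org/Place">
      <span itemprop="name">[city]</span>
    </span>.
    <!-- Assert that this page describes the same entity as the Freebase topic -->
    <a itemprop="sameAs" href="http://www.freebase.com/m/0py84hb">Andrew Isidoro on Freebase</a>
  </div>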

Finally, my name is by no means common; according to howmanyofme.com there are just 2 people in the U.S. named Andrew Isidoro. What's more, I am the only person with my name in the Freebase database, which massively reduces the amount of noise when looking for an entity related to a query for my name.

Data sources

Over the past few months, I have written many times about the Knowledge Graph and have had conversations with some fantastic people around how Google decides which queries to show information panels for.

Google uses a number of data sources, and it seems that each panel template requires several separate sources to initiate. However, I believe that it is less an information retrieval exercise and more a verification of data.

Take my age panel example; this information is in the Freebase database yet in order to have the necessary trust in the result, Google must verify it against a secondary source. In their patent for the Knowledge Graph, they constantly make reference to multiple sources of panel data:

"Content including at least one content item obtained from a first resource and at least one second content item obtained from a second resource different than the first resource"

These resources could include any entity provided to Google's crawlers as structured data, including code marked up with microformats, microdata or RDFa; all of which, when used to their full potential, are particularly good at making relationships between themselves and other resources.
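As a rough, hypothetical illustration of those cross-resource relationships, the same idea can be expressed with RDFa attributes using schema.org terms (the organisation URL below is invented for the example):

  <!-- Hypothetical RDFa sketch: relating a person to other resources by URL -->
  <div vocab="http://schema.org/" typeof="Person">
    <span property="name">Andrew Isidoro</span>
    <!-- Same entity as the Freebase topic -->
    <a property="sameAs" href="http://www.freebase.com/m/0py84hb">Freebase topic</a>
    <!-- A relationship to another entity, itself identified by its own URL -->
    <a property="worksFor" typeof="Organization" href="http://www.example.com/">Example Agency</a>
  </div>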

The Knowledge Graph panels access several databases dynamically to identify content items, and it is important to understand that I have only been looking at initiating the Knowledge Graph for a person, not for any other type of panel template. As always, correlation ≠ causation; however it does seem that Freebase is a major player in a number of trusted sources that Google uses to form Knowledge Graph panels.

Search behaviour

As for influencing what might appear in a knowledge panel, there are a lot of potential sources that information might come from, going beyond what we usually think of as knowledge bases.

Bill Slawski has written on what may affect data within panels; most notably that Google query and click logs are likely being used to see what people are interested in when they perform searches related to an entity. Google search results might also be used to unveil aspects and attributes that might be related to an entity as well.

For example, search for "David Beckham", and scan through the titles and descriptions for the top 100 search results, and you may see certain terms and phrases appearing frequently. It's probably not a coincidence that his salary is shown within the Knowledge Graph panel when "David Beckham Net Worth" is the top auto suggest result for his name.

Why now?

Dr Pete wrote a fantastic post a few weeks ago on "The Day the Knowledge Graph Exploded" which highlights what I am beginning to believe was a major turning point in the way Google displays data within panels.


However, where Dr Pete's "gut feeling is that Google has bumped up the volume on the Knowledge Graph, letting KG entries appear more frequently," I believe that there was a change in the way they determine the quality of their data: a reduction in the affinity threshold needed to display information.

For example, not only did we see an increase in the number of panels displayed, but we also began to see a few errors in the data.

One such error can be traced back to a rogue Freebase entry added in December 2012 (almost a year ago) that sat unnoticed until this "update" put it into the public domain. This suggests that some sort of editorial control was relaxed to allow this information to show, and that Freebase can be used as a single source of data.

For person-based panels, my inclusion seems to mark the new era of the Knowledge Graph that Dr Pete reported a few weeks ago. We can see that new "things" are first discovered as strings; then, using free-text extraction and natural language processing tools, Google is able to aggregate, clean, normalize and structure information from Freebase and the search index, with the appropriate schema and relational graphs, to create entities.

Despite the brash headline, this post is a single experiment and should not be treated as gospel. Instead, let's use this as a chance to generate discussion around the changes to the Knowledge Graph, for us to start thinking about our own hypotheses and begin to test them. Please leave any thoughts or comments below.



Seth's Blog : Pick three

 

Pick three

If I could suggest just one thing you could do that would transform how 2014 goes for you, it would be this:

Select three colleagues, bosses, investors, employees, co-conspirators or family members who have an influence over how you do your work. Choose people who care about you and what you produce.

Identify three books that challenge your status quo, business books that outline a new attitude/approach or strategy, or perhaps fiction or non-fiction that challenges you. Books you've read that you need them to read.

Buy the three books for each of the three people, and ask them each to read all three over the holiday break.

That's it. Three people, nine books, many conversations and forward leaps. No better way to spend $130.

I still remember handing copies of Snow Crash to my founding team at Yoyodyne. It changed our conversations for years. And years before that, The Soul of a New Machine and The Mythical Man-Month were touchstones used by programmers I worked with. When the team has a reference, a shared vocabulary and a new standard, you raise the bar for each other.

[If the Pick Three approach makes you uncomfortable, because you're not allowed to do this, or not supposed to, you have just confronted something important. And if this feels too expensive, it's worth thinking about how hard you're expecting to work next year, and how you plan to leverage all that effort.]

       

 


Thursday, 19 December 2013

Mish's Global Economic Trend Analysis


Unfit for Next Crisis; Laughable Banking Union Revisited

Posted: 19 Dec 2013 06:04 PM PST

Here is an interesting article in Der Spiegel Online that echoes recent statements of mine. Please consider Not Fit for the Next Crisis: Europe's Brittle Banking Union.
German Finance Minister Wolfgang Schäuble has negotiated a European banking union suited perfectly to his country's tastes. It looks like a victory, but it could prove to be very expensive if Europe or Germany face another financial crisis.
Will, Not If

I could easily stop right there, with slight modifications as follows: "It will prove to be very expensive when Europe or Germany face another financial crisis."

Nonetheless, let's continue with the article.
[German Finance Minister Wolfgang Schäuble and his negotiators] succeeded in ensuring that in 2016, the Single Resolution Mechanism will go into effect alongside the European Union banking supervisory authority. The provision will mean that failing banks inside the euro zone can be liquidated in the future without requiring German taxpayers to cover the costs of mountains of debt built up by Italian or Spanish institutes.

They also backed the European Commission, which wanted to become the top decision-maker when it comes to liquidating banks. The Commission will now be allowed to make formal decisions, but only in close coordination with national ministers from the member states.

But it goes even farther. Negotiators from Berlin have also created an intergovernmental treaty, to be negotiated by the start of 2014, that they believe will protect Germany from any challenges at its Constitutional Court that might arise out of the banking union.

They also established a very strict "liability cascade" that will require bank shareholders, bond holders and depositors with assets of over €100,000 ($137,000) to cover the costs of a bank's liquidation before any other aid kicks in. The banks are also required to pay around €55 billion into an emergency fund over the next 10 years. Until that fund has been filled, in addition to national safeguards, the permanent euro bailout fund, the European Stability Mechanism, will also be available for aid. However, any funds would have to be borrowed by a national government on behalf of banks, and that country would also be liable for the loan. This provision is expected to be in place at least until 2026.

The government in Berlin put a strong emphasis on preventing the ESM, with its billions in funding, from being used to recapitalize debt-ridden European banks. Schäuble was alone with this position during negotiations, completely isolating himself from the other 16 finance ministers from euro-zone countries. Brussels insiders report that it was "extremely unusual because normally at least a few countries share Germany's position."

A Victory for Whom?

To have succeeded in pushing all this through is a huge victory for the finance minister, particularly given that he was able to do so while he was at the same time defending his own job during coalition talks to form Chancellor Angela Merkel's new government in Berlin.

But is this really the right kind of agreement for Germany and Europe?
I invite you to read the rest of the article, which accurately concludes: "Schäuble's triumph could prove to be a costly one, because what could be yet more expensive than a crisis without a banking union? The answer is simple: A crisis preceded by a poorly constructed banking union that promised illusory security."

Laughable Banking Union Revisited

Next consider what I said two days ago in Laughable Eurozone Banking "Non-Union"; Expect Disorderly Breakup.
Expect Disorderly Breakup

Lost in the debate about "impressive sums" is the simple fact that there should not be a banking union in the first place. In practical terms, there still isn't one, but no one wants to admit that.

And given that there isn't a genuine union (which is the only way to realistically hold this mess together a bit longer), the eurozone ministers ought to focus on a meaningful task: how best to break up the eurozone with minimal disruption.

Unfortunately, they won't. Thus, the resultant eurozone breakup will prove to be very disruptive. The only other possibilities (and I have mentioned them before) are 1. slow growth and extremely high unemployment in the peripheral countries for another decade, or 2. Germany and the Northern countries ponying up hundreds of billions of euros in more support (debt forgiveness, not loans).

Pick your poison, but a breakup is the most likely result.
Emphasis added.
I am sticking with my analysis.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

50 Foreign Companies Operating in France Sound the Alarm

Posted: 19 Dec 2013 11:25 AM PST

Via translation from Les Echos, please consider 50 Foreign Companies Operating in France Sound the Alarm.
For the 50 signatories, the conclusion is clear: "In recent years we find it increasingly difficult to convince our parent companies to invest and create jobs in France."

We preside over the destinies of subsidiaries of major international groups in France, a country where we employ more than 150,000 people and generate more than one hundred billion euros in sales. We are part of this "community" of companies whose capital is foreign but which create wealth here in France. We are advocates and ambassadors to our parent companies, encouraging them to choose to invest and create jobs here.

In recent years, we have found it more and more difficult to convince them to invest, and many of them have settled into a cautious wait-and-see attitude. They have put us "under observation".

The stakes are considerable: 20,000 companies share our profile; they employ 2 million people, or 13% of the employed population, and a quarter of employment in the industrial sector alone. We generate about 29% of French industry sales and provide a third of French exports. We account for 29% of French investment and 29% of the R&D spending of companies operating in France. This wealth is priceless.

France has the resources, talent and innovation, but we are penalized by the complexity and instability of the legislative and regulatory environment, a lack of flexibility of labor law, and by complex, lengthy and uncertain procedures and, more broadly, a cultural mistrust of the market economy.

In all of these areas, our global headquarters consider that the situation in our country has not fundamentally improved. Rather, things are getting worse.
Those looking to understand why France is not about to recover need only look at the socialist policies of Hollande, the immense hold of unions on the country, and the lack of progress on badly-needed reforms of work rules and pensions.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
