Friday, 10 February 2012

Paid Search Ad Copy Auditing - Whiteboard Friday


Posted: 09 Feb 2012 01:43 PM PST

Posted by rauschenbach

This week's Whiteboard Friday focuses on ad copy testing ideas and best practices. With these helpful tips you can make the most of your PPC search marketing dollars so that you can spend more time on SEO. What you'll end up with is a helpful ad copy worksheet template that can be used to develop new ad copy ideas.

Please give Brian Rauschenbach a warm welcome as he presents his very first Whiteboard Friday! Don't forget to leave your comments below. Enjoy!


Video Transcription

Hello, I'm Brian Rauschenbach. I am a principal at Add Three. We are a search engine marketing agency located on Capitol Hill in Seattle. Today I am going to talk to you about paid search ad copy testing.

We get a lot of questions from potential clients and current clients about effective ways to optimize the ad copy on your paid search ads. We have an example today on a big head term for one of our clients, Sittercity, and the keyword is babysitter. It is a keyword that is searched a lot, both as a head term and with geo keywords against it. So, this is the search result that we've taken from Google. In this example, you'll see that our ad copy actually has a localized listing against it, and our big competitor's ad has "local" in the listing, but the city name, Seattle, has not been picked up in their ad, since they didn't give Google a Seattle indicator.

So, a couple of the things that I want to talk about are effective A/B ad testing and some best practices you can use around it. In this test, we're showing a couple of different examples that I want to highlight. One is localization of the ad. We're doing it both by creating a geo ad group with Seattle as the ad group's city location, so that on a head term like "babysitter" we're still picking up a local region, and then we also use the ad copy behind it to say "Seattle babysitters." You'll see the other ad in here just uses "local," because it's a head term and they're figuring that this term is actually getting searched by a lot of people in different regions, so they just put "babysitter" in the copy.

When you look at this listing in Google, "babysitter" gets highlighted in both areas, and if you look down here at the actual ad copy description, we usually use title case. So we will capitalize the first letter of each word all the way through, and then usually use an exclamation point or something to draw some emphasis to the ad.

One of the things that I like doing is having a pure A/B ad copy test. In our AdWords campaign, we usually have only one ad running against what we call our champion ad, so that we can get our data pulled together faster and be able to report on which ad is the winning ad.

So, localization is setting up your geo ad groups so that in your big local regions, where your conversions are the strongest, you actually have ad groups specified for them.

Using the keyword insertion tag is a common practice. What we usually do is make sure that you are using it so that the title casing is applied effectively as well.
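As an illustration of how the casing works (syntax per AdWords' keyword insertion feature; the two-word keyword "seattle babysitter" and the default text "Babysitters" are hypothetical examples, not from the video):

```text
Headline: Find {KeyWord:Babysitters} Online

{keyword:...}  ->  "seattle babysitter"   (all lowercase)
{Keyword:...}  ->  "Seattle babysitter"   (sentence case)
{KeyWord:...}  ->  "Seattle Babysitter"   (title case)

If the matched keyword is too long for the field, the default
text after the colon ("Babysitters") is shown instead.
```

The `{KeyWord:...}` variant is what produces the title-cased ads described here.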

Use strong calls to action in your ad copy. You'll see here that we've got "find" and "join free" as our calls to action. On this ad, they've got "search, fast and easy, guaranteed results," which reads as a statement but can be considered a call to action as well.

Another area where we've seen a lot of success in ad copy testing is Google gives you 35 characters per line for your description text, and then it puts it in this long sentence. If you are in the top two listings, sometimes you will see the extended titles appearing where they figured out in their algorithm that it clicks better. Then if you see the ads that are on the side, they are usually in two lines. So, another thing that we've tested in the past with different clients is to shorten the actual description line. So, even though they are giving you 35 characters, it doesn't mean that you have to use all 35 characters. In this instance, we might write an ad that just says, "Find sitters. Join free," and test that against the champion ad.

One of the other things that we always try to do is isolate the test, so that we're only testing the description, or only testing the title on the ad. In some instances we've even tested brand display URLs without the www against the www. brand URL and have seen different result sets. So, I can't really say that you should always have your brand sit at SitterCity.com or Care.com; in some instances, the www actually tests out a little better. A lot of that ad copy testing is done in the trade term buckets, and we have those all separated out, so that would be the display URL piece here.

Another thing that we've seen recently is people putting a sort of one-two step in their ads. So, if you can imagine, the first one would be "find sitters," and then the second one would be a call to action too; you might just say, "Join for free!" We've found these ads work better if you are vying for the third through fifth position, because they'll get broken up onto two different lines instead of showing up long on one line, which usually doesn't translate as well. That is just another thing that we've been doing some recent testing with. We're seeing it happen across other industries, like dating and people search, and they've had some pretty good success with it.

One of the other things people always ask is, "Well, how much data is enough data for this ad test to be successful?" What we've found is that it varies from client to client, but usually if you are achieving at least a 1% click-through rate on your ads and you pull in about 20,000 to 30,000 impressions, depending on your conversion rate and where you have your conversion pixel firing, right around the 30,000-impression mark is usually enough data to make an informed decision on which ad is starting to edge out the other. We like to do this with just the two primary ads, not five. Sometimes we look in campaigns and see five different ad copies all being tested at the same time. Getting results back on those types of tests usually takes two to three months, and in the meantime you are lowering your quality score on that ad group if you've got an ad that isn't pulling as well as the others. Google will usually start serving the highest click-through ad with preference anyway, even if you are set up as a 50/50 test; we've seen that in the past as well. So, my best practice is to keep it a pure A/B ad test, so you always have a champion and a challenger ad.
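The 1%-CTR, ~30,000-impression rule of thumb can be sanity-checked with a standard two-proportion z-test. This is a sketch of that statistical check, not something from the video, and the click and impression counts below are hypothetical:

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """z-score for the difference between two click-through rates."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform the same
    p = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p * (1 - p) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Hypothetical champion vs. challenger, ~30,000 total impressions
z = two_proportion_z(180, 15000, 150, 15000)  # 1.2% CTR vs. 1.0% CTR
# |z| > 1.96 would mean significance at the 95% level; here z is
# about 1.66, so this pair would still need more impressions.
```

With only a champion and a challenger running, as recommended above, a check like this tells you when the challenger has genuinely edged out the champion rather than just fluctuated ahead.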

Another area where people make a big mistake in their ad testing: if this is our primary ad right here, you want to duplicate this ad copy as your champion ad, then introduce the challenger ad and start those two pieces of ad copy at the same time. Once the ad copy has been approved by Google, which usually happens within 24 hours, you pause the original ad so that you have a true start and there isn't any carryover data being transferred into the test. What we've seen in the past is that ads getting assists from other head terms or other keywords might get triggered in a conversion cycle, and a conversion will show up in the database that actually happened on traffic from maybe a week before you started your test. So that's another really important piece that a lot of people overlook when they are doing their ad copy testing.

I'll put a couple of other examples in the notes on areas where we do a lot of social listening with tools like Radian6 and Social3i, where we basically listen to what is happening in the social space to dictate what we are going to use for ad copy. One of the things we might have heard in the past was that people really liked parent reviews, or keywords like "trusted," for this type of business, so we have actually incorporated those into the ad copy and tested against that. So that's something you can use: look at what conversations are happening on Facebook and Twitter about your brand, and incorporate that into your ad copy.

That's it.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Putting Guest Post Outreach Theories to the Test [With Some Real World Data]

Posted: 09 Feb 2012 02:32 AM PST

Posted by jamesagate

Following the positive response to my last post here on SEOmoz, I wanted to bring you all some data right from a few of our real-world campaigns.

As a business, we systemise a great deal and monitor a lot of processes, so it made sense for me to put some of this data to use and try to prove or disprove some commonly held theories about outreach.

The following is based on a sample of 400 guest posts that we placed for clients over a three-month period (November-January). Make of the data what you will; it isn't conclusive, but I feel it does go some way to providing good starting points for you to explore in your own outreach campaigns. As with most things, the best strategy is to test it out for yourself in the industry or industries that you work in.

Theory #1 – Being a woman will get you more links

Speak to nearly anyone who has been building links for a while and they will at least have come across the theory that approaching a prospective link partner about a guest post is more likely to be successful if you are a woman. I would think this stems from the widely held belief (rightly or wrongly) that women are more trustworthy and well-meaning than men.

I wanted to investigate this theory in a little more depth. Quite by accident, the 400 posts were roughly a 50/50 split, with a woman conducting the outreach 52% of the time.

  • 790 potential sites were contacted
  • 411 by a woman
  • 379 by a man

Battle of the sexes – who performed better?

  • 437 positive responses received (remember there is a small attrition rate which has to be accounted for within the guest posting process where the link partner either doesn’t accept the content or doesn’t deliver on his/her end of the bargain).
  • 263 positive responses received by a woman.
  • 174 positive responses received by a man.
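Reducing the counts above to response rates makes the gap explicit (a quick sketch using the post's own numbers):

```python
contacted = {"woman": 411, "man": 379}
positive = {"woman": 263, "man": 174}

# Positive-response rate per gender
rates = {who: positive[who] / contacted[who] for who in contacted}
# woman: ~64%, man: ~46%
```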

You might argue that this difference in performance between the genders could be attributed to a number of things:

  • Some are better at outreach than others – whilst this might be true, all consultants receive the same training, so any slight differences should be averaged out.
  • Consultants have different methods – similarly, some consultants may have slightly different methods, although in reality we have systemised our process and continue to innovate as a team, sharing best practices, so again any impact is likely to be negated.
  • Consultants were contacting different websites – again, a very real possibility that the difference in performance is attributable to the ‘leads’ each consultant received. We do have different consultants who work in and specialise in different industries, so this could be a potential reason.

To really put this theory to the test though, we had one of our female consultants get in touch with five potential link partners who had either declined the offer of a guest post or requested payment for a guest post from one of our male outreach consultants.

When our female consultant made contact, she managed to reduce the price of the paid placement (we didn’t pay for it anyway), and we got a positive response from two of the five. To clarify, that was pitching exactly the same website and roughly the same content as before.

That’s a pretty interesting find, I’m sure you’ll agree.

Theory #2 – Job title matters

Depending on whether the client has a preference, we usually approach the link partner as either an agency employee or an individual/freelancer.

Some clients like us to contact link partners as if we were employees of their company; others prefer we don’t disclose agency connections. On the face of it, that may stir some ethical debate; however, in these situations we merely act as the facilitator between our freelance content team and the host blog, and since we strive to create win-win-win situations, I have no problem with operating in this way.

In all honesty – each of these has its advantages and disadvantages (whilst contacting as an agency employee might invoke more requests for payment, it does make the option of continuing the relationship and benefiting your other clients much more practical) but let’s look at this from a pure success rate basis.

  • 790 potential sites were contacted
  • 297 were contacted as a freelancer
  • 373 were contacted as an agency employee
  • 120 were contacted as an in-house employee

In cases where the partner was approached by a freelancer, a positive response was received 189 times. In cases where the partner was approached by an in-house employee, a positive response was received 78 times and finally in cases where the partner was approached by an agency employee, we received positive responses 170 times.
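Again converting the raw counts to rates (a sketch from the figures above) shows which approach actually stands apart:

```python
contacted = {"freelancer": 297, "agency": 373, "in-house": 120}
positive = {"freelancer": 189, "agency": 170, "in-house": 78}

# Positive-response rate per approach
rates = {role: positive[role] / contacted[role] for role in contacted}
# freelancer: ~64%, in-house: ~65%, agency: ~46% -- the disclosed
# agency connection is the only approach that clearly underperforms.
```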

The results surprised me because, one would think, an email from someone directly working for an organisation that is going to benefit from the guest post would result in more declines, or at least more requests for some form of payment. Clearly, though, trust is an important factor when it comes to largely unsolicited (albeit well-researched and properly pitched) offers of guest posts.

Theory #3 – Timing is important

I was really excited to pull together the data for this one because I was confident that timing really mattered, especially when it comes to the initial introductory email.

Whilst we don’t actively record the precise time an email is sent, we do keep a note of the time of day i.e. Morning, Afternoon or Evening for the recipient. We’re UK based so running campaigns for our overseas clients requires rigorous planning and execution if we are to get the timing right.

In this case, no conclusion could be drawn from the data. When you average the response rate out across industries and countries (as I did here), it is only logical that no correlation will be easily identifiable, because no two prospects are the same: different industries, different time zones and so on.

This doesn’t mean you can’t take advantage of timing though:

  • Recording when your prospect is at their most responsive is helpful for keeping the process moving, especially if they become a little wayward right before the agreed publish date.
  • Observing patterns in specific niches and putting them to work for you. For example, I have identified a responsiveness pattern across some of the sports blogs we work with (most, though not all, respond late evening in their time zone), which could well be attributed to the fact that they are hobby bloggers with full-time jobs and a family, who sneak in a bit of ‘blog time’ once their wife and children have gone to bed.

Theory #4 – Personalisation is worth it (or is it?)

We wanted to guarantee a quality standard with our outreach processes, which is why we have approved templates that are then tailored to each prospect.

In certain situations where we feel it will be beneficial, we will write emails completely from scratch.

We don’t send out any generic emails, which for the purposes of this exercise is a real shame, because we can’t properly compare the response rate of a stock email with that of a personalised or even bespoke email.

We make a note of whether the email sent was tailored or entirely bespoke and the results align with what you might expect…

Completely bespoke emails generate a higher response rate, although the caveat is, of course, that custom-writing every email just isn’t possible if you want a campaign of a certain scale.

If you contacted 10 partners with a tailored template, you would get fewer positive responses than with 10 bespoke emails; but try sending 100 completely-from-scratch emails. You need a lot of people, and that costs money, which then impacts the ROI of a campaign.

The trade-off and what I believe to be the happy medium is a solid template that is tailored to each recipient. Be flexible with your templates too and allow them to evolve as you see certain elements working better than others. Innovate then scale by applying across your campaigns.

Theory #5 – The style of outreach email has an impact

As I discussed above, we have a number of base templates for our consultants to customise. We have one version which is very conversion-focused and another which is more soft-conversion – both variations are useful, just in different industries.

I recently covered what goes into our high-conversion outreach emails and whilst I still don’t wish to reveal the exact format of our templates I will say the following:

  • Template A – very proactive wording that encourages moving to the next step, selecting one of the articles rather than asking whether they’ll accept a guest post.
  • Template B – much softer wording that works well in industries where guest posting is less prevalent and where the prospect needs a bit more hand-holding through the process.

As you will note, the more proactive Template A is more effective in terms of generating a response. However, these styles are effective in different industries, so both templates will continue to have a place in our work. That said, I found it useful and really interesting to compare their performance side by side.

Theory #6 – Persistence pays off

I believe in creating win-win-win situations when it comes to guest posting and because we go further to research and evaluate prospective websites, I see no issue in following up with the potential link partner three times before writing them off as unresponsive.

If you categorise the responses received in relation to the number of times contacted, it becomes evident that persistence really does pay off.

You will note from the chart below that around 30% of positive responses received agreed on the second or third email.

Had we not been persistent we would have needed to find, research and contact additional link partners which would have greatly increased our workload.

Persistence is one thing, but relentless pestering is another. Follow up on leads, but be polite and, for the benefit of all of us in the industry, know when you should take no for an answer.

What’s the perfect combination?

Is it best to be an in-house female link builder pitching content in the evening three times? No, not always.

Different strokes for different folks. To summarise, it’s important to test out what works best in your industry.

Remember that this is a relatively small internal data sample, so it is by no means perfect; there are always multiple factors in play at any given time. Despite this, I feel it is valid enough to be useful. Hopefully it acts as a starting point to develop your own study or to shape your initial guest post outreach strategy.

I’d be keen to hear from anyone running guest posting campaigns to learn about their methods and the kinds of response rate they generate.

James Agate is the founder of the content and outreach agency Skyrocket SEO. They offer a guest posting service that’s aimed at agencies and website owners looking for a semi-scalable, high-quality way to proactively earn links.


