Monday, 5 May 2014

A 5-Step Framework for Conversion Rate Optimization

Posted: 04 May 2014 05:17 PM PDT

Posted by Paddy_Moogan

There is a problem with conversion rate optimization: It looks easy. Most of us with some experience working online can take a look at a website and quickly find problems that may prevent someone from converting into a customer. There are a few such problems that are quite common:

  • A lack of customer reviews
  • A lack of trust / security signals
  • Bad communication of product selling points

The thing is, how do we know for sure that these are problems?

The fact is, we don't. The only way to find out is to test these things and see. Even with this in mind, though, how do you choose what to test when your ideas are based mainly on your own gut feeling?

For me, this is where doing a high level of research and discovery is worth the time and effort. It can be far too easy to make assumptions about what to test and then dive straight in and start testing them. Wouldn't it be better to run conversion rate tests based on actual data from your target audience?

I'm going to go into detail on the process we use at Distilled for conversion rate optimization. With the context above, it shouldn't be any surprise that I spend a lot of time talking about the discovery phase of the process as opposed to testing and reviewing results.

For those of you who want the answer straight away and an easy takeaway, here is a graphic of the process: 

Before I move on, I wanted to give you a few links that have certainly helped me over the last few years when learning about conversion rate optimization.

Right, let's get into the process.

Step 1: Data gathering

This entire stage is all about one thing: gathering the data you need to inform your testing. This can take time, and if you're working with clients, you need to set expectations around this. It is a very important stage and, done correctly, can save you a lot of heartache further down the process.

There are three broad areas from which you can gather data. Let's look at each of them in turn.

The company

This is the company / website that you're working for. There is a bunch of information you can gather from them which will help inform your tests. 

Why does the company exist?

I always believe in starting with why, and I've talked about this before in the context of link building. It is at this point that you can dive right into the heart of the company and find out what makes it different from others. This isn't just about finding USPs; it goes far deeper than that, into the culture and DNA of the company. The reason is that customers buy the company and the message it portrays just as much as the product itself. We all have affinities with certain companies that do produce a great product or service, but it's a love for the company itself which keeps us interested and buying from them.

What are the goals of the company?

This is a pretty crucial one, and the reasons should be obvious. You need to focus your data gathering and testing around hitting these goals. There are times when some goals may be less obvious than others. These are sometimes called micro-conversions and can include things that contribute to the bigger goal. For example, you may find that customers who sign up to your email newsletter are more likely to become repeat customers than those who don't. Therefore, a micro-conversion would be to get people signed up to your email list.
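As a quick illustration, here is a minimal sketch of how you might sanity-check a micro-conversion hypothesis like the newsletter example above, assuming you can export per-customer order history; the file and column names are hypothetical:

    import pandas as pd

    # Hypothetical export: one row per customer, with a newsletter flag
    # and a count of orders placed. Column names are illustrative only.
    customers = pd.read_csv("customers.csv")  # columns: customer_id, newsletter, orders

    # A "repeat customer" here means two or more orders.
    customers["repeat"] = customers["orders"] >= 2
    repeat_rate = customers.groupby("newsletter")["repeat"].mean()
    print(repeat_rate)  # repeat-purchase rate for subscribers vs. non-subscribers

If subscribers repurchase at a clearly higher rate, newsletter sign-ups are a sensible micro-conversion to optimize for.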

What are the unique selling propositions (USPs) of the company?

What makes the company different in comparison to competitors who sell the same or similar products? Bonus points here if the USP is something that a competitor can't emulate. For example, offering free delivery is something that may help improve conversions, but chances are that your competitors can also offer this.

What are the common objections?

This is where you should be speaking to people within the organisation who are outside the marketing team. One example is to talk to sales staff and ask them how they sell the products, what they feel the USPs are and what the typical objections are to the product. Another example is to talk to customer support staff and see what problems they tend to deal with. These guys will also have input on what customers tend to like the most and what positive feedback / product improvements get suggested.

Another team to speak to is whoever manages live chat for the website, if it exists. At Distilled, we've sometimes been able to get access to live chat transcripts and run analysis to find trends and common problems.

The website

Here, we are focusing specifically on the website itself and seeing what data we can gather to inform our experiments.

What does the sales process look like?

At this point, I'd recommend sitting down with the client and a big whiteboard to map out the sales process from start to finish, including each touch-point between the customer and the website or marketing materials such as email. From here, you can go pretty granular into each part of the process to find where problems can occur.

It is also at this point that you should review funnels in analytics, or set them up if they don't currently exist. Try to find where the most common drop-off points are and take a deeper dive into why. Sometimes a technical problem may be to blame for the drop-off in conversions, so make sure you are, at the very least, segmenting data by browser to try and find problems.
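As a sketch of what that browser segmentation might look like offline, assuming you can export session-level data (the file and column names below are hypothetical):

    import pandas as pd

    # Hypothetical analytics export: one row per session, with the browser
    # used and a flag for whether the session reached the confirmation page.
    sessions = pd.read_csv("sessions.csv")  # columns: session_id, browser, converted

    # Conversion rate and sample size per browser; a clear outlier on a
    # well-trafficked browser often points to a rendering or script bug.
    by_browser = sessions.groupby("browser")["converted"].agg(["mean", "count"])
    print(by_browser.sort_values("mean"))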

What is the current traffic breakdown?

This involves you taking a deep dive into the existing analytics data that you have from the website. At this point you're just trying to get a better understanding of a few core things:

  • How much traffic the website receives: Low traffic numbers can influence how long it takes a test to complete.
  • What demographics the website typically attracts: This may require you to enable extra tracking if you're using Google Analytics.
  • What technology users typically use: As mentioned above, looking at browser usage is important, but on top of this, what devices do users tend to use? If you're seeing high numbers of visits from mobile devices, you should check how the website renders on a mobile device. If you're seeing very low numbers of visits from mobile devices, that is probably worth investigating too, given the growth of mobile traffic in recent years.

Where do conversions currently come from?

Hopefully, the website will already have some goals or eCommerce tracking enabled, which makes this bit a lot easier! If not, then you will need to get them set up as soon as possible so that you can start gathering the data you need. This work needs to be done no matter what, because you're not going to be able to measure the results of your CRO tests if you can't measure the conversions!

If you don't have goals set up already, you can use Paditrack, which syncs with your Google Analytics account and allows you to apply goals to old data. It also allows you to segment your funnels, which, annoyingly, Google Analytics doesn't allow you to do at the time of writing.

If you do have this data, then you need to try and find patterns in the type of people who convert, as well as where they come from. The latter can be a bit tricky because, quite often, customers will find you via several different channels. So you need to make sure that you're looking at multi-channel reports and seeing which paths are most common.

Is there any back-end data you can access?

Although things are changing, many analytics platforms do not integrate offline or back-end data by default, so you may need to go digging for it. One thing that many companies have is data on cancellation or refund rates. Typically this is not included in standard analytics views because it takes place offline; however, it can provide you with a wealth of information about products and customers. You can find out what causes customers to cancel a service or what made them ask for a refund.

The customers

This can potentially be the most interesting area to gather data from, and the one with the most impact. Here we are gathering information directly from your customers via a number of methods.

What are the biggest objections that customers have?

For me, this is one of the most insightful things to ask because it drills straight into the one core thing that we care about in this process - what is stopping the customer from buying?

I really like this presentation from Conversion Rate Experts, which outlines their favourite questions to ask customers at this stage of the process, as well as these three questions from Avinash.

There are a number of ways to do this, which I'll give some detail on here.

Google Consumer Surveys

We have used these surveys a few times at Distilled now and they have usually given us pretty good insights. The results can be quite broad and, frankly, some responses can be pretty useless! But if you cut out the noise and look for the trends, you can get some good information on what concerns and considerations people have when buying products like yours.
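Finding those trends can be as simple as counting recurring words across the free-text answers. Here is a minimal sketch of that idea; the example responses and stopword list are purely illustrative:

    import re
    from collections import Counter

    # Hypothetical free-text survey responses.
    responses = [
        "worried my card details aren't safe",
        "delivery costs too much",
        "not sure about the returns policy",
        "is my card safe with you?",
    ]

    # Throwaway words to ignore; in practice you'd use a fuller stopword list.
    stopwords = {"my", "too", "not", "about", "the", "with", "is", "you"}

    counts = Counter(
        word
        for response in responses
        for word in re.findall(r"[a-z']+", response.lower())
        if word not in stopwords
    )
    print(counts.most_common(10))  # recurring words hint at recurring objections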

Qualaroo

Qualaroo is a cool little survey tool which you've probably seen on numerous websites across the web. It looks something like this:

What I like about Qualaroo is that it doesn't intrude on the user experience, and you can use some cool customization settings to make it appear exactly when you want. For example, you can set it to only appear on certain pages, or based on user behavior like time on page. You can also set it to appear when it looks like someone is about to close the window.

One neat little tip here is to place the survey on your order confirmation page and ask the question "What nearly stopped you from buying from us today?" - this can give you some low-risk feedback because the user has already purchased from you.

It's also worth mentioning that Qualaroo can now be used on mobile devices, too, so you can tailor your questions to mobile users really well:

Other survey services

If you have a good email list which is reasonably active and engaged, you can run email surveys using something like Survey Monkey. This can be a little trickier because the people on your email list are likely to be existing customers, whose mindset is a bit different from that of someone who has never bought from you before. We've also used AYTM in the past for running surveys; it offers a few more options in its free version than Survey Monkey.

Usertesting.com

Again, this is a tool that we often use at Distilled, and we have gotten some good results from it. There have been a few misses too in terms of how useful the user has been, but that happens from time to time. Usertesting.com allows you to recruit users based on certain characteristics (age, gender, interests, etc.) and then ask them to complete tasks for you. These tasks are usually focused around your website or a competitor's, and may involve researching and buying a product. As the user works through the tasks, they record a screencast and talk as they are working.

If you want to dive more into this, I really liked this webinar from Conversion Rate Experts, which focuses on how they use the service.

Step 2: List hypotheses

Now we need to make the step from information gathering to outlining what we may want to test. Without realising it, many people will jump straight to this step of the process and just start testing what feels right. By doing all the work we outlined in step 1, the rest of the process should be much more informed. Asking yourself the following questions should help you end up with a list of things to test that are backed up by real data and insight.

What are we testing?

Based on all of the information you gathered from the website, customers, and the company in step 1, what would you like to test? Go back to the information and look for the common trends. I prefer to start with the customer objections and see what is common amongst them. For example, if a common theme of customer feedback was that they place a lot of value in knowing their personal payment details are safe, you could hypothesise that adding more trust signals to the checkout process will increase the number of people who complete the process.

Another example may be that the sales team always gets feedback that customers love the money-back guarantee you offer. So you may hypothesise that making this selling point more obvious on your product pages will increase the number of people who start the checkout process.

Once you have a hypothesis, it is important to know what success looks like and, therefore, how to tell if the test result is a positive one. This sounds like common sense, but it's very important to get this clear right from the start, so that when you reach the end of the test you stand a high chance of having an answer.

Who are we testing?

It is important to understand the differences in the types of people who visit your website, not just demographically, but also in terms of where they are in the buying cycle. An important example to keep in mind is new vs. returning customers. Putting both of these types of customers into the same test could lead to unreliable results because their mindsets are very different.

Returning customers (assuming you did a good job!) will already be bought into your company and brand; they will have already experienced the checkout process; they may even have their credit card details registered with you. All of these things make them more likely to convert than a brand-new customer. One thing to mention here is that you're never going to be able to segment everyone perfectly, because analytics data quality is never perfect. There isn't much we can do about this beyond ensuring we're tracking correctly and using best practice when segmenting users.
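To make the idea concrete, here is a minimal sketch of how a server-side test assignment might exclude returning customers, assuming you can already tell new and returning visitors apart (via a cookie or login state); the function and variant names are hypothetical:

    import hashlib
    from typing import Optional

    def assign_variant(user_id: str, is_returning: bool) -> Optional[str]:
        """Assign a visitor to a test variant, excluding returning customers.

        Returning visitors get the normal experience and stay out of the
        experiment entirely, so their different mindset can't skew results.
        """
        if is_returning:
            return None  # not part of the experiment
        # Deterministic split: hashing the user ID means the same visitor
        # always lands in the same variant across sessions.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return "variation" if bucket < 50 else "control"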

When you run your test, most pieces of software will allow you to direct traffic to your test pages based on various attributes; here is an example from Optimizely:

Another useful segment, as you can see above, is segmentation by browser. This can be particularly useful if your test page has bugs in certain browsers. For example, if something you want to test doesn't load correctly in Firefox, you can choose to exclude Firefox users from the test. Obviously, if the test is successful, the final roll-out will need to work in all browsers, but this setting can be useful as a short-term fix.

Where are we testing?

This is a pretty straightforward one. You just need to specify which page or set of pages you're testing. You may choose to test just one product page or a set of similar products at once. One thing to mention here is that if you're testing multiple pages at once, you should be aware of how the buying cycles for those products may differ. If you're testing two product pages with a single test, and one of those products is a $500 garden shed while the other is a $10 garden ornament, then the results of the test may be a bit skewed.

When you list the pages that you're testing, it is also a good time to run through a simple checklist to make sure that tracking code has been added to those pages correctly. Again, this is pretty basic but can be easily forgotten.
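That checklist step is easy to automate. Here is a minimal sketch, using the requests library, that fetches each test page and looks for your testing tool's snippet in the page source; the URLs and the marker string are assumptions you would replace with your own:

    import requests

    # Hypothetical list of pages in the test, and the string that identifies
    # your testing tool's tag in the page source (here assumed to be the
    # Optimizely CDN hostname).
    PAGES = ["https://example.com/product-a", "https://example.com/product-b"]
    SNIPPET_MARKER = "cdn.optimizely.com"

    for url in PAGES:
        html = requests.get(url, timeout=10).text
        status = "OK" if SNIPPET_MARKER in html else "MISSING tracking code"
        print(f"{url}: {status}")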

Goals of the discovery phase:

  1. You've gathered data from customers, the website, and the company
  2. You've used this data to form a hypothesis on what to test
  3. You've identified who you're targeting with this test and what pages it applies to
  4. You've checked that tracking code is set up correctly on those pages

This stage is where we start testing! Again, this is a step that people can jump to straight away without data to back up their tests. Make sure that isn't you!

Step 3: Wireframe test designs

This step is likely to vary depending on your specific circumstances. It may not even be necessary for you to do wire-framing! If you're in a position where you don't need to get sign-off on new test designs, then you can make changes to your website directly using a tool like Optimizely or Visual Website Optimizer.

Having said that, there are benefits to taking some time to plan the changes that you're going to make so that you can double check that they are in line with steps 1 and 2 above. Here are a few questions to ask yourself as you're going through this step. 

Are the changes directly testing my hypothesis?

This sounds basic; of course they should! However, it can be easy to get off-track when doing this kind of work, so it's good to take a step back and ask yourself this question, because you can easily do too much and end up testing more than you expected to.

Are the changes keeping the design on-brand?

This is likely to be more of an issue if you're working on a very large website where there are multiple stakeholders, such as UX, design, and marketing teams. This can cause problems in getting things signed off, but there are often good reasons for this. If you suggest a design that involves fundamental changes to page layout and design, it's less likely to get sign-off unless you've already built up a serious amount of trust.

Are the changes technically doable?

At Distilled, we've sometimes run into issues where our changes have been a bit tricky to implement and have required a bit of development time to get working. This is fine if you have the development time available, but if you don't, this could limit the complexity of the tests that you run. So you need to bear this in mind when designing tests and choosing which hypotheses to test.

If you're looking for a good wire-framing tool for this step, there are a few options including Balsamiq and Mockingbird.

Step 4: Implement design

At Distilled, we use Optimizely to implement designs and run split tests on client websites, but Visual Website Optimizer is a good alternative.

As mentioned above, the more complex your design, the more work you may need to do to put it live. It is really important at this point to make sure you're testing the design across different browsers before putting it live. Visual elements can change quite dramatically, and the last thing you want is to skew your results because a certain browser doesn't render the design properly.

It is also at this stage that you can choose a few options in terms of who should see the test. This is how this looks in Optimizely:

You can also choose what proportion of your traffic will be sent to the testing pages. If you have high traffic numbers, this can help offset the risk of a test resulting in conversion rates dropping - it does happen! Only sending 10% of your traffic to the test means that the remaining 90% will carry on as normal.
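Under the hood, this kind of allocation is typically just deterministic bucketing. Here is a minimal sketch of the idea, with hypothetical salts and variant names; this is an illustration of the concept, not Optimizely's actual implementation:

    import hashlib

    TEST_TRAFFIC_PCT = 10  # only 10% of visitors enter the test at all

    def _bucket(user_id: str, salt: str) -> int:
        """Map a user ID to a stable bucket from 0-99."""
        digest = hashlib.sha256((salt + user_id).encode()).hexdigest()
        return int(digest, 16) % 100

    def allocate(user_id: str) -> str:
        """Send a fixed share of traffic into the test; the rest see the normal site."""
        if _bucket(user_id, "traffic") >= TEST_TRAFFIC_PCT:
            return "excluded"  # the other 90% carry on as normal
        # Within the test slice, split evenly between control and variation.
        return "variation" if _bucket(user_id, "split") < 50 else "control"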

This is what this setting looks like if you're using Optimizely:

You should also connect Optimizely to your Google Analytics account so that you can determine the average order value for each group of visitors in your conversion tests. Sometimes the raw conversion rate for a test may not increase, but the average order value may, which is obviously a win that you don't want to overlook.
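As a sketch of that check, assuming you can export transactions joined to test assignments (the file and column names are hypothetical):

    import pandas as pd

    # Hypothetical export of transactions tagged with the test variant
    # the buyer was assigned to.
    orders = pd.read_csv("orders.csv")  # columns: order_id, variant, revenue

    # Average order value and order count per variant: a flat conversion
    # rate can still hide a meaningfully higher average order value.
    aov = orders.groupby("variant")["revenue"].agg(["mean", "count"])
    print(aov)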

Goals of the experiments phase:

  1. Test variations are live and getting traffic
  2. Cross-browser testing is complete
  3. Design has been signed off by client / stakeholders if applicable
  4. Correct customer segments / traffic allocation has been set

Now it's time to see if our work has paid off! 

Step 5: Was the hypothesis correct?

Was statistical significance reached? 

Before diving in and assessing whether your hypothesis was correct, you need to make sure that statistical significance has been reached. I like this short definition by Chris Goward, which helps explain what this is and why it's important. If you want to go a bit deeper and see some examples, this post by Will on the Distilled blog is a great read.

Many split testing tools will actually tell you whether significance has been reached, which takes some of the hard work out of the process. Having said that, it's still a good idea to understand the theory behind it so you can spot problems if they occur.
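For intuition, here is a minimal sketch of the standard two-proportion z-test that most of these tools run under the hood; the example numbers are made up:

    from math import erf, sqrt

    def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value for a difference between two conversion rates
        (the standard two-proportion z-test)."""
        rate_a, rate_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (rate_b - rate_a) / se
        # Normal CDF via erf; doubled for the two-sided tail probability.
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    # e.g. 200/5000 conversions on control vs. 250/5000 on the variation
    print(p_value(200, 5000, 250, 5000))  # ~0.016, below the usual 0.05 threshold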

In terms of how long it could take to reach statistical significance, it can be hard to predict, but this is a cool tool which helps you with this. Evan has another tool which allows you to determine how order value differs across two test groups. This is one of the key reasons to connect Optimizely to Google Analytics, as mentioned above.
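If you'd rather estimate it yourself, a simplified version of the standard sample-size formula gives a rough visitors-per-variant figure. This sketch assumes the usual 95% confidence and 80% power:

    from math import ceil

    def sample_size(base_rate: float, relative_lift: float,
                    z_alpha: float = 1.96, z_power: float = 0.84) -> int:
        """Rough visitors needed per variant to detect a relative lift
        (defaults: 95% confidence, 80% power; a simplified standard formula)."""
        p1 = base_rate
        p2 = base_rate * (1 + relative_lift)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        n = ((z_alpha + z_power) ** 2 * variance) / (p1 - p2) ** 2
        return ceil(n)

    # e.g. a 3% baseline conversion rate, hoping to detect a 20% relative lift
    print(sample_size(0.03, 0.20))  # roughly 14,000 visitors in EACH variant

Divide the per-variant figure by your daily test traffic to get a ballpark test duration in days.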

Was the hypothesis correct?

Yes? Great! If your test was a success and increased conversions, then that's great, but what's next? Firstly, you need to look at how to roll out the successful design to the website properly, i.e. not relying on Optimizely or Visual Website Optimizer to display the design to visitors. In the short term, you can send 100% of your traffic to the successful design (if you haven't already) and keep an eye on the numbers. But at some point, you'll probably need help from developers to deploy the changes on the website directly.

When the hypothesis isn't correct

This is going to happen; most conversion rate experts don't talk about their failed tests, but they do happen. One person who did talk about this is Peep Laja, in this article, and he went into even more detail in this case study, where he said that it took six tests before a positive result was reached.

The important thing here is to not give up, and to make sure you've learned something from the process. There are always things to learn from failed tests, and you can iterate on them and feed the learnings into future tests. Alongside this, make sure you're keeping track of all the data you've gathered from failed tests, so that you have a log of all tests which you can refer back to in the future.

Goals of the review stage:

  1. Know whether a hypothesis was correct or not
  2. If it was correct, roll out widely
  3. If it wasn't correct, what did we learn?
  4. On to the next test!

That's about it! Conversion rate optimization should be an ongoing process because there are always things that can be improved across your business. Look for the opportunities to test everything, follow a good process and you can make a big difference to the bottom line.

A few resources to leave you with which I'd highly recommend:

If you have any feedback or comments, feel free to leave them below!



Seth's Blog : Most presentations aren't bullet proof

  • Bullets do not save time. Memos save time. Presentations aren't about the most concise exposition of facts, they are about changing minds.
  • Bullets are actually aggressive: they're gotchas lying in wait to be brought up later, either by an observer calling you out or a presenter reminding us he told us so.
  • Bullets do not make it easier to remember what's being said.
  • Bullets create tension about what the next bullet is going to say, instead of actually communicating your idea. When we see a bullet, we check it off and stop paying attention until the next one appears.
  • Bullets are almost always misused. If you have a finite number of points, each of which supports the other, one can imagine that they help us fit the puzzle together. But that's not how they're used, is it? Most people use them the way I'm using them now, as a disorderly, almost random list.
  • You've already forgotten the second bullet, haven't you? That's because bullets don't naturally map to the way we process and remember ideas.
  • If bullets are the official style of your organization, using them is a form of being invisible.
  • Without a doubt, bullets make it far easier to read your presentation to people in the room. For those with no time to practice or unable to say what's in their heart, bullets are perfect.

Sunday, 4 May 2014

Mish's Global Economic Trend Analysis


China Manufacturing PMI Contracts 4th Month, Employment Down 6th Month

Posted: 04 May 2014 09:35 PM PDT

The HSBC China Purchasing Managers' Index™ shows China manufacturing PMI contracting for the fourth month.
Key Points

  • Output and new orders contract at slower rates
  • Staff numbers are cut for the sixth month in a row
  • Solid reduction in both input and output prices

Chinese manufacturers signalled a further deterioration in overall operating conditions during April. Both output and total new work declined over the month, albeit at weaker rates than those recorded in March. Fewer new orders led firms to cut their staffing levels at a modest pace, while purchasing activity fell for the third successive month. Meanwhile, both input costs and output charges fell markedly.

After adjusting for seasonal factors, the HSBC Purchasing Managers' Index™ (PMI™) – a composite indicator designed to provide a single-figure snapshot of operating conditions in the manufacturing economy – posted at 48.1 in April, down fractionally from the earlier flash reading of 48.3, and up from 48.0 in March.

This signalled the fourth successive monthly deterioration in the health of the sector. Production at Chinese manufacturers fell for the third consecutive month in April, though at a weaker pace than in March.

Weaker client demand was attributed by a number of survey respondents to deteriorating market conditions. Goods producers in China cut their staffing levels for the sixth month running in April, amid reports of company down-sizing policies which stemmed from lower production requirements. Moreover, the rate of job shedding accelerated from the previous month. Despite reduced workforce numbers, volumes of unfinished work fell for the third successive month in April. That said, the rate of backlog depletion was marginal.

China Manufacturing PMI 2004-Present



Note that China manufacturing has spent more time in contraction than expansion since mid-2011. Those expecting China to lead a surge in global growth are mistaken.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Misrepresenting the Libertarian Position on Putin

Posted: 04 May 2014 11:34 AM PDT

Despite the complete failure of the wars in Iraq and Afghanistan, war-mongers and even self-proclaimed libertarians don't understand what is going on in the Ukraine, why it's none of our business, or even how the civil war in Ukraine started.

A friend sent me an article today from the site Conservatives for Liberty called Confused libertarians are Supporting Putin by Gabrielė Stakaitytė.

Supposedly the site is an "independent libertarian, free market and socially liberal campaign group".

It is difficult to judge a site on the basis of one article, but there is a difference between supporting Putin and saying Ukraine is essentially none of our business, the true libertarian position.

According to Gabrielė "From soap operas to ballet performances, the Russian government is doing everything to influence the cultural life of Eastern Europe, and to maintain a stranglehold on the mentality of the people."

Let's assume that is true. Here is an equally true statement "From soap operas to ballet performances, the EU is doing everything to influence the cultural life of all of Europe, and to maintain a stranglehold on the mentality of the people."

Here's another "The US is doing everything to everyone globally, and by military force where necessary, to maintain a hypocritical stranglehold on any country that dares go against the vision of the United States."

One can come up with all sorts of similar statements.

Just what did warmongers expect when the US broke promises and expanded NATO to the East? Did they expect Russia would sit back and do nothing? Did they want to start WW III?

The US fomented the overthrow of the last Ukrainian government and now does not like the result. Similarly, no one in their right mind is happy about the overthrow of the Shah of Iran decades ago, the lives lost in Vietnam, and the results of wars in Iraq and Afghanistan.

Back to the point. This is not our battle. If those in Crimea want to join Russia, no one should care.

Gabrielė blasts the vote in Crimea. Was it rigged? Let's assume it was. Was it rigged to the point that an honest vote would have led to a different result? No, it wasn't.

Gabrielė says "Another example of confused libertarians supporting Putin is calling the new Ukrainian government illegitimate, or even fascist, whatever that is supposed to mean. The Ukrainian government is no less legitimate than the first US government, having come to power after a popular revolution."

Good grief. Look at the irony! If the current Ukrainian government is no less legitimate than the first US government, one can say the exact same thing about Crimea!

By implication, if Gabrielė likes the result, the action is OK, if she doesn't, then it's not. That is essentially the hypocrisy of the US position in a nutshell.

I do not believe many libertarians are cheering Putin per se. Perhaps one can find a few self-proclaimed libertarians openly cheering Putin, but people can claim to be whatever they want, and to the point of blatant hypocrisy, Gabrielė does just that.

Finally, please note that Gabrielė offered no links or even quotes to support her claim. She asserts "confused libertarians support Putin". Can we have some examples please, something more than allegations?

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Seth's Blog : When is Mother's Day?

It's sort of a silly question. After all, you and your mom can celebrate it whenever you want, not when everyone else tells you to.

My mom never liked it very much. She told us it was a silly commercial exercise. On the other hand, any excuse to express gratitude is a good one.

I published Sarah's book in memory of my mom. I figured today was a good day to remind you of it.

Seth's Blog : Origin stories

The Grateful Dead had their breakthrough at Ken Kesey's acid test parties.

Superman was raised by Jonathan and Martha Kent.

Hewlett Packard started in a garage.

We hear origin stories all the time. They're magnetic enough that we write books and make movies about them.

Here's the thing: The only thing they have in common is that they are all different.

You can't reverse engineer success by researching origin stories. You can't follow the same path as those you admire and expect you'll end up in the same place.

Everything worthwhile has an origin, but those origins aren't the reason that they are worthwhile.