Tips for Ajax for SEO | Graywolf's SEO Blog
Posted: 01 Dec 2011 10:23 AM PST

Whenever Ajax enters the conversation for an SEO or internet marketer, chances are good there will be a deep sigh or an "ugh" face. While it is true that search engines are getting better at indexing this type of content, we still aren't at the point where you can realistically rely on them to index it properly, or even at all. That doesn't mean you can't use Ajax; it just means you need to take some extra steps to make sure that content is visible to crawlers and to non-Ajax users.

The first step is to make sure a static page and URL exist for every end result of content. For example, let's say you run a local travel website, and you have a location/map page that lets people view restaurants, hotels, attractions, or other information within a specific area. Users can turn filters on or off, look at different locations, and get detailed information about each venue. Having that work via Ajax and JavaScript makes for a very good user experience, similar to Google Maps integrating data from Google Places. However, "hiding" all that information behind Ajax won't help your organic search traffic. What you need to do is create a specific, unique URL for each of those destinations. These URLs need to provide the information in a way that ALL search engine spiders can read and extract, not just Google's advanced, experimental Ajax-crawling spider. This ensures you will get traffic from Yahoo, Bing, Facebook, Twitter, StumbleUpon, and, heck, even services like Blekko and Wolfram Alpha. Relying on just one search engine or source for your traffic is a dangerous strategy and not defensible against the whims of an algorithm update.

Once you have each of those pages, you want to make the URL as search engine friendly as possible: short, with 3-5 keywords, and without parameters. While it's a bit of overkill, providing the rel="canonical" tag is a good idea as well.

Where things get a little tricky is inbound linking, email, social media links, and user agent detection. Whether someone is viewing the Ajax version of the content or the static version, you should provide "link to this page," "share this page," or "email this page" functionality, and those links should always point to the static URL. When users request those pages, or come from a search engine asking for the static URL, you need to decide how to serve that content. If the user agent is capable of working with Ajax/JavaScript, feel free to serve it that way. If it's a bot or a non-compatible user agent (i.e., a tablet, iPad, or mobile phone), serve the HTML version. Lastly, I would always fail gracefully with a noscript tag that, when clicked, ensures the user gets the content they really want.

While this may seem like a bit of double work, if you use Ajax properly it probably isn't: you pull the same information from the same database, and only the method of rendering changes. Flash, on the other hand, is more problematic and would probably require genuine double work, which is why it's not a method I recommend. One of the primary reasons it's a good idea to pull the data from the same database is that it ensures you don't create a "bad cloaking" situation. Technically, cloaking is serving different content to the spiders than to the users. If the actual content is the same, and only the delivery technology and implementation differ, you have a low-risk, highly defensible position, especially if you use the canonical tag to nudge the spiders in the direction of the real URL.
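To make the "same database, two renderings" idea concrete, here is a minimal sketch in TypeScript using Node's built-in http module. The venue data, slugs, example.com URLs, and the bot-detection heuristic are placeholders assumed for illustration, not anything prescribed by this post; the point is only that the HTML and JSON branches read the same record.

```typescript
// Minimal sketch: one venue record, two renderings (plain HTML vs. JSON).
// All data and URLs below are placeholders.
import { createServer } from "node:http";

interface Venue {
  slug: string;        // drives the short, keyword-rich static URL
  name: string;
  city: string;
  description: string;
}

// Stand-in for the shared database both renderings pull from.
const venues: Record<string, Venue> = {
  "joes-seafood-chicago": {
    slug: "joes-seafood-chicago",
    name: "Joe's Seafood",
    city: "Chicago",
    description: "Classic seafood house near the river.",
  },
};

// The static, parameter-free URL every share/email link should point at.
const canonicalUrl = (v: Venue) => `https://example.com/restaurants/${v.slug}`;

// Plain HTML rendering that any spider can read, with the canonical tag in the head.
function renderHtml(v: Venue): string {
  return `<!doctype html>
<html>
  <head>
    <title>${v.name} - ${v.city}</title>
    <link rel="canonical" href="${canonicalUrl(v)}">
  </head>
  <body>
    <h1>${v.name}</h1>
    <p>${v.description}</p>
  </body>
</html>`;
}

createServer((req, res) => {
  const slug = (req.url ?? "/").split("/").pop() ?? "";
  const venue = venues[slug];
  if (!venue) {
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.end("Not found");
    return;
  }

  // Rough heuristic for the "spider or non-Ajax client?" decision: known bot
  // user agents and clients that don't ask for JSON get the static HTML, while
  // the site's own Ajax layer requests JSON explicitly.
  const userAgent = (req.headers["user-agent"] ?? "").toLowerCase();
  const isBot = /bot|spider|crawl/.test(userAgent);
  const wantsJson = (req.headers.accept ?? "").includes("application/json");

  if (isBot || !wantsJson) {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(renderHtml(venue));
  } else {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(venue));
  }
}).listen(3000);
```

Because the HTML and JSON branches pull the same record, the only difference between what a spider sees and what an Ajax user sees is the rendering, which is exactly the low-risk position described above.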
Once you have the static URL in place, you need to provide a method for the search engines to see and access that content. You can use HTML sitemaps and XML sitemaps, but ideally you should set up dedicated crawling paths. Unless your site is very small (less than a few hundred pages), I would suggest a limited test first: roll this out in phases on non-mission-critical sections of pages, and check the results with text browsers, text viewers, and crawlers like Xenu Link Sleuth or Website Auditor. Lastly, I would suggest setting up a monitoring page for use with services like change detection and/or Google Alerts. It's important that you know if something "breaks" or "jumps the rails" within 24 hours, not 30 days later when 70% of your content has dropped out of the index.

The last issue you want to consider is internal duplicate content. If the "Ajax crawling bot" finds its way to your pages, you don't want it to index the content in that format. Using a rel="canonical" tag that points to the static, non-Ajax URL will help, but I'd also suggest noindex, follow meta tags on the Ajax pages, just to be safe. Leaving things open for the search engines to decide is where problems come from ... sometimes BIG and EXPENSIVE problems.
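Continuing the same hypothetical TypeScript setup, here is one way the crawl-path and duplicate-content pieces might look: a tiny XML sitemap generator for the static URLs, and the head fragment for the Ajax-rendered versions carrying the canonical and noindex, follow tags. The URLs and function names are assumptions for the sketch, not a prescribed implementation.

```typescript
// Sketch: sitemap for the static URLs plus the head fragment for Ajax-rendered pages.
// URLs are placeholders; in practice they come from the same records the pages use.
const staticUrls: string[] = [
  "https://example.com/restaurants/joes-seafood-chicago",
  "https://example.com/hotels/drake-hotel-chicago",
];

// XML sitemap listing every static, crawlable URL.
function buildSitemap(urls: string[]): string {
  const entries = urls
    .map((u) => `  <url>\n    <loc>${u}</loc>\n  </url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`;
}

// Head tags for the Ajax version of a page: canonical points at the static URL,
// and "noindex, follow" keeps this rendering out of the index while still
// letting spiders follow its links.
function ajaxHeadTags(staticUrl: string): string {
  return `<link rel="canonical" href="${staticUrl}">\n` +
    `<meta name="robots" content="noindex, follow">`;
}

console.log(buildSitemap(staticUrls));
console.log(ajaxHeadTags("https://example.com/restaurants/joes-seafood-chicago"));
```

Submit the generated sitemap to the engines, and link the static pages from an HTML sitemap as well, so spiders have a dedicated path to the non-Ajax versions.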
So what are the takeaways from this post?

- Create a static page and URL for every piece of content you currently serve only via Ajax.
- Keep those URLs short, keyword-rich, and parameter-free, and add a rel="canonical" tag.
- Point every "link to this page," "share this page," and "email this page" link at the static URL.
- Serve both the Ajax and static versions from the same database so the content stays identical, and fail gracefully with a noscript fallback.
- Give the engines crawl paths via HTML and XML sitemaps, roll out in phases, and test with text browsers and crawlers.
- Monitor the static pages so you catch breakage within days, not after the content drops out of the index.
- Use rel="canonical" and noindex, follow on the Ajax versions to avoid internal duplicate content.

photo credit: Shutterstock/Serg Zastavkin
This post originally came from Michael Gray, who is an SEO consultant. Be sure not to miss the Thesis WordPress Theme review.