Improving Search Rank by Optimizing Your Time to First Byte
Posted: 25 Sep 2013 04:18 PM PDT, by Zoompf

This post was originally published in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of Moz, Inc.

Back in August, Zoompf published newly uncovered research findings examining the effect of web performance on Google's search rankings. Working with Matt Peters from Moz, we tested the performance of over 100,000 websites returned in the search results for 2,000 different search queries. In that study, we found a clear correlation between a faster time to first byte (TTFB) and a higher search engine rank. While we could not outright prove that decreasing TTFB directly causes a higher search rank, there was enough of a correlation to at least warrant further discussion of the topic.

The TTFB metric captures how long it takes your browser to receive the first byte of a response from a web server when you request a particular website URL. In the graph from our research results (shown in the original post), websites with a faster TTFB generally ranked more highly than websites with a slower one.
We found this to be true not only for general searches with one or two keywords, but also for "long tail" searches of four or five keywords. Clearly this data showed an interesting trend that we wanted to explore further. If you haven't already read our prior article on Moz, we recommend you check it out now, as it provides useful background for this post: How Website Speed Actually Impacts Search Ranking. In this article, we continue exploring the concept of time to first byte, providing an overview of what TTFB is and the steps you can take to improve this metric and (hopefully) your search ranking.

What affects TTFB?

The TTFB metric is affected by three components:

1. The time it takes for your request to propagate through the network to the web server
2. The time it takes for the web server to process the request and generate the response
3. The time it takes for the response to propagate back through the network to your browser
Measuring TTFB

While there are a number of tools to measure TTFB, we're partial to an open source tool called WebPageTest. Using WebPageTest is a great way to see where your site's performance stands, and whether you even need to spend energy optimizing your TTFB. To use it, simply visit http://webpagetest.org, select a location that best fits your user profile, and run a test against your site. In about 30 seconds, WebPageTest returns a "waterfall" chart showing all the resources your web page loads, with detailed measurements (including TTFB) of the response times of each. On the very first line of the waterfall chart, the green portion shows the time to first byte for your root HTML page.
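If you'd rather script a quick spot check than run a full WebPageTest session each time, here is a minimal Python sketch of the same idea. It is our own illustration, not part of WebPageTest: it opens a socket, sends a bare GET request, and times the arrival of the first response byte (DNS lookup and connection setup are lumped into the total):

```python
import socket
import ssl
import time
from urllib.parse import urlparse

def time_to_first_byte(url):
    """Time from starting the request until the first response byte arrives."""
    parts = urlparse(url)
    host = parts.hostname
    port = parts.port or (443 if parts.scheme == "https" else 80)

    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=10)
    if parts.scheme == "https":
        sock = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    request = "GET {} HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(
        parts.path or "/", host)
    sock.sendall(request.encode("ascii"))
    sock.recv(1)  # blocks until the very first byte of the response shows up
    elapsed_ms = (time.perf_counter() - start) * 1000
    sock.close()
    return elapsed_ms

print("TTFB: {:.0f} ms".format(time_to_first_byte("https://example.com/")))
```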
You don't want a waterfall chart whose first line looks like the one in the original post's screenshot, where a full six seconds was devoted to the TTFB of the root page! Ideally this should be under 500 ms. So if you do have a "slow" TTFB, the next step is to determine what is making it slow and what you can do about it. But before we dive into that, we need to take a brief aside to talk about latency.

Latency

Latency is a commonly misunderstood concept. It is the amount of time it takes to transmit a single piece of data from one location to another. A common misunderstanding is that if you have a fast internet connection, you will always have low latency. A fast internet connection is only part of the story: the time it takes to load a page is dictated not just by how fast your connection is, but also by how FAR that page is from your browser. The best analogy is to think of your internet connection as a pipe. The higher your connection bandwidth (aka "speed"), the fatter the pipe. The fatter the pipe, the more data can be downloaded in parallel. While this helps the overall throughput of data, each specific connection your browser makes still has a minimum "distance" that must be covered. A figure in the original post illustrates the difference between bandwidth and latency.
In both the higher- and lower-bandwidth scenarios, the same JPG still has to travel the same "distance", where "distance" is determined by two primary factors: the physical distance the data must travel, and the number of network hops it passes through along the way. The quick back-of-the-envelope calculation below makes the point concrete.
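As a toy illustration, with numbers of our own choosing (a 100 KB image, a 100 ms round trip), here is how the totals compare at two bandwidths:

```python
# Toy numbers, purely illustrative: a 100 KB image over a 100 ms round trip.
rtt_ms = 100
size_bits = 100 * 1024 * 8

for mbps in (10, 100):
    transfer_ms = size_bits / (mbps * 1_000_000) * 1000
    # One round trip passes before any payload arrives, however fat the pipe.
    total_ms = rtt_ms + transfer_ms
    print("{:>3} Mbps: transfer {:>3.0f} ms, total {:>3.0f} ms".format(
        mbps, transfer_ms, total_ms))
```

Ten times the bandwidth cuts the transfer time by 90 percent, but the total time only drops from roughly 182 ms to 108 ms, because the 100 ms round trip is a floor that no amount of bandwidth can remove.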
So how do you measure your latency?

Measuring latency and processing time

The best tool for separating latency from server processing time is surprisingly accessible: ping. The ping tool comes pre-installed on most Windows, Mac and Linux systems. Ping sends a very small packet of information over the internet to your destination URL, measuring the amount of time it takes for that information to get there and back. Ping incurs virtually no processing overhead on the server side, so measuring your ping response times gives you a good feel for the latency component of TTFB. In this simple example, I measured the ping time between my home computer in Roswell, GA and a nearby server at www.cs.gatech.edu in Atlanta, GA. (The original post shows a screenshot of the ping session here.)
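If you'd like to script the measurement rather than read the terminal output by eye, here is a rough sketch of ours. It assumes a Unix-style ping whose summary line reports min/avg/max round-trip times; Windows formats its output differently:

```python
import re
import subprocess

def average_ping_ms(host, count=10):
    """Run the system ping tool and pull out the average round-trip time (ms)."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Both Linux and macOS end with a summary line like:
    #   round-trip min/avg/max/stddev = 14.9/15.8/16.7/0.6 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError("could not parse ping output:\n" + out)
    return float(match.group(1))

print("average round trip: {:.1f} ms".format(average_ping_ms("www.cs.gatech.edu")))
```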
In that Roswell-to-Atlanta test, ping probed the server repeatedly and reported an average response time of 15.8 milliseconds. Ideally you want your ping times under 100 ms, so this is a good result (though admittedly the distance traveled here is very small; more on that later). By subtracting the ping time from your overall TTFB, you can break out the network latency components (TTFB parts 1 and 3) from the server back-end processing component (part 2) and properly focus your optimization efforts.

Grading yourself

From the research shown earlier, we found that websites in the top search rankings had TTFB values as low as 350 ms, ranging up to around 650 ms. We recommend a total TTFB of 500 ms or less. Of that 500 ms, a round-trip network latency of no more than 100 ms is recommended. If you have a large number of users coming from another continent, network latency may be as high as 200 ms, but if that traffic is important to you, there are additional measures you can take to help here, which we'll get to shortly. To summarize, your ideal targets for your initial HTML page load are:

• Total TTFB: 500 ms or less
• Network latency: 100 ms or less (up to 200 ms for intercontinental traffic)
• Back-end processing: 400 ms or less
So if your numbers are higher than this, what can you do about it?

Improving latency with CDNs

The solution to improving latency is pretty simple: reduce the "distance" between your content and your visitors. If your servers are in Atlanta but your users are in Sydney, you don't want your users requesting content from halfway around the world. Instead, you want to move that content as close to your users as possible. Fortunately, there's an easy way to do this: move your static content onto a Content Delivery Network (CDN). CDNs automatically replicate your content to multiple locations around the world, geographically closer to your users. If you publish content in Atlanta, it is automatically copied to a server in Sydney, from which your Australian users will download it. As the diagram in the original post shows, CDNs considerably reduce the distance of your user requests, and hence the latency component of TTFB.

To impact TTFB, make sure the CDN you choose can cache the static HTML of your website homepage, and not just dependent resources like images, JavaScript and CSS, since the base HTML is the resource the Googlebot will request and measure TTFB against. There are a number of great CDNs out there, including Akamai, Amazon CloudFront, CloudFlare, and many more.

Optimizing back-end infrastructure performance

The second factor in TTFB is the amount of time the server spends processing the request and generating the response. Essentially, back-end processing time reflects the performance of all the other "stuff" that makes up your website: the web server software, your application code, your database, and the hardware they run on.
How to optimize the back-end of a website is a huge topic that would (and does) fill several books; I can hardly scratch the surface in this blog post. However, there are a few areas specific to TTFB that you should investigate.

A good starting point is to make sure you have the equipment needed to run your website. If possible, skip any form of "shared hosting" for your website. By shared hosting we mean a platform where your site shares the same server resources as sites from other companies. While cheaper, shared hosting passes considerable risk on to your own website, since your server processing speed is now at the mercy of the load and performance of other, unrelated websites. To best protect your server processing assets, insist on dedicated hosting resources from your cloud provider.

Also, be wary of virtual or "on-demand" hosting systems. These systems suspend or pause your virtual server if you have not received traffic for a certain period of time. When a new user then accesses your site, they initiate a "resume" activity to spin that server back up. Depending on the provider, this initial resume can take 10 or more seconds to complete. If that first user is the Google search bot, your TTFB from that request could be truly awful.

Optimizing back-end software performance

Check the configuration of your application or CMS. Are there any features or logging settings that can be disabled? Is it in a "debugging mode"? You want to get rid of nonessential operations to improve how quickly the site can respond to a request.

If your application or CMS is written in an interpreted language like PHP or Ruby, you should investigate ways to decrease execution time. Interpreted languages go through a step that converts them into machine-understandable code, which is what the server actually executes. Ideally you want the server to do this conversion once, instead of on each incoming request. This is often called "compiling" or "op-code caching", though those names can vary depending on the underlying technology. For example, with PHP you can use software like APC to speed up execution. A more extreme example is HipHop, a compiler created and used by Facebook that converts PHP into C++ code for faster execution.

When possible, server-side caching is a great way to generate dynamic pages quickly. If your page loads content that changes infrequently, returning those resources from a local cache is a highly effective way to improve page load time. Effective caching can be done at different levels by different tools, and is highly dependent on the technology you use for the back-end of your website. Some caching software caches only one kind of data, while others cache at multiple levels. For example, W3 Total Cache is a WordPress plug-in that does both database query caching and page caching. Batcache is a WordPress plug-in created by Automattic that does whole-page caching. Memcached is a great general object cache that can be used for pretty much anything, but requires more development setup. Regardless of the technology you use, reducing the amount of work needed to create the page by reusing previously created fragments can be a big win (see the sketch after this section). As with any software change, make sure to continually test the impact on your TTFB as you make each incremental change.
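To make the fragment-reuse idea concrete, here is a toy Python sketch of time-based caching. It is only an illustration of the pattern (the function name and the five-minute window are our own); a real deployment would reach for Memcached or one of the plug-ins above:

```python
import functools
import time

def cache_for(seconds):
    """Reuse a function's result for a fixed window: a toy fragment cache."""
    def decorator(fn):
        store = {}
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < seconds:
                return hit[1]          # cheap path: serve the cached copy
            value = fn(*args)          # expensive path: regenerate
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

@cache_for(300)  # rebuild at most once every five minutes
def render_homepage():
    # Imagine slow database queries and template rendering here; every
    # request inside the five-minute window skips this work entirely.
    return "<html>...</html>"
```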
You can also use Zoompf's free performance report to identify back-end issues that are affecting performance, such as not using chunked encoding, and much more.

Conclusions

As we discussed, TTFB has three components: the time it takes for your request to propagate to the web server; the time it takes for the web server to process the request and generate the response; and the time it takes for the response to propagate back to your browser. Latency captures the first and third components of TTFB, and can be measured effectively through tools like WebPageTest and ping. Server processing time is simply the overall TTFB minus the latency.

We recommend a TTFB of 500 ms or less. Of that TTFB, no more than 100 ms should be spent on network latency, and no more than 400 ms on back-end processing.

You can improve your latency by moving your content geographically closer to your visitors. A CDN is a great way to accomplish this, as long as it can be used to serve your dynamic base HTML page. You can improve the performance of the back-end of your website in a number of ways, usually through better server configuration and by caching expensive operations, like database calls and code execution, that occur when generating the content. We provide a free web performance scanner that can help you identify the root causes of slow TTFB, as well as other performance-impacting areas of your website code, at http://zoompf.com/free.
When and how to refresh your web content
Posted: 25 Sep 2013 06:35 AM PDT

We're going through the process of rebuilding our website just now, with the launch scheduled soon. My part in this process has been to completely rewrite our copy, which is an exercise I've been through with several clients recently as well.

I wrote the current SEOptimise website copy back in 2011. Two years ago doesn't seem very long, but it's abundantly clear from reading the existing copy that we've come a long way since it was written, and that I've come a long way as a copywriter since then too – my writing has matured significantly as I've worked with so many different clients and picked up new copywriting tricks. I'm proud of the new copy – which you'll see when the site launches – and feel it reflects the fact that SEOptimise has properly grown up.

So how is this relevant to you? Well, I've learnt a lot in the process of rewriting the copy – not least the importance of regularly reviewing and refreshing your content rather than simply putting it online and forgetting about it. In this post, I'd like to share some insights and tips to help you make sure your on-site content continues to be fresh, relevant and effective at achieving your goals. I'm going to cover the following:

• How do you know when your content needs refreshing?
• What to consider when you rewrite your copy
• How to write effective web copy
• Don't forget your meta data

How do you know when your content needs refreshing?

If websites didn't refresh their content once in a while, Twitter would still look like this (the original post shows a screenshot of Twitter's early homepage here). Hard to believe when you look at it now, isn't it? But when is it time to change?

It's time to ask yourself some questions. Does your website generate conversions? Do people share it and interact with it? Do you get a steady stream of traffic to all your content, or does it all go to the homepage or blog? If your content isn't performing well, it's probably time for a refresh. Have an objective read of the copy on your website and answer the following questions:

• How long ago was it written?

Your honest answers to these questions should help you identify copy that could do with being rewritten. You should also look at the other kinds of content on your site:

• How are the images on your site looking? Are there enough of them? Are there opportunities for adding images to support the copy? You know what they say – a picture speaks a thousand words.

What to consider when you rewrite your copy

So how do you go about refreshing the content you already have? Here are some things to think about to get you started.

Site architecture

Language and tone

Length

Many websites have virtually no copy on their homepages, for instance, which is terrible for SEO. Plenty of clients over the years, on having this pointed out to them, have declined to add more copy on design grounds. But adding copy to a homepage doesn't have to impact the design; it's perfectly possible to add subtle copy further down the page, with a couple of lines visible and an 'expand' or 'read more' button that then shows a longer paragraph. This is a good compromise, and you can see it in action in the original post's screenshot, which comes from towards the bottom of the Notonthehighstreet homepage.

Structure and calls to action

Examples of conversions include making a purchase, submitting an enquiry form, signing up for a newsletter, or downloading a brochure. Does your web content guide users to take these actions? If not, you should encourage readers to do what you want them to do by adding calls to action to your copy.

Internal links

Tip: ask the team

How to write effective web copy

When you're writing for the web, remember that most readers scan through web pages rather than reading them in detail.
This means it should be easy for them to get the gist of your message (for example, the benefits of a product) without having to read it in full. Here's how to achieve this:

• Paragraphs: keep paragraphs short. No walls of text!

Don't forget your meta data

When you rewrite your website copy, don't forget to revisit your meta data (your title tags and meta descriptions) as well. Your meta data is a part of your copy, and it's just as important to ensure that it remains up to date, in keeping with the copy on your website in terms of language and message, and using the most relevant keywords. Strong brands present a uniform message across all areas.

The top keywords can change, so it's always a good idea to monitor these on a regular basis – say, once a quarter – to make sure you're still targeting the best ones. Here are a few tips:

• Conduct some keyword research for your top keywords and make sure you're not missing any opportunities.

You'll find further advice on copywriting for meta data in my post, Copywriting tricks to turbocharge your meta data for conversions.

Do you need help refreshing your website content? We can help! Drop me a line at rachel@seoptimise.com or on Twitter @RachelsWritings.

© SEOptimise
Mish's Global Economic Trend Analysis
Economic Idiocy: California Hikes Minimum Wage to $10/Hour by 2016

Posted: 25 Sep 2013 08:34 PM PDT

In a two-step move in the wrong direction, California signs law raising minimum wage to $10/hour by 2016:

"California has become the first state in the nation to commit to raising the minimum wage to $10 per hour, although the increase will take place gradually until 2016 under a bill signed into law by Democratic Governor Jerry Brown on Wednesday."

Economic Idiocy

We do not need people to spend more. Realistically, we need people to save more. Higher prices (which are going to be the end result of this move) are all but guaranteed to eat up most of the presumed benefit. Worse yet, these minimum wage hikes will hurt small businesses the most. Walmart may be able to maintain profit margins by hiking prices a few cents, but many low-volume retailers will feel the pinch.

Economic idiots blanket the California and Illinois state legislatures, and the results are easy to spot. Illinois and California are two of the most economically distressed states, with the worst improvements in the unemployment rate during the recovery. Hikes in the minimum wage are guaranteed to make the situation worse.

California, Illinois, US Unemployment Rates (chart in the original post)
California and Illinois are two of the most business-unfriendly states you can find, and the results speak for themselves. Both states nearly always have higher unemployment rates than the national average, and both states perform miserably following every recession.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com

Mike "Mish" Shedlock is a registered investment advisor representative for SitkaPacific Capital Management. Sitka Pacific is an asset management firm whose goal is strong performance and low volatility, regardless of market direction. Visit http://www.sitkapacific.com/account_management.html to learn more about wealth management and capital preservation strategies of Sitka Pacific.
"No Tapering, More QE, Serious Housing Slowdown" says Saxo Bank Chief Economist Posted: 25 Sep 2013 12:56 PM PDT With September out of the way, most economists now expect a December tapering event. Steen Jakobsen, chief economist at Saxo Bank in Denmark is not one of them. Via email, Steen writes ... More QE, Less growth, Less Inflation and Less UpsideHousing Bulls Increasingly Optimistic Curiously, housing bulls in the US are increasingly optimistic. For example Bloomberg reports Blackstone Said to Gather $2 Billion for Real Estate. It's important to note that Blackstone is raising money for European real estate (but it also has huge US commitment as well). More to the point, I find the following Seeking Alpha headline rather amusing: It's Not Too Late To Capitalize On The Real Estate Recovery. That title reminds me of my 2005 post "It's Too Late" When you start seeing advertisements saying "It's not too late", or "Act now before it's too late", invariable the bulk of the gains have already been had, and the top is extremely close at hand, if not already gone. One and Done Tapering? That was a reasonably bold call by Steen. Another possibility is a "one and done" trivial amount of tapering in December. That is along the lines of what I expected in September. For further discussion, please see ...
Mike "Mish" Shedlock http://globaleconomicanalysis.blogspot.com Mike "Mish" Shedlock is a registered investment advisor representative for SitkaPacific Capital Management. Sitka Pacific is an asset management firm whose goal is strong performance and low volatility, regardless of market direction. Visit http://www.sitkapacific.com/account_management.html to learn more about wealth management and capital preservation strategies of Sitka Pacific. |
Unhappy Anniversary: Illinois Overtakes California for Second Highest Unemployment Rate in Nation

Posted: 25 Sep 2013 11:43 AM PDT

The one state arguably more screwed up than California is Illinois. Unions, union sympathizers, socialists, and tax-hike proponents are strongly in control of both states. Is it any wonder that perpetual economic difficulties and insurmountable pension underfundings face both states?

Via email, Ted Dabrowski at the Illinois Policy Institute writes ...

Unhappy Anniversary

Inquiring minds should also take a look at Fiscal Crisis in Chicago: Pensions 31% Funded, Moody's Downgrades Debt 3 Notches, Pension Liability is $61,000 Per Household; Mish's Proposed Solutions

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
Damn Cool Pics
Infographic Promotion – The Ultimate Guide To Successfully Promoting Your Infographic

Posted: 25 Sep 2013 12:49 PM PDT

There are a whole host of reasons someone might want to get an infographic developed – as specialists in infographic design, we've produced them for just about every reason you can think of – but the most common goal when producing an infographic is improved SEO, i.e. to build and attract inbound links. As a result, effective promotion and distribution of infographics has become just as important as the research and design phase – after all, it's no good having an interesting, accurate and beautifully designed infographic if nobody gets to see it or share it.

But successfully promoting an infographic isn't quite as straightforward as many people initially think. As a result, most people simply submit their design to the ever-growing list of infographic distribution sites and leave it at that, unable to think of any other places to push their graphic.

Via designbysoap.co.uk
Russian Doppelgangers of Famous People

Posted: 25 Sep 2013 12:18 PM PDT
Predictions From the Past That Never Came True

Posted: 25 Sep 2013 12:05 PM PDT
Is Motel 6, Mishawaka, Indiana, the Worst US Hotel?

Posted: 25 Sep 2013 10:59 AM PDT

These pics were taken at 3 AM on a rainy night in September 2013. I checked in after a 15-hour drive and was greeted with the grossest room ever. I left after 40 minutes to pitch a tent in the rain, and have not received a refund or any satisfactory action. I will never again trust Motel 6, and will make sure as many people as possible see these photos.

TV remote: This is actually the remote that was left for us. It looked like it had been smashed with a hammer. It was sticky to the touch, too. Gross.

Black sticky stuff on old worn carpets: The camera's flash obscures how lumpy and black this carpet is. The entire floor looks like this. The black stuff on the carpet is sticky. I wish I could have gotten a good photo of the bed. The mattress must have been two decades old. It was so thin that it sloped inward to the center, and springs were poking through and jabbed you when you sat on it. The pillowcases had yellow stains on them, and I found hairs between the top and bottom sheets.

Bathtub drain: Not sure this bathtub has ever been cleaned. Green gunk caked around the drain, and a thick ring of scum around the entire tub basin.

Corner of bathtub: Moldy caulking around the entire tub.

Bathroom sink

Bathtub near the ceiling

Black carpet, loud, rattling AC: Here's another shot of the worn carpet coated in black stuff. That air conditioner rattled and spewed out rank-smelling air. It was 2:40 when I checked in, and due to my exhaustion I almost actually stayed here. It was when the AC started stinking up the room that I decided to pitch a tent in the pouring rain rather than spend another second here.

First floor hallway: Straight out of a horror movie. The elevator was broken, so I got to walk down this hallway and then up three flights of stairs to my room.

TripAdvisor has other reviews:

* "I stay in hotels 300+ times yearly. This Motel 6 is the worst hotel I have ever stayed in. The lobby, halls, stairwells and entries are dirty. The room had marks on the walls, stained carpet and beds/bedding were old and threadbare. Bypass this one."