Testing your SEO

Search engine optimization is more science than art. As with any scientific discipline, SEO needs to be done with rigor. The results need to be reproducible.

Just don’t change too many variables at once; if you do, you won’t be able to tell which changes were responsible for which results.

You can glean a lot about SEO best practices, trends and tactics from SEO blogs, forums and e-books. But it is hard to separate the wheat from the chaff, to know with any degree of certainty whether a claim about what works (or doesn’t) actually holds true.

That is where testing your search engine optimization comes in. You need to prove to yourself, and of course to your wider community of stakeholders, what works and what doesn’t.

Weird science

Unlike multivariate testing for conversion rate optimization, where many experiments can be run in parallel (thanks to the brilliant work of Dr. Genichi Taguchi), SEO tests must be run serially. That’s because everything must filter through Google before the impact can be gauged.

This is made more difficult by the time lag involved. You make your changes, then you must wait for the pages to be re-spidered, re-indexed and re-ranked and, finally, for Google visitors to make their way in and buy from you in statistically significant numbers.

SEO testing is made even more difficult by the fact that Google’s results are personalized to the user’s search history, geographic location (usually based on IP address) and the Google data center being accessed. Indeed, even a user who is not logged in still gets results customized to that computer, unless personalization has been specifically disabled under Google’s “Web History” option, found via the settings gear at the top right of the search results screen.

Iterative testing

So what does SEO experimentation look like? Well, let’s say you have a product page with a particular Google ranking for some keyword and you want to improve that. Rather than applying a number of different SEO tactics at once, you could start varying things one at a time.

Tweak the title tag and nothing else, then wait to see what happens to the Google rankings and resulting Google-delivered traffic and sales. Continue making further revisions to the title tag in multiple iterations until the results show that the title tag is truly optimal.

From there, move on to the headline, tweaking that and nothing else and watching the results. Optimize that over multiple iterations. Then move on to the intro copy, then the breadcrumb navigation, and so on.

Testing should be iterative. It’s not “set it and forget it,” where you give it your best shot (with title tags, headlines or whatever) and then you never look at it again. If you are testing title tags, keep trying things to see what works best.

Try shortening the title tag; lengthening it; changing the word order; changing verb tenses; changing singular to plural or vice versa; substituting or adding synonyms. If your rankings take a nosedive, you can always revert to the way it was before you started testing.
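Because you are changing one variable at a time across many iterations, it pays to keep a running record of exactly what you changed and when, so a later swing in rankings or traffic can be traced back to a specific variation. Here is a minimal sketch in Python, assuming a plain CSV log; the file name, columns and sample values are placeholders, not a prescribed format:

import csv
from datetime import date

def log_iteration(page, element, variation, note=""):
    # Append one row per test iteration: when, where, what changed and why.
    with open("seo_test_log.csv", "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), page, element, variation, note]
        )

# Hypothetical example entry for a title-tag iteration.
log_iteration(
    "/kitchen-electrics/",
    "title tag",
    "Kitchen Small Appliances - Blenders, Toasters & More | Acme",
    "iteration 3: shortened title, moved keyword to front",
)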

When testing iteratively, it is good to apply the changes to pages that are being frequently re-spidered and re-indexed. Otherwise you’ll have to wait longer between iterations to see the impact.

You can see how frequently a page is re-spidered by checking your server access logs. (Note that Google Analytics and other JavaScript-based web analytics packages do not report on spidering activity.)
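If you want to quantify that spidering activity, a short script against a standard combined-format access log is enough. The sketch below is illustrative only: the log file path is a placeholder, and matching on the “Googlebot” user-agent string is an assumption (verify genuine Googlebot requests via reverse DNS if it matters for your analysis).

import re
from collections import Counter

LOG_FILE = "access.log"  # hypothetical path to your server access log

# Combined log format:
# IP - - [date] "METHOD /path HTTP/1.x" status size "referer" "user-agent"
line_re = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

hits_per_page = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = line_re.search(line)
        if match and "Googlebot" in match.group("ua"):
            hits_per_page[match.group("path")] += 1

# Pages crawled most often are the best candidates for rapid-fire tests.
for path, hits in hits_per_page.most_common(20):
    print(f"{hits:6d}  {path}")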

You should also try to speed up the spidering and indexation of pages you would like to test, to minimize the wait between iterations. You can do this by flowing more link authority (PageRank) to the pages you want to test, for example by linking to them from higher up in the site tree, such as from the home page.

Afterwards, however, give it some time before establishing your baseline, because sending more PageRank to a page will most likely affect its search ranking. You can also influence spidering frequency through your XML Sitemaps file by assigning a priority to each page: give a higher priority to pages you want spidered more frequently.

A word of caution: Don’t make the mistake of setting all your pages to a priority of 1.0. None of your pages will be differentiated from each other in priority, and thus none will get preferential treatment from Googlebot. It’d be equivalent to setting all your pages to 0.0 or to 0.5. Google won’t pay any attention to that.
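For illustration, here is a minimal sketch of generating an XML Sitemap with differentiated priorities using Python’s standard library. The URLs and priority values are placeholders; the only point is that pages under test get a higher priority than the rest, rather than every page being set to the same value.

import xml.etree.ElementTree as ET

# Hypothetical pages and priorities; the page under test gets a boost.
pages = [
    ("https://www.example.com/", 0.9),
    ("https://www.example.com/kitchen-electrics/", 0.8),   # page under test
    ("https://www.example.com/some-other-category/", 0.5),
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc, priority in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "priority").text = f"{priority:.1f}"

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)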

Meaningful metrics

Since personalization and geolocation mean that not everyone sees the same search results, you shouldn’t rely on rankings as your only indicator of what worked and what didn’t. Many other meaningful SEO metrics exist, including traffic to the page, spider activity, search terms driving traffic per page, number and percentage of pages yielding search traffic, searchers delivered per search term, ratio of brand to nonbrand search terms, unique pages spidered, unique pages indexed, and the ratio of pages spidered to pages indexed.
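A couple of these metrics are easy to derive from a keyword-level export of your organic search traffic. The sketch below assumes a hypothetical CSV with “keyword”, “landing_page” and “visits” columns (not any particular analytics tool’s format) and computes the number of pages yielding search traffic and the brand-to-nonbrand ratio:

import csv
from collections import defaultdict

BRAND_TERMS = {"acme", "acme widgets"}  # placeholder brand terms

visits_per_page = defaultdict(int)
brand_visits = nonbrand_visits = 0

with open("organic_keywords.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        visits = int(row["visits"])
        visits_per_page[row["landing_page"]] += visits
        if any(term in row["keyword"].lower() for term in BRAND_TERMS):
            brand_visits += visits
        else:
            nonbrand_visits += visits

print("Pages yielding search traffic:", len(visits_per_page))
print("Brand/nonbrand visit ratio:",
      round(brand_visits / max(nonbrand_visits, 1), 2))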

But just having better metrics isn’t enough. An effective SEO testing regimen also requires a platform conducive to performing rapid-fire iterative tests, where each test can be associated with reporting based on these new metrics.

Such a platform comes in especially handy with experiments that are difficult to conduct under normal circumstances. Testing a category name revision applied site-wide is dramatically harder than testing a title tag revision applied to a single page, for example.

Specifically, consider a scenario where you are asked to make a business case for changing a category name from industry-speak (e.g., “kitchen electrics”) to more common vernacular (e.g., “kitchen small appliances”). Conducting the test to quantify the value of this site-wide change would require applying the change to every occurrence of “kitchen electrics” across the website.

A tall order indeed, unless you can implement the change as a simple search-and-replace operation. Such site-wide tests are easily applied via a proxy server such as Covario’s Organic Search Optimizer. (Disclosure: I invented said SEO technology.)

By acting as a middleman between the web server and the spider, a proxy server can facilitate some interesting tests that would normally be quite invasive to the ecommerce platform and time-intensive for the IT team to implement.

During the proxying process, not only words can be replaced, but also HTML, site navigation, Flash, JavaScript, frames and even HTTP headers. A proxy can also give you the ability to run some interesting side-by-side comparison tests, a champion/challenger sort of model that compares the proxied site to the native website.
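To make the mechanics concrete, here is a minimal sketch of the general idea behind a rewriting proxy: fetch each requested page from the origin server, apply a search-and-replace, and serve the modified HTML. This is an illustration only, not Covario’s implementation; the origin URL and the replacement pair are placeholders, and a production proxy would also handle headers, caching, non-HTML assets and selective serving to crawlers.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ORIGIN = "https://www.example.com"  # hypothetical origin site
REPLACEMENTS = [("kitchen electrics", "kitchen small appliances")]

class RewritingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the requested path from the origin server.
        with urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read().decode("utf-8", errors="replace")
        # Site-wide search-and-replace applied during proxying.
        for old, new in REPLACEMENTS:
            body = body.replace(old, new)
        payload = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RewritingProxy).serve_forever()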

A sound experiment always starts with a hypothesis. For example, if a page is not performing well in the search engines and it’s an important product category, you might formulate a hypothesis such as: “This product category is not performing well because it is not well-linked from within my site.”

Or perhaps, “This page isn’t ranking well because it is targeting unpopular keywords.” Or “This page isn’t ranking well because it doesn’t have enough copy.”

Once you have your hypothesis, you can set up an experiment and test the validity of your hypothesis. In the case of the first hypothesis above, you could link to that page from the home page and measure the impact.

Allow ample time — a few weeks at least — for the impact of the test to be reflected in the rankings. Don’t just check rankings with your own computer; use a service (such as AuthorityLabs or SEOmoz) that queries Google from multiple servers and without cookies. Then if the rankings have not improved, you can formulate another hypothesis and conduct another test.

Granted, it can be quite a slow process if you have to wait weeks each time for the impact of your test to be revealed — but in SEO, patience is a virtue.

Happy testing!

Stephan Spencer is co-author of the O’Reilly book The Art of SEO and founder of Netconcepts (acquired by Covario).

SEO Test Checklist

Embarking on a test of your SEO? Here’s a quick rundown of the elements you should be testing and measuring — and what not to bother with.

What to test

Test the title tag; the headline (H1 tag); the placement of body copy in the HTML; the words in the body copy; keyword prominence; keyword density; the anchor text on internal links to the page; the anchor text on inbound links to the page from sites you have influence over; and the URL structure, including occurrences of keywords in the URL, the number of directories in the URL and the complexity of the URL (i.e., the number of parameters in the query string).

What to measure

Measure traffic to the page being tested; traffic to the site overall; inbound links to the page being tested; spider accesses to that page; search terms driving traffic per page; rankings across the three major engines; the number and percentage of pages yielding search traffic; unique pages being spidered; pages indexed; the ratio of pages spidered to pages indexed; the ratio of brand to nonbrand search terms; conversion rate; and searchers delivered per search term.

What to ignore or avoid

Ignore the PageRank score reported by Google’s toolbar server; it is months out of sync with the PageRank values used by Google’s ranking algorithm. Be wary of testing right before, during or right after the holidays if your business is seasonal, as it is hard to tease out from the results which impact was due to seasonality and which was due to the test.

— SS